Still Looking Good While Testing: Automated Testing With a Visual Regression Service Part II

If you’ve followed our blog regularly, you’ve probably read our post on using visual regression testing tools and services to better test your applications’ front end look and feel.  If not, take a few minutes to read through our previous post on this topic.

Now that you’re up to speed, let’s take what we did in our previous post to the next level.

In this post, we’re going to show how you can alter your test configuration and code to test mobile browsers through a given testing service (Browserstack), as well as get better reporting out of your tests natively and through options available to us in Jenkins.

Adjusting the Fuzz Factor:

If you run your tests multiple times across several browsers with the visual regression service using default settings, you may notice a handful of exception images cropping up in the diff directory for screens that look nearly identical to their references.

browser-fight

Browser fight!!!

If you happen to be testing across several browsers (using Browserstack or a similar service), those browsers may be hosted in real or virtual environments with widely differing screen resolutions, which can impact how the visual-regression service performs its image diff comparison.

To alleviate this, either ensure your various browser hosts display the browser at the same resolution or slightly increase your test’s mismatch tolerance.

As mentioned in our previous post, you can do this by editing the values under the visualRegression object in your project’s wdio.conf.js file.  For example:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.25,
  }),
},

Setting the mismatch tolerance value to 0.25 in the above snippet would allow the regression service a 25% margin of error when checking screenshots against any reference images it has captured.

Better Test Results:

As mentioned in the previous post, one of the drawbacks of the given examples of using the visual-regression service in our tests is that little feedback is returned other than the output of our image comparison.

head-in-sand

Having fun with that lack of feedback?

However, with a bit of extra code, we can make useful assertion statements in our test executions once a screen comparison is performed.

The key is the checkElement() method, which is really a WebdriverIO method enhanced by the visual-regression service.  This method returns an object that contains some metadata about the check being requested for the provided argument.

We can assign the method call’s result to a new variable and, once we serialize the object into something readable (e.g. a JSON string), leverage Chai or some other framework’s assertions to make our tests more descriptive.

Here’s an example:

it('should test some things visually', () => {
  ...
  let returnedContents = JSON.stringify(browser.checkElement('<my web element>'));
  console.log(returnedContents);
  assert.include(returnedContents, '<deserialized text to assert for>');
});

In the above code snippet, near the end of the test, we are calling ‘checkElement()’ to do a visual comparison of the contents of the given web element selector, then converting the object returned by ‘checkElement()’ to a string.  Afterward, we are asserting there is some text/string content within the stringified object our comparison returned.

In the case of the text assertion, we would want to assert that a successful match message is contained within the returned object.  This is because the ‘checkElement()’ method, while it may return data that indicates a mismatch, will not on its own trigger an exception that would appropriately fail our test should an image comparison mismatch occur.
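
As a slightly more structured sketch, assuming the service returns one result object per viewport with fields such as isWithinMisMatchTolerance and misMatchPercentage (as the wdio-visual-regression-service documentation describes; names may differ between versions), you could assert directly on those fields:

// Assumes: const { assert } = require('chai');
it('should keep the header within the mismatch tolerance', () => {
  // checkElement() is assumed to return an array with one result per viewport/orientation.
  const results = browser.checkElement('#header');

  results.forEach((result) => {
    assert.isTrue(
      result.isWithinMisMatchTolerance,
      `Mismatch of ${result.misMatchPercentage} exceeded the configured tolerance`
    );
  });
});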

Adding Mobile

going-mobile

Oooh – someone downloaded the pancake app!

Combining WebdriverIO’s framework with the visual-regression service and a browser testing service like Browserstack, we can create tests that run against real mobile devices.  To do this, we will need to make some changes to our WebdriverIO config.  Try the following:

  1. Make a copy of your wdio.conf.js
  2. Name the copy wdio.mobile.conf.js
  3. Edit your package.json file
  4. Copy the ‘test’ key/value pair under ‘scripts’ and paste it on a new line, renaming it ‘test:mobile’
  5. Point the test:mobile script to the wdio.mobile.conf.js config file and save your changes (see the sketch below)

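If it helps to visualize the end state, the scripts block in package.json might end up looking roughly like this (script names and config file names simply mirror the steps above; adjust them to fit your project):

{
  "scripts": {
    "test": "./node_modules/.bin/wdio wdio.conf.js",
    "test:mobile": "./node_modules/.bin/wdio wdio.mobile.conf.js"
  }
}
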
Next, we need to edit the contents of the wdio.mobile.conf.js script to run tests only against mobile devices.  The reason for adding a whole new test config and script declaration is that, due to the way mobile devices behave, mobile browser testing with WebdriverIO and Browserstack requires some settings which are incompatible with running tests against desktop browsers.

Edit the code block at the top of the mobile config file, changing it to the following:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');
function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserOrientation = context.meta.orientation;
    return path.join(basePath,
    `${testName}_${browserName}_v${browserVersion}_${browserOrientation}_mobile.png`);
  };
}

Note that we’ve removed the declarations for height and width dimensions.  As Browserstack allows us to test on actual mobile devices, defining viewport constraints is not only unnecessary, but will also cause our tests to fail to execute (the height and width dimension object can’t be passed to Webdriver as part of a configuration for a real mobile device).

Next, update the visual-regression service’s orientation configuration near the bottom of the mobile config file to the following:

orientations: ['portrait', 'landscape'],

Since applications using responsive design can sometimes break when moving from one orientation to another on mobile devices, we’ll want to run our tests in both orientation modes.

Stating this behavior in our configuration above will trigger our tests to automatically switch orientations and retest.
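
For reference, the visualRegression block in wdio.mobile.conf.js might end up looking roughly like the sketch below, which simply reuses the earlier desktop block minus the viewports array (paths and tolerance values are illustrative):

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.25,
  }),
  viewportChangePause: 300,
  // No viewports array here; real devices dictate their own screen dimensions.
  orientations: ['portrait', 'landscape'],
},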

Finally, we’ll need to update our browser capability settings in this config file:

capabilities: [
{
  device: 'iPhone 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPhone 8',
},
{
  device: 'iPad 6th',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPad',
},
{
  device: 'Samsung Galaxy S9',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Galaxy S9',
},
{
  device: 'Samsung Galaxy Note 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Note 8',
},
]

In the code above, the ‘realMobile’ descriptor is necessary to run tests against modern mobile devices through Browserstack.  For more information on this, see Browserstack’s documentation.

Once your change is saved, try running your tests on mobile by doing:

npm run test:mobile

Examine the image outputs of your tests and compare them to the results from your desktop test run.  Now you should be able to run tests for your app’s UI across a wide range of browsers (mobile and desktop).

Taking Things to Jenkins

You could include the project we’ve put together so far in the codebase of your app (just copy the dev dependencies from your package.json, along with your specs and configs, into the existing project).

Another option is to use this as a standalone testing tool to apply some basic screenshot-based verification of your app.

In the following example, we’ll go through the steps you can take to set up a project like this as a resource in Jenkins to test a given application and send the results to members of your team.

Consider the value of being able to generate a series of targeted screen shots of elements of your app per build and send them to team members like a product manager or UX designer.  This isn’t necessarily ‘heavy lifting’ from a dev ops standpoint but automating any sort of feedback for your app will trend toward delivering a better app, faster.

Assuming you have your project hosted in Github, you have administrative access to Jenkins and that your app is hosted in an environment Jenkins can access (your organization’s Amazon S3 bucket, hosted from a Docker image, etc.):

I. Create a new Jenkins job

jenkins-proj-name

* Create a new Jenkins task within the same space as the job that builds and ships your web app (should one exist).  Assign it a name.

II. Configure the job to pull your image testing app from Github

jenkins-set-repo

* Go to the source control configuration portion of the job.  Enter the info for your repository for the test app you’ve built.

III. Set the project to build periodically

jenkins-set-trigger

* Within the build settings of the job, choose the build periodically option and configure your job’s frequency.

Ideally, you would set this job to trigger once the task that builds and ‘ships’ your app completes successfully (thus, executing your tests every time a new version of the app is published).

Alternatively, to run the job periodically at a given time, enter something like ‘0 23 * * *’ into Jenkins’ cron configuration field to set your job to run once every day (in this case, at 11 PM, relative to your server’s configuration).

IV. Build your shell execution script

jenkins-shell-script

* Create a build step for your task and choose the shell script option.  You can embed your npm test script execution here.

V. Collect your artifacts

From the list of post build options, choose the ‘archive artifacts’ option.  In the available text field, enter the path to the generated screenshots you wish to capture.  Note that this path may differ depending on how your Jenkins server is configured.  If you’re having trouble pinpointing the exact path to your artifacts, run the ‘pwd’ command in your shell script step to have the job print its working directory in the console output, then work from there.

jenkins-archive

VI. Sending email notifications

jenkins-email-config

* Lastly, choose the advanced email notification option from the post build options menu.

Enter an email address or list of email addresses you wish to contact once this job completes.  Fill out your email settings (subject line, body, etc.) and be sure to enter the path to the screenshot resources you wish to attach to a given email.

Save your changes and you are ready to go!

This job will run on a regular basis, execute a given set of UI-specific tests and send screen capture information to the team members who want to know.

You can create more nuanced runs by using Jenkins’ clone feature to copy this job and altering the copy to run your mobile-specific tests, further diversifying your test runs.

Further Reading

If you’re looking to dive further into visual regression testing, there are several code examples online worth reviewing.

Additionally, there are other visual regression testing services worth investigating such as Percy.IO, Screener.IO and Fluxguard.

Looking Good While Testing: Automated Testing With a Visual Regression Service

A lot of (virtual) ink has been spilled on this blog about automated testing (no, really). This post is another in a series of dives into different automated testing tools and how you can use them to deliver a better, higher-quality web application.

Here, we’re going to focus on tools and services for ‘visual regression testing’ – specifically, cross-browser visual testing of an application front end.

What?

By visual regression testing, we mean regression testing applied specifically to how an app’s appearance may have changed across browsers or over time, as opposed to its functional behavior.

Why?

One of the most common starting points in testing a web app is to simply fire up a given browser, navigate to the app in your testing environment, and note any discrepancy in appearance (“oh look, the login button is upside down. Who committed this!?”).

american_psycho

A spicy take on how to enforce code quality – we won’t be going here.

The strength of visual regression testing is that you’re testing the application against a very humanistic set of conditions (how does the application look to the end user versus how it should look). The drawback is that doing this is generally time consuming and tedious.  But that’s what we have automation for!

hamburger_ad_vs_reality

How our burger renders in Chrome vs how it renders in IE 11…

How?

A wise person once said, ‘In software automation, there is no such thing as a method call for “if != ugly, return true”‘.

For the most part, this statement is true. There really isn’t a ‘silver bullet’ for the software automation problem of fully automating testing of your web application’s appearance across a given browser support matrix.  At least, not without some caveats.

The methods and tools for doing so can run afoul of at least one of the following:

  • They’re Expensive (in terms of time, money or both)
  • They’re Fragile (tests emit false negatives, can be unreliable)
  • They’re Limited (covers only a subset of supported browsers)
faberge_egg

Sure, you can support delicate tools. Just keep in mind the total cost.

Tools

We’re going to show how you can quickly set up a set of tools using WebdriverIO, some simple JavaScript test code and the visual-regression-service module to create snapshots of your app front end and perform a test against its look and feel.

Setup

Assuming you already have a web app ready for testing in your choice of environment (hopefully not in production) and that you are familiar with NodeJS, let’s get down to writing our testing solution:

interesting_man_tests_code

This meme is so old, I can’t even…

1. From the command line, create a new project directory and do the following in order:

  • ‘npm init’ (follow the initialization prompts – feel free to use the defaults and update them later)
  • ‘npm install --save-dev webdriverio’
  • ‘npm install --save-dev wdio-visual-regression-service’
  • ‘npm install --save-dev chai’

2. Once you’ve finished installing your modules, you’ll need to configure your instance of WebdriverIO. You can do this manually by creating the file ‘wdio.conf.js’ and placing it in your project root (refer to the WebdriverIO developers guide on what to include in your configuration file) or you can use the wdio automated configuration script.

3. To quickly configure your tools, kick off the automated configuration script by running ‘npm run wdio’ from your project root directory.  During the configuration process, be sure to select the following (or include this in your wdio.conf.js file if you’re setting things up manually):

  • Under frameworks, be sure to enable Mocha (we’ll use this to handle things like assertions)
  • Under services be sure to enable the following:
    • visual-regression
    • Browserstack (we’ll leverage Browserstack to handle all our browser requests from WebdriverIO)

Note that in this case, we won’t install the selenium standalone service or any local testing binaries like Chromedriver. The purpose of this exercise is to quickly package together some tools with a very small footprint that can handle some high-level regression testing of any given web app front end.

Once you have completed the configuration script, you should have a wdio.conf.js file in your project configured to use WebdriverIO and the visual-regression service.
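
For orientation, a stripped-down wdio.conf.js produced by that process might look something like the sketch below (property values are illustrative; your generated file will contain more options and will reflect the answers you gave the wizard):

// A rough skeleton of the generated configuration, not a drop-in file.
exports.config = {
  specs: ['./tests/**/*.js'],        // where our test files will live
  framework: 'mocha',                // selected during the wizard
  services: ['visual-regression', 'browserstack'],
  // capabilities, Browserstack credentials, and the visualRegression block
  // are filled in over the following sections.
};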

Next, we need to create a test.

Writing a Test

First, make a directory within your project’s main source called tests/. Within that directory, create a file called homepage.js.

Set the contents of the file to the following:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.checkElement('#header');
  });
});

That’s it. Within the single test function, we are calling a method from the visual-regression service, ‘checkElement()’. In our code, we are providing the ID selector ‘#header’ as an argument, but you should replace this argument with the ID or CSS selector of a container element on the page you wish to check.

When executed, WebdriverIO will open the root URL it is provided for our web application and then execute its check element comparison operation. On its initial run, this will generate a series of reference screenshots of the application. The regression service will then generate screenshots of the app in each browser it is configured to test with and provide a delta between these screens and the reference image(s).

A More Complex Test:

You may need to interact with part of your web application before you execute your visual regression check. You may also need to execute the checkElement() function multiple times with multiple arguments to fully vet your app front end’s look and feel in this manner.

Fortunately, since we are simply inheriting the visual regression service’s operations through WebdriverIO, we can combine WebdriverIO-based method calls within our tests to manipulate and verify our application:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.waitForVisible('#header');
    browser.checkElement('#header');
  });

  it('should look normal after I click the button', () => {
    browser.waitForVisible('.big_button');
    browser.click('.big_button');
    browser.waitForVisible('#main_content');
    browser.checkElement('#main_content');
  });

  it('should have a footer that looks normal too', () => {
    browser.scroll('#footer');
    browser.checkElement('#footer');
  });
});

Broad vs. Narrow Focus:

One of several factors that can add to the fragility of a visual test like this is attempting to account for minor changes in your visual elements. This can be a lot to bite off and chew at once.

Attempting to check the content of a large and heavily populated container (e.g. the body tag) is likely to produce so many possible variations across browsers that your test will always throw an exception. Conversely, narrowing your test’s focus to something very marginal (e.g. a single instance of a single button) may target markup that is never touched by your front end developers’ code, meaning you may miss crucial changes to your app’s UI.

flight_search_results

This is waaay too much to test at once.

The visual-regression service’s magic is in that it allows you to target testing to specific areas of a given web page or path within your app – based on web selectors that can be parsed by Webdriver.

price_link

And this is too little…

Ideally, you should be choosing web selectors with a scope of content that is neither too large nor too small but in between. A test that focuses on comparing the content of a specific div tag containing 3-4 widgets will likely deliver much more value than one that focuses on the selector of a single button or a div that contains 30 widgets or assorted web elements.
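
As a rough illustration, a check scoped to a mid-sized container tends to hit that sweet spot (the selector below is hypothetical):

it('should render the filter panel consistently', () => {
  // '#filter-panel' is a hypothetical container holding a few related widgets:
  // broad enough to catch layout regressions, narrow enough to stay stable.
  browser.waitForVisible('#filter-panel');
  browser.checkElement('#filter-panel');
});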

Alternatively, some of your app’s front end may be generated by templating or scaffolding that never receives updates and is siloed away from code that receives frequent changes from your team. In this case, marshalling tests around these areas may result in a lot of misspent time.

menu_bar

But this is just about right!

Choose your area of focus accordingly.

Back to the Config at Hand:

Before we run our tests, let’s make a few updates to our config file to make sure we are ready to roll with our initial homepage verification script.

First, we will need to add some helper functions to facilitate screenshot management. At the very top of the config file, add the following code block:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');

function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserViewport = context.meta.viewport;
    var browserWidth = browserViewport.width;
    var browserHeight = browserViewport.height;
 
    return path.join(basePath, `${testName}_${browserName}_v${browserVersion}_${browserWidth}x${browserHeight}.png`);
  };
}

This function will be used to build the paths for the various screenshots we will be taking during the test.

As stated previously, we are leveraging Browserstack with this example to minimize the amount of code we need to ship (given we would like to pull this project in as a resource in a Jenkins task) while allowing us greater flexibility in which browsers we can test with. To do this, we need to make sure a few changes in our config file are in place.

Note that if you are using a different browser provisioning service (SauceLabs, Webdriver’s grid implementation, etc.), see WebdriverIO’s online documentation for how to set up your wdio configuration for your respective service.

Open your wdio.conf.js file and make sure this block of code is present:

user: process.env.BSTACK_USERNAME,
key: process.env.BSTACK_KEY,
host: 'hub.browserstack.com',
port: 80,

This allows us to pass our Browserstack authentication information into our wdio script via the command line.

Next, let’s set up which browsers we wish to test with. This is also done within our wdio config file under the ‘capabilities’ object. Here’s an example:

capabilities: [
{
  browserName: 'chrome',
  os: 'Windows',
  project: 'My Project - Chrome',
  'browserstack.local': false,
},
{
  browserName: 'firefox',
  os: 'Windows',
  project: 'My Project - Firefox',
  'browserstack.local': false,
},
{
  browserName: 'internet explorer',
  browser_version: 11,
  project: 'My Project - IE 11',
  'browserstack.local': false,
},
],

Where to Put the Screens:

While we are here, be sure you have set up your config file to specifically point to where you wish to have your screen shots copied to. The visual-regression service will want to know the paths to 4 types of screenshots it will generate and manage:

lots_a_screens

Yup… Too many screens

References: This directory will contain the reference images the visual-regression service will generate on its initial run. This will be what our subsequent screen shots will be compared against.

Screens: This directory will contain the screen shots generated per browser type/view by tests.

Errors: If a given test fails, an image will be captured of the app at the point of failure and stored here.

Diffs: If a comparison performed by the visual-regression service between an element from a browser execution and the reference images results in a discrepancy, a ‘heat-map’ image of the difference will be captured and stored here. Consider the contents of this directory to be your test exceptions.

Things Get Fuzzy Here:

fozzy

Fuzzy… Not Fozzy

Finally, before kicking off our tests, we need to enable our visual-regression service instance within our wdio.conf.js file. This is done by adding a block of code to our config file that instructs the service on how to behave. Here is an example of the code block taken from the WebdriverIO developer guide:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.20,
  }),
  viewportChangePause: 300,
  viewports: [{ width: 320, height: 480 }, { width: 480, height: 320 }, { width: 1024, height: 768 }],
  orientations: ['portrait'],
},

Place this code block within the ‘services’ object in your file and edit it as needed. Pay attention to the following attributes and adjust them based on your testing needs:

‘viewports’:
This is an array of width/height pairs to test the application at. This is very handy if you have an app that has specific responsive design constraints. For each pair, the test will be executed per browser – resizing the browser for each set of dimensions.

‘orientations’: This allows you to configure the tests to execute using portrait and/or landscape view if you happen to be testing in a mobile browser (default orientation is portrait).

‘viewportChangePause’: This value pauses the test in milliseconds at each point the service is instructed to change viewport sizes. You may need to throttle this depending on app performance across browsers.

‘mismatchTolerance’: Arguably the most important setting there. This floating-point value defines the ‘fuzzy factor’ which the service will use to determine at what point a visual difference between references and screen shots should fail. The default value of 0.10 indicates that a diff will be generated if a given screen shot differs, per pixel from the reference by 10% or more. The greater the value the greater the tolerance.

Once you’ve finished modifying your config file, let’s execute a test.

Running Your Tests:

Provided your config file is set to point to the root of where your test files are located within the project, edit your package.json file and modify the ‘test’ descriptor in the scripts portion of the file.

Set it to the following:

‘./node_modules/.bin/wdio wdio.conf.js’

To run your test, from the command line, do the following:

‘BSTACK_USERNAME=<your Browserstack username> BSTACK_KEY=<your Browserstack key> npm run test -- --baseUrl=<your app’s URL>’

Now, just sit back and wait for the test results to roll in. If this is the first time you are executing these tests, the visual-regression service can fail while trying to capture initial references for various browsers via Browserstack. You may need to increase your test’s global timeout initially on the first run or simply re-run your tests in this case.

Reviewing Results:

If you’re used to your standard JUnit or Jest-style test execution output, you won’t necessarily see similar test output here.

If there is a functional error present during a test (an object you are attempting to inspect isn’t available on screen) a standard Webdriver-based exception will be generated. However, outside of that, your tests will pass – even if a discrepancy is visually detected.

However, examine the screenshot folder structure we mentioned earlier. Note the number of files that have been generated. Open a few of them to view what has been captured through IE 11 vs. Chrome while testing through Browserstack.

Note that the files have names appended to them descriptive of the browser and viewport dimensions they correspond to.

example1

Example of a screen shot from a specific browser

Make note if the ‘Diff’ directory has been generated. If so, examine its contents. These are your test results – specifically, your test failures.

example2

Example of a diff’ed image

There are plenty of other options to explore with this basic set of tooling we’ve set up here. However, we’re going to pause here and bask in the awesomeness of being able to perform this level of browser testing with just 5-10 lines of code.

Is there More?

This post really just scratches the surface of what you can do with a set of visual regression test tools.  There are many more options to use these tools such as enabling mobile testing, improving error handling and mating this with your build tools and services.

We hope to cover these topics in a bit more depth in a later post.   For now, if you’re looking for additional reading, feel free to check out a few other related posts on visual regression testing here, here and here.

Auditing your Web App for Accessibility with Lighthouse

If you’ve followed our blog for some time, you’ve likely encountered posts detailing how to engage in various kinds of software testing, from performance to data-driven to security and more.

This post continues that trend with a focus on testing your site for accessibility.

What is Accessibility?

All Access Badge

If you are unfamiliar with the concept, accessibility within the realm of web applications is a term that covers a wide range of concerns. The primary focus is codifying and helping to reinforce design standards that ensure the web is accessible to all – regardless of people’s ability to consume and interact with it.

This is a very large topic and its definition is beyond the scope of this blog post’s intent. If you’re looking for a starting point however, our friends at W3C.org can help you with that: https://www.w3.org/standards/webdesign/accessibility

If you want to cut to the chase, there are two major accessibility standards you should be aware of (and that are generally considered the path toward compliance): WCAG and WAI-ARIA. Both of these are supported and covered by the W3C as international standards.

You can also visit this link for some related coding examples.

Why Accessibility?

Why should everyone be concerned with accessibility design issues?  The short answer can be found in this quote from Sir Tim Berners-Lee, via the W3C:

“The power of the Web is in its universality.
Access by everyone regardless of disability is an essential aspect.”

A more complex answer is that, as the web reaches ever more global ubiquity, we as software creators need to take into account the range of needs all users have when accessing the web. This is simply fair play, as web technology has matured and become ever more essential to our daily lives. There are also legal ramifications for not taking accessibility into account.

How Accessible is my Site?

If you’ve closely followed the design principles of WCAG 2.0, then your web application front end should be compliant with most modern accessibility software such as screen readers. However it’s difficult to verify your app’s end-user experience in this regard without actually sitting down with a browser, your app, some third-party tools and applying some old-fashioned manual testing.

The challenge with this – you’re likely already thinking – is that this is time consuming and labor intensive. If you are on a small software team or have a very tight deadline (or both), this level of manual testing might be a hard (but necessary) pill for your product team to swallow.

There are 3rd party services that can perform this level of testing for you. A quick google search can get you started down this path. However, if hiring a 3rd party is either logistically or financially out of reach, there are plenty of software tools that can help take the sting out of accessibility verification.

Chrome is your Friend (Really)?

As of this blog post, Google’s Chrome browser currently accounts for about 1/2 of all worldwide browser usage on the web. Chrome however is not only popular, it also has a lot of helpful tools under the hood to aid in web development.

In fact, you can use your basic, desktop version of Chrome to perform an accessibility audit of a given site you happen to have open in your browser. To try this trick out, simply do the following:

  • Open your browser to a given site you wish to audit
  • Open the Chrome developer tools console
  • Under the developer console tabs, (e.g. Network, Elements, Console, etc.) search for and click on the Audits tab.
  • Click the Perform Audit option, select Accessibility and click Run Audit to begin.
chrome a11y audit screenshot

Look what we found under the hood!

The audit tool in this case will scan the contents of the site at the current URL you have open for markup patterns that support accessibility standards (e.g. proper element naming conventions, CSS settings, etc.) and flag patterns within the markup on the page that violate these standards. These of course are based on the WCAG and ARIA conventions mentioned at the beginning of this post.

This is a powerful but very easy to access and run set of tools for web developers (you can perform several kinds of audits, not just ones for accessibility). So easy in fact, you might be thinking that there must be a way to program these tools (and you would be right).

Illuminating Accessibility with Lighthouse!

Once you start the audit within Chrome, you might notice a message stating that something called Lighthouse is starting up, prior to the audit being executed.

Lighthouse in fact, is the engine that performs the audit. It’s also available as a library dependency or even its own stand-alone project.

Lighthouse can be incorporated into your project front end or build cycle in several ways. One of those is to use Lighthouse as a standalone analysis tool to verify the front end of your web app post-deployment (think of this as another set of functional tests with a different, specific kind of output – in this case, an accessibility report).
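
If you would rather drive the audit from a Node script than from the command line, a minimal sketch using the lighthouse and chrome-launcher npm modules might look like the following (based on the Lighthouse project’s documented programmatic usage; option names and the shape of the result object can vary between Lighthouse versions):

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const fs = require('fs');

async function auditAccessibility(url) {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  // Restrict the run to the accessibility category and ask for an HTML report.
  const options = { port: chrome.port, output: 'html', onlyCategories: ['accessibility'] };
  const result = await lighthouse(url, options);

  fs.writeFileSync('./report.html', result.report);
  await chrome.kill();

  // result.lhr holds the structured results, including the category score.
  return result.lhr.categories.accessibility.score;
}

auditAccessibility('http://localhost:8080')
  .then((score) => console.log(`Accessibility score: ${score}`));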

You can get Lighthouse running locally via command line quickly by doing the following (assuming you have NodeJS installed):

1. Open a new terminal and do, npm install -g lighthouse
2. Once the latest version of lighthouse is installed, do, lighthouse "url of choice" -GA --output html --output-path ./report.html

Once the above command executes and finishes, you can open your Lighthouse report.html locally in your web browser:

Lighthouse Report

Lighthouse Report

Drill down into the different segments of the report to see what best practices you are following for accessibility (or performance, SEO or other web standards) and where you should target improvements to your app.

Jenkins Configuration

Jenkins configuration for Lighthouse

To include this report within your CI build process, just add the above Node command line statements to a shell script attached to your job of choice and make sure to archive the results (if configuring this in a Jenkins task, your job config will probably look something like the screenshot above).

Accessibility Linting?

By default, the report Lighthouse provides is expansive and may take a while to read through.

Fortunately, there are several options to move this form of testing further upstream. In this case, we are going to look at a small, open source project called Lightcrawler.

Lightcrawler allows you to call specific audit aspects of Lighthouse from within your NodeJS project. The benefit of doing this is that you can more specifically target what you are auditing and perform said audits before you ship your project through to the rest of your CI process.

You can install Lightcrawler into your project by simply doing npm install --save-dev lightcrawler.

Next, given your Node app is configured with a script that will generate and run the app locally, simply add this to your “scripts” portion of your app’s package.json file:

"audit:a11y": "./node_modules/.bin/lightcrawler --url=http://localhost:8080 --config lighthouse-config.json",

With Lightcrawler you can configure an accessibility audit in a variety of ways that can meet your project needs. For the purpose of this post, we will configure Lightcrawler to run only tests classified within an accessibility audit, and to run all tests of that type currently available. Here is our configuration file:

{
  "extends": "lighthouse:default",
  "settings": {
    "crawler": {
      "maxDepth": 1,
      "maxChromeInstances": 5
    },
    "onlyCategories": [
      "Accessibility"
    ],
    "onlyAudits": [
      "accesskeys",
      "aria-allowed-attr",
      "aria-required-attr",
      "aria-required-parent",
      "aria-required-children",
      "aria-roles",
      "aria-valid-attr-value",
      "audio-caption",
      "button-name",
      "bypass",
      "color-contrast",
      "definition-list",
      "dlitem",
      "document-title",
      "duplicate-id",
      "frame-title",
      "html-has-lang",
      "html-lang-valid",
      "image-alt",
      "input-image-alt",
      "label",
      "layout-table",
      "link-name",
      "list",
      "list-item",
      "meta-refresh",
      "meta-viewport",
      "object-alt",
      "tabindex",
      "td-headers-attr",
      "th-has-data-cells",
      "valid-lang",
      "video-caption",
      "video-description"
    ]
  }
}

You can copy the JSON above and save it as lighthouse-config.json (the file name referenced by the script above) within your project directory, and point your script to the config file’s path.

Test out your newly added audit script by running npm run audit:a11y. If everything is in working order, you should receive command line output detailing the number of audits executed by Lightcrawler and error output for any given audit tests that may have failed (and more importantly, why).

Lightcrawler Console Output

Lightcrawler Console Output

This may look similar to linting report output from something like Prettier or ESLint.  Similar to the way Prettier or ESLint are used in pre-commit hooks within your project, you could add a Lightcrawler script to your pre-commit hook in order to better safeguard your application code from possible accessibility violations.

Beyond Auditing and Linting!

This should give you some ideas as to how you can reduce the overall effort required to deliver your apps’ web front end while still maintaining accessibility.  By no means are the handful of tips presented here deep enough to fully cover everything under the accessibility umbrella.  That task will require significant time and effort but, in keeping with the web’s initial intent of inclusivity and connectivity, one we as software developers should strive toward.

Getting Started with Dependency Security and NodeJS

Internet security is a topic that receives more attention every day.  If you’re reading this article in early 2018, issues like Meltdown, Spectre and the Equifax breach are no doubt fresh in your mind.

Cybersecurity is a massive concern and can seem overwhelming.  Where do you start?  Where do you go?  What do you do if you’re a small application team with limited resources?  How can you better engineer your projects with security in mind?

Tackling the full gestalt of this topic, including OWASP and more is beyond the scope of this article. However, if you’re involved in a small front end team, and if you leverage services like AWS, have gone through the OWASP checklist and are wondering, ‘what now?’, this article is for you (especially if you are developing in NodeJS, which we’ll focus on in this article).

Let’s Talk About Co(de) Dependency:

A tiger and her baby pigs

We’re talking about claiming dependents right?

One way we can quickly impact the security of our applications is through better dependency management.  Whether you develop in Python, JavaScript, Ruby or even compiled languages like Java and C#, library packages and modules are part of your workflow.

Sure, it’s 3rd party code but that doesn’t mean it can’t have a major impact on your project (everyone should remember the Leftpad ordeal from just a few years ago).

As it turns out, your app’s dependencies can be your app’s greatest vulnerability.

Managing Code you Didn’t Write:

Below, we’ll outline some steps you can take to tackle at least one facet of secure development – dependency management.  In this article, we’ll cover how you can implement a handful of useful tools into a standard NodeJS project to protect against, remediate and warn about potential security issues you may have introduced into your application via its dependency stack.

Your code has issues

We’ve all had to deal with other people’s problematic code at one time or another

Using Dependency-Check:

Even if you’re not using NodeJS, if you are building any project that inherits one or more libraries, you should have some form of dependency checking in place for that project.

For NodeJS apps, Dependency-Check is the easiest, lowest-hanging fruit you can likely reach for to better secure your development process.

Dependency-Check is a command-line tool that can scan your project and warn you of any modules being included in your app’s manifest but not actually being utilized in any functioning code (e.g. modules listed in your package.json file but never ‘required’ in any class within your app).  After all, why import code that you do not require?

Installation is a snap.  From your project directory you can:

npm install -g dependency-check

dependency-check package.json --unused

If you have any unused packages, Dependency-Check will warn you via a notice such as:

Fail! Modules in package.json not used in code: chai, chai-http, mocha

Armed with a list of any unused packages in hand, you can prune your application before pushing it to production.  This tool can be triggered within your CI process per commit to master or even incorporated into your project’s pre-commit hooks to scan your applications further upstream.

RetireJS:

‘What you require, you must retire’ as the saying goes – at least as it does when it comes to RetireJS.  Retire is a suite of tools that can be used for a variety of application types to help identify dependencies your app has incorporated that have known security vulnerabilities.

Retire is suited for JavaScript based projects in general, not just the NodeJS framework.

For the purpose of this article, and since we are dealing primarily with the command line, we’ll be working with the CLI portion of Retire.

npm install -g retire

then from the root of your project just do:

retire 

If you are scanning a NodeJS project’s dependencies only, you can narrow the scan by using the following arguments:

retire --nodepath node_modules

By default, the tool will return something like the following, provided you have vulnerabilities RetireJS can identify:

ICanHandlebarz/test/jquery-1.4.4.min.js
↳ jquery 1.4.4.min has known vulnerabilities:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
http://research.insecurelabs.org/jquery/test/
http://bugs.jquery.com/ticket/11290

Retire scans your app and compares packages used within it with an updated online database of packages/package versions that have been identified by the security world as having exploitable code.

Note the scan output above identified that the version of jquery we have imported via the ICanHandlebarz module has a bug that is exploitable by hackers (note the link to CVE-2011-4969 – the exploit in question).

Armed with this list, we can identify any unnecessary dependencies within our app, as well as any exploits that could be utilized given the code we are currently importing and using.

NSP:

There are a multitude of package scanning utilities for NodeJS apps out on the web. Among them, NSP (Node Security Platform) is arguably the most popular.

Where the previously covered Retire is a dependency scanner suited for JavaScript projects in general, NSP, as the name implies, is specifically designed for Node applications.

The premise is identical: this command line tool can scan your project’s package manifest and identify any commonly known web exploits that can be leveraged given the 3rd party packages you have included in your app.

Utilizing NSP and Retire may sound redundant but, much like diagnosing a serious condition via a medical professional, it’s often worth seeking a second opinion.  It’s also equally easy to get up and running with NSP:

npm install -g nsp

nsp check --reporter <report type (e.g. HTML)>

Running the above within the root of your node application will generate a report in your chosen output format.

Again, wiring this up into a CI job should be straightforward.  You can even perform back-to-back scans using both Retire and NSP.

Snyk:

Yes, this is yet another dependency scanner and, like NSP, one that is specifically NodeJS-oriented, but with an added twist: Snyk.

Snyk home page

Snyk home page

With Retire and NSP, we can quickly and freely generate a list of vulnerabilities that could be leveraged against our app.  However, what of remediation?  What if you have a massive Node project that you may not be able to patch or collate dependency issues on quickly?  This is where Snyk comes in handy.

Snyk can generate detailed and presentable reports (perfect for team members who may not be elbow deep in your project’s code).  The service also provides other features such as automated email notifications and monitoring for your app’s dependency issues.

Typical Snyk report

Typical Snyk report

Cost: Now, these features sound great (they also sound expensive).  Snyk is a free service, depending on your app’s size.  For small projects or open source apps, Snyk is essentially free.  For teams with larger needs, you will want to consult their current pricing.

To install and get running with Snyk, first visit https://snyk.io and register for a user account (if you have a private or open source project in GitHub or Bitbucket, you’ll want to register your Snyk account with your code management tool’s account).

Next, from your project root in the command line console:

npm install -g snyk

snyk auth

Read through the console prompt.  Once you receive a success message, your project is now ready to report to your Snyk account.  Next:

snyk test

Once Snyk is finished, you will be presented with a URL that will direct you to a report containing all the vulnerable dependency information Snyk was able to find regarding your project.

The report is handy in that it not only identifies vulnerabilities and the packages they’re associated with but also steps for remediation (e.g. – update a given package to a specific version, remove it, etc.).

Snyk has a few more tricks up its sleeve as well.  If you wish to dive headlong into securing your application, simply run:

snyk wizard

This will rescan your application in an interactive wizard mode that will walk you through each vulnerable package and prompt you for an action to take on each one.

Note, it is not recommended to use Snyk’s wizard mode when dealing with large scale applications due to it being very time consuming.

It will look like this:

Snyk monitor output

Snyk monitor output

Additionally, you can utilize Snyk’s monitoring feature which, in addition to scanning your application, will send email notifications to you or your team when there are updates to vulnerable packages associated with your project.

Putting it all together in CI:

Of course we’re going to arrange these tools into some form of CI instance.  And why not?  Given we have a series of easy-to-implement command line tools, adding these to our project’s build process should be straightforward.

Below is an example of a shell script we added as a build step in a Jenkins job to install and run each of these scan tools as well as output some of their responses to artifacts our job can archive:

#Installs our tools
npm install -g snyk
npm install -g dependency-check
npm install -g retire

#Runs dependency-check and outputs results to a text file
dependency-check ./package.json >> depcheck_results.txt

#Runs retire and outputs results to a text file
retire --nodepath node_modules >> retire_results.txt

#Authenticates use with Snyk
snyk auth $SNYK_TOKEN 

#Runs Snyk's monitor task and outputs results to a text file
snyk monitor >> snyk_raw_monitor.txt

#Uses printf, grep and cut CLI tools to retrieve the Snyk report URL
#and saves that URL to a text file
printf http: >> snykurl.txt
grep https snyk_raw_monitor.txt | cut -d ":" -f 2 >> snykurl.txt
cat snykurl.txt

#Don't forget the set your post-job task in your CI service to save the above .txt
#files as artifacts for our security check run.

Note – depending on how you stage your application’s build lifecycle in your CI service, you may want to break the above script up into separate scripts within the job or separate jobs entirely. For example, you may want to scan your app using Dependency-Check for every commit but save your scan of your application using NSP or Snyk for only when the application is built nightly or deployed to your staging environment.  In which case, you can divide this script accordingly.

Note on Snyk:

In order to use Snyk in your CI instance, you will need to reference your Snyk authentication key when executing this tool (see the API key reference in the script).

This key can be passed as a static string to the job.  Your auth key for Snyk can be obtained by logging in at Snyk.io and retrieving your API key from the account settings page after clicking on the My Account link from Snyk’s main menu.

Next Steps:

This is only the tip of the iceberg when it comes to security and web application development.  Hopefully, this article has given you some ideas and hints of where to start in writing more secure software.  For more info on how to secure your Node-based apps, here’s some additional reading to check out:

Creating a Realtime Reactive App for a collaborative domain

Sampling is a Bazaarvoice product that allows consumers to join communities and claim a limited number of free products. In return, consumers provide honest and authentic product reviews for the products they sample. Products are released to consumers for review at the same time, which causes a rush to claim these products. This is an example of a collaborative domain problem, where many users are trying to act on the same data (as discussed in Eric Evans’ book Domain-Driven Design).


Bazaarvoice runs a 2-day hackathon twice a year. Employees are free to use this time to explore any technologies or ideas they are interested in. From our hackathon events Bazaarvoice has developed significant new features and products like our advertising platform and our personalization capabilities. For the Bazaarvoice 2017.2 hackathon, the Belfast team demonstrated a solution to this collaborative domain problem using near real-time state synchronisation.

Bazaarvoice uses React + Redux for our front end web development. These libraries use the concepts of unidirectional data flows and immutable state management. This means there is always one source of truth, the store, and there is no confusion about how to mutate the application state. Typically, we use the side effect library redux-thunk to synchronise state between server and client via HTTP API calls. The problem here is that the synchronisation is one way; it is not reactive. The client can tell the server to mutate state, but not vice versa. In a collaborative domain where the data is changing all the time, near real-time synchronisation is critical to ensure a good UX.

To solve this we decided to use Google’s Firebase platform. This solution provided many features that work seamlessly together, such as OAuth authentication, CDN hosting and the Realtime DB. One important thing to note about Firebase: it’s a backend as a service, so there was no backend code in this project.

The Realtime Database provides a pub/sub model on nodes of the database; this allows clients to always be up to date with the latest state. With the Firebase Realtime DB there is an important concept not to be overlooked: data can only be accessed by its key (point query).

You can think of the database as a cloud-hosted JSON tree. Unlike a SQL database, there are no tables or records. When you add data to the JSON tree, it becomes a node in the existing JSON structure with an associated key (Reference)

Hackathon Goals

  1. Realtime configurable UI
  2. Realtime campaign administration and participation
  3. Live Demo to the whole company for both of the above

Realtime configurable UI

During the hackathon demo we demonstrated updating the app’s style and content via the administration portal; this would allow clients to style the app to suit their branding. These updates were pushed in real time to 50+ client devices from Belfast to Austin (4,608 miles away).

The code to achieve this state synchronisation across clients was deceptively easy!

Given the nature of React, once a style config update was received, every device just ‘reacted’.
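
As a rough sketch of the idea (using the Firebase web SDK’s classic namespaced API; the database path, action type, and store module here are hypothetical), a listener on a config node can dispatch straight into the Redux store, and every connected component re-renders from there:

import firebase from 'firebase/app';
import 'firebase/database';
import { store } from './store'; // hypothetical Redux store; firebase.initializeApp() is assumed to have run already

// Subscribe to the style configuration node. Firebase invokes the callback
// with the current value immediately and again on every subsequent change,
// keeping all connected clients in sync in near real time.
firebase
  .database()
  .ref('config/style')
  .on('value', (snapshot) => {
    store.dispatch({ type: 'STYLE_CONFIG_UPDATED', payload: snapshot.val() });
  });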

Realtime campaign administration and participation

In the demo, we added 40 products to the live campaign. This pushed 40 products to the admin screen and to the 50+ mobile devices. Participants were then instructed to claim items.

Admin view

Member view

All members were authenticated via OAuth providers (Facebook, Github or Gmail).

To my surprise the live demo went without a hitch. I’m pleased to add…. my team won the hackathon for the ‘Technical’ category.

Conclusion

Firebase was a pleasure to work with, everything worked as expected and it performed brilliantly in the live demo….even on their free tier. The patterns used in Firebase are a little unconventional for the more orthodox engineers, but if your objective is rapid development, Firebase is unparalleled by any other platform. Firebase Realtime Database produced a great UX for running Sampling campaigns. While Firebase will not be used in the production product, it provided great context for discussions on the benefits and possibilities of realtime data synchronisation.

Some years ago, I would have maintained that web development was the wild west of software engineering; solutions were being developed without discipline and were lacking a solid framework to build on. It just didn’t seem to have any of the characteristics I would associate with good engineering practices. Fast forward to today, we now have a wealth of tools, libraries and techniques that make web development feel sane.

In recent years front-end developers have embraced concepts like unidirectional dataflows, immutability, pure functions (no hidden side-effects), asynchronous coding and concurrency over threading. I’m curious to see if these same concepts gain popularity in backend development as Node.js continues to grow as a backend language.

Event Stream Modeling

Recently, during a holiday lull, I decided to look at another way of modeling event stream data (for the purposes of anomaly detection).

I’ve dabbled with (simplistic) event stream models before but this time I decided to take a deeper look at Twitter’s anomaly detection algorithm [1], which in turn is based (more or less) on a 1990 paper on seasonal-trend decomposition [2].

To round it all off, and because of my own personal preference for on-line algorithms with minimal storage and processing requirements, I decided to blend it all with my favorite on-line fading-window statistics math.  On-line algorithms operate with minimal state and no history, and are good choices for real-time systems or systems with a surplus of information flowing through them (https://en.wikipedia.org/wiki/Online_algorithm)

The Problem

The high-level overview of the problem is this: we have massive amounts of data in the form of web interaction events flowing into our system (half a billion page views on Black Friday, for example), and it would be nice to know when something goes wrong with those event streams. In order to recognize when something is wrong we have to be able to model the event stream to know what is right.

A typical event stream consists of several signal components:  a long-term trend (slow increase or decrease in activity over a long time scale, e.g. year over year), cyclic activity (regular change in behavior over a smaller time scale, e.g. 24 hours), noise (so much noise), and possible events of interest (which can look a lot like noise).

This blog post looks at extracting the basic trend, cycle, and noise components from a signal.

Note: The signals processed in this experiment are based on real historical data, but are not actual, current event counts.

The Benefit

The big payback of having a workable model of your system is that when you compare actual system behavior against expected behavior, you can quickly identify problems — significant spikes or drops in activity, for example.  These could be from DDOS attacks, broken client code, broken infrastructure, and so forth — all of which are better to detect sooner than later.

Having accurate models can also let you synthesize test data in a more realistic manner.

The Basic Model

Trend

A first approximation of modeling the event stream comes in the form of a simple average across the data, hour-by-hour:

The blue line is some fairly well-behaved sample data (events per hour) and the green lines are averages across different time windows. Note how the fastest 1-day average takes on the shape of the event stream but is phase-shifted; it doesn’t give us anything useful to test against. The slowest 21-day average gives us a decent trend line for the data, so if we wanted to do a rough de-trending of the signal we could subtract this mean from the raw signal.  Twitter reported better results using the median value for de-trending but since I’m not planning on de-trending the signal in this exploration, and median calculations don’t fit into my lightweight on-line philosophy, I’m going to stay with the mean as trend.
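
To tie those window sizes back to the fading-window approach, here is a hedged sketch using two instances of the hypothetical FadingStat class from the earlier snippet, with fade factors chosen so their effective windows are roughly 1 day and 21 days of hourly samples (the event counts below are placeholder data, not real traffic):

hourly_event_counts = [100 + (h % 24) * 10 for h in range(24 * 42)]  # placeholder data, 42 days

fast_trend = FadingStat(fade=2.0 / (24 + 1))    # roughly a 1-day effective window
slow_trend = FadingStat(fade=2.0 / (504 + 1))   # roughly a 21-day effective window

for count in hourly_event_counts:
    fast = fast_trend.update(count)             # follows the signal's shape, but phase-shifted
    slow = slow_trend.update(count)             # stands in for the long-term trend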

Cycle

While the fast 1-day average takes on the shape of the signal, it is phase-shifted and is not a good model of the signal’s cyclic nature. The STL seasonal-trend technique (see [2]) models the cyclic component of the signal (“seasonal” in the paper, but our “season” is a 24-hour span) by creating multiple averages of the signal, one per cycle sub-series.  What this means for our event data is that there will be one average for event counts from 1:00am, another average for 2:00am, and so forth, for a total of 24 running averages:

The blue graph in Figure 2a shows the raw data, with red triangles at “hour 0”… these are all averaged together to get the “hour 0” averages in the green graph in Figure 2b.  The green graph is the history of all 24 hourly sub-cycle averages.  It is in-phase with and closely resembles the raw data, which is easier to see in Figure 2c where the signal and cyclic averages are superimposed.

A side effect of using a moving-window average is that the trend information is automatically incorporated into the cyclic averages, so while we can calculate an overall average, we don’t necessarily need to.
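
Here is a minimal sketch of that sub-cycle model, again reusing the hypothetical FadingStat class from the earlier snippet: one fading average per hour of day, each updated only when its hour comes around, so the trend rides along with the cycle:

class CyclicModel:
    """24 fading sub-series averages (one per hour of day), in the spirit of the STL decomposition."""

    def __init__(self, period=24, fade=0.1):
        self.period = period
        self.slots = [FadingStat(fade) for _ in range(period)]

    def update(self, hour_index, count):
        """Feed one sample; return the expected count for this hour of day."""
        slot = self.slots[hour_index % self.period]
        slot.update(count)
        return slot.mean

    def expected(self, hour_index):
        return self.slots[hour_index % self.period].mean

    def stddev(self, hour_index):
        return self.slots[hour_index % self.period].stddev()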

Noise

The noise component is the signal minus the cyclic model and the seasonal trend. Since our online cyclic model incorporates trending already, we can simply subtract the two signals from Figure 2c to give us the noise component, shown in Figure 3:

Some Refinements

Now that we have a basic model in place we can look at some refinements.

Signal Compression

One problem with the basic model is that a huge spike in event counts can unduly distort the underlying cyclic model. A mild version of this can be seen in Figure 4a, where the event spike is clearly reflected in the model after the spike:

By taking a page out of the audio signal compression handbook, we can squeeze our data to better fit within a “standard of deviation”. There are many different ways to compress a signal, but the one illustrated here is arc-tangent compression that limits the signal to PI/2 times the limit factor (in this case, 4 times the standard deviation of the signal trend):

There is still a spike in the signal but its impact on the cyclic model is greatly reduced, so post-event comparisons are made against a more accurate baseline model.
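
Here is a small sketch of the arc-tangent compression described above (my own rendering; the function name and the idea of squeezing the deviation from the cyclic expectation are illustrative, while the 4-standard-deviation limit matches the text):

import math

def arctan_compress(deviation, limit):
    """Soft-limit a deviation so its magnitude never exceeds limit * pi/2."""
    if limit <= 0:
        return deviation        # nothing to scale against yet
    return limit * math.atan(deviation / limit)

# one possible way to apply it before updating the cyclic model sketched earlier:
# limit = 4.0 * cyclic.stddev(hour)
# squeezed = cyclic.expected(hour) + arctan_compress(count - cyclic.expected(hour), limit)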

Event Detection

Now we have a signal, a cyclic model of the signal, and some statistical information about the signal such as its standard deviation along the hourly trend or for each sub-cycle.

Throwing it all into a pot and mixing, we choose error boundaries that are a combination of the signal model and the sub-cycle standard deviation and note any point where the raw signal exceeds the expected bounds:

Event Floor

You may note in Figure 5 how the lower bound goes negative.  This is silly – you can’t have negative event counts. Also, for a weak signal where the lower bound tends to be negative, it would be hard to notice when the signal fails completely – as it’s still in bounds.

It can be useful to enforce an event floor so we can detect the failure of a weak signal. This floor is arbitrarily chosen to be 3% of the cyclic model (though it could also be a hard constant):
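
Putting those pieces together, here is a hedged sketch of the bound check with an event floor.  The 3% floor matches the value above, while the 4-standard-deviation band is only illustrative (the post says the bounds combine the cyclic model with the sub-cycle standard deviation, without fixing a multiplier):

def classify(count, expected, stddev, band=4.0, floor_fraction=0.03):
    """Label a sample as ok, spike, or drop relative to the modeled bounds."""
    upper = expected + band * stddev
    lower = max(expected - band * stddev, floor_fraction * expected)  # never below the floor
    if count > upper:
        return "spike"
    if count < lower:
        return "drop"
    return "ok"

# e.g. classify(actual_count, cyclic.expected(hour), cyclic.stddev(hour))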

The End

Playing with data is a ton of fun, so all of the Python code that generated these examples (plus a range of sample data) can be found in GitHub here:

https://github.com/bazaarvoice/event_monitor_test

References

For your reading pleasure:

[1] Automatic Anomaly Detection in the Cloud Via Statistical Learning

[2] STL: A Seasonal-Trend Decomposition Procedure Based on Loess

Telling Your Data To “Back Off!” (or How To Effectively Use Streams)

Foreword

Our Curations engineering team makes heavy use of serverless architecture. While this typically gives us the benefit of reduced costs, flexibility, and rapid development, it also requires us to ensure that our processes will run within the tight memory and lifecycle constraints of serverless instances.

In this article, I will describe an actual case where a scheduled job had started to fail, the discovery of the root cause and the refactor that resolved the issue. I will assume you have at least a rudimentary knowledge of Node JS and Amazon Web Services.

Content Export

Months previously, we built a scheduled job using AWS Lambdas that would kick off an export of our social content every 24 hours. In a nutshell, its purpose was to:

  1. Query for social content documents stored in a MongoDB server.
  2. Transform each document to a row in CSV format.
  3. Write out the CSV rows to a file in S3.

The resulting CSV files were meant for ingestion into an analytics tool to measure web site conversion impact.

It all worked fine for months. Then recently, this job began to fail.

Drowning in Content

For your viewing pleasure, here is an excerpted, condensed section of the main code:

function runExport(query) {
  MongoClient.connect(MONGODB_URL)
    .then((db) => {
      let exportCount = 0;
      db
        .collection('content')
        .find(query)
        .forEach(
          // iterator function to perform on each document
          (content) => {
            // transform content doc to a csv row, then push to S3 writer
            s3.write(csvTransform(content));
            exportCount++;
          },
          // done function, no more documents
          () => {
            db.close();
            s3.close();
            logger.info(`documents exported: ${exportCount}`);
          }
        );
    });
}

Each document from the database query is transformed into a CSV row, then pushed to our S3 writer which buffered all content until an s3.close(). At this point, the CSV data would be flushed out to the S3 destination bucket.

It was obvious from this implementation that heap usage would grow unbounded, as documents from the database would be loaded into resident memory as CSV data. As we aggregated content over the previous months, the data pushed into our export process began to generate frequent “out of heap memory” errors in our logs.

Memory Profile

To gain visibility into the memory usage, I wrote a quick and dirty MemProfiler class whose purpose was to sample memory usage using process.memoryUsage(), collecting the maximum heap used over the process lifetime.

class MemProfiler {
  constructor(collectInterval) {
    this.heapUsed = {
      min : null,
      max : null,
    };
    if (collectInterval) {
      this.start(collectInterval);
    }
  }

  start(collectInterval = 500) {
    this.interval = setInterval(() => this.sample(), collectInterval);
  }

  stop() {
    if (this.interval) {
      clearInterval(this.interval);
    }
  }

  reset() {
    this.stop();
    this.heapUsed = {
      min : null,
      max : null,
    };
  }

  sample() {
    const mem = process.memoryUsage();
    if (this.heapUsed.min === null || mem.heapUsed < this.heapUsed.min) {
      this.heapUsed.min = mem.heapUsed;
    }
    if (this.heapUsed.max === null || mem.heapUsed > this.heapUsed.max) {
      this.heapUsed.max = mem.heapUsed;
    }
  }
}

I then added the MemProfiler to the runExport code:

function runExport(query) {
  memProf.start();
  MongoClient.connect(MONGODB_URL)
    .then((db) => {
      let exportCount = 0;
      db
        .collection('content')
        .find(query)
        .forEach(
          (content) => {
            // transform content doc to a csv row, then push to S3 writer
            s3.write(csvTransform(content));
            exportCount++;
            memProf.sample();
          },
          () => {
            db.close();
            s3.close();
            logger.info(`documents exported: ${exportCount}`);
            console.log('heapUsed:');
            console.log(` max: ${memProf.heapUsed.max/1024/1024} MB`);
            console.log(` min: ${memProf.heapUsed.min/1024/1024} MB`);
          }
        );
    });
}

Running the export against a small set of 16,900 documents yielded this result:

documents exported: 16900
heapUsed:
max: 30.32764434814453 MB
min: 8.863059997558594 MB

Then running against a larger set of 34,000 documents:

documents exported: 34000
heapUsed:
max: 36.204505920410156 MB
min: 8.962921142578125 MB

Testing with additional documents increased the heap memory usage linearly as we expected.

Enter the Stream

The necessary code rewrite had one goal in mind: ensure the memory profile was constant, whether a total of one document was processed or a million. Since this was a batch process, we could trade off memory usage for processing time.

Let’s reiterate the primary operations:

  1. Fetch a document
  2. Transform the document to a CSV row
  3. Write the CSV row to S3

If I may present an analogy, we have widgets moving down the assembly line from one station to the next. The conveyor belt on which the widgets are transported is moving at a constant rate, which means the throughput at each station is constant. We could increase the speed of that conveyor belt, but only as fast as can be handled by the slowest station.

In Node JS, streams are the assembly line stations, and pipes are the conveyor belt. Let’s examine the three primary types of streams:

  1. Readable streams: This is typically an upstream data source that feeds data to a writable or transform stream. In our case, this is the MongoDB find query.
  2. Transform streams: This typically takes upstream data, performs a calculation or transformation operation on it and feeds it out downstream. In our case, this would take a MongoDB document and transform it to a CSV row.
  3. Writable streams: This is typically the terminating point of a data flow. In our case this is the writing of a CSV row to S3.

Readable Stream

The MongoDB Node JS driver’s find() function returns a Cursor object that extends the Node JS Readable stream. How convenient!

const streamContentReader = db.collection('content').find(query);

Transform Stream

All we need to do is extend the Transform stream class, and override the _transform() method to transform the content to a CSV row, push it out the other end, and signal upstream that it is ready for the next content.

class CsvTransformer extends Transform {
  constructor(options) {
    super(options);
  }

  _transform(content, encoding, callback) {
    // write row
    const { _id, authoredAt, client, channel, mediaType, permalink, text, textLanguage } = content;
    const csvRow = `${_id},${authoredAt},${client},${channel},${mediaType},${permalink},${text},${textLanguage}\n`;
    this.push(csvRow);
    // signal upstream that we are ready to take more data
    callback();
  }
}

Writable Stream

Let’s purposely mock the S3 Writer for now. It does nothing here, but realistically, we would buffer up incoming data, then flush out the buffer in chunks over the network for best throughput.

class S3Writer extends Writable {
  constructor(options) {
    super(options);
  }

  _write(data, encoding, callback) {
    // TODO: probably use a fixed size buffer
    // that we write to and then flush out to S3 using
    // multipart data upload
    callback();
  }
}

Pipe ‘Em Up

We now create the three streams:

const streamContentReader = db.collection('content').find(query);
const streamCsvTransformer = new CsvTransformer({ objectMode: true });
const streamS3Writer = new S3Writer();

Streams alone do nothing. What makes them useful is connecting the output from one stream to the input of another. In stream terminology, this is called piping, and is accomplished using the stream’s pipe method:

streamContentReader.pipe(streamCsvTransformer).pipe(streamS3Writer)

Although the native pipe method is perfectly functional, I highly recommend using the pump library. In addition to being more readable by passing in an array of streams to pipe together, we can also invoke then() when the pipe has finished, as well as handle close or error events emitted by the streams:

pump([ streamContentReader, streamCsvTransformer, streamS3Writer ])
  .then(() => {
    console.log(`documents exported: ${exportCount}`);
  })
  .catch((err) => console.error(err));

CSV and S3 Write Streams

I purposely did not implement the S3Writer above, because npm is a treasure trove of solutions! The s3-write-stream library will take care of buffering, using multipart upload and handling retries, so we don’t have to get our hands too dirty. And wouldn’t you know it, there is also the csv-write-stream library that will generate properly escaped CSV rows.

As an exercise, you the reader may want to try using the s3-write-stream and csv-write-stream. Trust me, once you get the hang of streaming, you will enjoy it!

Back Off!

Let’s touch back on the issue of memory usage.

What if a write stream becomes blocked for a period of time? Maybe we are writing data to a third-party service over HTTP and network congestion or service throttling is slowing the write rate. The readable stream will happily pump data in as fast as it can. Won’t that cause increased heap usage as all that data gets backed up and buffered?

The quick answer is “no”. In the implementation of the _write() method of your writable stream, when done with the data chunk (after writing to a file for example), call callback(). This signals to the incoming stream that it is ready to receive the next chunk. You can also provide a highWaterMark to the writable stream constructor to give a specific number of objects to buffer before pausing the incoming stream. This is all handled internally by the Node JS streams implementation. No more unbounded buffering!

For a deep dive into the concept of backpressure control, read this great article on Backpressuring in Streams.

Stream ‘Em If You Got ’em!

Our team is now in the process of utilizing Node JS streams whenever we read in volumes of data, process that data, then write out the results. We are now seeing great gains in both stability and maintainability!

References

Node JS Streams API: https://nodejs.org/api/stream.html

Article: Backpressuring in Streams: https://nodejs.org/en/docs/guides/backpressuring-in-streams/

MongoDB Cursor Stream: http://mongodb.github.io/node-mongodb-native/2.2/api/Cursor.html

Pump Module: https://www.npmjs.com/package/pump

CSV Write Stream Module: https://www.npmjs.com/package/csv-write-stream

S3 Write Stream Module: https://www.npmjs.com/package/s3-write-stream

 

Testing for Application Front End Performance with Web Page Test

taco-meter

‘Taco-meter’ – a device used to measure how quickly you can obtain tacos from your current location

If you’ve followed Bazaarvoice’s R&D blog, you’ve probably read some of our posts on web application performance testing with tools like Jmeter here and here.

In this post, we’ll continue our dive into web app performance, this time, focusing on testing front end applications.

API Response Time vs App Usability:

Application UI testing in general can be challenging, as the cost of doing so (in terms of time, labor and money) can become quite high.

In our previous posts highlighted above, we touched on testing RESTful APIs – which revolves around sending a set of requests to a target, measuring their response times and checking for errors.

For front-end testing, you could do the same – automate requesting the application’s front-end resources and then measuring their collective response times.  However, there are additional considerations for front end testing that we must account for – such as cross-browser issues, line speed constraints (i.e. broadband vs. a mobile 3G) as well as client side loading and execution (especially important for JavaScript).

Tools like Jmeter can address some of these concerns, though you may be limited in what can be measured.  Plugins for Jmeter that allow Webdriver API access exist but can be difficult to configure, especially in a CI environment (read through these threads from Stack Overflow on getting headless Webdriver to work in Jenkins for example).

There are 3rd party services available that are specialized for UI-focused load tests (e.g. Blazemeter) but these options can sometimes be expensive.  This may be too much for some teams in terms of time and money – especially if you’re looking for a solution for introductory, exploratory testing as opposed to something larger in scale.

Webpagetest.org:

Webpagetest.org is an open source project backed by multiple partners such as Google, Akamai and Fastly, among others.  You can check out the project’s github page here.

Webpagetest.org is a publicly available service.  Its main goal is to provide performance information for web applications, allowing designers and engineers to measure and improve application speed.

Let’s check out Webpagetest.org’s public facing page – note the URL form field.  We can try out the service’s basic reporting features by pasting the URL of a website or application into the field and submitting the form.  In this case, let’s test the performance of Bazaarvoice’s home page.

Before submitting the form however, note some of the additional options made available to us:

  • Browser type
  • Location
  • Number of tests
  • Connection
webpagetest.org form

Webpagetest.org’s initial submission form

This service allows us to configure our tests to simulate user requests from a slow, mobile, 3G device in the US to a desktop client with a fiber optic connection in Singapore (just to name a few).

Go ahead and submit the test form.  Once the test resolves, you’ll first notice the waterfall snapshot Webpagetest.org returns.  This is a reading of the application’s resources (images, JavaScript execution, external API calls, the works).  It should look very familiar if you have debugged a web site using a browser’s developer tools console.

webpagetest waterfall

Webpagetest.org’s waterfall report

This waterfall readout gives us a visual timeline of all the assets our web page must load for it to be active for the end user, in the order they are loaded, while recording the time taken to load and/or execute each element.

From here, we can determine which resources have the longest load times and the largest physical footprints, giving us a clue as to how we might better optimize the application or web page.

Webpagetest.org’s content breakdown chart

In addition to our waterfall chart, our test results provide many more data points, including but not limited to:

  • Content breakdown by asset type
  • CPU utilization
  • Bandwidth utilization
  • Screen shots and video capture of page load

Armed with this information from our test execution, you can begin to see where we might decide to make adjustments to our site to increase client-side performance.

Getting Programmatic:

Webpagetest.org’s web UI provides some great, easy-to-use features.  However, we can also automate these tests using Webpagetest.org’s API.

In our next couple of steps, we’ll demonstrate how to do just that using Jenkins, some command line tools and Webpagetest.org’s public API, which you can read more about here.

Before Going Forward:

There are a few limitations or considerations regarding using Webpagetest.org and its API:

  • All test results are publicly viewable
  • API usage requires you to submit an email address to generate an API key
  • API requests are limited to 200 test runs per day
  • The maximum number of concurrent test runs allowed is 9

If test result confidentiality or scalability is a major concern, you may want to skip to the end of this article where we discuss using private instances of Webpagetest.org’s service.

Requirements:

 As mentioned above, we need an API key from Webpagetest.org. Just visit https://www.webpagetest.org/getkey.php and submit the form made available there.  You’ll then receive an email containing your API key.

Once you have your API key, let’s set up a test using a CI service.  For this, like in our other test tutorials, we’ll be using Jenkins.

For this walkthrough, we’ll use the following:

  • Webpagetest’s public API
  • Jenkins
  • NodeJS (be sure your Jenkins instance has at least 1 server capable of using NodeJS 6 or above)
  • Webpagetest (a node module that provides a command-line wrapper for their API)
  • Jq-cli-wrapper (a node module that provides jq – a JSON query tool)
  • Wget-improved (a node implementation of the wget CLI tool – skip this if your Jenkins servers already have this binary installed).

Setting Up the Initial Job:

First, let’s get the Jenkins job up and running and make sure we can use it to reach our primary testing tool.

Do the following:

  • Log into Jenkins
  • Navigate to a project folder you wish your job to reside in
  • Click New Item
  • Give the job a name, select Freestyle Project from the menu and click OK

Now that the job is created, let’s set a few basic parameters:

  • Check ‘Discard old builds’
  • Set the Max # of Builds value to 100
  • Check the ‘Provide Node & npm bin/ folder to PATH’ option then choose your desired Node version (if this option is unavailable, see your system administrator).
jenkins-node

Specifying a NodeJs version for Jenkins to use

Next, let’s create a temporary build step to verify our job can reach Webpagetest.org

  • Click the ‘Add Build Step’ menu button
  • Select ‘Execute shell’
  • In the text box that appears, paste in the following Curl command:
    •  curl http://webpagetest.org -s -f
  • Save changes to your job and click the build option on the main job settings screen

Once the job has executed, click on the build icon and select the console output option.  Review the console output captured by Jenkins.  If you see HTML output captured at the end of the job, congratulations, your Jenkins instance can talk to Webpagetest.org (and its API).

Now we are ready to configure our actual job.

The Task at Hand:  

 The next steps will walk you through creating a test script.  The purpose of this script is as follows:

  • Install a few command line tools in our Jenkins instance when executed
  • Use these tools to execute a test using Webpagetest.org’s API
  • Determine when our test execution is complete
  • Collect and preserve our results in the following manner:
    • A HAR (HTTP Archive) file
    • The entire test result data in JSON format
    • A PNG of our application’s waterfall
  • Filter our results for a specific element’s statistic we wish to know about

Adding Tools:

Click on the configure link for your job and scroll to the shell execution window.  Remove your curl command and copy the following into the window:

npm install -g webpagetest
npm install -g jq-cli-wrapper
npm install -g wget-improved

If you know that your Jenkins environment comes deployed with wget already installed, you can skip the last npm install command above.

Once this portion of the script executes, we’ll now have a command line tool to execute tests against Webpagetest’s API.  This wrapper makes it somewhat easier to work with the API, reducing the amount of syntax we’ll need for this script.

Next in our script, we can declare this command that will kick off our test:

webpagetest test $SITE -k $APIKEY -r $RUNS > test_exec.json
testId=$(cat test_exec.json | jq .data.testId)
testId="${testId%\"}"
testId="${testId#\"}"
echo $testId

This script block calls the API, passing it some variables (which we’ll get to in a minute) then saves the results to a JSON file.

Next, we use the command line tools cat and jq to filter our response file, assigning the testId key value from that file to a shell variable.  We then use some string manipulation to get rid of a few extraneous quotation marks in our variable.

Finally, we print the testId variable’s contents to the Jenkins console log.  The testId variable in this case should now be set to a unique string ID assigned to our test case by the API.  All future API requests for this test will require this ID.

Adding Parameters:

Before we go further, let’s address the 3 variables we mentioned above.  Using shell variables in the confines of the Jenkins task will give us a bit more flexibility in how we can execute our tests.  To finish setting this part up, do the following:

  • In the job configuration screen, scroll to and check the ‘This build is parameterized’ option.
  • From the dropdown menu, select String Parameter
  • In the name field, enter SITE
  • For the default field, enter a URL you wish to test (e.g. http://www.bazaarvoice.com)
  • Repeat this process for the APIKEY and RUNS variables
  • Set the RUNS parameter to a value of 1
  • Set the default value for the APIKEY parameter to your Webpagetest.org API key.

Save changes to your job and try running it.  See if it passes.  If you view the console output from the run, there’s probably not much you’ll find useful.  We have more work to do.

site-parameter

Your site parameter should look like this

Your runs parameter should look like this

And your key parameter should look like this

Waiting in Line:

The next portion of our script is probably the trickiest part of this exercise.  When executing a test with the Webpagetest.org web form, you probably noticed that as your test run executes, your test is placed into a queue.

waiting

Cue the Jeopardy! theme…

Depending on current API usage, we won’t be able to tell how long we may be waiting in line for our test execution to finish.  Nor will we know exactly how long a given test will take.

To account for that, next we’ll have to modify our script to poll the API for our test’s status until our test is complete.  Luckily, Webpagetest.org’s API provides a ‘status’ endpoint we can utilize for this.

Add this to your script:

webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)

Using our API wrapper for webpage test and jq, we’re posting a request to the API’s status endpoint, with our test ID variable passed as an argument.  The response then is saved to the file, status.json.

Next, we use cat and jq to filter the content of status.json, returning the contents of the ‘testsCompleted’ (which is a child of the ‘data’ node) within that file.  This gets assigned to a new variable – testStatus.

Still Waiting in Line:

 

long line

Just like going to the DMV

The response returned when polling the status endpoint contains a key called ‘status’ with possible values set to ‘In progress’ or ‘Completed’.  This seems like the perfect value to check to monitor our test status.  However, in practice, this key-value pair can be set prematurely – returning a value of ‘Completed’ when not every run within our test set has necessarily finished.  This can result in our script attempting to retrieve an unfinished test result – which would be partial or empty.

After some trial and error, it turns out that reading the ‘testsCompleted’ key from the status response is a more accurate read as to the status of our tests.  This key contains an integer value equal to the number of test runs you specified that have completed execution.

Add this to your script:

while [ "$testStatus" != "$RUNS" ]
do
sleep 10
webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)
done

This loop compares the number of tests completed with the number of runs we specified via our Jenkins string variable, $RUNS.

While those two variables are not equal, we request our test’s status, overwrite the status.json file with our new response, filter that file again with jq and assign our updated ‘testsCompleted’ value to our shell variable.

Note that the 10 second sleep command within the loop is optional.  We added it so as not to overload the API with requests for our test’s status every second.

Once the condition is met, we know that the number of test runs we requested should be the same number completed. We can now move on.

Getting The Big Report:

Once we have passed our while loop, we can assume that our test has completed.  Now we simply need to retrieve our results:

webpagetest har $testId -o results.har

webpagetest results $testId -o results.json

Webpagetest’s API provides multiple options to retrieve result information for any given test.  In this case, we are asking the API to give us the test results via HAR format.

Note – if you wish to deliver your test results in HAR format, you will need a 3rd party application or service to view .har files.  There are several options available including stand-alone applications such as Charles Proxy and Har Viewer.

Additionally, we are going to retrieve the test results from our run using the API’s results endpoint and assign this to a JSON file – results.json.

Now would be a great time to save the changes we’ve made to your test.

Getting the Big Picture:

Next, we want to retrieve the web application’s waterfall image – similar to what was returned when executing our test via the Webpagetest.org web UI.

To do so, add this to your script:

waterfallImage=$(cat results.json | jq '.data.runs."1".firstView.images.waterfall')
waterfallImage="${waterfallImage%\"}"
waterfallImage="${waterfallImage#\"}"
nwget $waterfallImage -O waterfall.png

Since we’ve already retrieved our test results and saved them to the results.json file, we simply need to filter this file for a key-value pair that contains the URL for where our waterfall snapshot resides online.

In the results.json file, the main parent that contains all test run information is the ‘data’ node.  Within that node, our results are divided up amongst each test run.  Here we will further filter our results.json object based on the 1st test run.

Within that test run’s ‘firstView’ node, we have an images node that contains our waterfall image.

We then assign the value of that node to another shell variable (and then use some string manipulation to trim off some unnecessary leading and ending quotation characters).

Finally, as we installed the nwget module at the beginning of this script, we invoke it, passing the URL of our waterfall image as an argument and the option to output the result to a PNG file.

Archiving Results:

Upon each execution, we wish to save our results files so that we can build a collection of historical data as we continue to build and test our application.  Doing this within Jenkins is easy.

Just click on the ‘Add Post Build Action’ button within the Jenkins job configuration menu and select ‘Archive the artifacts’ from the menu.

In the text field provided for this new step, enter ‘*.har, results.json, waterfall.png’ and click save.

Now, when you run your test job – once the script succeeds, it will save each instance of our retrieved HAR, waterfall image and results.json file to each respective Jenkins run.

Once you save your job configuration, click the ‘Build with Parameters’ button and try running it.  Your artifacts should be appended to each job run once it completes.

results

Collecting your results in Jenkins

Like a Key-Value Pair in a JSON Haystack:

Next, we’re going to talk about result filtering.  The results.json provided initially by the API is exhaustive in size.  Go ahead and scroll through that file saved from your last successful test run (you will probably be scrolling for a while).  There’s a mountain of data there but let’s see if we can fish some information out of the pile.

The next JSON-relative trick we’re going to show you is how you can filter the results.json from Webpagetest.org down to a specific object you wish to measure statistics for.

The structure of the results.json lists every web element (from .js libraries to images, to .css files) called during the test and enumerates some statistics about it (e.g. time to load, size, etc.).  What if your goal is to monitor and measure statistics about a very specific element within your application (say, a custom .js library you ship with your app)?

needle-in-a-haystack

That’s one way of finding it…

For that purpose, we’ll use cat and jq again to filter our results down to one specific file/element’s results.  Try adding something like this to your script:

cat results.json | jq '.data.runs."1".firstView.requests[] | select(.url | contains(" <a unique identifying string for your web element> "))' > stats.json

This is similar to how we filtered our results to obtain our waterfall image above.  In this case, each test result from Webpagetest.org contains a JSON array called ‘requests’.  Each item in the array is delineated by the URL for each relative web element.

Our script command above parses the contents of results.json, pulling the ‘requests’ array out of the initial test run, then filtering that array based on the ‘url’ key, provided that key matches the string we provided to jq’s ‘contains’ method.

These filtered results are then output to the stats.json file.  This file will contain the specific test result statistics for your specific web element.

Add stats.json to the list of artifacts you wish to archive per test run.  Now save and run your test again.  Note, you may need to experiment with the arguments passed to JQ’s contains method to filter your results based on your specific needs.

Next Steps:

At this point, we should have a Jenkins job that contains a script that will allow us to execute tests against Webpagetest.org’s public API and retrieve and archive test results in a set of files and formats.

This can be handy in and of itself, but what if you have team or organization members who need access to some or all of this data but do not have access to Jenkins (for security purposes or otherwise)?

Some options to expose this data to other teams could be:

  • Sync your archived elements to a publicly available bucket in Amazon S3
  • Have a 3rd party monitoring service (Datadog, Sumo Logic, etc.) read and report test findings
  • Send out email notifications if some filtered stat within our results exceeds some threshold

There’s quite a few ideas to expand upon for this kind of testing and reporting.  We’ll cover some of these in a future blog post so stay tuned for that.

Private Testing and Wrap-Up:

If you find this method of performance testing useful but feel limited by Webpagetest.org’s restrictions on its public API (or by the fact that all test results are made public), you can host your own, private instance of the free, open-source API for something more robust or confidential (see Webpagetest.org’s github project for documentation).

Webpagetest.org also has pre-built versions of its app available as Amazon EC2 AMIs for use by anyone with AWS access.  You can find more information on their free AMIs here.

Finally, here’s the script we’ve put together throughout this post in its entirety:


npm install -g webpagetest
npm install -g jq-cli-wrapper
npm install -g wget-improved

webpagetest test $SITE -k $APIKEY -r $RUNS > test_exec.json
testId=$(cat test_exec.json | jq .data.testId)
testId="${testId%\"}"
testId="${testId#\"}"
echo $testId

webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)

while [ "$testStatus" != "$RUNS" ]
do
sleep 10
webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)
done

webpagetest har $testId -o results.har

webpagetest results $testId -o results.json

waterfallImage=$(cat results.json | jq '.data.runs."1".firstView.images.waterfall')
waterfallImage="${waterfallImage%\"}"
waterfallImage="${waterfallImage#\"}"
nwget $waterfallImage -O waterfall.png

cat results.json | jq '.data.runs."1".firstView.requests[] | select(.url | contains(" <a unique identifying string for your web element> "))' > stats.json

We hope this article has been helpful in demonstrating how some free tools and services can be used to quickly stand up performance testing for your web applications.  Please check back with our R&D blog soon for future articles on this and other performance related topics.

 

 

Maintaining Test Data with the “someObject” Test Structure

Language: Scala
TestTool: Scalatest

How did we get here?

When systems become reasonably complex, tests must manage cumbersome amounts of data. A test case that may test a small bit of functionality may start to require large amounts of domain knowledge about the system being tested. This is often done through the mock data used to set up the test. Maintenance of this data becomes cumbersome, monotonous and can feel Sisyphean. To solve these problems we created “someObject”, a modular system that allows us to maintain data in only one location while providing the flexibility to create specific data for our tests.

Let’s Do A Code Time!

To start this post, we’re going to build a system without the “someObject” test structure to provide context for its use. (To skip to the “someObject” structure, jump to here!). Suppose we are building a service that reports on advertising campaigns. We may create a class that describes an advertising campaign and call it `Campaign`.

case class Campaign(id: String,
    name: String,
    client: String,
    startDate: UTCDate,
    endDate:UTCDate,
    deleted: Boolean)

Now we are going to store this campaign in a database, and we need to write some integration tests to make sure this operation is performed properly. A test that confirms a campaign is stored properly might look like this:

it should "properly store a single campaign" in {
    Given("we have a proper campaign")
    val campaignToStore = Campaign(
        id = "someId",
        name = "someName",
        client = "someClient",
        startDate = UTCDate(1955, 10, 6),
        endDate = UTCDate(1956, 10, 6),
        deleted = false)

    And("we store this campaign in the database")
    database.storeCampaign(campaignToStore)

    When("we fetch the given campaign")
    val fetchedCampaign:Campaign = database.fetchCampaign(campaignToStore.id)

    Then("All the campaign values were stored properly")
    fetchedCampaign.id shouldBe campaignToStore.id
    fetchedCampaign.name shouldBe campaignToStore.name
    fetchedCampaign.client shouldBe campaignToStore.client
    fetchedCampaign.startDate shouldBe campaignToStore.startDate
    fetchedCampaign.endDate shouldBe campaignToStore.endDate
    fetchedCampaign.deleted shouldBe campaignToStore.deleted
}

BlogIntro.scala

Add some functionality:

Now we add the ability to update some values for this campaign in the database, and we need to test this new piece of functionality. That test might look something like this:

it should "properly update a single campaign" in {
    Given("we have a proper campaign")
    val campaignToStore = Campaign(
        id = "someId",
        name = "someName",
        client = "someClient",
        startDate = UTCDate(1955, 10, 6),
        endDate = UTCDate(1956, 10, 6),
        deleted = false)

    And("some update parameters for our campaign")
    val updateParameters = UpdateParameters(name = Some("someNewName"))

    And("we store this campaign in the database")
    database.storeCampaign(campaignToStore)

    When("we update this campaign")
    database.updateCampaign("someId", updateParameters)

    val fetchedCampaign:Campaign = database.fetchCampaign("someId")

    Then("All the campaign values were stored properly")
    //Unchanged values
    fetchedCampaign.id shouldBe campaignToStore.id
    fetchedCampaign.client shouldBe campaignToStore.client
    fetchedCampaign.startDate shouldBe campaignToStore.startDate
    fetchedCampaign.endDate shouldBe campaignToStore.endDate
    fetchedCampaign.deleted shouldBe campaignToStore.deleted
    //Changed values
    fetchedCampaign.name shouldBe updateParameters.name.get
}

BlogUpdateFunction.scala

But here we see the duplication of test boilerplate code in `campaignToStore`. We don’t want to have to copy over `campaignToStore` into every test, so we might want to abstract that out to be used all over the suite.

object MyCampaignTestObjects {
    val campaignToStore = Campaign(
        id = "someId",
        name = "someName",
        client = "someClient",
        startDate = UTCDate(1955, 10, 6),
        endDate = UTCDate(1956, 10, 6),
        deleted = false)
}

BlogAbstractToSuite.scala

Now we can use the same data in every test!

Add a Test that requires unique data:

Suppose we now write a function to fetch all the campaigns that are stored in the database. We might need a test that involves uniqueness in the data we store, such as the following example:

it should "properly fetch all stored campaigns" in {
    Given("we store several unique campaigns")
    val anotherCampaignToStore = Campaign(
        id = "someSecondId",
        name = "someSecondName",
        client = "someSecondClient",
        startDate = UTCDate(1955, 10, 6),
        endDate = UTCDate(1956, 10, 6),
        deleted = false)
    database.storeCampaign(campaignToStore)
    database.storeCampaign(anotherCampaignToStore)

    When("we fetch all campaigns")
    val allCampaigns = database.fetchAllCampaigns()

    Then("all campaigns are returned")
    allCampaigns shouldBe List(campaignToStore, anotherCampaignToStore)
}

BlogTestWithUniqueTestData.scala

In the example, we re-used the campaign we abstracted out earlier for conciseness, but this makes this test unclear that `anotherCampaignToStore` is unique. What if someone else comes in and changes `campaignToStore` and it happens to match data from `anotherCampaignToStore`? This test would then become flakey and nobody likes flakey tests. We might decide to just make all data used in this test local to this test, but then we will need to maintain the test data in both this test, and `MyCampaignTestObjects`.

Add Some Arbitrary Data Constraints:

Suppose now that there is a new design constraint on how campaigns can be stored in the database. Now all client names must be lowercased in all campaigns:

object MyCampaignTestObjects {
    val campaignToStore = Campaign(
    id = "someId",
    name = "someName",
    //We change the client name to match our new requirement
    client = "some_client",
    startDate = UTCDate(1955, 10, 6),
    endDate = UTCDate(1956, 10, 6),
    deleted = false)
}

BlogNewConstraint.scala

Now we start to see the issue with maintaining test data across the whole suite that we’ve been constructing. We need to find every mock campaign that is used in our suite and ensure that its client field data is lowercased. Realistically, many of our tests (specifically in this example, the `fetchAllCampaigns` test) don’t care about the client field of their campaign, and so we shouldn’t need to care about the client field value while setting up our mock test data. Because this example is small, it’s not cumbersome to directly update the value to satisfy the new constraint. Now let us imagine a large set of suites, each containing hundreds of unique test cases. Suddenly, refactoring one field requires a large amount of work across every test case. Nobody wants to do that monotonous maintenance. To address this, our team adopted the “someObject” structure to minimize this data maintenance within our tests.

someObject Test Structure:

When designing this test structure, we wanted to make our test data extendable for use anywhere it is needed. We used Scala’s `trait` to mix in necessary functions to provide test objects to the objects inside our tests, such as the `MyCampaignTestObjects` object above:

trait CampaignTestObjects {
    def someCampaign(id: String = "someId",
                     name: String = "someName",
                     client: String = "some_client",
                     startDate: UTCDate = UTCDate(1955, 10, 6),
                     endDate: UTCDate = UTCDate(1956, 10, 6),
                     deleted: Boolean = false): Campaign =
        Campaign(
            id = id,
            name = name,
            client = client,
            startDate = startDate,
            endDate = endDate,
            deleted = deleted)
}
 object MyCampaignTestObjects extends CampaignTestObjects {
    //Any other setup methods for test data
}

Now we can revisit our `fetchAllCampaigns` test example.

it should "properly fetch all stored campaigns" in {
    Given("we store several unique campaigns")
    val campaignToStore = someCampaign(id = "someId")
    val anotherCampaignToStore = someCampaign(id = "someNewId")
    database.storeCampaign(campaignToStore)
    database.storeCampaign(anotherCampaignToStore)

    When("we fetch all campaigns")
    val allCampaigns = database.fetchAllCampaigns()

    Then("all campaigns are returned")
    allCampaigns shouldBe List(campaignToStore, anotherCampaignToStore)
}

BlogBasicSomeObject.scala

Inside this test, we’ve set up two unique campaigns by calling the `someCampaign` method from our test data trait. This populates the returned campaign with dummy data that we don’t care about. All we need out of this method is “some campaign” with “some data”. Now, instead of obscuring the intent of the test case by setting up cumbersome, overly-expressive mock data, we can simply override the implicitly available mock objects with only the necessary data. For the unique campaigns needed in the `fetchAllCampaigns` test, we only really care about each campaign’s identifier. We don’t update the name, client, startDate, etc. because this test doesn’t care about any of that data. We only need to care that the campaigns are unique for our database. Under this test structure, when we receive the design change about the client names being lowercased, we don’t need to update our `fetchAllCampaigns` test.

Another Example:

Let’s provide another example that our team encountered. Campaigns inside our database now need to also store the amount of money spent on each ad campaign. We’re adding a new column to our database, and changing the database schema.

case class Campaign(id: String,
    name: String,
    client: String,
    totalAdSpend: Int,
    startDate: UTCDate,
    endDate: UTCDate,
    deleted: Boolean)

Now, every test that has a campaign involved needs to be updated to include a new field; but under the “someObject” structure we only need to add two lines and all existing tests should be working fine again:

trait CampaignTestObjects {

    def someCampaign(id: String = "someId",
                     name: String = "someName",
                     client: String = "some_client",
                     totalAdSpend: Int = 123456,
                     startDate: UTCDate = UTCDate(1955, 10, 6),
                     endDate: UTCDate = UTCDate(1956, 10, 6),
                     deleted: Boolean = false): Campaign =
        Campaign(
            id = id,
            name = name,
            client = client,
            totalAdSpend = totalAdSpend,
            startDate = startDate,
            endDate = endDate,
            deleted = deleted)
}

BlogPostSchemaChange.scala

 

Behavior Driven Tests:

The purpose of the “someObject” structure is to minimize data maintenance within tests. We want to ensure that we’re disciplined about only setting data that the tests need to care about. There are cases where data might seem necessary for what we are testing, but we can abstract that data away to de-couple the test’s reliance on hard coded values. For example, suppose we have a function that returns the sum of all the `totalAdSpend` across our database.

def sumAllSpend(campaignsToSum: List[Campaign]): Int =
    campaignsToSum.map(_.totalAdSpend).sum

To test this function, we might write a test like this:

it should "properly sum all totalAdSpend" in {
    Given("we store several unique campaigns")
    val campaignToStore = someCampaign(id = "someId", totalAdSpend = 123)
    val anotherCampaignToStore = someCampaign(id = "someNewId", totalAdSpend = 456)
    database.storeCampaign(campaignToStore)
    database.storeCampaign(anotherCampaignToStore)

    When("we sum all ad spend")
    val totalTotalAdSpend = sumAllSpend(database.fetchAllCampaigns())

    Then("the result is the sum of all ad spend")
    totalTotalAdSpend shouldBe 579
}

BlogPostNonBehaviorTest.scala

While this test does work, and it utilizes this “someObject” structure, it still forces data management at the test level.

`sumAllSpend` doesn’t really care about any one campaign’s `totalAdSpend` value. It only cares that we add all of the `totalAdSpend` values up correctly. We could instead write our test to assert on this behavior instead of doing the math ourselves and taking on the responsibility of managing more data.

it should "properly sum all totalAdSpend" in {
   Given("we store several unique campaigns")
   val campaignToStore = someCampaign(id = "someId")
   val anotherCampaignToStore = someCampaign(id = "someNewId")

   val allCampaignsToStore = List(campaignToStore,anotherCampaignToStore)
   allCampaignsToStore.foreach(campaign => database.storeCampaign(campaign))

   When("we sum all ad spend")
   val totalTotalAdSpend = sumAllSpend(database.fetchAllCampaigns())

   Then("the result is the sum of all ad spend")
   totalTotalAdSpend shouldBe allCampaignsToStore.map(_.totalAdSpend).sum
}

BlogPostBehaviorTest.scala

With this test, we don’t care what each campaign’s ad spend was, we don’t care how many campaigns are stored, and we don’t care about any constant value.  This test simply verifies that the result is the sum of the `totalAdSpend` values of whatever campaigns we store in the database.

Conclusion:

In this introductory blog post, we explored the someObject testing structure in Scala, but this concept is not intended to be language specific. Scala makes this concept easy to implement through the use of Default Parameter Values, but in a future post I’ll show how it can be implemented through the Builder Pattern in a language like Java. Another unexplored “someObject” concept is the granularity of control in setting default data. This post introduces the “global” and test-specific setting of default data, but doesn’t explore how to set test suite level data for our test objects, and the cases in which that might be useful. I’ll discuss that in a future post as well.

Code snippets:

BlogIntro.scala
BlogUpdateFunction.scala
BlogAbstractToSuite.scala
BlogTestWithUniqueTestData.scala
BlogNewConstraint.scala
BlogBasicSomeObject.scala
BlogPostSchemaChange.scala
BlogPostNonBehaviorTest.scala
BlogPostBehaviorTest.scala

Publishing Load Test Results with Jmeter, Jenkins and S3

A while ago, I published a blog post that presented a tutorial overview of how to use Jmeter for load testing a typical RESTful API.

This post builds upon that original post with handy information on some updated reporting features of Jmeter as well a quick dive into how you can better propagate your load results in a continuous build environment (ala Jenkins).

If you’re completely new to Jmeter, please read my previous blog post, linked above, before proceeding.

For those of you experienced with Jmeter (hopefully you already have some tests you’re wanting to deploy somewhere rather than let them live contentedly on your local machine), get ready, because you’re gonna love this.

It’s truly amazing the gifs you can find on the web.

Meet the New Jmeter, Same as the Old Jmeter (not really)

The previous load test tutorial covered setting up a load test from scratch using an early version of Jmeter.  One of the benefits of using Jmeter, as outlined in the original post, was that it has been around for quite a long time, is well known, stable and free.  However, one of the major cons of using Jmeter is that it’s, well, old – and has gone for a long while without major updates – especially when it comes to report generation.

Well, welcome to Jmeter 3!  It still comes with that same, great, vintage 90s UI but now with a horde of bug fixes, stability upgrades and – wait for it – native HTML report generation, wooo!  Check out version 3’s documentation here:

Installation and Setup

So, let’s get down to it!  You can find the latest build of Jmeter 3 here: https://jmeter.apache.org/download_jmeter.cgi

Steps to setting up Jmeter 3 locally are similar to the installation steps covered in my original blog post.  Pre-installation requirements are the same:

  • Java JDK (version 7 or later)
  • Be sure your JAVA_HOME environment variable and path are set before execution.

Functionally, Jmeter 3 appears identical to older versions of the application but there are a lot of changes under the hood.  I recommend unarchiving and installing Jmeter 3 in its own directory, separate from any previous version of Jmeter you may have previously installed.

Migrating your tests

If you’re following along with this tutorial and wish to create a new load test from scratch, please follow the test setup steps in the original post’s tutorial however, be sure to SKIP the 3rd party report and listener portion of the test setup (guess what, you won’t be needing those plugins anymore).

If you have an existing test you wish to simply fire up in Jmeter 3, your mileage will vary in this case.  Some of the 3rd party listeners and other tools you may have used in your setup may be incompatible with the newer version of Jmeter.  If you have any custom listeners embedded in your test I recommend doing the following:

  • Open your original test in your older version of Jmeter
  • Remove 3rd party listeners, samplers or elements from your test (right-click and select remove)
  • Save the updated test as a new test
  • Open the newly created test in Jmeter 3

From this point, you won’t really need any of your original report listeners configured in your test; however, for debugging purposes, I do recommend keeping simple response listeners in your test – one for successful API responses and a separate one for failed responses – just to help with debugging any issues your test might run into.

In a few cases, I’ve found that, despite removing any 3rd party plugins, some tests still have compatibility issues with Jmeter 3.  Sadly, the easiest course of action is to rebuild the test from scratch.  Such is life (sigh).

Such is life…

Executing Tests with Report Generation

Given you have your initial test configured for Jmeter 3, let’s start up the app and view it in the new UI (though it looks like the old UI).  Once you start Jmeter from the command line, you’ll see a message like this in your terminal:

================================================================================
Don't use GUI mode for load testing, only for Test creation and Test debugging !
For load testing, use NON GUI Mode & adapt Java Heap to your test requirements
================================================================================

You don’t say!?  If you’re a long-time user of earlier versions of Jmeter, you’re probably thinking, “Well, no duh!”, right?

Open your load test in the UI and give it a look-over to make sure it’s configured the way you intend.  It should be minimal: a simple thread group, your HTTP samplers, your results tree report and any CSV data resources you might need.

Your test may look something like this

Next, save your test and close Jmeter (we’re now done with the Jmeter UI).

Now, we are going to use Jmeter’s command line interface to execute the load test and generate the report for the test.

Do the following:

Open your CLI terminal and navigate to your Jmeter 3 bin directory.  Execute the following command:

jmeter -n -t <test JMX file> -l <test log file> -e -o <Path to output folder>

Be sure to set the ‘test JMX file’ argument to the path where your load test’s .JMX file resides, relative to the Jmeter bin directory.

The test log file should be an arbitrary log file path/name that currently does not exist.

The output folder path is where your report will be written to.  This must be an empty directory.

A couple of notes:

If you configured your test to run for a set period, just sit back and wait.  The report will be generated in the output directory you provided once the test execution finishes.

If you configured it to run in perpetuity – you will need to formally stop the test before report generation will execute (simply killing your terminal session won’t help you here).

To stop your Jmeter test from the command line, open a new terminal window and use one of the following:

Control + . – This stops Jmeter immediately, halting any active listeners.

Control + , – This sends a shutdown command to Jmeter – it will wait until all active threads have finished their current tasks before closing up shop (this is the recommended method to use, unless you’re impatient or there is an emergency – like, oh, you forgot you pointed your test at production).
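
As an alternative (an addition on my part, not something covered above), Jmeter also ships with small helper scripts in its bin directory that signal a running non-GUI test to stop.  A sketch, assuming a Unix-like shell and that you’re in the bin directory of the Jmeter 3 instance running the test:

# Graceful shutdown – lets active threads finish their current work
./shutdown.sh

# Immediate stop
./stoptest.sh

On Windows, the equivalents are shutdown.cmd and stoptest.cmd.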

Once your test has gracefully exited, navigate to the output folder you specified and open your newly minted report in your web browser of choice.

Jmeter 3 HTML report

Pretty sweet huh?

If you haven’t seen Jmeter’s dashboard report before, you’re probably thinking, “golly, there’s a lot of stuff here” – and there is.  The dashboard report contains a pretty wide variety of data on your test execution.  Enumerating all of it here would be exhausting; if you’re interested in all the gory details of Jmeter’s dashboard metrics, check out this in-depth explanation:

https://jmeter.apache.org/usermanual/generating-dashboard.html

Show and Tell (mostly show)

So now you have this fancy load test report, but hoarding all this metric goodness on your local development workstation is no fun for the rest of your team.  That’s OK – if you have access to a continuous integration build environment (a la Jenkins), the rest of this blog post will walk you through taking your Jmeter test and throwing it into a Jenkins build, which allows you to not only run your test remotely but also save and propagate your results to the rest of your agile team.

Make your test available

First things first, you’re going to need to take your Jmeter test, get it off your local machine and into some form of web-accessible delivery.  If you have AWS access, you could place your .JMX files and resources into a bucket and be done with it.  In our case, I recommend creating a private Github repository for your tests and committing your Jmeter working directory, tests and resource files to that repository (a quick sketch of this follows the list below).  There are a few reasons for this:

  • It’s easy to access a given Github repository with Jenkins
  • You can manage updates and versions of your tests just like source code
  • You can manage variants of your load tests using Github’s branch management features
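
If the tests aren’t in Git yet, getting them there is ordinary Git work.  A minimal sketch, assuming you’ve already created an empty private repository on Github – the repository path (my-org/load-tests) and file names are hypothetical placeholders:

# From the root of your Jmeter working directory
git init
git add loadtest.jmx testdata.csv
git commit -m "Add Jmeter load test and CSV test data"
git remote add origin https://github.com/my-org/load-tests.git
git push -u origin master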

That said, before tossing your tests into Github’s gaping maw, here’s an important note regarding information security:

Depending on your type of load test, you may have some API or other service access keys stored in your test.  You don’t want to check API or other access keys into your repository (even if it is marked private).  But don’t just take my word for it:

https://softwareengineering.stackexchange.com/questions/205606/strategy-for-keeping-secret-info-such-as-api-keys-out-of-source-control

If you’ve already committed your tests to your repository, still have some software keys in there and are thinking, “Oh boy… What now!?”, here’s a link to some notes on how to extract sensitive values from your test or code using a few handy features of Git:

https://gist.github.com/derzorngottes/3b57edc1f996dddcab25

Getting Up and Running with Jenkins

Jenkins is one of the most widely available CI build tools in use today.  As such, we’re going to cover how to set up a Jenkins task to run your load tests.  If you’re using some other build system (e.g. TeamCity), this walk-through won’t be as pertinent, but essentially the tasks will be the same.

Step 1: Get access to Jenkins:

Assuming you already have a Jenkins build server of some sort available to your organization, let’s log into it.  If you don’t have access to Jenkins, talk to your friendly neighborhood dev ops engineer.  Note that for this walkthrough, you’re going to need some level of administrator access to set up everything we need to execute our tests.

Step 2: Create and Setup your Job with Parameters:

Do the following:

  • From the Jenkins home page, choose a directory you wish to run your load test jobs from (or create one).
  • Click the “New Item” link and then choose “Freestyle Project”
  • Give your project a name and description
  • Choose the “This project is parameterized” option – we’ll use parameters to allow you to configure your load test if needed.
  • Select the “New string parameter” option and create a new parameter named “thread”
  • Set the default value to whatever you want your test’s default number of threads to be (e.g. 40).
  • Create another string parameter and name it “ramp” – set its default value to the number of seconds you wish your test to take to ramp up to its maximum load.

If your test is going to loop, create another string parameter named “loops” and set its default numeric value.

If you have an API key (or keys) you need for your test to execute, as mentioned above, and have already removed said keys from your test (substituting a placeholder value), you can create additional string parameters here (naming them key1, key2, etc.) and put your API key(s) in the default value field.  Granted, you are still propagating your API key here, but because your Jenkins instance should be private and protected, this is a much better practice in terms of security than leaving keys in a potentially publicly accessible repository.  (The Execute shell sketch later in this post shows one way to pass these parameter values through to your Jmeter test.)

Step 3: Git Jenkins to Talk to Git:

While still in the Jenkins job configuration screen, under the Source Code Management section, select “Git” and enter your repository’s URL alias into the Repository URL field.

If this is your first time doing this, Jenkins expects a specific format for your Github URL strings.  It should look something like this:

https://github.com/<your organization or user name>/<your project name>

If you’re using a publicly accessible Github repository, you can skip the credentials step below; just enter your preferred branch name (or */master if you don’t need a specific branch).

Next, select the appropriate credentials option from the relevant drop-down menu to allow Jenkins to properly introduce itself to your Github repo (if you’re having trouble with this, ask your dev ops engineer or Jenkins server admin for help – and maybe do something nice for them while you’re at it, like a thank you card or some cookies, because at this point you’d be out of luck without them, right?).

Step 4: Build Settings:

You can kick off your tests manually; however, it’s probably better to set up a timer or another method to handle this for you.  Try the following:

Under the Build Triggers heading, check the Build periodically option

Enter the time range you wish the build to trigger at (this will be in Cron notation – which is tricky if you’re seeing it for the first time).

This article does a decent job explaining Cron’s formatting: http://www.nncron.ru/help/EN/working/cron-format.htm
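
As one concrete illustration (a hypothetical schedule, not something this walkthrough requires), a “Build periodically” entry for a nightly weekday run could look like this – Jenkins reads the five fields as minute, hour, day of month, month and day of week, and its H token spreads out the exact start minute so that many jobs don’t all fire at once:

# Run once per night, around 2 AM, Monday through Friday
H 2 * * 1-5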

Now, click on the Add build step option and from the dropdown menu, select ‘Execute shell’.

Here, we will define the command we will use to execute the Jmeter test once Jenkins pulls the test resources from Github.  This will be similar to the shell command we used to run Jmeter locally as described above.
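
Since the original screenshot of this step isn’t reproduced here, treat the following as a sketch under a few assumptions rather than a drop-in command: it assumes the repository Jenkins checks out contains your Jmeter working directory (as recommended earlier), that the test file at the repository root is named loadtest.jmx (a placeholder), and that your test plan reads its thread, ramp-up and loop counts through Jmeter property functions such as ${__P(threads,10)} in the Thread Group fields – the property names on the -J flags below are likewise placeholders and just need to match whatever your test plan references.  The $thread, $ramp, $loops and $key1 variables are the build parameters created in Step 2, which Jenkins exposes to the shell step as environment variables:

# Clear out any report left over from a previous run (-e -o needs an empty or non-existent folder)
rm -rf bin/target

# Run the test headless, passing the Jenkins build parameters through as Jmeter properties
bin/jmeter -n -t loadtest.jmx -l results.jtl \
  -Jthreads=$thread -Jrampup=$ramp -Jloops=$loops -Jkey1=$key1 \
  -e -o bin/target

The bin/target output folder here is the same directory the AWS sync step later in this post changes into.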

Syncing to Amazon Web Services

Now we need to make sure we’re saving our generated report somewhere the rest of our team can review it.  Jenkins has a Jmeter performance plugin that can archive our results.  Alternatively, we could just have Jenkins archive the output directory Jmeter generates the report in (though handling all the recursive directories it builds might be a pain).

Given you have an AWS account, you have the option to sync your reports to an AWS bucket and host them like a static web page.  This method is beneficial if you need to expose your load test results to team members or other parties who do not (or should not) have access to your build systems (product managers, support team members, etc.).

If you don’t have access to AWS, aren’t sure you have access to AWS or may have issues with your build server communicating to AWS, once again, now is a fine time to make friends with (*cough* bribe *cough*) your friendly neighborhood dev ops engineer.

Let’s say you’ve taken care of that and you and your dev ops guru are best buddies now.  Next, you’ll need to further configure your Jenkins job by adding another Execute shell build step.  Make sure this one is ordered sequentially after your Jmeter test execution step.

In the Execute shell step, simply add a command to change to the output directory your report will be in (defined in your test execution command) and use the AWS sync command to send your HTML report to AWS:

cd bin/target
aws s3 sync . s3://mybucketroot/mybucketname

Save your Jenkins job.  You’re done (with Jenkins at least).

Now, in the above step, we are assuming you have already created a bucket in AWS that you can sync your data to.  You’re also going to need access to modify the bucket’s permissions.
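
One more permissions-related note (this is an assumption about your setup, not a requirement from the walkthrough above): the uploaded report files need to be readable by whoever opens the link.  If your bucket policy doesn’t already allow public reads, one option is to have the sync command mark the uploaded objects as publicly readable:

aws s3 sync . s3://mybucketroot/mybucketname --acl public-read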

Making Your Test Results Readable from the Web:

By default, your AWS bucket is out there on the web – you can read from it and write to it – but your other team members will have trouble reading the HTML report from that location.  We need to configure the bucket so that it will display its contents like a static web page.  Here’s how:

Log into your AWS account and access your specified bucket.

Click on the bucket and select the Properties option

Click the Static website hosting panel

Enable the static hosting option

Specify your index.html file location (this should be the relative path to the main HTML file for your uploaded report).

Save your changes and boom – now you’re done.
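
If you’d rather script this step than click through the console, the same setting can be applied with the AWS CLI.  A sketch, assuming the bucket itself is the mybucketroot portion of the path used earlier and that index.html is the main file of your uploaded report:

aws s3 website s3://mybucketroot --index-document index.html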

Save and run your job.  If your account permissions are all set up correctly, your test should execute and the test results will be uploaded to the specified AWS bucket.

From here, you can simply access your AWS bucket using a web link styled like the following:

https://s3.amazonaws.com/myBucketRoot/myBucketName

Your test results should now be viewable, just as if they were a static web page, by anyone on your team.

Wrapping it Up:

Now that you have your load tests residing in a Github repository, feel free to leverage Github’s branching features to allow you to manage multiple iterations of your load tests:

Commit different variations of the same test to new branches (a sketch of this follows the list below).

Use Jenkins’ copy feature to copy your existing job and just point it at your different branches, allowing you to execute multiple test types.

Use Jmeter’s UI locally to copy/edit your existing tests to make additional iterations.  Save these to your repo and commit them just like you would any other coding project.

Again, use Jenkins’ copy feature to create variations of your test run jobs – this time, just edit your shell command to execute the desired .JMX test file for a given execution.
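
For what that branch-per-variant workflow might look like in practice, here’s a minimal Git sketch – the branch name (spike-test) and test file name (loadtest.jmx) are hypothetical placeholders:

# Create a branch for a variant of the load test
git checkout -b spike-test
# ...tweak loadtest.jmx in the Jmeter UI, then commit and push the variant...
git add loadtest.jmx
git commit -m "Spike-test variant of the load test"
git push -u origin spike-test

A copied Jenkins job pointed at the spike-test branch (and at the right .JMX file in its Execute shell step) will then run that variant independently of the original.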

Thanks for reading, and I hope this gives you further ideas and insight into how you can leverage Jmeter to further improve your software projects through load testing.