What was Old is New: Finding Joy in Modernising Legacy Systems

(cover image from ThisisEngineering RAEng)

Let’s face it: software is easier to write than maintain. This is why we, as software engineers, prefer to just “rip it out and start over” instead of trying to understand what another developer (or our past self) was thinking. We seem to have collectively forgotten that “programs must be written for people to read, and only incidentally for machines to execute”. You know it’s true — we’ve all had to painstakingly trace through a casserole of spaghetti code and thin, old-world-style abstractions digging for the meat of the program only to find nothing but a mess at the bottom of our plates.

It’s easy to yell “WTF” and blame the previous dev, but the truth is often more complicated. We can’t see the future, so it’s impossible to know how requirements, technology, or business goals will grow when we design a net-new system. As a result, systems can become unreadable as their scope increases along with the business’s dependency on them. This is a bit of a paradox: older, harder-to-maintain systems often provide the most value. They are hard to work on because they’ve grown with the company, and scary to work on because breaking them could be a catastrophe.

Here’s where I’m calling you out: if you like hard, rewarding problems… try it. Take the oldest system you have and make it maintainable. You know the one I’m talking about — the one no one will “own”. The one the other departments depend on but engineers hate. The one you had to patch Log4Shell on first. Do it. I dare you.

I recently had such an opportunity to update a decade-old machine learning system at Bazaarvoice. On the surface, it didn’t sound exciting: this thing didn’t even have neural networks! Who cares! Well… it mattered. This system processes nearly every user-generated product review received by Bazaarvoice — nearly 9 million per month — and does so with 90 million inference calls to machine learning models. Yup — 90 million inferences! It’s a huge scale, and I couldn’t wait to dive in.

In this post, I’ll share how modernizing this system through a re-architecture, instead of a re-write, allowed us to make it scalable and cost-effective without having to rip out all of the code and start over. The resulting system is serverless, containerized, and maintainable while reducing our hosting costs by nearly 80%. This post is a companion piece to a talk I recently presented at AWS Data Summit for Software Companies on generating value from data by leveraging our best practices to ensure success in machine learning projects. This post discusses the technical aspects in more detail, but you can watch the high-level talk linked at the end.

Something Old

First, let’s take a look at what we’re dealing with here. The legacy system my team was updating moderates user-generated content for all of Bazaarvoice. Specifically, it determines if each piece of content is appropriate for our clients’ websites.

Photo by Diane Picchiottino

This sounds straightforward — eliminate obvious infractions such as hate speech, foul language, or solicitations — but in practice, it’s much more nuanced. Each client has unique requirements for what they consider appropriate. Beer brands, for example, would expect discussions of alcohol, but a children’s brand may not. We capture these client-specific options when we onboard new clients, and our Client Services team encodes them into a management database.

For some added complexity, we also sample a subset of our content to be moderated by human moderators. This allows us to continuously measure the performance of our models and discover opportunities for building more models.
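
As a rough illustration of how such a sample can be used to track model performance, the agreement between model decisions and human decisions can be computed as sketched below. This is not our production code; the record format and field names (model_decision, human_decision) are hypothetical.

import random
from typing import Dict, List

def sample_for_human_review(content: List[Dict], rate: float = 0.05, seed: int = 42) -> List[Dict]:
    """Randomly sample a fraction of moderated content for human review."""
    rng = random.Random(seed)
    return [item for item in content if rng.random() < rate]

def agreement_rate(labelled: List[Dict]) -> float:
    """Fraction of sampled items where the model and the human moderator agree.

    Each record is assumed to carry hypothetical 'model_decision' and
    'human_decision' fields such as 'approve' / 'reject'.
    """
    if not labelled:
        return 0.0
    matches = sum(1 for item in labelled if item["model_decision"] == item["human_decision"])
    return matches / len(labelled)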

The full architecture of our legacy system is shown below:

Our legacy moderation system hosted machine learning models on a single EC2 instance. This made deployments slow and limited scalability to the host’s memory size.

This system has some serious drawbacks. Specifically — all of the models are hosted on a single EC2 instance. This wasn’t due to bad engineering — just the inability of the original programmers to foresee the scale desired by the company. No one thought that it would grow as much as it did. In addition, the system suffered from developer rejection: it was written in Scala, which few engineers understood. Thus, it was often overlooked for improvement since no one wanted to touch it.

As a result, the system continued to grow in a keep-the-lights-on manner. Once we got around to re-architecting it, it was running on a single x1e.8xlarge instance. This thing had nearly a terabyte of RAM and cost about $5,000/month (unreserved) to operate. Don’t worry, though, we just launched a second one for redundancy and a third for QA 🙃.

This system was costly to run and was at a high risk of failure (a single bad model can take down the whole service). Furthermore, the code base had not been actively developed and was thus significantly out of date with modern data science packages and did not follow our standard practices for services written in Scala.

Something New

When redesigning this system we had a clear goal: make it scalable. Reducing operating costs was a secondary goal, as was easing model and code management.

The new design we came up with is illustrated below:

Our new architecture deploys each model to a SageMaker Serverless endpoint. This lets us scale the number of models without limit while maintaining a small cost footprint.

Our approach to solving all of this was to put each machine learning model on an isolated SageMaker Serverless endpoint. Like AWS Lambda functions, serverless endpoints turn off when not in use — saving us runtime costs for infrequently used models. They also can scale out quickly in response to increases in traffic.
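
For reference, deploying a model to a SageMaker Serverless endpoint looks roughly like the sketch below. The names, memory size, concurrency, container image, and artifact locations are all placeholders, not our actual configuration.

import boto3

sm = boto3.client("sagemaker")

model_name = "moderation-model-example"  # hypothetical name
sm.create_model(
    ModelName=model_name,
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.us-east-1.amazonaws.com/moderation:latest",  # your container image
        "ModelDataUrl": "s3://<bucket>/models/moderation-model.tar.gz",          # your model artifact
    },
    ExecutionRoleArn="arn:aws:iam::<account>:role/<sagemaker-role>",
)

sm.create_endpoint_config(
    EndpointConfigName=f"{model_name}-serverless",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        # ServerlessConfig is what lets the endpoint scale to zero when idle.
        "ServerlessConfig": {"MemorySizeInMB": 2048, "MaxConcurrency": 20},
    }],
)

sm.create_endpoint(
    EndpointName=model_name,
    EndpointConfigName=f"{model_name}-serverless",
)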

In addition, we exposed the client options through a single microservice that routes content to the appropriate models. This was the bulk of the new code we had to write: a small API that was easy to maintain and that let our data scientists update and deploy new models more easily.

This approach has the following benefits:

  • Decreased the time to value by over 6x. Specifically, routing traffic to existing models is instantaneous, and deploying new models can be done in under 5 minutes instead of 30.
  • Scale without limit – we currently have 400 models but plan to scale to thousands to continue to increase the amount of content we can automatically moderate.
  • Saw a cost reduction of 82% moving off EC2 as the functions turn off when not in use, and we’re not paying for top-tier machines that are underutilized.

Simply designing an ideal architecture, however, isn’t really the hard part of rebuilding a legacy system — you still have to migrate to it.

Something Borrowed

Our first challenge in migration was figuring out how the heck to migrate a Java WEKA model to run on SageMaker, let alone SageMaker Serverless.

Fortunately, SageMaker deploys models in Docker containers, so at least we could freeze the Java and dependency versions to match our old code. This would help ensure the models hosted in the new system returned the same results as the legacy one.

Photo from JJ Ying

To make the container compatible with SageMaker, all you need to do is implement a few specific HTTP endpoints:

  • POST /invocations — accept input, perform inference, and return the results.
  • GET /ping — return 200 if the JVM server is healthy.

(We chose to ignore all of the cruft around BYO multimodel containers and the SageMaker inference toolkit.)

A few quick abstractions around com.sun.net.httpserver.HttpServer and we were ready to go.
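
Our implementation wrapped Java’s com.sun.net.httpserver.HttpServer, but the contract itself is tiny. Purely as an illustration (this is not our actual code), a minimal Python equivalent of the two endpoints might look like this:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # SageMaker calls GET /ping to check that the container is healthy.
        if self.path == "/ping":
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # SageMaker sends inference requests to POST /invocations.
        if self.path == "/invocations":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            result = {"prediction": "approve", "input_echo": payload}  # stand-in for real inference
            body = json.dumps(result).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # SageMaker expects the container to listen on port 8080.
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()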

And you know what? This was actually pretty fun. Toying with Docker containers and forcing something 10 years old into SageMaker Serverless had a bit of a tinkering vibe to it. It was pretty exciting when we got it working — especially when we got the legacy code to build in our new sbt stack instead of Maven. The new sbt stack made it easy to work on, and containerization ensured we could get proper behavior while running in the SageMaker environment.

Something Blue

So we have the models in containers and can deploy them to SageMaker — almost done, right? Not quite.

Photo by Tim Gouw

The hard lesson about migrating to a new architecture is that you must build three times your actual system just to support migration.

In addition to the new system, we had to build:

  • A data capture pipeline in the old system to record inputs and outputs from the model. We used these to confirm that the new system would return the same results.
  • A data processing pipeline in the new system to compute results and compare them to the data from the old system. This involved a large amount of measurement with Datadog and needed to offer the ability to replay data when we found discrepancies (a rough sketch of this comparison step follows this list).
  • A full model deployment system to avoid impacting the users of the old system (which would simply upload models to S3). We knew we wanted to move them to an API eventually, but for the initial release, we needed to do so seamlessly.
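
As a rough sketch of that comparison step: the idea was to replay each captured input against the new endpoint and flag any mismatches for investigation. The capture-record format here is hypothetical and the Datadog measurement is omitted; only the SageMaker runtime call is the real API.

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def replay_and_compare(captured_records, endpoint_name):
    """Replay inputs captured from the legacy system against a new endpoint.

    Each record is assumed to look like {"input": {...}, "legacy_output": {...}}.
    Returns the records whose new output disagrees with the legacy output.
    """
    mismatches = []
    for record in captured_records:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/json",
            Body=json.dumps(record["input"]),
        )
        new_output = json.loads(response["Body"].read())
        if new_output != record["legacy_output"]:
            mismatches.append({"input": record["input"],
                               "legacy": record["legacy_output"],
                               "new": new_output})
    return mismatches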

All of this was throw-away code we knew we could toss once we finished migrating all of the users, but we still had to build it and ensure that the outputs of the new system matched the old.

Expect this upfront.

While building the migration tools and systems certainly took more than 60% of our engineering time on this project, it too was a fun experience. Unit testing became more like data science experiments: we wrote whole suites to ensure that our output matched exactly. It was a different way of thinking that made the work just that much more fun. A step outside our normal boxes, if you will.

So… Just Try It

Next time you’re tempted to rebuild a system from code up, I’d like to encourage you to try migrating the architecture instead of the code. You’ll find interesting and rewarding technical challenges and will likely enjoy it much more than debugging unexpected edge cases of your new code.


The talk I gave is a bit more high-level and goes into the MLOps side of things. Check it out here:

Kedro 6 Months In

We build AI software in two modes: experimentation and productization. During experimentation, we are trying to see if modern technology will solve our problem. If it does, we move on to productization and build reliable data pipelines at scale.

This presents a cyclical dependency when it comes to data engineering. We need reliable and maintainable data engineering pipelines during experimentation, but don’t know what that pipeline should do until after we’ve completed the experiments. In the past, I and many data scientists I know have used an ad-hoc combination of bash scripts and Jupyter Notebooks to wrangle experimental data. While this may have been the fastest way to get experimental results and model building, it’s really a technical debt that has to be paid down the road.

The Problem

Specifically, the ad-hoc approach to experimental data pipelines causes pain points around:

  • Reproducibility: Ad-hoc experimentation structures put you at risk of producing results that others can’t reproduce, which can lead to product downtime if or when you need to update your approach. Simple mistakes like executing a notebook cell twice or forgetting to seed a random number generator can usually be caught. But other, more insidious problems can occur, such as behavior changes between dependency versions.
  • Readability: If you’ve ever come across another person’s experimental code, you know it’s hard to find where to start. Even documented projects might just say “run x script, y notebook, etc”, and it’s often unclear where the data come from and if you’re on the right track. Similarly, code reviews for data science projects are often hard to read: it’s asking a lot for a reader to differentiate between notebook code for data manipulation and code for visualization.
  • Maintainability: It’s common during data science projects to do some exploratory analysis or generate early results, and then revise how your data is processed or gathered. This becomes difficult and tedious when all of these steps are an unstructured collection of notebooks or scripts. In other words, the pipeline is hard to maintain: updating or changing it requires you to keep track of the whole thing.
  • Shareability: Ad-hoc collections of notebooks and bash scripts are also difficult for a team to work on concurrently. Each member has to ensure their notebooks are up to date (version control on notebooks is less than ideal), and that they have the correct copy of any intermediate data.

Enter Kedro

A lot of the issues above aren’t new to the software engineering discipline and have been largely solved in that space. This is where Kedro comes in. Kedro is a framework for building data engineering pipelines whose structure forces you to follow good software engineering practices. By using Kedro in the experimentation phase of projects, we build maintainable and reproducible data pipelines that produce consistent experimental results.

Specifically, Kedro has you organize your data engineering code into one or more pipelines. Each pipeline consists of a number of nodes: functional units that take some data sets and parameters as inputs and produce new data sets, models, or artifacts.

This simple but strict project structure is augmented by their data catalog: a YAML file that specifies how and where the input and output data sets are to be persisted. The data sets can be stored either locally or in a cloud data storage service such as S3.
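
To make that concrete, here is a minimal sketch of what a small Kedro pipeline looks like. The function and dataset names are made up (not one of our actual pipelines); "reviews_raw", "reviews_clean", and "review_model" would be entries in catalog.yml, or in-memory datasets if omitted, and "parameters" refers to Kedro's built-in parameters dataset.

import pandas as pd
from kedro.pipeline import Pipeline, node

def clean_reviews(reviews_raw: pd.DataFrame) -> pd.DataFrame:
    """A node: one small, testable transformation step."""
    return reviews_raw.dropna(subset=["review_text"])

def train_model(reviews_clean: pd.DataFrame, parameters: dict) -> dict:
    """Another node: consumes the previous node's output plus run parameters."""
    # Stand-in for real model training; returns a tiny "model" artifact.
    return {"n_training_rows": len(reviews_clean), "params": parameters}

def create_pipeline() -> Pipeline:
    return Pipeline([
        node(clean_reviews, inputs="reviews_raw", outputs="reviews_clean", name="clean_reviews"),
        node(train_model, inputs=["reviews_clean", "parameters"], outputs="review_model", name="train_model"),
    ])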

I started using Kedro about six months ago, and since then have leveraged it for different experimental data pipelines. Some of these pipelines were for building models that eventually were deployed to production, and some were collaborations with team members. Below, I’ll discuss the good and bad things I’ve found with Kedro and how it helped us create reproducible, maintainable data pipelines.

The Good

  • Reproducibility: I can’t say enough good things here: they nailed it. Their dependency management took a bit of getting used to but it forces a specific version on all dependencies, which is awesome. Also, the ability to just type kedro install and kedro run to execute the whole pipeline is fantastic. You still have to remember to seed random number generators, but even that is easy to remember if you put it in their params.yml file.
  • Function Isolation: Kedro’s fixed project structure encourages you to think about what logical steps are necessary for your pipeline, and write a single node for each step. As a result, each node tends to be short (in terms of lines of code) and specific (in terms of logic). This makes each node easy to write, test, and read later on.
  • Developer Parallelization: The small nodes also make it easier for developers to work together concurrently. It’s easy to spot nodes that won’t depend on each other, and they can be coded concurrently by different people.
  • Intermediate Data: Perhaps my favorite thing about Kedro is the data catalog. Just add the name of an output data set to catalog.yml and BOOM, it’ll be serialized to disk or your cloud data store. This makes it super easy to build up the pipeline: you work on one node, commit it, execute it, and save the results. It also comes in handy when working on a team. I can run an expensive node on a big GPU machine and save the results to S3, and another team member can simply start from there. It’s all baked in.
  • Code Re-usability: I’ll admit I have never re-used a notebook. At best I pulled up an old one to remind myself how I achieved some complex analysis, but even then I had to remember the intricacies of the data. The isolation of nodes, however, makes it easy to re-use them. Also, Kedro’s support for modular pipelines (i.e., packaging a pipeline into a pip package) makes it simple to share common code. We’ve created modular pipelines for common tasks such as image processing.

The Bad

While Kedro has solved many of the quality challenges in experimental data pipelines, we have noticed a few gotchas that required less-than-elegant workarounds:

  • Incremental Dataset: This support exists for reading data, but it’s lacking for writing datasets. This affected us a few times when we had a node that would take 8-10 hours to run. We lost work if the node failed part of the way through. Similarly, if the result data set didn’t fit in memory, there wasn’t a good way to save incremental results since the writer in Kedro assumes all partitions are in memory. This GitHub issue may fix it if the developers address it, but for now you have to manage partial results on your own.
  • Pipeline Growth: Pipelines can quickly get hard to follow since the input and outputs are just named variables that may or may not exist in the data catalog. Kedro Viz helps with this, but it’s a bit annoying to switch between the navigator and code. We’ve also started enforcing name consistency between the node names and their functions, as well as the data set names in the pipeline and the argument names in the node functions. Finally, making more, smaller pipelines is also a good way to keep your sanity. While all of these techniques help you to mentally keep track, it’s still the trade off you make for coding the pipelines by naming the inputs and outputs.
  • Visualization: This isn’t really considered much in Kedro, and is the one thing I’d say notebooks still have a leg up on. Kedro makes it easy for you to load the Kedro context in a notebook, however, so you can still fire one up to do some visualization. Ultimately, though, I’d love to see better support within Kedro for producing a graphical report that gets persisted to the 08_reporting layer. Right now we worked around this by making a node that renders a notebook to disk, but it’s a hack at best. I’d love better support for generating final, highly visual reports that can be versioned in the data catalog much like the intermediate data.

Conclusion

So am I a Kedro convert? Yah, you betcha. It replaces the spider-web of bash scripts and Python notebooks I used to use for my experimental data pipelines and model training, and enables better collaboration among our teams. It won’t replace a fully productionalized stream-based data pipeline for me, but it absolutely makes sure my experimental pipelines are maintainable, reproducible, and shareable.

Root Cause Analysis for Hadoop Applications

Parth Shah and Thai Bui

Overview

One of the reasons why Hadoop jobs are hard to operate is their inability to provide clear, actionable error diagnostic messages for users. This stems from the fact that Hadoop consists of many interrelated components. When a component fails or behaves poorly, the failure cascades to its dependent components, which causes a job to fail.

This blog post is an attempt to help solve that problem by creating a user-friendly, self-service, and actionable Hadoop diagnostic system.

Our Goals

Due to its complex nature, the project was split into multiple components. First, we prototyped a diagnostic tool to help debug Hadoop application failures by providing a clear root cause analysis and saving engineering time. Second, we purposely inflicted failures on a cluster (via a method called chaos testing) and collected the data to understand how certain log messages map to errors. Finally, we examined regular expressions as well as natural language processing (NLP) techniques to automatically provide root cause analysis in production.

To document this, we organized the blog in the following sections:

  • Error Message Analytics Portal
    • To provide a quick glance at the known root cause.
  • Datadog Dashboard
    • To calculate failure rates related to the unknown root cause and known root cause.
    • Separate infrastructure failure from a missing data failure (missing partition).
  • Data Access
    • All relevant log messages from services like Yarn, Oozie, HDFS, hive server, etc. were collected and stored in an S3 bucket with an expiration policy.
  • Chaos Data Generation
    • Using chaos testing, we produced actual errors related to memory, network, etc. This was done to understand relationships between log messages and root cause errors.
    • A service was made to create an efficient and simple way to run chaos tests and collect its corresponding/related log data.
  • Diagnostic Message Classification
    • Due to the simple and repetitive nature of log messages (low in entropy), we built a natural language processing model that classified specific error types of an unknown failure.
    • We tested the model on the chaos data for the specific workloads at Bazaarvoice.

Error Message Analytics Portal

Bazaarvoice provides an internal portal tool that lets end users manage analytics applications. The following screenshot demonstrates an example of a “Job failures due to missing data” message. This message is caught using a simple regular expression over the stack trace. A regular expression works because there is only one way a job could fail due to missing data.



DataDog Dashboard

What is a partition?

Partitioning is a strategy of dividing a table into related parts based on date, components, or other categories. Using partitions, it is easier and faster to query a portion of the data. However, the partition a job is querying is sometimes not available, which, by our design, causes the job to fail. The dashboard below calculates metrics and keeps track of jobs that failed due to an unavailable partition.

The dashboard classifies failed jobs as either a partition failure (late/missing data) or an unknown failure (Hadoop application failure). Our diagnostic tool attempts to find the root cause of the unknown failures, since late or missing data is an easy problem to solve.



Data Access

Since our clusters are powered by Apache Ambari, we leveraged and enhanced Ambari Logsearch Logfeeder to ship logs of relevant services directly to S3, partitioning the data as shown in the raw_log section of the directory tree diagram below. However, as the dataset got bigger, partitioning was not enough to efficiently query the data. To speed up read performance and to iterate faster, the data was later converted into ORC format.



Convert JSON logs to ORC logs

DROP TABLE IF EXISTS default.temp_table_orc;
CREATE EXTERNAL TABLE default.temp_table_orc (
cluster STRING,
file STRING,
thread_name STRING,
level STRING,
event_count INT,
ip STRING,
type STRING,
...
hour INT,
component STRING
) STORED AS ORC
LOCATION 's3a://<bucket_name>/orc-log/${workflowYear}/${workflowMonth}/${workflowDay}/${workflowHour}/';
INSERT INTO default.temp_table_orc SELECT * FROM bazaar_magpie_rook.rook_log
WHERE year=${workflowYear} AND month=${workflowMonth} AND day=${workflowDay} AND hour=${workflowHour};

Sample Queries for Root Cause Diagnosis

SELECT t1.log_message,
       t1.logtime,
       t1.level,
       t1.type,
       t2.Frequency
FROM
  (SELECT log_message,
          logtime,
          TYPE,
          LEVEL
   FROM
     ( SELECT log_message,
              logtime,
              TYPE,
              LEVEL,
              row_number() over (partition BY log_message
                                 ORDER BY logtime) AS r
      FROM bazaar_magpie_rook.rook_log
      WHERE cluster_name = 'cluster_name'
        AND DAY = 28
        AND MONTH=06
        AND LEVEL = 'ERROR' ) S
   WHERE S.r = 1 ) t1
LEFT JOIN
  (SELECT log_message,
          COUNT(log_message) AS Frequency
   FROM bazaar_magpie_rook.rook_log
   WHERE cluster_name = 'cluster_name'
     AND DAY = 28
     AND MONTH=06
     AND LEVEL = 'ERROR'
   GROUP BY log_message) t2 ON t1.log_message = t2.log_message
WHERE t1.logtime BETWEEN '2019-06-28 13:05:00' AND '2019-06-28 13:30:00'
ORDER BY t1.logtime
LIMIT 400;

SELECT log_message, logtime, type
FROM bazaar_magpie_rook.rook_log WHERE level = 'ERROR' AND type != 'logsearch_feeder' AND logtime BETWEEN '2019-06-24 09:05:00' AND '2019-06-24
12:30:10'
ORDER BY logtime
LIMIT 1000;

SELECT log_message, COUNT(log_message) AS Frequency
FROM  bazaar_magpie_rook.rook_log
WHERE cluster_name = 'dev-blue-3' AND level = 'ERROR' AND logtime BETWEEN '2019-06-27 13:50:00' AND '2019-06-29 14:30:00'
GROUP BY log_message
ORDER BY COUNT(log_message) DESC
LIMIT 10;

Chaos Data Generation

The chaos data generation process makes up the bulk of this project and is also the most important part. It stems from chaos testing: experimenting on software and infrastructure in order to understand a system’s ability to withstand unexpected or turbulent conditions. This concept of testing was first introduced by Netflix in 2011. The following pseudocode explains how it works.

Submit a normal job through an API on a specific cluster
Create a list of IP addresses for all the testable nodes in that cluster (such as the worker nodes)
Once we have all associated nodes, inject failures into them (stress test memory or network)
While the job is not done:
    Let the job run
At this point, the job has either failed or succeeded
Stop all stress tests
Gather the job details such as start time, end time, status, and most importantly its associated stress test type (memory, packet_corruption, etc.)
Store the job details and its report in JSON format

Job Report Example

{
  "duration": "45 Minutes",
  "nominalTime": "2018-10-27 07:00:00",
  "cost": 0.7974499089253186,
  "downloadLinks": [],
  "errorMessage": "Error: Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]",
  "startTime": "2019-07-25 18:45:37",
  "stopTime": "2019-07-25 19:31:18",
  "failedAction": "hive-action",
  "chaos_error": "packet_corruption",
  "workflowId": "0001998-190628043141230-oozie-oozi-W",
  "status": "KILLED"
}

We followed the same process to generate chaos data on different types of injected failures such as:

  • High Memory Utilization on the Host
  • Packet Corruption
  • Packet Loss
  • High Memory Util on a Container

While there are certainly more errors the model is capable of learning and classifying, for the purpose of prototyping we kept the type of failures to 2-3 categories.

Diagnostic Message Classification

In this section, we explore two error classification methods: a simple regex approach and supervised learning.

Short Term Solution with Regex

There are many ways to analyze text for different patterns. One of these is regular expression matching (regex). A regex is “a special string representing a pattern to be matched in a search operation”. One use of regex is finding a keyword like “partition”. When a job fails due to missing partitions, its error message usually looks something like this: “Not all partitions are available even after 16 attempts“. The regex for this specific case would look like \W*((?i)partition(?-i))\W* .
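
A sketch of that short-term approach in Python is below. Note that Python’s re module expresses case-insensitivity with re.IGNORECASE rather than the inline (?i)…(?-i) syntax above, and the pattern-to-root-cause mapping here is illustrative, not our production rule set.

import re
from typing import Optional

# Illustrative mapping from regex patterns to root-cause labels.
KNOWN_PATTERNS = [
    (re.compile(r"partition", re.IGNORECASE), "missing_partition"),
    (re.compile(r"OutOfMemoryError|Container killed on request", re.IGNORECASE), "memory"),
]

def classify_with_regex(error_message: str) -> Optional[str]:
    """Return a known root cause if any pattern matches, else None."""
    for pattern, root_cause in KNOWN_PATTERNS:
        if pattern.search(error_message):
            return root_cause
    return None

print(classify_with_regex("Not all partitions are available even after 16 attempts"))
# -> missing_partition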

A regex log parser can easily identify this error and do what is necessary to fix the problem. However, the power of regex is very limited when it comes to analyzing complex log messages and stack traces. Whenever we find a new error, we would have to manually hardcode a new regular expression to pattern match the new error type, which is error-prone and tedious in the long run.

Since the Hadoop eco-system includes many components, there are many combinations of log messages that can be produced; a simple regex parser would struggle here because it cannot capture general similarities between different sentences. This is where natural language processing helps.

Long Term Solution with Supervised Learning

The idea behind this solution is to use natural language processing to process log messages and supervised learning to learn and classify errors. This model should work better with log data than with normal language, since machine logs are more structured and lower in entropy. You can think of entropy as a measure of how unstructured or random a set of data is. The English language tends to have high entropy relative to machine logs due to its sometimes illogical sentence structure. On the other hand, machine-generated log data is very repetitive, which makes it easier to model and classify.

Our model required pre-processing of the logs, called tokenization. Tokenization is the process of taking a set of text and breaking it up into its individual words, or tokens. Next, the sets of tokens were used to model their relationships in high-dimensional space using Word2Vec. Word2Vec is a widely popular model for learning vector representations of words, called “word embeddings” (word2vec). Finally, we used the vector representations to train a simple logistic regression classifier using the chaos data generated earlier. The diagram below shows a similar training process from Experience Report: Log Mining using Natural Language Processing and Application to Anomaly Detection by Christophe Bertero, Matthieu Roy, Carla Sauvanaud and Gilles Tredan.



In contrast to the diagram above, we only trained the classifier on error logs, so it is not possible to classify a healthy system, which should produce no error logs. Instead, we trained on data from different types of stressed systems. Using the dataset generated by chaos testing, we were able to identify the root cause for each error message of each failed job. This allowed us to build our training dataset, as shown below. We split our dataset into a simple 70% for training and 30% for testing.
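
A condensed sketch of that training setup is below, assuming gensim (4.x) for Word2Vec and scikit-learn for the classifier and the 70/30 split; the post does not name specific libraries, so treat these as one plausible choice. The tokenized lines and labels are tiny placeholders standing in for the labelled chaos data.

import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholders for the labelled chaos data: tokenized error lines and their root-cause labels.
tokenized_logs = [
    ["java", "lang", "OutOfMemoryError", "heap", "space"],
    ["container", "killed", "exceeding", "memory", "limits"],
    ["gc", "overhead", "limit", "exceeded"],
    ["packet", "corruption", "detected", "on", "interface"],
    ["connection", "reset", "by", "peer"],
    ["socket", "timeout", "while", "reading", "packet"],
]
labels = ["memory", "memory", "memory",
          "packet_corruption", "packet_corruption", "packet_corruption"]

# Learn word embeddings from the log tokens.
w2v = Word2Vec(sentences=tokenized_logs, vector_size=100, min_count=1, seed=42)

def embed(tokens):
    """Average the word vectors of a log line to get one feature vector."""
    vectors = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(w2v.vector_size)

X = np.array([embed(tokens) for tokens in tokenized_logs])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=42, stratify=labels)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))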

Subsequently, each log message was tokenized and the tokens were fed into a pre-trained word2vec model, which maps each word into a vector space, creating a word embedding. Each line was represented as a list of vectors, and the average of all the vectors in a log message served as its feature vector in high-dimensional space. We then fed each feature vector and its label into a classification algorithm such as logistic regression or random forest to create a model that could predict the root cause error from a log line.

However, since a failure often produces multiple error log messages, it would not be logical to reach a root cause conclusion from just a single line of a long error log. A simple way to tackle this problem is to window the log, feed the individual lines from the window into the model, and output the most common error predicted across those lines as the final answer. This is a very naive way of breaking apart long error logs, so more research would have to be done to process error logs without losing the valuable insight of their dependent messages.
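
A simple version of that windowing step might look like the sketch below. It reuses the embed() helper and clf classifier from the previous sketch, the window size is arbitrary, and the per-line whitespace tokenization is a simplification.

from collections import Counter

def classify_error_log(log_lines, window_size=20):
    """Predict a root cause per line within a window and return the most common one."""
    window = log_lines[:window_size]
    predictions = [
        clf.predict([embed(line.split())])[0]  # naive whitespace tokenization per line
        for line in window
        if line.strip()
    ]
    if not predictions:
        return "unknown"
    return Counter(predictions).most_common(1)[0][0]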

Below are a few examples of error log messages and their tokenized version.

Example Raw Log

java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-3.1.0.3.1.0.0-78.jar:3.1.0-SNAPSHOT] at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)

Tokenized Raw Log

[java,lang,RuntimeException,org,apache,thrift,transport,TSaslTransportException,No,data,no,sasl,data,stream,org,apache,thrift,transport,TSaslServerTransport,Factory,getTransport,TSaslServerTransport,java219]
[org,apache,thrift,server,TThreadPoolExecutor,WorkerProcess,run,TThreadPoolServer,java,269]
...

Results

The model accurately predicted the root cause of failed jobs with 99.3% accuracy on our test dataset. At an initial glance, the metrics calculated on the test data look very promising. However, we still need to evaluate its efficiency in production to get a more accurate picture. The initial success of this proof of concept warrants further experimentation, testing, and investigation into using NLP to classify errors.

Training Data 70% (Top 20 Rows)

Test Data Results 30% (Top 20 Rows)

Conclusion

In order to implement this tool in production, data engineers will have to automate certain aspects of the data aggregation and model building pipeline.

Self Reporting Errors

A machine learning model is only as good as its data. For this tool to be robust and accurate, any time engineers encounter a new error, or an error that the model does not know about, they should report their belief of the root cause so the corresponding error logs get tagged with that specific root cause. Then, when a job fails for a similar reason, the model will be able to diagnose and classify its error type. This can be done through a simple API, form, or Hive query that takes in the job id and its assumed root_cause. The idea behind this is that by creating a form, we can manually label log messages and errors in case the classifier fails to correctly diagnose the true problem.

The self-reported error should take the form of the chaos error to ensure the existing pipeline will work.

Automating Chaos Data Generation

The chaos tests should run on a regular basis in order to keep the model up to date. This could be done by creating a Jenkins job that automatically runs on a regular cadence. Oftentimes, running a stress test will make certain nodes unhealthy, which leaves our automation unable to SSH back into the nodes to stop the stress test, such as when the stress test interferes with connectivity to the node (see the error below). This can be solved by creating a time limit for the stress test, so the script does not have to SSH in again once the job has finished. In the long run, as the self-reported errors grow, the model should rely less on chaos-generated test data. Chaos tests produce logs for very extreme situations which may not be typical of a normal production environment.

java.lang.RuntimeException: Unable to setup local port forwarding to 10.8.100.144:22
    at com.bazaarvoice.rook.infra.util.remote.Gateway.connect(Gateway.java:103)
    at com.bazaarvoice.rook.infra.regress.yarn.RootCauseTest.kill_memory_stress_test(RootCauseTest.java:174)

Hackathon 2019

This year’s Bazaarvoice Hackathon coincided with our annual all-hands meeting in Austin. Our global offices took time to work on projects that focused on innovation, social integrations, and improved efficiencies. Teams from across our departments participated, including R&D, Product, Customer Services, and Knowledge Base.

Hackathon teams took two days to work on their projects. The following day, teams presented their outcomes in a science-fair setting while the company voted on the projects. The top 10 teams then went on to present to the entire company.

Thanks to all who participated and especially those who organized the various activities. We hope to see many of these projects become new product enhancements.

Response API Demo App

Are you looking to develop your own application on top of the Bazaarvoice Response API? Well, we’ve got something for you. The Response API Demo App is a simple Node-React application which demonstrates how to use the Response API in conjunction with our 3-legged OAuth2 API. It is recommended to go through the Developer Portal and read about the OAuth2 API and the Response API before diving into the application architecture below.

This application was bootstrapped with Create React App and consists of two separate components – the front-end client side in React and a back-end server side in NodeJS.

Let’s talk more about the back-end architecture first. Almost all of the server-side logic is contained in server.js. Using the Express framework for NodeJS, this file defines the following endpoints:

  • /api/redirect – This endpoint is supposed to handle any incoming redirections from OAuth2, obtain an access token using either an authorization code (if it is a first time login) or a refresh token (if the existing access token has expired or is expiring soon). If you are building an app integrated with OAuth2, you must define an endpoint to handle redirects in a similar way.
  • /api/check-login – This is an application-specific endpoint, defined so that the client side can easily verify with the server whether a user is logged in, and send them to the login page if not.
  • Abstraction of GET, POST, PATCH, DELETE calls to the Response API – These endpoints are basically a clone of the endpoints provided by the Response API, which are documented here on the Developer Portal. This abstraction is done so that the client side can call these secured endpoints indirectly without having to expose any of the OAuth2 credentials through the user’s browser. All our logic that relates to obtaining and refreshing OAuth tokens stays securely on the server side.

This application uses express-session for storing and maintaining user sessions using browser cookies which is important for being able to maintain multiple users without storing state on your backend. However, this implementation of user sessions is not suitable for production applications and you should probably be maintaining user sessions using a combination of cookies and session storage.

The server side credentials and configurations are supposed to be stored in server/server-config.json which is then picked up by the Node server. Note that these credentials are confidential and should not be exposed to the client side in any way.

Coming to the front-end architecture, we are using the React library and React Semantic UI for the UI components. The client side does a bunch of things, ranging from querying a review from the Conversations API to using that Review ID to obtain/add/modify/delete the corresponding responses to that review using the Response API. The following components make up the core of the front-end:

  • Login Page – If a user is not logged in (we check this using the /api/check-login endpoint as discussed above), we redirect them to this page which has a little widget that allows them to go to a Bazaarvoice login page, enter their Bazaarvoice Portal login credentials and once they are properly authenticated, the OAuth2 API redirects them to the /api/redirect endpoint which then handles all the token exchange logic. You can configure this redirect URI to be whatever you want during your app provisioning process but for the demo application to work as expected, it should be http://localhost:5000/api/redirect as you can see in the config file. All of the OAuth2 login and token exchange process happens during this step.
  • Search Page – This page essentially serves as the landing page for this application where a user can enter a Review ID (this can be obtained using Workbench or the Conversations API) and then they are taken to the review page.

  • Review Page – Once the user is authenticated and they search for a valid Review ID, they are taken to this page which fetches the specified review by calling the Conversations API and then fetches all responses for that review by calling the application’s backend which in turn calls the Response API. So, on this page, a user can see contents of a review along with all responses. This also allows them to add a new response, edit an existing one or delete a response.

You might have noticed a department field on a response which appears as a drop-down menu. These menu options have been hard-coded in the demo application and can be modified in the departmentFormOptions.js file. Further, client.js is a utility file that presents all the commonly used API calls as simple functions accessible to all components on the client side.

The client side configurations are stored in the response-demo/client/src/utils/config.js file. This is built into the project when the front-end client is packaged before running. These configurations are accessible to the browser, so you shouldn’t store anything confidential here.

Apart from that, the application follows a pretty standard architecture and you can read more about it here. On the deployment side of things, there is a Dockerfile which builds the client artifacts, sets up the server and gets the application up and running on your local environment. Follow these instructions to get it up and running locally. The app can also be deployed to Flynn by making changes to the redirect URI in configurations and also making sure that your Flynn Redirect URI is added to the list of allowed Redirect URIs for your app credentials.

Vger Lets You Boldly Go . . .

Are you working on an agile team? Odds are high that you probably are. Whether you do Scrum/Kanban/lean/extreme, you are all about getting work done with the least resistance possible. Heck, if you are still on Waterfall, you care about that.  But how well are you doing? Do you know? Is that something a developer or a lead should even worry about or is a SEP? That’s a trick question. If your team is being held accountable and there is a gap between their expectations and your delivery, by the transitive property, you should worry about some basic lean metrics.

Here at Bazaarvoice, we are agile and overwhelmingly leverage kanban. Kanban emphasizes the disciplines of flow and continuous improvement. In an effort to make data-driven decisions about our improvements, we needed an easy way to get the relevant data. With just JIRA and GitHub alone, access to the right data has a significant barrier to entry.

So, like any enterprising group of engineers, we built an app for that.

What we did

Some of us had recently gone through an excellent lean metric forecasting workshop with Troy Magennis from Focused Objective. In his training he presented the idea of displaying a quadrant of lean metrics in order to force a narrative for a team’s behavior, and to avoid over-driving on a single metric. This really resonated with me and seemed like a good paradigm for the app we wanted to build.

And thus, Vger was born.

We went with a simple quadrant view with very bookmarkable URL parameters. We made it simple for teams to self-serve by giving them an interface to make their own “Vger team” and add whatever “Vger boards” they need. Essentially, if you can make a JQL query and give it a board in JIRA, Vger can graph the metrics for it. In the display, we provide a great deal of flexibility by letting teams configure date ranges for the dashboard, work types to be displayed, and the JIRA board columns to be considered as working/non-working.

Now the barrier to entry for lean metrics is down to “can you open a browser.”  Not too shabby.

The Quadrant View

We show the following in the quadrant view:

1. Throughput – The number of completed tickets per week.

2. Variation – the variation (standard deviation divided by the mean, i.e. the coefficient of variation) of the Throughput.

3. Backlog Growth – the tickets opened versus closed.

4. Lead Times – The lead times for the completed tickets. This also provides a detailed view by Jira board column to see where you spend most of your time.

We at Bazaarvoice are conservative gamblers, so you’ll see the throughput and lead time quadrants show the 50%, 80%, and 90% likelihoods (the inverse of the percentile). We do this because relying on the average or the mean is not your friend. Who wants to bet on a coin toss? Not us. We like to be right, say, eight out of ten times.
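
For example, the 80% likelihood throughput is just the 20th percentile of the weekly throughput history. One way to compute it (this is not Vger’s code, and the sample numbers are made up):

import numpy as np

weekly_throughput = [7, 9, 4, 11, 8, 6, 10, 5, 9, 8]  # completed tickets per week (made up)

for likelihood in (50, 80, 90):
    value = np.percentile(weekly_throughput, 100 - likelihood)
    print(f"{likelihood}% likelihood: at least {value:.0f} tickets per week")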

The Quarterly View

Later, we were asked to show throughput by quarter to help with quarterly goal planning. We created a side-car page for this.  It shows Throughput by quarter:

We also built a scatterplot for lead times so outliers could be investigated:

This view has zoomable regions and each point lets you click through to the corresponding JIRA ticket. So that’s nice.

But Wait! Git This….

From day one, we chose to show the same Quadrant for GitHub Pull Requests.

Note that we show rejected and merged lines in the PR Volume quadrant.  We also support overlaying your git tags on both PR and JIRA ticket data.  Pretty sweet!

I Want to Do More

Vger lets you download throughput data from the Quadrant and Quarterly views. You can also download lead time from the Quarterly view too. This lets teams and individuals perform their own visualizations and investigations on these very useful lean metrics.

But Why?

Vger was built with three use cases in mind:

Teams should be informed in retros

Teams should have easy access to these key lean metrics in their retros. We recommend that they start off viewing the quadrant and seeing if they agree with the narrative the retro facilitator presents. They should also consider the results of any improvement experiments they tried. Did the new behavior make throughput go up as they hoped it would? Did the new behavior reduce time spent in code review? Did it reduce the number of open bugs? And so on. Certainly not everything in a retro should be mercilessly data-driven, but it is a key element of a culture of continuous improvement.

Managers should know this data and speak to it

Team managers commonly speak to how their teams are progressing. These discussions should be data-driven, and most importantly they should be driven by the same data the team has access to (and hopefully retros on). It should also be presented in a common format that still provides for some customization. NOTE: You should avoid comparing team to team in Vger or a similar visualization. In most situations, that way leads to futility, confusion, and frustration.

We should have data to drive data-driven decisions about the future

Lean forecasting is beyond the scope of this post; however, Troy Magennis has a fine take on it. My short two cents on the matter: a reasonably functioning team with even a little bit of run time should never be asked “how long will it take?” Drop that low-value ritual and do the high-value task of decomposing the work, then forecast with historical data. Conveniently, you can download this historical data from Vger and use it in your spreadsheet of choice. I happen to like Monte Carlo simulations myself.
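
For the curious, a bare-bones Monte Carlo forecast from historical weekly throughput might look like the sketch below. The backlog size and throughput history are made-up numbers, and this is a simplification of the spreadsheet-based approach described above.

import random

def forecast_weeks(backlog_size, weekly_throughput, simulations=10_000, seed=7):
    """Estimate how many weeks it takes to finish a backlog by resampling past throughput."""
    rng = random.Random(seed)
    results = []
    for _ in range(simulations):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput)  # sample a historical week at random
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the durations we would hit with 50%, 80%, and 90% likelihood.
    return {p: results[int(len(results) * p / 100) - 1] for p in (50, 80, 90)}

print(forecast_weeks(backlog_size=60, weekly_throughput=[7, 9, 4, 11, 8, 6, 10, 5]))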

Isn’t This for Kanban?

You’ll note I used the term “lean metrics” throughout. I wanted to avoid any knee-jerk “kanban vs scrum vs ‘how we do things'” reaction. These metrics apply no matter what methodology you consciously (or unconsciously) use for the flow of work through your team. It was built with feature development teams in mind, but we had good success when our client implementation team started using it as an early adopter. It allowed them to have a clear view into their lead time details and ferret out how much time was really spent waiting on clients to perform an action on their end.

Cool. How Do I Get a Vger?

We open sourced it here, so help yourself. This is presented as “it worked for us”-ware and is not as finely polished as it could be, so it has some caveats. It is a very simple serverless app. We use JIRA and GitHub, so only those tools are currently supported. If you use something similar, give Vger a try!

What’s Next?

If your fingers are itching to contribute, here’s some ideas:

  • Vger’s ETL process could really use an update
  • The Quadrant view UI really needs an update to React to match the Quarterly view
  • Make it flexible for your chosen issue tracker or source control?
  • How about adding a nice Cumulative Flow Diagram?

 

Still Looking Good While Testing: Automated Testing With a Visual Regression Service Part II

If you’ve followed our blog regularly, you’ve probably read our post on using visual regression testing tools and services to better test your applications’ front end look and feel.  If not, take a few minutes to read through our previous post on this topic.

Now that you’re up to speed, let’s take what we did in our previous post to the next level.

In this post, we’re going to show how you can alter your test configuration and code to test mobile browsers through a given testing service (Browserstack), as well as get better reporting out of our tests natively and through options available to us in Jenkins.

Adjusting the Fuzz Factor:

If you run your test multiple times across several browsers with the visual regression service  using default settings, you may notice a handful of exception images cropping up in the diff directory which look nearly identical.

Browser fight!!!

If you happen to be testing across several browsers (using Browserstack or a similar service), those browsers may be hosted in real or virtual environments with widely differing screen resolutions, which can impact how the visual-regression service performs its image diff comparison.

To alleviate this, either ensure your various browser hosts are able to display the browser in the same resolution or add a slight increase in your test’s mismatch tolerance.

As mentioned in our previous post, you can do this by editing the values under the ‘visual regression service’ object in your project’s wdio.conf file.  For example:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.25,
  }),
},

Setting the mismatch tolerance value to 0.25 in the above snippet allows the regression service a 25% margin of error when checking screenshots against any reference images it has captured.

Better Test Results:

As also mentioned in the previous post, one of the drawbacks to the given examples of using the visual-regression service in our tests is that there is little feedback being returned other than the output of our image comparison.

Having fun with that lack of feedback?

However, with a bit of extra code, we can make usable assertion statements in our test executions once a screen comparison event is generated.

The key is the checkElement() method, which is really a WebdriverIO method that is enhanced by the visual-regression service. This method returns an object that contains some metadata about the check being requested for the provided argument.

We can assign the method call’s result to a new variable and, once we serialize the object into something readable (e.g. a JSON string), leverage Chai or some other assertion framework to make our tests more descriptive.

Here’s an example:

it('should test some things visually', () => {
  ...
  let returnedContents = JSON.stringify(browser.checkElement('<my web element>'));
  console.log(returnedContents);
  assert.include(returnedContents, '<serialized text to assert for>');
});

In the above code snippet, near the end of the test, we are calling ‘checkElement()’ to do a visual comparison of the contents of the given web element selector, then converting the object returned by ‘checkElement()’ to a string.  Afterward, we are asserting there is some text/string content within the stringified object our comparison returned.

In the case of the text assertion, we would want to assert a successful match message is contained within the returned object.  This is because the ‘checkElement()’ method, while it may return data that indicates a test failure, on its own will not trigger an exception that would appropriately fail our test should an image comparison mismatch occur.

Adding Mobile

Oooh – someone downloaded the pancake app!

Combining WebdriverIO’s framework along with the visual-regression service and a browser test service like Browserstack, we can create tests that run against real browsers.  To do this, we will need to make some changes to our WebdriverIO config.  Try the following:

  1. Make a copy of your wdio.conf.js
  2. Name the copy wdio.mobile.conf.js
  3. Edit your package.json file
  4. Copy the ‘test’ key/value pair under ‘scripts’ and paste it to a new line, renaming it ‘test:mobile’
  5. Point the test:mobile script to the wdio.mobile.conf.js config file and save your changes

Next, we need to edit the contents of the wdio.mobile.conf.js script to run tests only against mobile devices.  The reason for adding a whole, new test config and script declaration is that due to the way mobile devices behave, there are some settings to declare for mobile browser testing with WebdriverIO and Browserstack which are incompatible with running tests against desktop browsers.

Edit the code block at the top of the mobile config file, changing it to the following:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');
function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserOrientation = context.meta.orientation;
    return path.join(basePath,
    `${testName}_${browserName}_v${browserVersion}_${browserOrientation}_mobile.png`);
  };
}

Note that we’ve removed the declarations for height and width dimensions.  As Browserstack allows us to test on actual mobile devices, defining viewport constraints is not only unnecessary, but will result in our tests failing to execute (the height and width dimension object can’t be passed to webdriver as part of a configuration for a real, mobile device).

Next, update the visual-regression service’s orientation configuration near the bottom of the mobile config files to the following:

orientations: ['portrait', 'landscape'],

Since applications using responsive design can sometimes break when moving from one orientation to another on mobile devices, we’ll want to run our tests in both orientation modes.

Stating this behavior in our configuration above will trigger our tests to automatically switch orientations and retest.

Finally, we’ll need to update our browser capability settings in this config file:

capabilities: [
{
  device: 'iPhone 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPhone 8',
},
{
  device: 'iPad 6th',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPad',
},
{
  device: 'Samsung Galaxy S9',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Galaxy S9',
},
{
  device: 'Samsung Galaxy Note 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Note 8',
},
]

In the code above, the ‘realMobile’ descriptor is necessary to run tests against modern mobile devices through Browserstack.  For more information on this, see Browserstack’s documentation.

Once your change is saved, try running your tests on mobile by doing:

npm run test:mobile

Examine the image outputs of your tests and compare them to the results from your desktop test run.  Now you should be able to run tests for your app’s UI across a wide range of browsers (mobile and desktop).

Taking Things to Jenkins

You could include the project we’ve put together so far in the codebase of your app (just copy the dev dependencies from your package.json, along with your specs and configs, into the existing project).

Another option is to use this as a standalone testing tool to apply some base screen-shot based verification of your app.

In the following example, we’ll go through the steps you can take to set up a project like this as a resource in Jenkins to test a given application and send the results to members of your team.

Consider the value of being able to generate a series of targeted screen shots of elements of your app per build and send them to team members like a product manager or UX designer.  This isn’t necessarily ‘heavy lifting’ from a dev ops standpoint but automating any sort of feedback for your app will trend toward delivering a better app, faster.

Assuming you have your project hosted in Github, you have administrative access to Jenkins and that your app is hosted in an environment Jenkins can access (your organization’s Amazon S3 bucket, hosted from a Docker image, etc.):

I. Create a new Jenkins job

jenkins-proj-name

* Create a new Jenkins task within the same space as the job that builds and ships your web app (should one exist).  Assign it a name.

II. Configure the job to pull from your image testing app from Github

jenkins-set-repo

* Go to the source control configuration portion of the job.  Enter the info for your repository for the test app you’ve built.

III. Set the project to build periodically

jenkins-set-trigger

* Within the build settings of the job, choose the build periodically option and configure your job’s frequency.

Ideally, you would set this job to trigger once the task that builds and ‘ships’ your app completes successfully (thus, executing your tests every time a new version of the app is published).

Alternatively, to run periodically at a given time, enter something like ‘0 23 * * *’ (minute 0 of hour 23) into Jenkins’ cron configuration field to set your job to run once every day (in this case, at 11 PM, relative to your server’s configuration).

IV. Build your shell execution script

jenkins-shell-script

* Create a build step for your task and choose the ‘Execute shell’ option. You can embed your npm test script execution here, as shown in the sketch below.
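
Here is a minimal sketch of what that shell step might contain, assuming NodeJS is available on the Jenkins node and that your Browserstack credentials and target URL are supplied as job environment variables (the variable names here are placeholders):

# install the test project's dependencies in the Jenkins workspace
npm install
# run the suite against the deployed app
BSTACK_USERNAME=$BSTACK_USERNAME BSTACK_KEY=$BSTACK_KEY npm run test -- --baseUrl=$APP_URL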

V. Collect your artifacts

From the list of post-build options, choose the ‘Archive the artifacts’ option. In the available text field, enter the path to the generated screen shots you wish to capture. Note that this path may differ depending on how your Jenkins server is configured. If you’re having trouble pinpointing the exact path to your artifacts, run ‘pwd’ in your shell script step to have the job print its working directory in the console output, then work from there.
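
For example, if your tests write images under a screenshots/ directory at the project root, an artifact pattern along these lines may work, though your workspace layout may differ:

screenshots/**/*.png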

jenkins-archive

VI. Sending email notifications

jenkins-email-config

* Lastly, choose the advanced email notification option from the post-build options menu.

Enter an email address or list of addresses you wish to contact once this job completes. Fill out your email settings (subject line, body, etc.) and be sure to enter the path to the screen shot resources you wish to attach to a given email.

Save your changes and you are ready to go!

This job will run on a regular basis, execute a given set of UI-specific tests, and send screen capture information to the team members who want to see it.

You can create more nuanced runs by using Jenkins’ clone feature to copy this job and altering the copy to run your mobile-specific tests, diversifying your test coverage.

Further Reading

If you’re looking to dive further into visual regression testing, there are several code examples online worth reviewing.

Additionally, there are other visual regression testing services worth investigating such as Percy.IO, Screener.IO and Fluxguard.

Looking Good While Testing: Automated Testing With a Visual Regression Service

A lot of (virtual) ink has been spilled on this blog about automated testing (no, really). This post is another in a series of dives into different automated testing tools and how you can use them to deliver a better, higher-quality web application.

Here, we’re going to focus on tools and services specific to ‘visual regression testing‘ – specifically cross-browser visual testing of an application front end.

What?

By visual regression testing, we mean regression testing applied specifically to how an app’s appearance may have changed across browsers or over time, as opposed to its functional behavior.

Why?

One of the most common starting points in testing a web app is to simply fire up a given browser, navigate to the app in your testing environment, and note any discrepancy in appearance (“oh look, the login button is upside down. Who committed this!?”).

american_psycho

A spicy take on how to enforce code quality – we won’t be going here.

The strength of visual regression testing is that you’re testing the application against a very humanistic set of conditions (how the application looks to the end user versus how it should look). The drawback is that doing this manually is generally time-consuming and tedious. But that’s what we have automation for!

hamburger_ad_vs_reality

How our burger renders in Chrome vs how it renders in IE 11…

How?

A wise person once said, ‘In software automation, there is no such thing as a method call for “if != ugly, return true”‘.

For the most part, this statement is true. There really isn’t a ‘silver bullet’ for fully automating the testing of your web application’s appearance across a given browser support matrix. At least, not without some caveats.

The methods and tools for doing so can run afoul of at least one of the following:

  • They’re Expensive (in terms of time, money or both)
  • They’re Fragile (tests emit false negatives, can be unreliable)
  • They’re Limited (covers only a subset of supported browsers)

faberge_egg

Sure, you can support delicate tools. Just keep in mind the total cost.

Tools

We’re going to show how you can quickly set up a set of tools using WebdriverIO, some simple JavaScript test code and the visual-regression-service module to create snapshots of your app front end and perform a test against its look and feel.

Setup

Assuming you already have a web app ready for testing in your choice of environment (hopefully not in production) and that you are familiar with NodeJS, let’s get down to writing our testing solution:

interesting_man_tests_code

This meme is so old, I can’t even…

1. From the command line, create a new project directory and do the following in order:

  • ‘npm init’ (follow the initialization prompts – feel free to use the defaults and update them later)
  • ‘npm install --save-dev webdriverio’
  • ‘npm install --save-dev wdio-visual-regression-service’
  • ‘npm install --save-dev chai’

2. Once you’ve finished installing your modules, you’ll need to configure your instance of WebdriverIO. You can do this manually by creating the file ‘wdio.conf.js’ and placing it in your project root (refer to the WebdriverIO developers guide on what to include in your configuration file) or you can use the wdio automated configuration script.

3. To quickly configure your tools, kick off the automated configuration script by running ‘npm run wdio’ from your project root directory.  During the configuration process, be sure to select the following (or include this in your wdio.conf.js file if you’re setting things up manually):

  • Under frameworks, be sure to enable Mocha (we’ll use this to handle things like assertions)
  • Under services be sure to enable the following:
    • visual-regression
    • Browserstack (we’ll leverage Browserstack to handle all our browser requests from WebdriverIO)

Note that in this case, we won’t install the selenium standalone service or any local testing binaries like Chromedriver. The purpose of this exercise is to quickly package together some tools with a very small footprint that can handle some high-level regression testing of any given web app front end.

Once you have completed the configuration script, you should have a wdio.conf.js file in your project configured to use WebdriverIO and the visual-regression service.
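
For orientation, here is a trimmed-down sketch of what the relevant parts of that file might contain; your generated wdio.conf.js will include many more options than shown, so treat this as a rough outline rather than a complete config:

exports.config = {
  // where our test specs will live (we create tests/homepage.js below)
  specs: ['./tests/**/*.js'],
  // Mocha handles test structure and assertions
  framework: 'mocha',
  // the services we enabled during the configuration script
  services: ['visual-regression', 'browserstack'],
  // capabilities, Browserstack credentials and the visualRegression block
  // are covered later in this post
};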

Next, we need to create a test.

Writing a Test

First, make a directory within your project’s main source called tests/. Within that directory, create a file called homepage.js.

Set the contents of the file to the following:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.checkElement('#header');
  });
});

That’s it. Within the single test function, we are calling a method from the visual-regression service, ‘checkElement()’. In our code, we are providing the selector ‘#header’ as an argument, but you should replace it with the ID or CSS selector of a container element on the page you wish to check.

When executed, WebdriverIO will open the root URL it is given for our web application and then run its check-element comparison operation. On the first run, this generates a series of reference screen shots of the application. The regression service will then generate screen shots of the app in each browser it is configured to test with and provide a delta between those screens and the reference image(s).

A More Complex Test:

You may need to interact with part of your web application before executing your visual regression check. You may also need to call checkElement() multiple times with different arguments to fully vet your app front end’s look and feel in this manner.

Fortunately, since we are simply inheriting the visual regression service’s operations through WebdriverIO, we can combine WebdriverIO-based method calls within our tests to manipulate and verify our application:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.waitForVisible('#header');
    browser.checkElement('#header');
  });

  it('should look normal after I click the button', () => {
    browser.waitForVisible('.big_button');
    browser.click('.big_button');
    browser.waitForVisible('#main_content');
    browser.checkElement('#main_content');
  });

  it('should have a footer that looks normal too', () => {
    browser.scroll('#footer');
    browser.checkElement('#footer');
  });
});

Broad vs. Narrow Focus:

One of several factors that can add to the fragility of a visual test like this is attempting to account for minor changes in your visual elements. This can be a lot to bite off and chew at once.

Attempting to check the content of a large and heavily populated container (e.g. the body tag) is likely to involve so many possible variations across browsers that your test will always throw an exception. Conversely, narrowing your tests’ focus to something very marginal (e.g. a single instance of a single button) may target an element that is rarely touched by your front end developers, and thus you may miss crucial changes to your app UI.

flight_search_results

This is waaay too much to test at once.

The visual-regression service’s magic is that it allows you to target testing at specific areas of a given web page or path within your app, based on web selectors that can be parsed by Webdriver.

price_link

And this is too little…

Ideally, you should choose web selectors with a scope of content that is neither too large nor too small. A test that compares the content of a specific div containing 3-4 widgets will likely deliver much more value than one that focuses on the selector of a single button, or on a div that contains 30 widgets or assorted web elements.
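
As a quick illustration (the selector here is hypothetical), a check scoped to a navigation bar holding a handful of related widgets sits in that sweet spot:

it('should have a menu bar that looks as expected', () => {
  // '#site-nav' stands in for a mid-sized container with a few widgets
  browser.checkElement('#site-nav');
});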

Alternatively, some of your app’s front end may be generated by templating or scaffolding that never receives updates and is siloed away from code that changes frequently. In this case, marshalling tests around those areas may result in a lot of misspent time.

menu_bar

But this is just about right!

Choose your area of focus accordingly.

Back to the Config at Hand:

Before we run our tests, let’s make a few updates to our config file to make sure we are ready to roll with our initial homepage verification script.

First, we will need to add some helper functions to facilitate screenshot management. At the very top of the config file, add the following code block:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');

function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserViewport = context.meta.viewport;
    var browserWidth = browserViewport.width;
    var browserHeight = browserViewport.height;
 
    return path.join(basePath, `${testName}_${browserName}_v${browserVersion}_${browserWidth}x${browserHeight}.png`);
  };
}

This function will be used to build the file paths for the various screen shots we take during the test.
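
For example, with the helper above, a run of the ‘should look as expected’ test in Chrome at a 1024x768 viewport would produce a file name along these lines (the browser version shown is purely illustrative):

should look as expected_chrome_v68_1024x768.png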

As stated previously, we are leveraging Browserstack with this example to minimize the amount of code we need to ship (given we would like to pull this project in as a resource in a Jenkins task) while allowing us greater flexibility in which browsers we can test with. To do this, we need to make sure a few changes in our config file are in place.

Note that if you are using a different browser provisioning service (SauceLabs, Webdriver’s grid implementation, etc.), see WebdriverIO’s online documentation for how to set up your wdio configuration for that service.

Open your wdio.conf.js file and make sure this block of code is present:

user: process.env.BSTACK_USERNAME,
key: process.env.BSTACK_KEY,
host: 'hub.browserstack.com',
port: 80,

This allows us to pass our Browserstack authentication information into our wdio script via the command line.

Next, let’s set up which browsers we wish to test with. This is also done within our wdio config file under the ‘capabilities’ object. Here’s an example:

capabilities: [
{
  browserName: 'chrome',
  os: 'Windows',
  project: 'My Project - Chrome',
  'browserstack.local': false,
},
{
  browserName: 'firefox',
  os: 'Windows',
  project: 'My Project - Firefox',
  'browserstack.local': false,
},
{
  browserName: 'internet explorer',
  browser_version: 11,
  project: 'My Project - IE 11',
  'browserstack.local': false,
},
],

Where to Put the Screens:

While we are here, be sure your config file points to where you wish your screen shots to be copied. The visual-regression service will want to know the paths for 4 types of screenshots it generates and manages:

lots_a_screens

Yup… Too many screens


References: This directory will contain the reference images the visual-regression service will generate on its initial run. This will be what our subsequent screen shots will be compared against.

Screens: This directory will contain the screen shots generated per browser type/view by tests.

Errors: If a given test fails, an image will be captured of the app at the point of failure and stored here.

Diffs: If a comparison performed by the visual-regression service between an element captured during a browser run and the reference images results in a discrepancy, a ‘heat-map’ image of the difference will be captured and stored here. Consider the contents of this directory to be your test exceptions.
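
Put together, one possible layout matching the screenshot paths used in the configuration below might look like this; where the error captures land is an assumption on our part, since it depends on how the rest of your wdio configuration is set up:

screenshots/
  reference/   (baseline images generated on the first run)
  screen/      (per-browser, per-viewport captures from each test run)
  diff/        (heat-map images for comparisons that exceed the tolerance)
  error/       (captures taken when a test throws an exception)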

Things Get Fuzzy Here:

fozzy

Fuzzy… Not Fozzy

Finally, before kicking off our tests, we need to enable our visual-regression service instance within our wdio.conf.js file. This is done by adding a block of code to our config file that instructs the service on how to behave. Here is an example of the code block taken from the WebdriverIO developer guide:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.20,
  }),
  viewportChangePause: 300,
  viewports: [{ width: 320, height: 480 }, { width: 480, height: 320 }, { width: 1024, height: 768 }],
  orientations: ['portrait'],
},

Place this code block within the ‘services’ object in your file and edit it as needed. Pay attention to the following attributes and adjust them based on your testing needs:

‘viewports’:
This is an array of width/height pairs to test the application at. It is very handy if your app has specific responsive design constraints. For each pair, the test will be executed per browser, resizing the browser for each set of dimensions.

‘orientations’: This allows you to configure the tests to execute using portrait and/or landscape view if you happen to be testing in a mobile browser (default orientation is portrait).

‘viewportChangePause’: This value pauses the test in milliseconds at each point the service is instructed to change viewport sizes. You may need to throttle this depending on app performance across browsers.

‘misMatchTolerance’: Arguably the most important setting here. This floating-point value defines the ‘fuzzy factor’ the service uses to determine at what point a visual difference between references and screen shots should fail. The default value of 0.10 indicates that a diff will be generated if a given screen shot differs from the reference by 10% or more. The greater the value, the greater the tolerance.
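
If your early runs are too noisy (or not sensitive enough), this is usually the first knob to turn. For example, a stricter setting might look like:

misMatchTolerance: 0.05, // fail when a screen shot differs from its reference by roughly 5% or more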

Once you’ve finished modifying your config file, let’s execute a test.

Running Your Tests:

Provided your config file is set to point to the root of where your test files are located within the project, edit your package.json file and modify the ‘test’ descriptor in the scripts portion of the file.

Set it to the following:

‘./node_modules/.bin/wdio wdio.desktop.conf.js’

To run your test, from the command line, do the following:

‘BSTACK_USERNAME=<your Browserstack username> BSTACK_KEY=<your Browserstack key> npm run test -- --baseUrl=<your app’s URL>’

Now, just sit back and wait for the test results to roll in. If this is the first time you are executing these tests, the visual-regression service can fail while trying to capture initial references for various browsers via Browserstack. You may need to increase your test’s global timeout initially on the first run or simply re-run your tests in this case.
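
If you do hit those first-run timeouts, the settings below in wdio.conf.js are the usual suspects to raise; the values shown here are guesses to tune for your own environment:

waitforTimeout: 30000,   // how long (in ms) waitFor* commands will wait for elements
mochaOpts: {
  ui: 'bdd',
  timeout: 120000,       // per-test timeout (in ms), generous enough for initial reference capture
},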

Reviewing Results:

If you’re used to standard JUnit or Jest-style test execution output, you won’t necessarily see similar output here.

If there is a functional error present during a test (an object you are attempting to inspect isn’t available on screen) a standard Webdriver-based exception will be generated. However, outside of that, your tests will pass – even if a discrepancy is visually detected.

Instead, examine the screen shot folder structure we mentioned earlier. Note the number of files that have been generated. Open a few of them to view what has been captured through IE 11 vs. Chrome while testing through Browserstack.

Note that the files have names appended to them descriptive of the browser and viewport dimensions they correspond to.

example1

Example of a screen shot from a specific browser

Make note if the ‘Diff’ directory has been generated. If so, examine its contents. These are your test results – specifically, your test failures.

example2

Example of a diff’ed image

There are plenty of other options to explore with this basic set of tooling we’ve set up here. However, we’re going to pause here and bask in the awesomeness of being able to perform this level of browser testing with just 5-10 lines of code.

Is there More?

This post really just scratches the surface of what you can do with a set of visual regression test tools. There are many more ways to use them, such as enabling mobile testing, improving error handling, and integrating them with your build tools and services.

We hope to cover these topics in a bit more depth in a later post.   For now, if you’re looking for additional reading, feel free to check out a few other related posts on visual regression testing here, here and here.

Auditing your Web App for Accessibility with Lighthouse

If you’ve followed our blog for some time, you’ve likely encountered posts detailing how to engage in various kinds of software testing, from performance to data-driven to security and more.

This post continues that trend with a focus on testing your site for accessibility.

What is Accessibility?

All Access Badge

If you are unfamiliar with the concept, accessibility, within the realm of web applications, is a term that covers a wide range of concerns. The primary focus is codifying and reinforcing design standards that ensure the web is accessible to all, regardless of a person’s ability to consume and interact with it.

This is a very large topic and its definition is beyond the scope of this blog post’s intent. If you’re looking for a starting point however, our friends at W3C.org can help you with that: https://www.w3.org/standards/webdesign/accessibility

If you want to cut to the chase, there are two major accessibility standards you should be aware of (and that are generally considered the path toward compliance): WCAG and WAI-ARIA. Both are maintained by the W3C as international standards and can be reviewed in full on the W3C site.

You can also visit this link for some related coding examples.

Why Accessibility?

Why should everyone be concerned with accessibility design issues?  The short answer can be found in this quote from Sir Tim Berners-Lee, via the W3C:

“The power of the Web is in its universality.
Access by everyone regardless of disability is an essential aspect.”

A more complex answer is that, as the web approaches global ubiquity, we as software creators need to account for the range of needs all users have in accessing the web. This is simply fair play as web technology has matured and become ever more essential to our daily lives. There are also legal ramifications for not taking accessibility into account.

How Accessible is my Site?

If you’ve closely followed the design principles of WCAG 2.0, then your web application front end should be compliant with most modern accessibility software such as screen readers. However, it’s difficult to verify your app’s end-user experience in this regard without actually sitting down with a browser, your app, and some third-party tools, and applying some old-fashioned manual testing.

The challenge with this – you’re likely already thinking – is that this is time consuming and labor intensive. If you are on a small software team or have a very tight deadline (or both), this level of manual testing might be a hard (but necessary) pill for your product team to swallow.

There are 3rd party services that can perform this level of testing for you. A quick google search can get you started down this path. However, if hiring a 3rd party is either logistically or financially out of reach, there are plenty of software tools that can help take the sting out of accessibility verification.

Chrome is your Friend (Really)?

As of this blog post, Google’s Chrome browser accounts for about half of all worldwide browser usage. Chrome, however, is not only popular; it also has a lot of helpful tools under the hood to aid in web development.

In fact, you can use your basic, desktop version of Chrome to perform an accessibility audit of a given site you happen to have open in your browser. To try this trick out, simply do the following:

  • Open your browser to a given site you wish to audit
  • Open the Chrome developer tools console
  • Under the developer console tabs (e.g. Network, Elements, Console, etc.), search for and click on the Audits tab.
  • Click the Perform Audit option, select Accessibility and click Run Audit to begin.

chrome a11y audit screenshot

Look what we found under the hood!

The audit tool in this case will scan the contents of the site at the current URL you have open for markup patterns that support accessibility standards (e.g. proper element naming conventions, CSS settings, etc.) and flag patterns within the markup on the page that violate these standards. These, of course, are based on the WCAG and ARIA conventions mentioned at the beginning of this post.

This is a powerful yet very easy-to-access set of tools for web developers (you can perform several kinds of audits, not just accessibility ones). So easy, in fact, that you might be thinking there must be a way to drive these tools programmatically (and you would be right).

Illuminating Accessibility with Lighthouse!

Once you started the audit within Chrome, you might have noticed a message stating that something called Lighthouse was starting up before the audit executed.

Lighthouse, in fact, is the engine that performs the audit. It’s also available as a library dependency or even as its own stand-alone project.

Lighthouse can be incorporated into your project front end or build cycle in several ways. One of those is to use Lighthouse as a standalone analysis tool to verify the front end of your web app post-deployment (think of this as another set of functional tests with a different, specific kind of output – in this case, an accessibility report).
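
If you would rather drive Lighthouse from Node directly (say, from a test or build script) instead of the command line, a rough sketch might look like the following. It assumes the lighthouse and chrome-launcher packages are installed, and you should check the Lighthouse documentation for the current API surface:

const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function auditAccessibility(url) {
  // launch a headless Chrome instance for Lighthouse to drive
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const flags = { port: chrome.port, output: 'html' };
  // restrict the run to the accessibility category only
  const config = { extends: 'lighthouse:default', settings: { onlyCategories: ['accessibility'] } };

  const results = await lighthouse(url, flags, config);
  fs.writeFileSync('./report.html', results.report);
  await chrome.kill();
  return results.lhr.categories.accessibility.score;
}

auditAccessibility('http://localhost:8080')
  .then(score => console.log(`Accessibility score: ${score}`));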

You can get Lighthouse running locally via command line quickly by doing the following (assuming you have NodeJS installed):

1. Open a new terminal and run: npm install -g lighthouse
2. Once the latest version of Lighthouse is installed, run: lighthouse "url of choice" -GA --output html --output-path ./report.html

Once the above command executes and finishes, you can open your Lighthouse report.html locally in your web browser:

Lighthouse Report

Drill down into the different segments of the report to see which best practices you are following for accessibility (or performance, SEO, and other web standards) and where you should target improvements to your app.

Jenkins Configuration

Jenkins configuration for Lighthouse

To include this report within your CI build process, just add the above command line statements to a shell script attached to your job of choice and make sure to archive the results (if configuring this in a Jenkins task, your job config will probably look something like the screenshot above).
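
As a rough sketch, the shell step itself could be as small as the following, assuming NodeJS is available on the build node and TARGET_URL is a job parameter pointing at your deployed app:

# install the Lighthouse CLI on the build node
npm install -g lighthouse
# audit the deployed app and write an HTML report into the workspace
lighthouse "$TARGET_URL" -GA --output html --output-path ./report.html
# report.html can then be archived via the job's 'archive artifacts' post-build step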

Accessibility Linting?

By default, the report Lighthouse provides is expansive and may take a while to read through.

Fortunately, there are several options to move this form of testing further upstream. In this case, we are going to look at a small, open source project called Lightcrawler.

Lightcrawler allows you to call specific audit aspects of Lighthouse from within your NodeJS project. The benefit of doing this is that you can more specifically target what you are auditing and perform said audits before you ship your project through to the rest of your CI process.

You can install Lightcrawler into your project by simply running npm install --save-dev lightcrawler.

Next, given your Node app is configured with a script that will generate and run the app locally, simply add this to your “scripts” portion of your app’s package.json file:

"audit:a11y": "./node_modules/.bin/lightcrawler --url=http://localhost:8080 --config lighthouse-config.json",

With Lightcrawler you can configure an accessibility audit in a variety of ways that can meet your project needs. For the purpose of this post, we will configure Lightcrawler to run only tests classified within an accessibility audit, and to run all tests of that type currently available. Here is our configuration file:

{
  "extends": "lighthouse:default",
  "settings": {
    "crawler": {
      "maxDepth": 1,
      "maxChromeInstances": 5
    },
    "onlyCategories": [
      "Accessibility"
    ],
    "onlyAudits": [
      "accesskeys",
      "aria-allowed-attr",
      "aria-required-attr",
      "aria-required-parent",
      "aria-required-childres",
      "aria-roles",
      "aria-valid-attr-value",
      "audio-caption",
      "button-name",
      "bypass",
      "color-contrast",
      "definition-list",
      "dlitem",
      "document-title",
      "duplicate-id",
      "frame-title",
      "html-has-lang",
      "html-lang-valid",
      "image-alt",
      "input-image-alt",
      "lable",
      "layout-table",
      "link-name",
      "list",
      "list-item",
      "meta-refresh",
      "meta-viewport",
      "object-alt",
      "tabindex",
      "td-headers-attr",
      "th-has-data-cells",
      "valid-lang",
      "video-caption",
      "video-description"
    ]
  }
}

You can copy the JSON above and save it in your project directory as the config file your audit script points to (lighthouse-config.json in the script above), or use a different name and update the --config path accordingly.

Test out your newly added audit script by running npm run audit:a11y. If everything is in working order, you should see command line output detailing the number of audits executed by Lightcrawler, along with error output for any audits that failed (and, more importantly, why).

Lightcrawler Console Output

This may look similar to linting output from something like Prettier or ESLint. And just as Prettier or ESLint are often run in pre-commit hooks, you could add a Lightcrawler script to your pre-commit hook to better safeguard your application code against accessibility violations.
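
One way to wire that up (not something Lightcrawler provides out of the box) is with a hook manager such as husky; the snippet below is a sketch of the relevant package.json entries and assumes husky is installed as a dev dependency:

"scripts": {
  "audit:a11y": "./node_modules/.bin/lightcrawler --url=http://localhost:8080 --config lighthouse-config.json"
},
"husky": {
  "hooks": {
    "pre-commit": "npm run audit:a11y"
  }
}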

Beyond Auditing and Linting!

This should give you some ideas as to how you can reduce the overall effort required to deliver your app’s web front end while still maintaining accessibility. By no means are the handful of tips presented here deep enough to cover everything under the accessibility umbrella. That task requires significant time and effort but, in keeping with the web’s original intent of inclusivity and connectivity, it is one we as software developers should strive toward.

Getting Started with Dependency Security and NodeJS

Internet security is a topic that receives more attention every day.  If you’re reading this article in early 2018, issues like Meltdown, Spectre and the Equifax breach are no doubt fresh in your mind.

Cybersecurity is a massive concern and can seem overwhelming.  Where do you start?  Where do you go?  What do you do if you’re a small application team with limited resources?  How can you better engineer your projects with security in mind?

Tackling the full gestalt of this topic, including OWASP and more, is beyond the scope of this article. However, if you’re on a small front end team, leverage services like AWS, have gone through the OWASP checklist and are wondering ‘what now?’, this article is for you (especially if you are developing in NodeJS, which we’ll focus on here).

Let’s Talk About Co(de) Dependency:

A tiger and her baby pigs

We’re talking about claiming dependents right?

One way we can quickly impact the security of our applications is through better dependency management.  Whether you develop in Python, JavaScript, Ruby or even compiled languages like Java and C#, library packages and modules are part of your workflow.

Sure, it’s 3rd party code but that doesn’t mean it can’t have a major impact on your project (everyone should remember the Leftpad ordeal from just a few years ago).

As it turns out, your app’s dependencies can be your app’s greatest vulnerability.

Managing Code you Didn’t Write:

Below, we’ll outline some steps you can take to tackle at least one facet of secure development – dependency management.  In this article, we’ll cover how you can implement a handful of useful tools into a standard NodeJS project to protect, remediate and warn against potential security issues you may have introduced into your application via its dependency stack.

Your code has issues

We’ve all had to deal with other people’s problematic code at one time or another

Using Dependency-Check:

Even if you’re not using NodeJS, if you are building any project that inherits one or more libraries, you should have some form of dependency checking in place for that project.

For NodeJS apps, Dependency-Check is the easiest, lowest-hanging fruit you can likely reach for to better secure your development process.

Dependency-Check is a command-line tool that can scan your project and warn you of any modules being included in your app’s manifest but not actually being utilized in any functioning code (e.g. modules listed in your package.json file but never ‘required’ in any class within your app).  After all, why import code that you do not require?

Installation is a snap.  From your project directory you can:

npm install -g dependency-check

dependency-check package.json --unused

If you have any unused packages, Dependency-Check will warn you via a notice such as:

Fail! Modules in package.json not used in code: chai, chai-http, mocha

Armed with a list of any unused packages in hand, you can prune your application before pushing it to production.  This tool can be triggered within your CI process per commit to master or even incorporated into your project’s pre-commit hooks to scan your applications further upstream.

RetireJS:

‘What you require, you must retire’ as the saying goes – at least as it does when it comes to RetireJS.  Retire is a suite of tools that can be used for a variety of application types to help identify dependencies your app has incorporated that have known security vulnerabilities.

Retire is suited to JavaScript-based projects in general, not just NodeJS.

For the purpose of this article, and since we are dealing primarily with the command line, we’ll be working with the CLI portion of Retire.

npm install -g retire

then from the root of your project just do:

retire 

If you are scanning a NodeJS project’s dependencies only, you can narrow the scan by using the following arguments:

retire --nodepath node_modules

By default, the tool will return something like the following, provided you have vulnerabilities RetireJS can identify:

ICanHandlebarz/test/jquery-1.4.4.min.js
↳ jquery 1.4.4.min has known vulnerabilities:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
http://research.insecurelabs.org/jquery/test/
http://bugs.jquery.com/ticket/11290

Retire scans your app and compares packages used within it with an updated online database of packages/package versions that have been identified by the security world as having exploitable code.

Note the scan output above identified that the version of jquery we have imported via the ICanHandlebarz module has a bug that is exploitable by hackers (note the link to CVE-2011-4969 – the exploit in question).

Armed with this list, we can identify any unnecessary dependencies within our app, as well as any exploits that could be utilized given the code we are currently importing and using.

NSP:

There are a multitude of package scanning utilities for NodeJS apps out on the web. Among them, NSP (Node Security Platform) is arguably the most popular.

Where the previously covered Retire is a dependency scanner suited for JavaScript projects in general, NSP, as the name implies, is specifically designed for Node applications.

The premise is identical: this command line tool can scan your project’s package manifest and identify any commonly known web exploits that can be leveraged given the 3rd party packages you have included in your app.

Utilizing NSP and Retire may sound redundant but, much like diagnosing a serious medical condition, it’s often worth seeking a second opinion. It’s equally easy to get up and running with NSP:

npm install -g nsp

nsp check --reporter <report type (e.g. HTML)>

Running the above within the root of your Node application will generate a report in your chosen output format.

Again, wiring this up into a CI job should be straightforward.  You can even perform back-to-back scans using both Retire and NSP.

Snyk:

Yes, Snyk is yet another dependency scanner – and, like NSP, one that is specifically NodeJS oriented – but with an added twist.

Snyk home page

With Retire and NSP, we can quickly and freely generate a list of vulnerabilities that could be leveraged against our app. But what about remediation? What if you have a massive Node project whose dependency issues you can’t patch or collate quickly? This is where Snyk comes in handy.

Snyk can generate detailed and presentable reports (perfect for team members who may not be elbow deep in your project’s code).  The service also provides other features such as automated email notifications and monitoring for your app’s dependency issues.

Typical Snyk report

Cost: These features sound great (they also sound expensive). Snyk’s pricing depends on your app’s size. For small projects or open source apps, Snyk is essentially free; for teams with larger needs, you will want to consult their current pricing.

To install and get running with Snyk, first visit https://snyk.io and register for a user account (if you have a private or open source project in Github or Bitbucket, you’ll want to link your Snyk account with your code management tool’s account).

Next, from your project root in the command line console:

npm install -g snyk

snyk auth

Read through the console prompt.  Once you receive a success message, your project is now ready to report to your Snyk account.  Next:

snyk test

Once Snyk is finished, you will be presented with a URL that directs you to a report containing all the vulnerable dependency information Snyk was able to find for your project.

The report is handy in that it not only identifies vulnerabilities and the packages they’re associated with, but also steps for remediation (e.g. update a given package to a specific version, remove it, etc.).

Snyk has a few more tricks up its sleeve as well. If you wish to dive headlong into securing your application, simply run:

snyk wizard

This rescans your application in an interactive wizard mode that walks you through each vulnerable package and prompts you for the action to take on each one.

Note that Snyk’s wizard mode is not recommended for large-scale applications, as it can be very time consuming.

It will look like this:

Snyk monitor output

Additionally, you can utilize Snyk’s monitoring feature which, in addition to scanning your application, will send email notifications to you or your team when there are updates to vulnerable packages associated with your project.

Putting it all together in CI:

Of course we’re going to arrange these tools into some form of CI job. And why not? Given we have a series of easy-to-implement command line tools, adding them to our project’s build process should be straightforward.

Below is an example of a shell script we added as a build step in a Jenkins job to install and run each of these scan tools as well as output some of their responses to artifacts our job can archive:

#Installs our tools
npm install -g snyk
npm install -g dependency-check
npm install -g retire

#Runs dependency-check and outputs results to a text file
dependency-check ./package.json >> depcheck_results.txt

#Runs retire and outputs results to a text file
retire --nodepath node_modules >> retire_results.txt

#Authenticates use with Snyk
snyk auth $SNYK_TOKEN 

#Runs Snyk's monitor task and outputs results to a text file
snyk monitor >> snyk_raw_monitor.txt

#Uses printf, grep and cut CLI tools to retrieve the Snyk report URL
#and saves that URL to a text file
printf http: >> snykurl.txt
grep https snyk_raw_monitor.txt | cut -d ":" -f 2 >> snykurl.txt
cat snykurl.txt

#Don't forget to set your post-job task in your CI service to save the above .txt
#files as artifacts for our security check run.

Note: depending on how you stage your application’s build lifecycle in your CI service, you may want to break the above script up into separate scripts within the job, or into separate jobs entirely. For example, you may want to scan your app with Dependency-Check on every commit, but save the NSP or Snyk scans for when the application is built nightly or deployed to your staging environment. In that case, you can divide the script accordingly, as in the sketch below.
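
For example, one possible split might look like the following two shell steps; treat this as a sketch rather than a prescription:

# Per-commit job: fast dependency-hygiene checks only
npm install -g dependency-check retire
dependency-check ./package.json >> depcheck_results.txt
retire --nodepath node_modules >> retire_results.txt

# Nightly/staging job: deeper scan plus Snyk monitoring
npm install -g snyk
snyk auth $SNYK_TOKEN
snyk monitor >> snyk_raw_monitor.txt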

Note on Snyk:

In order to use Snyk in your CI instance, you will need to reference your Snyk authentication key when executing this tool (see the API key reference in the script).

This key can be passed as a static string to the job.  Your auth key for Snyk can be obtained by logging in at Snyk.io and retrieving your API key from the account settings page after clicking on the My Account link from Snyk’s main menu.

Next Steps:

This is only the tip of the iceberg when it comes to security and web application development. Hopefully, this article has given you some ideas and hints on where to start in writing more secure software. For more info on how to secure your Node-based apps, here’s some additional reading to check out: