Publishing Load Test Results with Jmeter, Jenkins and S3

A while ago, I published a blog post that presented a tutorial overview of how to use Jmeter for load testing a typical RESTful API.

This post builds upon that original post with handy information on some updated reporting features of Jmeter, as well as a quick dive into how you can better propagate your load test results in a continuous build environment (à la Jenkins).

If you’re completely new to Jmeter, please read my previous blog post, linked above, before proceeding.

For those of you experienced with Jmeter (hopefully you already have some tests you’re wanting to deploy somewhere rather than let them live contentedly on your local machine), get ready, because you’re gonna love this.


Meet the New Jmeter, Same as the Old Jmeter (not really)

The previous load test tutorial covered setting up a load test from scratch using an earlier version of Jmeter.  One of the benefits of using Jmeter, as outlined in the original post, is that it has been around for quite a long time and is well known, stable and free.  However, one of the major cons of using Jmeter is that it’s, well, old – and it has gone a long while without major updates, especially when it comes to report generation.

Well, welcome to Jmeter 3!  It still comes with that same, great, vintage 90s UI, but now with a horde of bug fixes, stability upgrades and – wait for it – native HTML report generation, wooo!  Check out version 3’s documentation on the Apache Jmeter site.

Installation and Setup

So, let’s get down to it!  You can find the latest build of Jmeter 3 here: https://jmeter.apache.org/download_jmeter.cgi

Steps to setting up Jmeter 3 locally are similar to the installation steps covered in my original blog post.  Pre-installation requirements are the same:

  • Java JDK (version 7 or later)
  • Be sure your JAVA_HOME environment variable and path are set before execution.

Functionally, Jmeter 3 appears identical to older versions of the application, but there are a lot of changes under the hood.  I recommend unarchiving and installing Jmeter 3 in its own directory, separate from any previous version of Jmeter you may have installed.

Migrating your tests

If you’re following along with this tutorial and wish to create a new load test from scratch, please follow the test setup steps in the original post’s tutorial; however, be sure to SKIP the 3rd party report and listener portion of the test setup (guess what – you won’t be needing those plugins anymore).

If you have an existing test you wish to simply fire up in Jmeter 3, your mileage will vary in this case.  Some of the 3rd party listeners and other tools you may have used in your setup may be incompatible with the newer version of Jmeter.  If you have any custom listeners embedded in your test I recommend doing the following:

  • Open your original test in your older version of Jmeter
  • Remove 3rd party listeners, samplers or elements from your test (right-click and select remove)
  • Save the updated test as a new test
  • Open the newly created test in Jmeter 3

From this point, you won’t really need any of your original report listeners configured in your test; however, for debugging purposes, I do recommend keeping a simple response listener in your test – one for successful API responses and a separate one for failed responses – just to help diagnose any issues your test might run into.

In a few cases, I’ve found that despite removing any 3rd party plugins, some tests still have compatibility issues with Jmeter 3.  Sadly, the easiest course of action is to rebuild the test from scratch.  Such is life (sigh).


Executing Tests with Report Generation

Given you have your initial test configured for Jmeter 3, let’s start up the app and view it in the new UI (though it looks like the old UI).  Once you start Jmeter from the command line, you’ll see a message like this in your terminal:

================================================================================
Don't use GUI mode for load testing, only for Test creation and Test debugging !
For load testing, use NON GUI Mode & adapt Java Heap to your test requirements
================================================================================

You don’t say!?  If you’re a long-time user of earlier versions of Jmeter, you’re probably thinking, “Well, no duh!” right?

Open your load test in the UI and give it a look-over to make sure it’s configured the way you intend.  It should be minimal: a simple thread group, your HTTP samplers, your results tree report and any CSV data resources you might need.

Your test may look something like this:

Next, save your test and close Jmeter (we’re now done with the Jmeter UI).

Now, we are going to use Jmeter’s command line interface to execute the load test and generate the report for the test.

Do the following:

Open your CLI terminal and navigate to your Jmeter 3 bin directory.  Execute the following command:

jmeter -n -t <test JMX file> -l <test log file> -e -o <Path to output folder>

Be sure to set the ‘test JMX file’ argument to the path where your load test’s .JMX file resides, relative to the Jmeter bin directory.

The test log file should be an arbitrary log file path/name that currently does not exist.

The output folder path is where your report will be written to.  This must be an empty directory.
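
For example, a full invocation might look something like this (the file and directory names here are hypothetical):

jmeter -n -t ./tests/loadtest.jmx -l ./logs/loadtest-results.jtl -e -o ./report-output

Here, -n runs Jmeter in non-GUI mode, -t points at your test plan, -l names the results log, -e tells Jmeter to generate the HTML report at the end of the run and -o sets the directory the report will be written to.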

A couple of notes:

If you configured your test to run for a set period, just sit back and wait.  The report will be generated in the output directory you provided once the test execution finishes.

If you configured it to run in perpetuity – you will need to formally stop the test before report generation will execute (simply killing your terminal session won’t help you here).

To stop your Jmeter test from the command line, simply open a new terminal window and enter one of the following commands:

Control + . – This stops Jmeter, halting any active listeners.

Control + ,  – This sends a shutdown command to Jmeter – it will wait until all active threads have finished their current tasks before closing up shop (this is the recommended method, unless you’re impatient or there’s an emergency – like, oh, you forgot you pointed your test at production).
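
If the key combinations give you trouble, Jmeter’s bin directory also ships with helper scripts that send the same signals to a running non-GUI instance (use the .cmd versions on Windows):

./shutdown.sh    # graceful shutdown – waits for active threads to finish (like Control + ,)
./stoptest.sh    # abrupt stop (like Control + .)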


Once your test has gracefully exited, navigate to the output folder you specified and open your newly minted report in your web browser of choice.

Jmeter 3 HTML report

Pretty sweet, huh?

If you haven’t seen Jmeter’s dashboard report before, you’re probably thinking, “golly, there’s a lot of stuff here” – and there is.  The dashboard report contains a pretty wide variety of data on your test execution.  Enumerating all of that here would be exhausting; if you’re interested in all the gory details of Jmeter’s dashboard metrics, check out the in-depth explanation here:

https://jmeter.apache.org/usermanual/generating-dashboard.html

Show and Tell (mostly show)

So now you have this fancy load test report, but hoarding all this metric goodness on your local development workstation is no fun for the rest of your team.  That’s OK, because if you have access to a continuous integration build environment (à la Jenkins), the rest of this blog post will walk you through taking your Jmeter test and throwing it into a Jenkins build, allowing you to not only run your test remotely but also save and propagate your results to the rest of your agile team.

Make your test available

First things first: you’re going to need to take your Jmeter test, get it off your local machine and into some form of web-accessible delivery.  If you have AWS access, you could place your .JMX files and resources into a bucket and be done with it.  In our case, I recommend creating a private Github repository for your tests and committing your Jmeter working directory, tests and resource files to that repository.  There are a couple of reasons for this:

  • It’s easy to access a given Github repository with Jenkins
  • You can manage updates and versions of your tests just like source code
  • You can manage variants of your load tests using Github’s branch management features

That said, before tossing your tests into Github’s gaping maw, here’s an important note regarding information security:

Depending on your type of load test, you may have some API or other service access keys stored in your test.  You don’t want to check API or other access keys into your repository (even if it is marked private).  But don’t just take my word for it:

https://softwareengineering.stackexchange.com/questions/205606/strategy-for-keeping-secret-info-such-as-api-keys-out-of-source-control

If you’ve already committed your tests to your repository, still have some software keys in there and are thinking, “Oh boy… What now!?”, here’s a link to some notes on how to extract sensitive values from your test or code using some handy features of Git:

https://gist.github.com/derzorngottes/3b57edc1f996dddcab25
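
As a quick illustration of the kind of history rewrite described in that gist (the file path here is a placeholder, and you should read up on the caveats before rewriting history on a shared repository):

git filter-branch --index-filter \
  'git rm --cached --ignore-unmatch config/api-keys.properties' HEAD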

Getting Up and Running with Jenkins

Jenkins is one of the most widely used CI build tools today.  As such, we’re going to cover how to set up a Jenkins job to run your load tests.  If you’re using some other build system (e.g. TeamCity), this walk-through won’t be as pertinent, but essentially the tasks will be the same.

Step 1: Get access to Jenkins:

Assuming you already have a Jenkins build server of some sort available to your organization, let’s log into it.  If you don’t have access to Jenkins, talk to your friendly neighborhood dev ops engineer.  Note that for this walkthrough, you’re going to need some level of administrator access to set up everything we need to execute our tests.

Step 2: Create and Setup your Job with Parameters:

Do the following:

  • From the Jenkins home page, choose a directory you wish to run your load test jobs from (or create one).
  • Click the “New Item” link and then choose “Freestyle Project”
  • Give your project a name and description
  • Choose the “This project is parameterized” option – we’ll use parameters to allow you to configure your load test if needed.
  • Select the “New string parameter” option and create a new parameter named, “thread”
  • Set the default value to whatever you want your test’s default number of threads to be (e.g. 40).
  • Create another string parameter and name it “ramp” – set its default value to the number of seconds you wish your test to take to ramp up to its maximum load.

If your test is going to loop, create another string parameter named, “loops” and set its default, numeric value.

If you have an API key (or keys) your test needs in order to execute, as mentioned above, and you have already removed said keys from your test (substituting a placeholder value), you can create additional string parameters here (naming them key1, key2, etc.) and set your API key(s) in the default value fields.  Granted, you are propagating your API key here, but this is a much better practice in terms of security than leaving keys in a potentially publicly accessible repository, as your Jenkins instance should be private and protected.
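
For those parameters to actually affect your test, your test plan needs to read them as Jmeter properties via the __P function, and your execution command needs to pass them along (shown in the Execute shell step later on).  A minimal sketch of the Thread Group fields, using the parameter names created above with assumed defaults:

Number of Threads (users):  ${__P(thread,40)}
Ramp-Up Period (seconds):   ${__P(ramp,60)}
Loop Count:                 ${__P(loops,1)}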


Step 3: Git Jenkins to Talk to Git:

While still in the Jenkins job configuration screen, under the Source Code Management section, select “Git” and enter your repository’s URL alias into the Repository URL field.

If this is your first time doing this, Jenkins expects a specific format for your Github URL strings.  It should look something like this:

https://github.com/<your main repository name>/<your project name>

If you’re using a publicly accessible Github repository, skip the next step and just enter your preferred branch name (or */master if you don’t need a specific branch).

Next, select the appropriate credentials option from the relative drop down menu to allow Jenkins to properly introduce itself to your Github repo (if you’re having trouble with this, ask your dev ops engineer or Jenkins server admin for help – and maybe do something nice for them while you’re at it like get them a thank you card or some cookies or something, because at this point, you’d be out of luck without them, right?).


Step 4: Build Settings:

You can kick off your tests manually; however, it’s probably better to set up a timer or some other method to handle this for you.  Try the following:

Under the Build Triggers heading, check the Build periodically option

Enter the time range you wish the build to trigger at (this will be in Cron notation – which is tricky if you’re seeing it for the first time).


This article does a decent job explaining Cron’s formatting: http://www.nncron.ru/help/EN/working/cron-format.htm
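
For example, a Build periodically schedule that kicks off the test around 2 a.m. every weekday might look like this (Jenkins’ H token spreads the exact minute across the hour to avoid load spikes):

H 2 * * 1-5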

Now, click on the Add build step option and from the dropdown menu, select ‘Execute shell’.

Here, we will define the command we will use to execute the Jmeter test once Jenkins pulls the test resources from Github.  This will be similar to the shell command we used to run Jmeter locally, as described above.
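
As a sketch, assuming you committed your Jmeter working directory to the repository as recommended above, so that the bin directory sits at the root of the Jenkins workspace (the test file name is hypothetical):

# $thread, $ramp and $loops are the job parameters, which Jenkins
# exposes to the shell as environment variables.
cd bin
rm -rf target loadtest-results.jtl  # report output dir must be empty; old log removed
./jmeter -n -t ../loadtest.jmx -l loadtest-results.jtl \
  -Jthread=$thread -Jramp=$ramp -Jloops=$loops \
  -e -o target

The -J flags set the Jmeter properties that the __P functions in your test plan read, and writing the report to bin/target lines up with the sync step in the next section.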


Syncing to Amazon Web Services

Now we need to make sure we’re saving our generated report somewhere so the rest of our team can review it.  Jenkins has a Jmeter performance plugin that can archive our results.  Alternatively, we could just have Jenkins archive the output directory Jmeter generates the report in (though handling all the recursive directories it builds might be a pain).

Given you have an AWS account, you have the option to sync your reports to an AWS bucket and host them like a static web page.  This method is beneficial if you need to expose your load test results to team members or other parties who do not (or should not) have access to your build systems (e.g. product managers, support team members, etc.).

If you don’t have access to AWS, aren’t sure you have access to AWS or may have issues with your build server communicating to AWS, once again, now is a fine time to make friends with (*cough* bribe *cough*) your friendly neighborhood dev ops engineer.

Let’s say you’ve taken care of that and you and your dev ops guru are best buddies now.  Next, you’ll need to further configure your Jenkins job by adding another Execute shell build step.  Make sure this one is ordered sequentially after your Jmeter test execution step.

In the Execute shell field, simply add a command to change to the output directory your report will be in (defined in your test execution command) and use the AWS sync command to send your HTML report to AWS:

cd bin/target
aws s3 sync . s3://mybucketroot/mybucketname

Save your Jenkins job.  You’re done (with Jenkins at least).

Now, in the above step, we are assuming you have a bucket already created in AWS where you can sync your data to.  You’re also going to need access to modify how the bucket works in terms of its permissions.

Making Your Test Results Readable from the Web:

By default, your AWS bucket is out there on the web; you can read from it and write to it, but your other team members will have trouble reading the HTML report from that location.  We need to configure the bucket so that it will display its contents like a static web page.  Here’s how:

Log into your AWS account and access your specified bucket.

Click on the bucket and select the Properties option

Click the Static website hosting panel


Enable the static hosting option

Specify your index.html file location (this should be the relative path to the main HTML file for your uploaded report).
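
If you’d rather script this step, the same configuration can be applied with the AWS CLI (the bucket name is the placeholder used earlier; the index document is a key suffix, so index.html will resolve under your report’s prefix):

aws s3 website s3://mybucketroot --index-document index.html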


Save your changes and boom – now you’re done.

Save and run your job.  If your account permissions are all set up correctly, your test should execute and the test results will be uploaded to the specified AWS bucket.

From here, simply access your AWS bucket using the following styled web link:

https://s3.amazonaws.com/myBucketRoot/myBucketName

Your test results should now be viewable, just as if they were a static web page, by anyone on your team.

Wrapping it Up:

Now that you have your load tests residing in a Github repository, feel free to leverage Github’s branching features to allow you to manage multiple iterations of your load tests:

Commit different variations of the same test to new branches

Use Jenkins’ copy feature to copy your existing job and point it at your different branches, allowing you to execute multiple test types.

Use Jmeter’s UI locally to copy/edit your existing tests to make additional iterations.  Save these to your repo and commit them just like you would any other coding project.

Again, use Jenkins’ copy feature to create variations of your test run jobs – this time, just edit your shell command to execute your desired .JMX test file for a given execution.

Thanks for reading, and I hope this gives you further ideas and insight into how you can leverage Jmeter to improve your software projects through load testing.

Database Migration


Who Are We?

The Ad Management team here at Bazaarvoice grew out of an incubator team. The goal of our incubator is to quickly iterate on ideas, producing prototypes and “proof of concept” projects to be iterated on if they validate a customer need. The project of interest here generates reports based on aggregations of event data gathered from several other teams at the company. As our project gained traction, it grew in size and scope, eventually leading to the need to revisit some of the design decisions made in the prototyping phase. Specifically, we found the original database system we chose, EmoDB, to not meet our needs as our requirements evolved.

Why Migrate?

When this project was started, it began as a prototype designed to get the project rolling as quickly and easily as possible.  The initial team chose EmoDB since they were familiar with the in-house technology from their other projects and it fit our initial needs.  As the project gained traction and we had more data to operate against, we encountered scalability issues, initially resolved with caching and some refactoring.  We found that we were querying EmoDB as if it were a typical relational database, when it’s not actually designed for that use case.  (EmoDB is an eventually consistent JSON blob store with a change notification databus that spans multiple AWS AZs and Regions.  EmoDB powers many of our solutions at Bazaarvoice and is now open source, available at https://github.com/bazaarvoice/emodb.)

We chose to switch to MySql to leverage the Relational Data Model for rolling up aggregations of data we collect and calculate. We ran into problems previously when we retrieved whole documents to perform aggregations on our data, leading us to decide that a technology that is optimized for relational models would suit the project much better.

How to Migrate?

Since our project already had trained users by the time we wanted to migrate database systems, we needed to design our migration with a no-down-time approach; “seamlessly” changing out the back-end implementation for our users. We also made these transitions configurable, such that we wouldn’t need to make one large switch to master from the new system, but we could choose which services were ready to be cut over to the new data back-end.
The following image is our design document that describes how we planned our migration. On the left is our origin code base named “legacy”. On the right was the proposed design for our new service stack for the migration. Inserted into the middle is the “Service Facade” where we intended to run our quality assurance against live data between the legacy technology stack and our new technology stack.

[Design document: ad-management migration to MySql-based stack]

How to Maintain Data Consistency?

Depending on the size of the data being diffed and migrated between databases, it can be expensive to run the necessary migrations.  Our solution was to write specific tasks that either backfilled data or directly migrated data sets to the new data source.  This allowed us to smoke test that our services were working without expending large amounts of time or money finding bugs along the way.  As our confidence grew in our custom tooling and services, we backfilled and migrated larger chunks of data, until we had migrated everything necessary to master from our new service.

What is a Service Facade?

The service facade layer is responsible for executing the respective operations out of the legacy and new stacks. This is where we placed our diffing logic to compare the results returned from Emo and Mysql for the same operation. The facade returns data from the pre-defined configured stack. This meant that certain areas of the application could be sourcing from Mysql, while other areas, that we weren’t confident in, continued to source from Emo. For example, our CampaignRoiReportBuilderServiceFacade written in Scala looked something like this:


class CampaignRoiReportBuilderServiceFacade @Inject()(
private val campaignRoiReportBuilderServiceLegacy:CampaignRoiReportBuilderServiceLegacy,
private val campaignRoiReportService:CampaignRoiReportService,
private val campaignConfig: CampaignConfiguration,
private val facadeDiffTool: FacadeDiffTool) {
...
  def buildReport(...): Future[Option[CampaignRoiReport]] = {
    val roiReportFromEmoDbFuture: Future[Option[CampaignRoiReport]] = campaignRoiReportBuilderServiceLegacy.buildReport(...)
    val roiReportFromMySqlFuture: Future[Option[CampaignRoiReport]] = campaignRoiReportService.buildReport(...)

    // Extract data from scala futures
    for {
      roiReportFromEmoDbMaybe <- roiReportFromEmoDbFuture
      roiReportFromMySqlMaybe <- roiReportFromMySqlFuture
    } {
      // Pattern matching to extract values from the scala Options
      (roiReportFromEmoDbMaybe, roiReportFromMySqlMaybe) match {
        case (None, None) => // this is an impossible case, but listed to avoid a compilation warning
        case (Some(_), None) => LOG.warn("/*Report missing/mismatched data*/")
        case (None, Some(_)) => LOG.warn("/*Report missing/mismatched data*/")
        case (Some(roiReportFromEmo), Some(roiReportFromMySql)) =>
          val mismatches = facadeDiffTool.campaignROIReportLegacyDiff(roiReportFromEmo, roiReportFromMySql)
          if (mismatches.nonEmpty)
            LOG.warn("/*Report missing/mismatched data*/")
      }
    }

    // This is how we configure which source we return to the resource.
    campaignConfig.masteringFrom match {
      case EmoDb =>
        roiReportFromMySqlFuture.onFailure {
          case e: Throwable => LOG.warn("Failed to build an ROI report on the MySql side", e)
        }
        roiReportFromEmoDbFuture
      case MySql =>
        roiReportFromEmoDbFuture.onFailure {
          case e: Throwable => LOG.warn("Failed to build an ROI report on the EmoDb side", e)
        }
        roiReportFromMySqlFuture
    }
  }
}

The original resource classes will be modified to call from the new facade layer, but no other functionality should change. If constructed properly, the facade layer will act in the same way as the original service because the facade mimics the public functions available in the original service class. These duplicated functions will call to the methods from the legacy service class as well as the new service. With the responses from both the legacy and new services, the facade layer can make an assessment on the differences between the two service stacks. To report our differences such that we could be notified during API usage, we would log them out to our log management and monitoring system.

How Did We Capture Mismatches?

Logging was a large concern of ours. We knew that there would be many differences per call while we were debugging our new service stack. As an example, on one call, we reported 2000+ differences. We wanted to compose all differences into one log per call in a meaningful way. For this, we wrote custom diff tooling that would return differences in the data as sets of MismatchedField classes.


case class MismatchedField[T](name: String,
                              legacyValue: T,
                              newValue: T)


This templated class will hold the values returned from both the legacy service (legacyValue), and the new stack’s service (newValue), as well as some meaningful tag with which to identify where this mismatch came from (name). We would then compose all mismatches for any given call into a single log through our custom diff tool. Every function within our custom diff tool returns Set[MismatchedField[Any]]. We can then compose each set into a single set of differences such that we can use only one log call to write out the whole set of differences in one log entry.
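
A minimal sketch of what one of those diff functions and their composition could look like (the report fields shown are illustrative, not our actual data model):

def diff[T](name: String, legacyValue: T, newValue: T): Set[MismatchedField[Any]] =
  if (legacyValue == newValue) Set.empty
  else Set(MismatchedField[Any](name, legacyValue, newValue))

def campaignROIReportLegacyDiff(legacy: CampaignRoiReport, updated: CampaignRoiReport): Set[MismatchedField[Any]] =
  diff("impressions", legacy.impressions, updated.impressions) ++
    diff("spend", legacy.spend, updated.spend) // ...one diff call per report field

Because every function returns a Set, results compose with ++, and the final set can be serialized into a single log entry.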

An Interesting Finding:

One of the most interesting findings from this migration wasn’t the bugs that came up while constructing our new service stack for the new database, but the bugs we found in our original database stack.  One take-away from this was to make sure to investigate any mismatches found down to the source data.  We found during the code migration process that some of our legacy functionality was written incorrectly.  As an example, in our legacy code we were storing some aggregated data in sets, unintentionally masking duplicate data.  When re-implementing these same aggregations for our new service stack, they were correctly implemented as a list, producing a mismatch in the data.  Through our investigation, instead of simply matching the data to how our legacy service worked, we went back to our origin data and ran the calculations manually through the Scala REPL.  In doing so, we found that the new service was correct where our legacy code was wrong.  Fortunately, the bug within our legacy code was a simple fix.  We implemented the fix within the legacy code and our mismatch disappeared.

Other Take-aways:

An important team take-away was to be very upfront and declarative about the work the migration would require.  Our investigation into the migration not only involved setting up a new technology stack for MySql, but also changing our build tool from Maven to SBT, introducing a Flyway + Jooq plugin to enforce type safety throughout the migration, designing a new data model (which was ultimately the driving factor for doing the migration in the first place), as well as upgrading our code to the newest Scala version to leverage all of the previous changes.  Ultimately, we severely underestimated, and under-ticketed, the work necessary to start our migration.

It is also important to keep in mind that every team is different and has different needs. When having conversations about database migrations, take the time to do a proper risk assessment for the work ahead. Keep these conversations going during the migration as well. As a team, we ended up prioritizing new feature requests and non-migration related bugs because the migration felt orthogonal to our production environment.

A further takeaway is that we could have saved ourselves a lot of time if we had more realistically assessed our users. In retrospect, the users of these reports were internal and would have been more lenient with smaller service outages, which would have allowed us to leverage our configurable services to migrate much sooner. At the expense of stability, we believe that we could have had a quicker migration by forcing ourselves to fix problems forward, instead of maintaining our legacy code for as long as we did. Still, most scenarios don’t have this luxury and we hope the façade based approach is of help to you.

Why the value of hackathons goes beyond free pizza


About six months ago, we shared Why We Hackathon.  At Bazaarvoice, we host a company-wide hackathon twice a year, and our next one kicks off this week.  My previous post primarily focused on the people and company culture aspects of running a hackathon.  In lieu of writing “Synergizing Innovation With Disruptive Hackathons”, this time around I’d like to share the real value we’ve seen from our hackathons, as well as some ideas on how to realize that value with your own hackathon.

Leveling-up our offerings

If you participate in a hackathon at Bazaarvoice, it is likely that what you create will somehow touch our clients. Many of the products and features we offer today got their start from a hackathon project.

Curations, our social media curation platform, started from one of our public hackathons. Bazaarvoice engineers working with engineers from FeedMagnet, one of the companies invited to participate, created the initial prototype of displaying reviews and social content together. Since then, we’ve put in years of work to build a system that can collect, manage, and display content at tremendous scale, but it all started with that original hackathon project.

Recently, we’ve seen many hackathon projects focused on helping consumers find the perfect product. Most of these took shape as various forms of personalization, like product recommendations, which we’re now offering as a limited-availability product. A different version of this technology is a new capability we’ve been experimenting with called Post Interaction Notification. In short, instead of showing products it thinks someone is interested in, it asks for reviews about products they’ve purchased. You guessed it — also a past hackathon project.

Our successful projects aren’t limited to display-specific projects. Our last hackathon saw projects that used different machine learning technologies to improve our ability to identify relationships across products, shopper profiles, and related consumer-generated content (CGC). For example, one project solved the problem of widely varied product titles and descriptions in our product catalog. If searching for ‘60” Flatscreen TV’, our product matching services would previously only return results that exactly matched those search terms. Now, thanks to the hackathon project, our system understands what 60”, flatscreen,  and TV actually mean. It can then find similar, but semantically different, items like “60 inch Flatscreen Television” or “60in. LCD TV”. This capability dramatically improves our personalization and syndication services by finding the same products across thousands of varied product catalogs.

While not every hackathon project grows into a full-fledged solution, they continually temper and sharpen the direction of our offerings and often improve our clients’ CGC programs.

Improving our client experience

Projects of this variety range from seemingly simple improvements to complex, long term initiatives. One project adored by both our clients and employees (we use these tools too!) was the ability to hop between different client accounts available to the logged-in user in the administration portal; simple, but a big time and frustration saver.

Speaking of administration tools, we’ve been building a new platform to build and deliver consistent tooling across our capabilities.  Many of the last hackathon’s projects focused on providing actionable insight into the health of implemented Bazaarvoice capabilities on clients’ websites, as well as supporting aspects like product-catalog imports and email-based review solicitation.  These projects have shaped the solutions we’re building to proactively communicate the technical health of a client’s CGC program.


How we hackathon

Normally, you’ll read “just give people pizza, beer, and games for a few days” as the recipe for a successful hackathon. In my opinion, those things are fantastic additions to any hackathon. However, it’s the people that make a hackathon great. To create success, empower people to find creative solutions to real problems.

We treat product development as a company-wide exercise.  Having this sort of inclusive culture means the individuals building, supporting, marketing, and selling our solutions are actively involved with client feedback, product feedback, and market opportunity across all of our functions.  Providing a few days to create solutions to the problems seen over past months unleashes incredible ideas and creativity.

Secondly, this isn’t a top-down driven event.  We involve a handful of hackathon participants and their peers to help plan and execute our hackathons.  They make sure every activity, piece of swag, and communication is planned and ready to grow and sustain a big list of happy attendees.  After all, the primary focus of the event is to create space and time to be creative, have fun, and build stronger relationships.

Logistically, we made a change last hackathon that greatly improved the involvement of employees who weren’t participating on a hackathon team.  It can be a bit challenging to stay focused with 50 teams demoing projects on stage one after the other, let alone remembering what you liked most in the final voting.  Instead, we tried a science-fair-style demo to promote walking around, engaging with various teams, and “investing” in the most promising projects using tickets.  This boosted participation from those who aren’t interested in hacking but are interested in seeing and voting on the end results.  For this month’s hackathon, we’ve got trifold panels, glue sticks, and markers to really get into the science fair spirit.

Synergizing innovation with disruptive hackathons

It’s not about buzzwords and forced team bonding. If you’re looking to run a hackathon, focus on the people — both those on hackathon teams and those who just want to spectate. If you’re looking to participate in a hackathon, try to find a real problem to solve. Regardless of your role, you’ll be pleasantly surprised at the creative, brilliant output. And at the very least, you’ll get some free pizza. 

Context and Higher Order Components: Two Immediately Applicable Topics from the Advanced React Workshop

Thanks to Bazaarvoice I recently attended an “Advanced React Workshop” put on by React Training and taught by Ryan Florence, one of the creators of React Router.

Ryan was an engaging teacher and the workshop was filled with memorable quotes. Here are some highlights:

  • The great conundrum of accessibility is that learning it is not accessible.
  • Like forceUpdate, never use context alone. Always bring a friend with you.
  • Good abstractions can always recreate other abstractions.
  • Is that a Justin Bieber song?

The training took place over two days in East Austin and I learned a ton of stuff that I was able to immediately put to use in my day-to-day projects. Let’s dive right in.

Context

In addition to the familiar props and state, React components also have access to a magical thing called context.

Context is for when your application has a hierarchical structure and needs to pass data from a parent component down to another level of child component (child, grandchild, great grandchild, etc.). As the React documentation puts it,

“In some cases, you want to pass data through the component tree without having to pass the props down manually at every level. You can do this directly in React with the powerful “context” API.”

Before we move on to the context API itself, let’s make it a little more clear why you would want to use it.

Consider the previous example of a parent node with a child, grandchild, and great grandchild. In order for the parent to pass a property (via props) down to the great-grandchild, it would first need to pass the property through child and grandchild. Note that there is no other reason for child and grandchild to care about that property.

As you can imagine, the desire to eliminate this kind of cruft increases as your project grows in size and complexity.

Context to the Rescue

context helps by passing properties through the tree automatically, so any component in the subtree can access them.

When Your Component is Providing Context

Components (like parent in the example above) define a static childContextTypes object that defines the properties that they will put on context. childContextTypes is very similar to the propTypes object that we’re already familiar with.

Additionally, components that are putting properties on context must actually provide their values, and do so via the getChildContext component member function. This function should return an object with the same structure as defined in childContextTypes.

For example:

// Parent.js

static childContextTypes = {
  rating: React.PropTypes.string
}

// Just like a React lifecycle method.
// Don't worry about when this actually gets called.
getChildContext () {
  return {
    rating: '4.5'
  }
}

When Your Component is Consuming Context

To continue the example, your great-grandchild component now must consume this property. It can do so by defining a static contextTypes object that describes the context properties it expects, and can then pull the properties off of this.context directly.

// GreatGrandchild.js

static contextTypes = {
  rating: React.PropTypes.string
}

render () {
  const { rating } = this.context;
  // Do stuff with rating.
}
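
Putting the two together, the intermediate components need no knowledge of rating at all (component names follow the examples above; Child and Grandchild are assumed to be plain pass-throughs):

// App.js

<Parent>
  <Child>
    <Grandchild>
      <GreatGrandchild /> {/* reads rating from this.context */}
    </Grandchild>
  </Child>
</Parent>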

Common Use Cases

Great use cases for context include localization, theming, etc. Basically any time you have things you don’t want to have to pass down through a bunch of components that don’t need to know about them. In fact, React Redux uses context to make its store available to all of your project’s components – imagine how cumbersome it would be to pass the store to every single component!

Common Pitfalls

Name collisions are a common pitfall.  Consider the case where a component has both a parent and a grandparent component that provide a rating property on context.  The suggested workaround in this case is to name your context after your component:

class ParentComponent extends React.Component {
  static childContextTypes = {
    parentComponent: React.PropTypes.shape({
      rating: React.PropTypes.string
    })
  }
}

class GrandParentComponent extends React.Component {
  static childContextTypes = {
    grandParentComponent: React.PropTypes.shape({
      rating: React.PropTypes.string
    })
  }
}

Then your child component could differentiate the two: this.context.parentComponent.rating versus this.context.grandParentComponent.rating.

User Beware!

A post about context would not be complete without pointing out that the very first section in React’s documentation about context is titled “Why Not To Use Context”, and it includes dire warnings like:
“If you want your application to be stable, don’t use context.”, and “It [Context] is an experimental API and it is likely to break in future releases of React.”

Ryan did point out to us however, that even though the documentation warns that the Context API is likely to change, it is one of the only React APIs that has stayed constant 😛

Higher Order Components

In software, a higher order function is a function that returns another function. Consider:

function add (x) {
  return function (y) {
    return x + y;
  }
}

const addBy2 = add(2);

addBy2(10); // 12

Similarly, a Higher Order Component is a function that returns a (React) Component.

/**
 * A higher order component that facilitates the creation of
 * container components that simply wrap divs around their children.
 *
 * @param  {String} displayName - The displayName of the returned component.
 * @param  {String} className   - The class name to use for the wrapper div.
 * @return {Component}          - A React component.
 */
export function build(displayName, className) {
  const ContainerComponent = (props) => (
    <div className={className}>{ props.children }</div>
  )

  ContainerComponent.displayName = displayName;
  ContainerComponent.propTypes = {
    children: React.PropTypes.node
  };

  return ContainerComponent; 
}

// Usage:
export const WidgetHeader = build('WidgetHeader', 'bv_header');
export const WidgetBody = build('WidgetBody', 'bv_body');
export const WidgetFooter = build('WidgetFooter', 'bv_footer');

Higher Order Components are great for reducing redundancy across components, and can also be used to “wrap up” a lot of complexity and encapsulate it, which allows the rest of your components to stay simple and straightforward. If you were familiar with the old React mixins, Higher Order Components replace a lot of what mixins did.

That’s All, Folks!

Context and Higher Order Components are just two topics I was able to immediately integrate into my React projects here at Bazaarvoice. Hopefully you will find them useful too!

My Hacktoberfest

This past October I participated in an awesome Open Source event called “Hacktoberfest”, sponsored by Digital Ocean and GitHub.

Hacktoberfest is a month-long celebration of Open Source where developers are encouraged to contribute to the community. Participation is easy:

  1. Pull requests can be made in any GitHub-hosted repositories/projects.
  2. A contribution can be anything—fixing bugs, creating new features, or updating and writing documentation.

Further, if you opened four pull requests in Open Source repositories between October 1st and October 31st you would win a cool Hacktoberfest t-shirt and other swag.

Maintainers of Open Source projects (including some here at BV) were asked to tag open issues with “Hacktoberfest” if they wanted help with that issue during the event.  GitHub provides the ability to search issues based on tags, so it was really easy to find cool projects and issues to work on.

I personally started off small, helping one team track down a bug with their JSON files, and another finish a database for movies used by their front-end application (similar to IMDB).

Next, I found a Hacktoberfest issue in the New York Times’s kyt repository.  Kyt is a build, test and development tool for advanced JavaScript apps.  I ended up helping them fix a bug in one of their setup scripts.

Then came my Hacktoberfest pièce de résistance.

In my 20% time here at Bazaarvoice I had been playing around with browser extensions / add-ons, specifically in an effort to make implementing our products easier for our clients. So when I saw that Mozilla and the Mozilla Developer Network (MDN) were asking for someone to create a browser extension for them, I was immediately interested.

They noticed that a popular type of extension being authored was what they were calling a “replacement” add-on, something that would replace words or phrases in a web page with alternate words, or images, etc.

In their Web Extensions Examples repository, they were looking for an example of such an add-on that they could turn into a “How to Write your First Add-On” tutorial. Thus their two main requirements were:

  1. The code must be simple and easy to follow for beginners.
  2. The code must be performant because it would likely be copied a lot.

Seeing as how readability and performance are two of the main things that we check for in every code review here at Bazaarvoice, this was right up my alley!

I was so excited that I stayed up all weekend to finish the project:

I submitted my pull request, worked with the developers at Mozilla, and was so proud when my Emoji Substitution contribution was merged into their repository. What a rush!

As we traded Hacktoberfest-themed emoji, fixed bugs, and fleshed out their projects, it was really cool to lend my expertise and experience the gratitude of all the teams I worked with – this is what Open Source is all about!

I had a great time participating in Hacktoberfest this year and will definitely do it again next year. You should join me!

I want to be a UX Designer. Where do I start?

So many folks wonder what they need to do to make a career of User Experience Design.  As someone who has interviewed many designers, I’d say the only gate between you and a career in UX that really matters is your portfolio.  Tech moves too fast and is too competitive to worry about tenure and experience and degrees.  If you can bring it, you’re in!

That doesn’t mean school is a waste of time, though. Some of the best UX Design candidates I’ve interviewed came from Carnegie Mellon. We have a UX Research intern from the University of Texas on staff right now, and I’m blown away by her knowledge and talent. A good academic program can help you skip a lot of trial-by-fire and learning things the painful way. But most of all, a good academic program can feed you projects to use as samples in your portfolio. But goodness, choose your school carefully! I’ve also felt so bad for another candidate whose professors obviously had no idea what they were talking about.

Okay, so that portfolio… what should it demonstrate? What sorts of samples should it include? Well, that depends on what sort of UX Designer you want to be.

Below is a list of to-dos, but before you jump into your project, I strongly suggest forming a little product team.  Your product team can be your knitting circle, your best friend and next-best-friend, a fellow UX-hopeful.  It doesn’t really matter, so long as your team is comprised of humans.

I make this suggestion because I’ve observed that many UX students actually have projects under their belt, but they are mostly homework assignments they did solo. So they are going through the motions of producing journey maps, etc., but without really knowing why. So then they imagine to themselves that these deliverables are instructions. This is how UX Designers instruct engineers on what to do. Nope.

The truth is, deliverables like journey maps and persona charts and wireframes help other people instruct us.  In real life, you’ll work with a team of engineers, and those folks must have opportunities to influence the design; otherwise, they won’t believe in it.  And they won’t put their heart and soul into building it.  And your mockups will look great, and the final product will be a mess of excuses.

So, if you can demonstrate to a hiring manager that you know how to collaborate, dang. You are ahead of the pack. So round up your jackass friends, come up with a fun team name, and…

If you want to be a UX Researcher,

Demonstrate product discovery.

  • Identify a market you want to affect, for example, people who walk their dogs.
  • Interview potential customers. Learn what they do, how they go about doing it, and how they feel at each step. (Look up “user journey” and “user experience map”)
  • Organize customers into categories based on their behaviors. (Look up “personas”)
  • Determine which persona(s) you can help the most.
  • Identify major pain points in their journey.
  • Brainstorm how you can solve these pain points with technology.

Demonstrate collaboration.

  • Allow the customers you interview to influence your understanding of the problem.
  • Invite others to help you identify pain points.
  • Invite others to help you brainstorm solutions.

If you want to be a UI designer,

Demonstrate ideation.

  • Brainstorm multiple ways to solve a problem
  • Choose the most compelling/feasible solution.
  • Sketch various ways that solution could be executed.
  • Pick the best concept and wireframe the most basic workflow. (Look up “hero flow”)
  • Be aware of the assumptions your concept is based upon. Know that if you cannot validate them, you might need to go back to the drawing board. (Look up “product pivoting”)

Demonstrate collaboration.

  • Invite other people to help you brainstorm.
  • Let others vote on which concept to pursue.
  • Use a whiteboard to come up with the execution plan together.
  • Share your wireframes with potential customers and see if the concept actually resonates with them.

If you want to be an IX Designer and Information Architect,

Demonstrate prototyping skill.

  • Build a prototype. The type of prototype depends on what you want to test. If you are trying to figure out how to organize the screens in your app, just labeled cards would work. (Look up “card sorting”.) If you want to test interactions, a coded version of the app with dummy content is nice, but clickable wireframes might be sufficient.
  • Plan your test. List the fundamental tasks people must be able to perform for your app to even make sense.
  • Correct the aspects of your design that throw people off or confuse people.

Demonstrate collaboration.

  • Allow customers to test-drive your prototype. (Look up “usability testing”)
  • Ask others to help you think of the best ways to revise your design based on the usability test results.

If you want to be a visual designer,

Demonstrate that you are paying attention.

  • Collect inspiration and media that you think your customers would like. Hit up dribbble and muzli and medium and behance and google image search and, and, and.
  • Organize all this media by mood: the pale ones, the punchy ones, the fun ones, whatever.
  • Pick the mood that matches the way you want people to feel when they use your app.
  • Style the wireframes with colors and graphics to match that mood.
  • Bonus: create a marketing page, a logo, business cards, and other graphic design assets that show big thinking.

Demonstrate collaboration.

  • Ask customers what media and inspiration they like. Let them help you collect materials.
  • Ask customers how your mood boards make them feel, in their own words.

Whew! That’s a lot of work! I know. At the very least, school buys you time to do all this stuff. And it’s totally okay to focus on just UX Research or just Visual Design and bill yourself as a specialist. Anyway, if you honestly enjoy UX Design, it will feel like playing. And remember to give your brain breaks once in a while. Go outside and ride your bike; it’ll help you keep your creative energy high.

Hope that helps, and good luck!

This article was originally published on Medium, “How do I break into UX Design?”

As a software engineer, how do I change my career to DevOps?

At Bazaarvoice, we’re big fans of cloud. Real big. We’re also fans of DevOps. There’s been a lot of discussion over the past several years about “What is DevOps?” Usually, this term is used to describe Systems Engineers and Site Reliability Engineers (sometimes called Infrastructure Engineers, Systems Engineers, Operations Engineers or, in the most unfortunate case, Systems Administrators, which is an entirely different job!). This is not what DevOps means, but in the context of career development, it carries the connotation of a “modern” Systems or Site Reliability Engineer.

There’s a lot of great literature about what a DevOps engineer is. I encourage you to read this interview of Google’s VP of Engineering, as well as Hixson and Beyer’s excellent paper on Systems Engineering and its distinction among Software, Systems and Site Reliability engineers. Although DevOps engineering goes beyond these technical descriptions, I’ll save that exegesis for another time. (Write me if you want to hear it, though!)

Many companies claim to hire or employ DevOps engineers. Few actually do. Bazaarvoice does. Google does, too, although they’re very hipster about it (they called it Site Reliability Engineering before the term DevOps landed on the scene, so they don’t call it DevOps because they had it before it was cool, or something). I don’t know about other companies because I haven’t worked at them (well, I haven’t worked at Google either, but they are pretty vocal about their engineering philosophies, so I’ll take them at their word). But there’s a lot of industry buzzwordium with little substance. This isn’t a jab at other companies (but really, Bazaarvoice is way cooler), it’s just a side-effect of assigning job titles based on pop culture. If you’re really a DevOps engineer, then you already know all of this, and you probably filter out a lot of this nonsense on a daily basis.

But we’re here to answer a specific question: If I’m already a software engineer, how do I become a DevOps engineer?

So, you’re a developer and you want to get in on the ops action. Boy, are you in for a surprise! This isn’t about installing Arch Linux and learning to write Perl. There’s a place for that kind of thing (a very small, dark place in a very distant corner of the universe), but that isn’t inherently what DevOps means.

Let’s consider a few of the properties and responsibilities of DevOps engineering.

A DevOps engineer:

  • Writes code / software. In fact, he is a proper software engineer.
  • Builds tools.
  • Does the painful things, as often as possible.
  • Participates in the on-call rotation (yes, for 2 a.m. production outages).
  • Infrastructure design.
  • Scaling systems (any system or subsystem — networking, applications, load balancers).
  • Maintenance. Like rebooting that frail vhost with a memory leak that no one’s bothered to fix or take ownership of.
  • Monitoring.
  • Virtualization.
  • Agile/kanban/whatever development methodology. It’s not so much that agile is “right.” It’s just the most efficient way to complete a work queue (taking into account interruptions and blockers). A good DevOps engineer has strong opinions about this!
  • Software release cycles and management. In fact, you might even see “development methodology” and software release cycles as the same thing.
  • Automation. Automation. Automation.
  • Designing a branch/release strategy for the provided SCM (git, Mercurial, svn, etc). Which you do have.
  • Metrics / reporting. Goes hand-in-hand with monitoring, although they are different.
  • Optimization / tuning. Of applications, tools, services, hardware…anything.
  • Load and performance testing and benchmarking, including performance testing of highly complex systems. And you know the difference between load testing and performance testing.
  • Cloud. Okay, you don’t really have to have cloud experience, but it can fundamentally change the way you think about complex systems. No one in a colo facility devised the notion of “immutable infrastructure.”
  • Configuration management. Or not. You have an opinion about it. (You’ve surely heard of Puppet, Chef, Ansible, etc. Yes?)
  • Security. At every layer.
  • Load balancing / proxying / replicating. (Of services, systems, components and processes.)
  • Command-line fu. A DevOps engineer is familiar with the tools at his disposal for debugging, diagnosing and fixing issues on one or many servers, quickly. You know how pipes work, and you can count how many records contained some phrase in a log file with ease, for example (see the one-liner just after this list).
  • Package management.
  • CI/CIT/CD — continuous integration, continuous integration testing, and continuous deployment. This is the closest thing to the real meaning of “DevOps” that a Systems Engineer will do.
  • Databases. All of them. SQL, NoSQL, whatever. Distributed ones, too!
  • Solid systems expertise. We’re talking about the networking stack, how hard disks work, how filesystems work, how system memory works, how CPU’s work, and how all these things come together. This is the traditional “operations” expertise you’ve heard about.

Phew! That’s a lot. Turns out, almost all of these skills are directly applicable to software engineering. The only difference is the breadth of domain, but a good software engineer will grow his breadth of domain expertise into operations naturally anyway! A DevOps engineer just starts his growth from a different side of the engineering career map.

Let’s stop and think for a moment about some things a DevOps engineer is not.  These details are critically important!

A DevOps engineer is not:

  • Easier than being a software engineer. (Ding! It is being a software engineer.)
  • Never writing code. I write tons of code.
  • Installing Linux and never touching your favorite OS again.
  • Working the third shift. (At least, it shouldn’t be; if it is, quit your job and come work with me.)
  • Inherently more “fun” than being a software engineer, although you may prefer it, if you’re into that kind of thing.
  • Greenfield. You’ll deal with old stuff in addition to new stuff. But as a good engineer, you care about business value and pragmatism.
  • An unsuccessful software engineer. Really: if you can’t write code, don’t expect to be a good DevOps engineer until you can.

A career shift

Here are a few things you should do to begin positioning yourself as a DevOps engineer.

  • Realize that you’re already an engineer, so becoming a DevOps engineer means you are just moving yourself to a different domain to grow and learn from a different direction.
  • Interview at a company that’s hiring DevOps. If you get hired, you’ll learn the operations side of things fast. Real fast. Or get fired. (Hint: you should disclose your experience honestly!) If you don’t get hired, you’ll learn what is still missing from your resume / experience that’s preventing you from becoming a full-time DevOps engineer. Incidentally, we’re hiring. 🙂
  • Tell your boss you want to become a DevOps engineer at your company. Your boss should help you to this end. If he/she does not, quit. Then come work at Bazaarvoice with me and a bunch of really awesome, super talented engineers working on some really awesome and challenging problems.
  • Obtain practical experience by using your skills as a software engineer to build tools rather than applications. Look at any of the open source projects Netflix has written for examples / ideas.
  • Learn OpenStack or some equivalent infrastructure-level project. (OpenStack has tremendous breadth, which is why I recommend it.) You can do this on your own time and budget. It’s not important whether OpenStack sucks compared to Rackspace Cloud. What’s important is that you understand all of the various components and why they are important. Have a wad of cash lying around? Learn Amazon Web Services or Google Compute Engine instead.
  • Bonus: learn about Apache Mesos and Kubernetes, and why they’re useful / important. Using them is less important than understanding them.
  • Participate in anything your team does involving operations — deployment, scale, etc. (See list above: “What DevOps is.”) If your team doesn’t do any of that (i.e., they send artifacts over to Operations and the Operations team does deployment), go over to the Operations team and sit in on a few deployments. You may be surprised!

Do I need to have deep operations experience to become a good DevOps engineer?

I’ve asked myself the same question. I come from a development background myself and had less than a year of (part-time) experience dealing with operations before becoming a DevOps engineer. (Granted, I had a referral vouching for me, which weighed in my favor.) Despite my less-than-stellar CS/algorithm skills (based on my complete lack of formal computer science education), I’ve had enough experience writing software that I could apply those concepts to systems in a similar fashion. That is, in the same way a software engineer needs to understand at what point his application code should be abstracted in support of future changes (i.e., refactored into smaller components), a DevOps engineer needs to understand at what point a component of his infrastructure should be abstracted in support of future changes (e.g., rebuilding all of his servers and rearchitecting infrastructure that’s already in production, despite the potential risk, so the architecture can scale to business needs). At its core, a good engineer is just as good whether he’s writing software or deploying it. Understanding complex systems and their interactions is a critical skill. But all of these are important for software engineers, whether you’re writing application code or not!

I hope this post helps you in your endeavor to become a DevOps engineer, or at least understand what it means to be a DevOps engineer at Bazaarvoice (as I said before, it may mean something totally different at other companies!). You should get your feet wet with some of the things we do. If it gets you tingly and excited, then come work with me at Bazaarvoice: http://www.bazaarvoice.com/careers/research-and-development/.


This article was originally posted as an answer on Quora. Due to surprising popularity, I’ve updated the article and posted it here.

Cross-Platform Mobile SDK Testing

This Bazaarvoice blog entry is co-authored by Tanvir Pathan as part of a Bazaarvoice internship project on the Bazaarvoice Mobile Team.

Automated testing of native mobile applications has long been a pain point in the world of mobile app development. If you are creating and distributing apps or open source SDKs across two or more major platforms (Android and iOS in our case), you can easily find yourself duplicating efforts to test the same source and business logic across different technology stacks. For example, if you have experienced developers and testers using Xcode for iOS apps, they may tend to automate testing with Instruments Automation, while Android developers and testers may automate with Espresso or UIAutomator. This becomes an expensive proposition: the cost of developing and maintaining separate test suites only grows as your test coverage increases.

Test strategy can also vary depending on the type of mobile app development your shop pursues: native, hybrid, cross-compile, mobile web. Hence, the selection of test tools will vary depending on how you build and deploy apps.

In this blog post, we’ll detail a novel solution to cross-platform testing of our native SDKs, along with some background of other mobile tool offerings. Our solution focuses on cross-platform open source mobile SDK testing utilizing Cordova to wrap our SDKs in a generic JavaScript interface, and Calabash to drive our cross-platform behavioral tests.

If you want to check out the full solution, the Cordova plugin and instructions on how to execute Calabash can be found in our GitHub repository.

How to View Mobile Application Development

Non-mobile app developers typically don’t know the difference between a web app, a native app, and a hybrid app. If you work in any business that supports some kind of mobile solution (and you probably wouldn’t be reading this if you didn’t), it’s really important to understand some fundamental differences. It’s very easy to just throw out the word “mobile” in conversation and not realize there are multiple parts to this elephant!

blindmen

The breakdown below presents four general categories of mobile application development. Keep these categories in mind when talking about “mobile” in general, and don’t fall into the trap of the blind men and the elephant.

Mobile Development Types & Tools

  • Native Application Development – Developers creating purely native apps write in the language supported by their target platform: Swift and/or Objective-C for iOS, and Java (plus C/C++ for lower-level execution) for Android.
  • Cross-compilation – Developers write apps for multiple platforms in a single language, such as JavaScript. Cross-compilation tools take that common language and convert it to the target language of the native platform, so while developers aren’t writing in the native language, the tools create real native apps. Some of the most common tools: Appcelerator (JavaScript), Xamarin (C#), React Native (JavaScript).
  • Hybrid Applications – Hybrid apps utilize Web Views to display content, typically written with common web development technologies such as JavaScript, HTML, and CSS. Hybrid apps will typically have a “bridge” that allows JavaScript code to communicate with the native libraries to do things like access the camera, location services, or contacts. Tools: Cordova (aka PhoneGap) as the application container, with a UI layer of the developer’s choosing: Ionic, Sencha Touch, jQuery Mobile, Onsen, or Framework 7.
  • Web Applications / Mobile Web – A web application isn’t so much an application as a mobile-optimized web site. Hence, you won’t find a web application in the App Store or Google Play; you just fire up your favorite mobile web browser and load the site.

Cross Platform Mobile UI Testing Tools

When developing for native mobile, developers will typically write unit tests to check individual pieces of functionality and business logic, and perhaps even employ mocking techniques to test networking and user interface capabilities without the need for a full application. When it comes to full system testing of complete applications and SDKs, though, making the right selection can be a tough process. If cross-platform testing is your objective and you want to write all your tests in one common scripting language, the options narrow quickly.

While there are platform-specific UI automation frameworks for Android (Robotium, UiAutomation) and iOS (Instruments Automation, Keep It Functional, EarlGrey), there are currently only two (that we are aware of) that allow us to test cross-platform with a common script.

  • Appium – Lets developers write tests for applications without having to add any additional code to the applications. It works with native, hybrid, and mobile web applications.
  • Calabash – Owned and maintained by Xamarin; provides cross-platform testing for native or hybrid apps. Tests can be written in Ruby with Cucumber.

2 Birds, One Stone

kill2birds3

Making the decision to use Cordova and Calabash was fairly easy. First, we already distribute our BVSDK via frameworks and libraries for iOS and Android. Second, we know some of our customers are creating hybrid applications with Cordova. So we immediately thought: what if we could create a test environment that not only tests our SDK deliverables, but also provides our clients with an easy avenue to integrate Bazaarvoice services into their own Cordova app? Win! Win! As well, because we already use Cucumber extensively at Bazaarvoice, we decided to leverage our strong in-house expertise and utilize an automation framework that is internally familiar.

Calabash Unit Tests

Another great thing about using Calabash at Bazaarvoice is that we already have an internal framework developed on top of Cucumber. Because Calabash layers on top of Cucumber, the paradigm and philosophy of writing human-readable test cases still applies. The test cases utilize Behavior Driven Development modeling to add meaning to your mobile app testing.

Let’s say you are creating the same app for multiple platforms. Typically, you would have to write completely different sets of code to run similar tests. With Calabash, this is not the case. You write one set of tests, make slight adjustments depending on the platform in question, and you are done! Best of all, in addition to Calabash being free, the test cases are super easy to write as a developer and even easier to read for others who may be interested in checking out the health of the project.

Needless to say, Calabash provides a lot of benefits for cross-platform testing. Let’s take a look at an example test case from the BVSDK Cordova Plugin project: a simple scenario based on the following app screenshots from the iOS simulator.

bvsdk_build_simulator bvsdk_running_simulator

Say you wanted to count the number of products that were recommended by our Product Recommendations API. If you were doing it manually you would go through the following steps:

  1. Wait for the app to launch
  2. Make sure you receive a success alert and press OK
  3. Click the Recommendations tab
  4. Then count how many products there are and compare them to what you were expecting

Now, how would we code this? Calabash has two essential components: a feature file and a Ruby file. The feature file is where you write out the tests, and the Ruby file is used primarily to define custom steps if needed (although most of what you need comes right out of the box – we’ll sketch one such custom step after the feature file). So, returning to our problem, you simply write down those exact steps in the feature file:

Feature File
Feature: BVSDK Demo App
@recommendations_test
  Scenario: As a user, I want to get new recommendations
    Given the app has launched
    Then I should see "BVSDK has been built successfully."
    Then I press the OK button
    Then I press Recommendations tab
    Then I check number of products
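
For illustration, here’s roughly what a custom step like “I check number of products” could look like in the Ruby file. The element mark (“recommendation_cell”) and the expected count are hypothetical placeholders, and the exact query syntax differs slightly between the calabash-ios and calabash-android gems:

Ruby File (sketch)
# Hypothetical step definition; the mark and count below are placeholders,
# not the real identifiers from the demo app.
EXPECTED_PRODUCT_COUNT = 4

Then(/^I check number of products$/) do
  # Block until at least one recommendation cell has rendered.
  wait_for(timeout: 10) { !query("* marked:'recommendation_cell'").empty? }
  # query returns an array of matching views, so its size is our product count.
  actual = query("* marked:'recommendation_cell'").size
  unless actual == EXPECTED_PRODUCT_COUNT
    fail "Expected #{EXPECTED_PRODUCT_COUNT} products, found #{actual}"
  end
end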

mindblown

That’s really all there is to it. Of course, tests can also be written to be more platform-specific when needed – for example, by branching inside a shared step definition:
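
Here’s one way that could look, reusing the TO_TEST environment variable from our Travis setup below. Treat this as a sketch rather than our exact implementation – tap_mark and tap_when_element_exists are the tap helpers from the calabash-ios and calabash-android gems, respectively:

# Sketch: one shared step, two platform-specific bodies.
Then(/^I press the OK button$/) do
  if ENV["TO_TEST"] == "IOS"
    tap_mark("OK")                             # calabash-ios helper
  else
    tap_when_element_exists("* marked:'OK'")   # calabash-android helper
  end
end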

Entering the Matrix – Travis CI

We use Travis CI for all our public repos at Bazaarvoice. It’s awesome. But we have to support multiple build tools on different virtual machines, and configuring all these build machines with custom tools and build scripts sounds really scary! Freak out!

matrix

The really slick thing about Travis CI is that you can test multiple configurations, variations, permutations, salutations, etc., etc., etc., by building a matrix in Travis’ config file (.travis.yml). For our testing, since Xcode only runs on OS X and it’s the only way to build for iOS, we must have an OS X image. For the Android Studio and Gradle build tools, we build against Linux. In addition, there’s some common tooling we can install on each build machine. The result is that we can use two different VMs for testing each platform, with just one set of tests. Note in the test result below that the build jobs are defined by the environment variables set in the Travis config file.

travis_matrix

The .travis.yml script looks like this, where we build a matrix with environment variables and platforms:

matrix:
  include:
    - language: android
      env: TO_TEST=ANDROID
      jdk: oraclejdk8
    - os: osx
      osx_image: xcode8
      env: TO_TEST=IOS
  fast_finish: true
android:
  components:
    - platform-tools
    - tools
    - android-23
    - build-tools-23.0.3
    - extra-android-m2repository
    - extra-google-m2repository
    - sys-img-armeabi-v7a-android-19
install:
  - rvm install 2.2.0
  - if [ "$TO_TEST" = "ANDROID" ]; then gem install calabash-android; fi
  - if [ "$TO_TEST" = "IOS" ]; then gem install calabash-cucumber; fi
before_script:
  - if [ "$TO_TEST" = "ANDROID" ]; then chmod 755 createEmulator.sh; fi
  - if [ "$TO_TEST" = "ANDROID" ]; then ./createEmulator.sh; fi
script:
  - if [ "$TO_TEST" = "ANDROID" ]; then chmod 755 androidTest.sh; fi
  - if [ "$TO_TEST" = "ANDROID" ]; then ./androidTest.sh; fi
  - if [ "$TO_TEST" = "IOS" ]; then chmod 755 iosTest.sh; fi
  - if [ "$TO_TEST" = "IOS" ]; then ./iosTest.sh; fi

BVSDK Cordova Plugin Features

So what if you want to try out the BVSDK Cordova plugin? For more info, or to check out the source code for the plugin and tests, just head over to our Cordova GitHub repo. There’s plenty of info in the README for running the examples and unit tests.

Open Source Contributions

If you are building a Cordova-based application and want to see other things added, just let us know – or better yet, submit a pull request and we’ll be happy to review it!

Augment your pattern library with page types

Pattern libraries sometimes fall short of helping enterprise teams build different products the same way. These palettes of components (toolbars, pop-ins) and patterns (searching, navigating) can be assembled into any number of UIs, leading to too many right answers. While public pattern libraries like Google Material must accommodate countless unimagined applications, our private libraries can serve us better.

We have special insight into our own users’ workflows. A page type is a layout and set of patterns packaged together according to the workflow they support. If your pattern library is a basket of ingredients, your page types are the recipes.

These starting points are immensely helpful in a few ways:

  1. Designers starting from 80% instead of from scratch are more likely to approach their design problems in the same way.
  2. Development teams without designers often have everything they need to start building.
  3. Teams have vocabulary that connects patterns to workflows.
  4. Page types make workflow-specific pattern definitions possible.

Defining page types

Workflow-specific pattern definitions

Many pattern libraries, especially the older ones like Yahoo Design Pattern Library, take a bottom-up approach. Documentation starts with the component, noting its general purpose, but focusing primarily on its interactive states. A handful of examples show the component used in various contexts. It is up to designers to imagine how this information relates to their own projects.

Page types are top-down: in this workflow, these components are used in this way.

The example below shows two Object Editor pages with different interpretations of “Toolbar.” The top applies Google Material’s definition of Toolbar:

Toolbar actions appear above the view affected by their actions.

The bottom applies a workflow-specific definition of Toolbar:

If the object is edited indirectly and previewed, configuration actions and preview actions are separated into panel and toolbar, respectively. In this way, the user is not led to believe temporary preview modes (like zoom) are saved with their configuration.

It takes a lot of design thinking to work through how components could best serve a workflow. It is highly valuable, then, to document your best solutions.

Essential page types

Page type documentation prescribes the layout, component arrangement, and interactive patterns used to achieve a desired workflow. Here’s a list of page types common to enterprise applications:

  • Manager
    Manipulate a collection of objects.
  • Editor
    Edit an object.
  • Detail
    Consume information through exploration.
  • Navigation
    Consume information by reading it.
Top: Object Manager, Bottom: Object Editor

Identifying page types

In application design, the layout and use of components on any given page create a workflow that serves the page’s central purpose. If the central purpose of your application page is not singular, your design is probably overcomplicated. Before you invent a new page type, reconsider your application architecture.

For example, the central purpose of a document editor is to edit a document. If the page is well-designed, its layout maximizes the editable area, and its buttons and tools all relate to editing. Notice that Google did not smash document editing, management, and publishing into one page.

Object Editor

Optimizing workflows

Sometimes different workflows serve the same central purpose. For example, direct editing and configuration are very different workflows even though their central purpose, Object Editing, is the same. In cases like these, it’s appropriate to offer variations.

Top: directly editing, Bottom: configuration with preview

While page types promote focused design, discipline should not compromise usability. In the examples above, direct editing is the easiest way to work with text. Configuration is the easiest way for non-technical users to change XML values, build an email template, or add filters to a photo, etc.

Optimizing content presentation

It’s important to differentiate between workflow and content presentation: tables emphasize data, lists emphasize titles, cards emphasize media, etc. If the workflow is the same, one page type can house various content types in their optimal formats. In the Object Manager examples below, relevant activities—finding, filtering, selecting, applying actions, etc.—are the same and can be accessed the same way.

Object Manager with tables

Object Manager with list

Similarly, let workflow define your page types, not content presentation or layout. “Table page” is not a good page type because your users do not want to table.

Using page types

Page type templates accelerate design and development projects by advancing their starting points. They also simplify the information architecture design process by providing constraint:

Each screen maps to a page type.

Therefore, each screen represents a workflow with singular purpose. Adhering to this principle influences decisions around how to group functionality into various pages.

Page type templates dropped into an IA map. Created in the MindMeister app.

An IA design that uses your company’s set of established page types as illustrations is more tangible to stakeholders.

What are your page types?

Page types are not new; website designers—especially those who use template-based CMSs like Joomla—have been using them all along. They are essential to Information Architecture.

We application designers have been somewhat distracted. Pattern libraries—especially when incorporated into UI development environments like Storybook—are incredibly useful. However, only we know what we want to help our users do. Our own private pattern libraries can be far more workflow-aware than the public libraries from which we draw our inspiration.

What are your page types?


This post was originally published on Medium.com

Looking Back at Looking Back: A Retrospective of the Retrospective Process

A while ago, I published a post on this blog about how to perform retrospectives for development teams that subscribe to Kanban and/or the agile development process.

You can read that post here: Don’t Look Back in Anger

I’ve received a lot of feedback on that blog post – enough that I thought I’d follow up with an additional post detailing further fine-tuning of our retrospective process. A retro of retros, if you will. But first…

Yo dawg, I heard you like old memes...

OK, now with that out of the way, here are some insights into what we’ve learned from looking back at the retrospectives we’ve performed over time:

It’s Not Just for Development Teams

Regardless of whether you’re using Kanban, agile development, other forms of SDLC management (insert snazzy development jargon buzzword here) or not, the process of re-evaluation and improvement can be applied to any team’s processes.

After the previous blog post was published, I had a member of our product knowledge team approach me to tell me how they were planning on using the retrospective process to improve how they communicate technical details to our clients.

It seems like an obvious move but honestly, as the retro process laid out in the previous blog post was very specific to the agile/Kanban management process, I felt this was worth mentioning here.

You Need Moderation

This was only briefly mentioned in the previous blog post but, as we’ve conducted further retrospectives, it became clear that some form of moderation during the retrospective sessions was needed.

In this case, I’m not talking about the need for a designee to “take the minutes” of your sessions (you should already be doing this); rather, we found that the team really needed someone to help move the retrospective along. Here’s why:

Time is an Expense – Yup, time is money, and if you have a large team spending a large amount of time in retrospective, somewhere, someone is thinking – “boy, that’s a lot of billable hours going on in that meeting”.  You need a moderator to help the team stick to the time box you’ve put together for your retrospective.  Spend the necessary time for review, but not too much time.

Technical Teams go Down Technical Rabbit Holes – Especially if you’re dealing with highly technical teams, we are by default pretty obsessed with solving complex problems.  Get enough engineers together to discuss how to solve for there being 4 different competing programming frameworks and you’re going to end up with 5 competing frameworks.  Focus on technical solutions is necessary to solving problems, but it’s good to have someone keep the team focused on identifying problems here; determining technical feasibility should be done outside the retrospective.

This really does happen – that's why it's funny

Emotional Teams go Down Emotional Rabbit Holes (and all teams are emotional) – Technical prowess aside, people are people, and we tend to have some pretty strong feelings about things, one way or another.  Even if you skirt the danger of falling into a technical time sink of a discussion, I guarantee you will at some point fall into an emotionally charged one.  It’s important to have someone help the team keep the discussion tight and focused, so you can focus your passions on moving forward (after you’ve cleared some of the path forward in your retrospective).

Ron Burgundy is emotional

How Do You Moderate?

This has been the subject of many, many management books, articles and college theses.  There’s way too much to unpack when it comes to how to moderate your team meetings (hint: it depends on your team), but here are some hints we’ve picked up along the way:

Time Can’t Change Me – Don’t be afraid to use a timer (you all have smart phones and there’s an app for that)!  If you’re following a format similar to the one outlined in the previous post – each subsequent phase tends to take longer than the previous one – allocate time accordingly.

Simple Division – Since you’ve likely divided your retro into phases and slated a time limit to conduct it in, divide the total time allotted for the retrospective among those phases.

Keep in mind the number of people on your team – try to sub-divide the time allocated for each phase among your team members (e.g. five members and 15 minutes for phase one? Try to keep everyone close to 3 minutes apiece per phase – the quick sketch below does the arithmetic).
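
If you like to over-engineer your meeting prep (who doesn’t?), the arithmetic is trivial to script. A throwaway Ruby sketch – the phase names and minutes are made-up examples:

# Split each retro phase's time budget across the team (numbers are examples).
phases = { "went well" => 15, "to improve" => 20, "action items" => 25 } # minutes
team_size = 5

phases.each do |phase, minutes|
  per_person = minutes.to_f / team_size
  puts format("%-14s %2d min total, ~%.1f min per person", phase, minutes, per_person)
end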

The Kindest Cut – Don’t be afraid to cut someone off if it looks like they’re starting to dominate the conversation (but do this gently and with tact).

Take notes of heated discussion points (particularly if they’re between a small subset of your team).  Promise to follow up with those team members for a further discussion of their points – which are important – but do focus on keeping the meeting moving forward.

Remind the team that the sooner the meeting concludes the sooner everyone can get back to work as a team (that’s the whole goal here anyway).

Take Notes:

Again, this was briefly touched upon in the previous post.  Initially, when we began the retrospective process, we weren’t very diligent about taking notes, aside from noting the to-do items we wanted to take away and work toward after each retrospective.

As we further refined our process, however, we began keeping more detailed notes – and publishing them as well.

Taking notes was key to us further refining our process because two things became apparent when looking back at our notes from our previous retrospectives:

  • Some of our to-dos were brought up again in later retrospectives, even though we had committed time and effort to improving them.
  • Some of our pressing needs at the time of a specific retrospective became non-issues as our team naturally evolved.

Capturing the former allowed us to recognize a persistent problem – one the team had identified but that required sustained effort to resolve.  In this case, we appeared to keep taking on more work within our Kanban process than we had the bandwidth to resolve (biting off more than we could chew).  This realization became the central topic of its own retrospective.

The latter, while at the outset not necessarily noteworthy, did provide an a-ha moment for the team: we could see in black and white, on our Confluence page, evidence of our team not only evolving but, in one way or another, improving.

Anonymity Can Be Powerful:

People are people – and aside from that being a Depeche Mode song, we found that people frankly sometimes don’t like to be put on the spot.  When we conducted our retrospective retrospective (see the whole thing about the combo breaker below), feedback from several team members concerned the round-robin forum style of team discussion highlighted in our initial blog post.

The main criticism was that, while that method of discussion did help the team to consensually highlight both what went well and what could be improved, the act of doing so in this manner made some people uncomfortable.

The item for improvement became how to better facilitate our already established method of feedback while reducing some of the personal, individual aspects of the process that made some team members uncomfortable.  (Again, people are people, people are different, and some approach different forms of interaction better than others; if you plan on working together as a functional team, that is a fact you will need to address.)

The other half consists of porkchop sandwiches

Our solution took inspiration from our company happy hour and team-building invites (no, seriously – I was not writing this while racing go-karts or having margaritas with the team).

We simply decided to move a portion of our team discussion to a service like SurveyMonkey:

  • Put forth a discussion topic session for the upcoming retrospective
  • Invite team members to “the party”
  • Allow anonymous submissions to the survey (suggestions for the upcoming “party”)
  • Allow voting on submitted topics for both well-done and to-improve points of discussion
  • Collect the anonymous submissions, focusing on substantially up-voted items (tallying these is trivial – see the sketch below)
  • Bring those to the retrospective meeting
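
For what it’s worth, the “focus on up-voted items” step takes about a minute to script once you export the survey results. A toy Ruby sketch – the submissions and vote threshold are made up:

# Toy tally: keep only submissions with enough up-votes.
submissions = {
  "Break stories into smaller subtasks"  => 6,
  "Keep doing demo Fridays"              => 4,
  "More porkchop sandwiches at standup"  => 1,
}
VOTE_THRESHOLD = 3

topics = submissions.select { |_, votes| votes >= VOTE_THRESHOLD }.keys
p topics  # the discussion list we bring to the retro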

 

These cacti rock

Not only did this give some people a more relaxed avenue to provide feedback, it also cut down on the time needed for performing the actual retrospective meeting (a big plus if you’re dealing with a large team).

Get Metrics Involved:

Applying data science to your own development work can be a full-time job in itself, especially if you’re working with a large team or multiple teams.

Now you’re probably thinking, “Oh man, don’t tell me to get all crazy with standard deviations and bell-curves for our own retrospectives – this isn’t a performance review!”

And you’re right – but hear me out.  It pays to have some metrics available when you conduct a retro.  Story Time!

We once conducted a retrospective where a couple of engineers voiced their opinion that our work and momentum had slowed over the past month (basically, some felt we were getting less and less done, while engineering resources and work remained fairly constant).

This could have been a controversial topic to broach.  However, within the discussion, we had metrics handy to answer whether or not this was actually the case.

We had metrics readily available that measured work in progress versus work items completed, cross-referenced with the commit history for our project.  Comparing these against the same metrics from the previous month, we found that:

  • Our commit history had in fact increased
  • Our average number of stories completed had remained relatively constant across both periods

We tabled this discussion at that point (the importance of a moderator at work!), but further investigation revealed that, while we were working our butts off, our stories had become larger and more complex – which led us to target, in a future retrospective, how we conducted our planning meetings.

TL;DR: stories weren’t being broken down into subtasks as much as they should have been.  Work items ballooned in scope and consequently got bogged down on our Kanban board.

Now… figuring out what metrics you may need is probably going to be a challenge.  This is something that can be very different from team to team.  However, there are two rules you may want to keep in mind when sourcing metrics for your retrospectives:

  • Agree as a team what metrics are important
  • Imperfect visibility is better than zero visibility

Basically, whatever heuristic you decide to use, it needs to be something that makes sense to the whole team.  This in itself could be a great topic for its own retrospective.

As for imperfect visibility versus zero visibility: even if your chosen metrics aren’t exhaustive (it’s better for your retro if they’re quick and easy to digest), having some level of metrics is better than having none.  It’s hard to argue with math – even if that math is the equivalent of 1 + 1 = 2.

Some metrics we’ve used – just to give you some ideas:

  • Total number of stories committed to per period
  • Number of stories completed (cross-referenced with commits)
  • Average number of stories in progress during period
  • Number of stories completed based on service level (P1s vs P3s vs “DO IT NAO!”)
  • Lead time of stories from their committed state to their finished state (however your team defines that – a quick sketch follows this list)
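
As an example of how little code “imperfect visibility” requires, here’s a throwaway Ruby sketch that computes average lead time from committed/finished dates. The story data is made up – in practice you’d pull these dates from your Kanban tool’s export or API:

require "date"

# Made-up stories; real dates would come from your board's export/API.
stories = [
  { id: "STORY-1", committed: "2017-03-01", done: "2017-03-06" },
  { id: "STORY-2", committed: "2017-03-02", done: "2017-03-10" },
  { id: "STORY-3", committed: "2017-03-05", done: "2017-03-08" },
]

lead_times = stories.map { |s| Date.parse(s[:done]) - Date.parse(s[:committed]) }
average = lead_times.sum / lead_times.size
puts "Average lead time: #{average.to_f} days"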

Again, whatever you choose to use, be careful of diving too deep for your retrospective – and be sure to make that choice as a team.

Combo Breaker:

In our last retrospective, we did something different.  Rather than follow the format provided in the previous post, we performed a recursive retrospective – looking back at the results of our retrospectives from the beginning of the year.  Time for a Scala joke!

We've already blown our quota for this joke

As indicated above, we came up with some deep, recursive improvements.  Here are some steps you can follow to do the same:

  • Prior to the upcoming retrospective, review the notes taken during your previous retrospective sessions
  • Try to target a time-frame that makes sense for your team (e.g. last quarter, last calendar year, previous geological epoch, etc.)
  • Note the well-done items and the need-to-improve ones
  • Look for repeating patterns – this is key (no, StackOverflow doesn’t have a regex for this – trust me, I looked)
  • Compile these repeating patterns from previous retrospectives into a small list and bring that to the retrospective

In the retrospective, instead of following the same procedure your team is no doubt already movin’ and groovin’ to, you can present this list (bonus points if you preface this by abruptly jumping atop the nearest available piece of Herman Miller furniture in your office and shouting, “Co-co-co-co-co-co Combo breaker!!!!”).

Shameless pandering to 90s gaming nostalgia

Actually, no bonus points will be awarded.  However, depending on the demeanor of your co-workers, you may be cheered, you may get a laugh, or you may be straight-up escorted from the building.  Either way, it’s going to be a fun rest of the day for you.

Instead of sourcing consensus from the team (you’ve effectively already done this with your previous retrospectives), go over the takeaways from each previous retro session.  Briefly discuss the following:

  • Of the things we did well, are we still doing them well?
  • Of the things we needed to improve on – did we improve (DWYSYWD **)?
  • For any item the team answered “no” to – as a team, pick one of these items for discussion.

From here, continue the retrospective as you would normally run it – discuss what steps need to be taken to improve on your topic of focus, create and assign action items, etc. (don’t forget to take notes).

I’ve found this technique helps accomplish three things:

  • It’s handy when pressed for time – as you’re effectively dog-fooding your own previous retrospectives, a lot of the consensus building for the sessions is already taken care of.
  • It’s a change of pace – hey, people get bored with routine, and breaking routine often leads to new avenues of thinking.
  • Shores up gaps – let’s be honest: at some point, your team will commit to an item of improvement that is going to take several passes to solve. Here’s how you can make sure you’re not leaving something on the table. ***

Aww...

** (Do What You Said You Would Do – Don’t you dig complex acronyms?)

*** If you seriously accomplish every single item of improvement your team targets every time, all the time, congratulations, you’re some kind of unicorn collective.  Give yourselves a high five or whatever it is unicorns do.

 

High five!

For those who read and enjoyed the previous dive into conducting retrospectives, hopefully this post has given you some insight on how to further fine-tune the process and improve how you and your team improve.

 

You weren’t getting away without a parting shot from this guy.