February 4, 2015

More things I learned at GTAC (Day 2)


Editor's Note: I'm sorry it took me so long to click "Publish" on this!

Here are my notes from the second day of Google's Test Automation Conference (GTAC) back in October.



Google's cloud prides itself on fast instance startup and fast network connectivity between instances. Tony Voellm has developed some standards for benchmarking cloud providers, with an emphasis on the "full lifecycle" of instances.

The Google cloud instances Tony brought up during his demo were up within 40s. As someone trying to use AWS for test infrastructure (with its 10m-til-ssh startup times right now), all I can say is WANT!
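
For the curious, the simplest version of this kind of measurement is "time until the SSH port answers." Here's a minimal Java sketch of that idea; the address and timeout below are made up, and real lifecycle benchmarks like Tony's cover much more than this:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BootTimer {
        // Poll until the given TCP port accepts a connection, returning the
        // elapsed milliseconds. Treats "SSH port open" as a proxy for "usable".
        public static long millisUntilReachable(String host, int port, long timeoutMs)
                throws InterruptedException {
            long start = System.currentTimeMillis();
            while (System.currentTimeMillis() - start < timeoutMs) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 1000);
                    return System.currentTimeMillis() - start; // port answered
                } catch (IOException notYet) {
                    Thread.sleep(500); // instance still booting; retry
                }
            }
            throw new IllegalStateException(host + " not reachable in " + timeoutMs + "ms");
        }

        public static void main(String[] args) throws InterruptedException {
            // Hypothetical instance address; launch the instance, then time it.
            System.out.println(millisUntilReachable("203.0.113.10", 22, 15 * 60 * 1000));
        }
    }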

Roy Williams from Facebook described some "bots" they have for identifying flaky tests. Tests are categorized by status (e.g. New, Passing, Failing, Flaky, Disabled), and the bots assist in categorizing tests and moving them between states according to a pre-defined process. Notably, Facebook has no dedicated "test" role: no testers, and nobody doing "just testing" or automation; everyone is an engineer.
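
The exact rules weren't spelled out, but the process they described maps naturally onto a small state machine. Here's a hypothetical Java sketch (the thresholds and transitions are my own invention, not Facebook's):

    import java.util.Arrays;
    import java.util.List;

    enum TestState { NEW, PASSING, FAILING, FLAKY, DISABLED }

    class TestTriageBot {
        // Reclassify a test from its recent run history (true = pass).
        static TestState classify(TestState current, List<Boolean> recentRuns) {
            long failures = recentRuns.stream().filter(passed -> !passed).count();
            boolean mixed = failures > 0 && failures < recentRuns.size();

            if (current == TestState.FLAKY && mixed) {
                return TestState.DISABLED;  // still flaky: quarantine it
            }
            if (mixed) {
                return TestState.FLAKY;     // intermittent results
            }
            return failures == 0 ? TestState.PASSING : TestState.FAILING;
        }

        public static void main(String[] args) {
            System.out.println(classify(TestState.NEW, Arrays.asList(true, false, true))); // FLAKY
        }
    }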

Michael Bailey from American Express presented a tool stack for Android testing: Espresso, Spoon, and WireMock. Of these, WireMock was the most interesting/relevant for me personally, but the others do seem to be the state of the art for Android testing. WireMock does essentially what we do with MSL, but embedded in Jetty and driven from a Java client. There also appears to be a PHP version.
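
To give a flavor of WireMock, here's a minimal Java sketch; the endpoint and payload are made up, but this is the shape of the stubbing API:

    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class WireMockSketch {
        public static void main(String[] args) {
            // Starts an embedded Jetty server on the default port (8080).
            WireMockServer server = new WireMockServer();
            server.start();

            // Stub a hypothetical endpoint: GET /api/balance returns canned JSON.
            stubFor(get(urlEqualTo("/api/balance"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"balance\": 42.00}")));

            // Point the app under test at http://localhost:8080 and its HTTP
            // calls are served by the stub instead of the real service.
        }
    }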

Brian Vance from Google described the use of Google's BigQuery tool for analytics. Processing times for over a TB of data were in the ~15s range. One thing I'm curious about is how the data sets are created (I don't believe this was covered as part of the demo). Once the data set exists, there's a nice "power user" type of UI for BigQuery that allows you to write queries, chart results, and do some other cool things.
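
For reference, querying an existing data set from Java looks roughly like this. This is only a sketch against the BigQuery REST client; the project ID is a placeholder, and the table is one of Google's public samples:

    import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
    import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
    import com.google.api.client.json.jackson2.JacksonFactory;
    import com.google.api.services.bigquery.Bigquery;
    import com.google.api.services.bigquery.BigqueryScopes;
    import com.google.api.services.bigquery.model.QueryRequest;
    import com.google.api.services.bigquery.model.QueryResponse;

    public class BigQuerySketch {
        public static void main(String[] args) throws Exception {
            GoogleCredential credential = GoogleCredential.getApplicationDefault()
                    .createScoped(BigqueryScopes.all());
            Bigquery bigquery = new Bigquery.Builder(
                    GoogleNetHttpTransport.newTrustedTransport(),
                    JacksonFactory.getDefaultInstance(),
                    credential)
                    .setApplicationName("gtac-notes")
                    .build();

            // Count the rows in one of Google's public sample tables.
            QueryRequest query = new QueryRequest()
                    .setQuery("SELECT COUNT(*) FROM [publicdata:samples.wikipedia]");
            QueryResponse response = bigquery.jobs().query("your-project-id", query).execute();
            System.out.println(response.getRows().get(0).getF().get(0).getV());
        }
    }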

Selendroid is a mobile testing tool for Android that supports automation of apps, native dialogs/controls, and web UIs. This is a nice combination, and Selendroid is WebDriver-centric (including compliance with the JSON wire protocol).
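
Since Selendroid speaks the JSON wire protocol, a stock RemoteWebDriver can drive its standalone server. A sketch, with a made-up app id and element id:

    import java.net.URL;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class SelendroidSketch {
        public static void main(String[] args) throws Exception {
            // "aut" names the app under test registered with the Selendroid
            // standalone server; the id:version here is hypothetical.
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("aut", "com.example.app:1.0");

            // The standalone server listens on port 4444, like a Selenium hub.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://localhost:4444/wd/hub"), caps);

            // Same WebDriver API as the web: find a native control and tap it.
            driver.findElement(By.id("login_button")).click();
            driver.quit();
        }
    }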

Amit from Comcast described their use of WebDriver and RoboHydra within CI. RoboHydra caught my eye: it's another mocking solution that runs on Node.js (like MSL). It would be worth our taking a closer look at RoboHydra's API and feature set as a comparison point.

Some guys (ahem) talked about MSL, a tool for mocking HTTP services (like RoboHydra and WireMock). They also demonstrated the MSL "browser client" for mocking in-browser tests from tools like Jasmine, and a Karma plugin for integrating MSL with that popular test runner and reporter.

Alex Eagle from Google talked about some research and tool support for tracking the full lifecycle of "breakages," claiming that "as testers, we need to own more" than just reporting whether a test passed or failed. As I understood it, owning a breakage means seeing a test failure through to resolution.

Zack from Waterloo (a student) presented his thesis work on a "community" optimization for SAT solvers. Since many QA problems reduce to SAT, I tried to follow along. What I was able to extract is that SAT solvers keep getting better, especially when the problems being solved fall into certain categories. I didn't see any immediate test automation applications here (yet).
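
To make the "QA problems reduce to SAT" point concrete, here's a toy Java example using the Sat4j solver: encode "feature A requires B" and "B conflicts with C" as clauses, then ask whether a configuration with both A and C enabled is possible (it isn't):

    import org.sat4j.core.VecInt;
    import org.sat4j.minisat.SolverFactory;
    import org.sat4j.specs.ContradictionException;
    import org.sat4j.specs.ISolver;

    public class ConfigSatSketch {
        public static void main(String[] args) throws Exception {
            // Variables: 1 = feature A, 2 = feature B, 3 = feature C.
            ISolver solver = SolverFactory.newDefault();
            solver.newVar(3);
            try {
                solver.addClause(new VecInt(new int[] {-1, 2}));  // A requires B
                solver.addClause(new VecInt(new int[] {-2, -3})); // B conflicts with C
                solver.addClause(new VecInt(new int[] {1}));      // want A enabled
                solver.addClause(new VecInt(new int[] {3}));      // want C enabled
                System.out.println(solver.isSatisfiable()
                        ? "configuration exists" : "impossible configuration");
            } catch (ContradictionException trivial) {
                // Sat4j may detect the conflict while clauses are being added.
                System.out.println("impossible configuration");
            }
        }
    }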

Patrick Lam from Waterloo (a professor) presented an empirical study of test suites from open source projects. He drew some high-level conclusions, like "shorter tests with fewer exceptions and less logic are more common." While the data certainly supported those conclusions, I question whether practicing testers can take them as well-founded advice, since that would require assuming the subject applications represent best practice. As a user of a few of those applications (including and especially JMeter), I wonder if that's such a good idea. What would really be cool as a follow-up, in my opinion, would be to repeat the study on some lauded code bases from the likes of Linux, Google, and Facebook and compare the findings. Now that the tools exist, this seems feasible.

The "mobile ninjas" from Google closed out the conference by introducing several Android test tool enhancements they have been working on. This looked to be a big reveal of a Go rewrite of ADB which is apparently much more reliable. This group ascribed to an emulator-first philosophy for testing mobile devices (of course it seems like this should be much more scalable than going quickly to real devices).

Overall, GTAC was a very interesting conference. The talks were diverse in topic, quality, and depth, which made for good discussion. I hope to return soon with more exciting automation work!
