Monday, September 21, 2015

Short: User Stories vs Continuous Integration

I love Scrum on both philosophical and practical levels, but I'm not sold on user stories as the optimal boundary for units of work.

User stories are great, and using them as units of work places what I agree is an appropriate emphasis on wholly integrated value to the customer. However, I also see value in defining work along other (non-feature) boundaries, especially for the sake of enforcing a definition of done.

Stated another way, I see value in continuous integration even outside the boundary of user stories, and I think most dev teams would agree. Does this mean we need another "Definition" to promote continuous integration prior to story completion? Or is it feasible for our unit of work to (possibly, sometimes) be something other than a user story?
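One concrete way to integrate before a story is "done" is to merge incomplete work behind a feature flag. This is a minimal sketch of my own, with hypothetical function and flag names, not a prescription:

```python
# Integrating before story completion: the in-progress path merges to
# mainline but stays dark behind a flag, so CI exercises it
# continuously. All names here are hypothetical.

FLAGS = {"new_checkout_flow": False}  # toggled per environment

def flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def legacy_checkout(cart):
    return {"status": "ok", "items": cart}

def new_checkout(cart):
    # Incomplete story: integrated and tested on mainline, but never
    # reached in production while the flag is off.
    raise NotImplementedError("story not yet complete")

def checkout(cart):
    if flag_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

if __name__ == "__main__":
    print(checkout(["book"]))  # users still get the legacy path
```

With a technique like this, "integrate continuously" and "only ship completed stories" stop being in tension.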

The only downside I see to allowing non-user-facing stories is that we could get stuck in the weeds of tasks that add no real value to the user. The alternative, from what I have seen, is larger units of work that cost more to integrate.

Every team is different, but I would rather err on the side of smaller, more predictable units of work and guard against low-value work through grooming and review.

Has anyone else encountered this need for a tradeoff?

Thursday, September 17, 2015

Short: DevOps and Separation of Duties

Can (should) DevOps and "separation of duties" coexist?

As I understand it, separation of duties is a risk-mitigation tactic in organizations, whereby only "Team A" may carry out some important set of activities. On the surface, this approach does seem to have some value.

For example, a software project manager may be more motivated by feature delivery dates than acceptable software quality; so it might make sense to have a separate QA role (or team) which enforces best practices for quality. Similar arguments can be made for software/data/infrastructure security, change control, and release implementation.

But DevOps proponents suggest otherwise: if a single team owns every aspect of the software (every phase, every nonfunctional requirement, etc.) even after its release, many things should go much more smoothly.

But how do we mitigate the original risks? Are they any less real now than they were before? I have seen a few strategies proposed and implemented to this end thus far:

* Automation as a gatekeeper (see the sketch after this list)
* Domain experts as a shared but team-embedded resource
* Emphasis on self-service tools which enable while enforcing compliance
* ACID principles: changes should aim to be Atomic, Consistent, Isolated, and Durable (often enabled by cloud and container technologies, as well as infrastructure automation)
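To make the first strategy concrete, here is a minimal sketch of an automated release gate. This is my own illustration; the check functions are hypothetical placeholders for real integrations (CI results, change records, security scans). The point is that the gate, not a privileged team, decides whether a change proceeds, which preserves the control that separation of duties was providing.

```python
# "Automation as a gatekeeper": an automated gate runs the same
# checks for everyone and cannot be talked past. Each check below is
# a hypothetical stub for a real integration.

import sys

def tests_passed(build_id: str) -> bool:
    # Placeholder: query the CI system for this build's test results.
    return True

def change_approved(build_id: str) -> bool:
    # Placeholder: confirm an approved change record exists.
    return True

def security_scan_clean(build_id: str) -> bool:
    # Placeholder: confirm the latest scan has no critical findings.
    return True

GATES = [tests_passed, change_approved, security_scan_clean]

def release_allowed(build_id: str) -> bool:
    """A release proceeds only if every gate passes."""
    failed = [gate.__name__ for gate in GATES if not gate(build_id)]
    for name in failed:
        print(f"gate failed: {name}", file=sys.stderr)
    return not failed

if __name__ == "__main__":
    sys.exit(0 if release_allowed("build-123") else 1)
```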

Short: Innovation and Compromise

I'm sure that this problem is not unique to tech companies, but I have noticed that some of the most challenging barriers to innovation are social (political) rather than technical. I'm currently struggling to find the right balance between pushing for innovation and agreeably settling for compromises.

By settling, are we not reinforcing (enabling) the very behavior that stands at odds with innovation in the first place? Are we not making our next discussion even more difficult by setting a precedent?

The only way I know from experience to break this cycle is to demonstrate value; but demonstrating technical value might never be enough to overcome a political barrier.

Wednesday, February 4, 2015

More things I learned at GTAC (Day 2)

Editor's Note: I'm sorry it took me so long to click "Publish" on this!

Here are my notes from the second day of Google's Test Automation Conference (GTAC) back in October.

Enthused About: Test Suite Reduction

Software testing can be a bit like drinking from a fire hose. Each added "degree of freedom" in an application multiplies the number of potential test cases, so the total grows exponentially with the number of inputs. To cope with this explosion, one of the most challenging (and consequently, the most interesting) activities carried out by a tester is reducing the number of test cases required for "proper" verification.

All of this boils down to a rather classic question from software testing: How much testing is "enough"? As with most interesting questions, the answer is some form of "it depends." At a high level, though, I like to define an application-specific notion of "enough" in terms of coverage.
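As a toy illustration (my own, not from the talks): one classic reduction technique is pairwise, or "all-pairs," testing, which keeps just enough tests that every pair of parameter values appears together at least once. A greedy sketch with hypothetical parameters:

```python
# Greedy pairwise test suite reduction: repeatedly pick the candidate
# test that covers the most not-yet-covered parameter-value pairs.
# The parameters below are made-up examples.

from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "macos", "linux"],
    "locale": ["en", "de", "ja"],
}
names = list(params)
all_tests = list(product(*params.values()))

def pairs_of(test):
    # All (parameter, value) pairs that one concrete test covers.
    return set(combinations(zip(names, test), 2))

uncovered = set()
for test in all_tests:
    uncovered |= pairs_of(test)

suite = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(all_tests)} tests, pairwise: {len(suite)} tests")
```

For these parameters, exhaustive testing needs 27 cases, while the greedy pairwise suite needs roughly nine: a useful reminder that "enough" coverage can be far cheaper than "all."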

Wednesday, October 29, 2014

A bunch of things I learned at GTAC 2014 (Day 1)

[ Note 0: This became very long.

Note 1: I will try to come back and properly link to the efforts/projects mentioned in this post, but for now you can "google" most of these and find them easily.

Note 2: These observations have my usual biases, especially web application testing. Apologies in advance to mobile testing people doing really interesting things. ]

Monday, October 20, 2014

Short: Testing

Testing is the process of evaluating whether an artifact fulfills its requirements. Effective testing depends on the ability to clearly define expectations and to clearly observe the relevant behaviors that inform this evaluation. You can't test what you can't see.
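That definition in miniature, as a trivial sketch of my own:

```python
# A test exists only because the expectation is explicit and the
# behavior under evaluation is directly observable.

def add(a: int, b: int) -> int:
    return a + b

def test_add_fulfills_requirement():
    # Requirement: add returns the arithmetic sum of its inputs.
    assert add(2, 3) == 5

test_add_fulfills_requirement()
```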