Working as a consultant, I often find that companies struggle to improve. It's not because they don't recognise the issues; more often, they simply don't know what to do about them.
Quite recently I came across a common issue and thought it would make for a great case study.
Throughout this article I will refer to 'stories', 'acceptance criteria' and 'epics'. Although I personally opt for an Agile approach, the ideas presented should be transferable to any methodology you choose.
The team started a planning session discussing a detail screen. The screen looked similar to the following:
The discussion headed in an interesting direction: both the Product Owner and QA were suggesting that the whole screen should be delivered at once, because it was not testable otherwise.
This screen was mostly driven by a set of view-models and had no additional networking, so for the purposes of this article I will focus on the behaviours of the screen itself.
This kind of problem wasn't exactly new to me, so I began by showing how the screen could be broken down into multiple sections.
- The poster image contains a few micro-interactions and an automated slideshow animation.
- The metadata section has strict acceptance criteria that define its presentation, unique to this page.
- The cast section has a unique stack layout, as well as specific acceptance criteria around clipping/truncation.
- The synopsis section initially shows an excerpt, adding an interaction for expanding to its full text.
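To make the breakdown concrete, here is a minimal sketch of two of those sections as independent view-models. The type and property names are hypothetical (the article doesn't show its code), but the point stands: each section's behaviour can be implemented and tested on its own.

```swift
// Hypothetical view-models: each section owns its behaviour,
// so each story can be developed and tested independently.

struct PosterViewModel {
    let imageURLs: [String]

    // Slideshow behaviour: advance to the next image, wrapping around.
    func nextIndex(after index: Int) -> Int {
        imageURLs.isEmpty ? 0 : (index + 1) % imageURLs.count
    }
}

struct SynopsisViewModel {
    let fullText: String
    var isExpanded = false

    // Initially shows an excerpt; expanding reveals the full text.
    var displayedText: String {
        isExpanded ? fullText : String(fullText.prefix(80))
    }
}
```

Nothing here depends on the other sections or on the screen, which is exactly what makes each story estimable and testable in isolation.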
Introducing an Epic
At this point I had demonstrated an admittedly trivial breakdown of a screen into multiple stories, each of which could be developed and tested independently.
It was then pointed out that although each feature's acceptance criteria could be tested, the screen itself had specific behaviour that was not captured by any of the above sections.
Specifically, they were referring to the order of these sections, as well as a presentation transition defined in the design spec.
At this point I suggested introducing an Epic for the screen, containing the user stories we had already defined. To this epic we would attach any additional acceptance criteria for the animation and section ordering.
This provided a testable 'story' that carried the screen-specific acceptance criteria, as well as a breakdown of easier-to-estimate stories with well-defined behaviours.
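The epic's screen-level criteria can be sketched the same way. Again the names are hypothetical, but the idea is that the screen composes the sections and owns the one criterion no single section can: their order.

```swift
// Hypothetical screen-level sketch: the epic's acceptance criterion
// (section ordering) lives at the screen, not in any one section.

enum DetailSection {
    case poster, metadata, cast, synopsis
}

struct DetailScreenViewModel {
    // The epic's criterion: sections must appear in exactly this order.
    let sections: [DetailSection] = [.poster, .metadata, .cast, .synopsis]
}
```

A screen-level test asserting on `sections` covers the epic's criterion, while each section's own tests cover its story's criteria; the presentation transition would be verified the same way, at the screen level.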
This kind of analysis may seem obvious to some, and in hindsight I think even the team felt confident they could use the same approach across the app. However, in reality I've often found it takes a bit more practice and some guidance before a team can apply this kind of approach effectively.
I hope you found this article helpful. This is just one of many approaches to a very common problem, so if you have something to contribute, I'd love to hear more about it in the comments.