Wednesday, March 28, 2012

What we REALLY Value is the Cost...

Today, someone in the community mentioned the idea of measuring "value points".  And the light went on... could this finally highlight our productivity problems?  It could be a totally dead-end idea, but it's a hypothesis that needs testing.


When I thought "value points", I imagined a bunch of product folks sitting around playing planning poker, judging the relative value of features: using stable reference stories and choosing whether one story was more or less valuable than another. Might seem goofy, but it's an interesting idea. My initial thought was that this would be way more stable over time than cost, since cost varies dramatically over the lifetime of a project. And if it is truly stable, it might provide the missing link when trying to understand changes in cost over time.

For this to make sense, you gotta think about long term trends.  Suppose our team can deliver 20 points of cost per sprint. But our codebase gets more complex, bigger, uglier and more costly to change. Early on, we can do 10 stories at 2 points each.  But 2 years later, very similar features on the more complex code base require more effort to implement, so maybe these similar stories now take 5 points each and we can do 4 of them.  Our capacity is still 20 story points, but our ability to deliver value has REALLY decreased.

We often use story points as a proxy for value delivered per sprint, but think about that... We get "credit" for the -cost- of the story as opposed to the -value- of the story.   If our costs go up, we get MORE credit for the same work!

How can we ever hope to improve productivity if we measure our value in terms of our costs? How can we tell if a story that has a cost of 5 could have been a cost of 1? Looking at story points as value delivered makes the productivity problems INVISIBLE. It's no wonder that it's so hard to get buy-in for technical debt...

What if we aimed for value point delivery? If you improved productivity, or your productivity tanked, would it actually be visible then? On that same project, with 20 cost points per sprint, suppose that equates to 10 value points early on, and 4 value points later.  Clearly something is different. Maybe we should talk about how to improve? Productivity, anyone? Innovation?
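The arithmetic above is easy to sketch in a few lines. This is just a toy illustration using the made-up sprint numbers from the example (assuming each "similar story" is worth 1 value point):

```python
# Toy illustration: the same cost-point "velocity" can hide a big drop
# in value delivered. Numbers come from the example above, not real data.

def velocity(stories):
    """Sum (cost_points, value_points) across a sprint's stories."""
    cost = sum(c for c, v in stories)
    value = sum(v for c, v in stories)
    return cost, value

# Early on: 10 similar stories at 2 cost points, 1 value point each.
early = [(2, 1)] * 10

# Two years later: very similar stories now cost 5 points each, and we
# only fit 4 of them in a sprint -- but each still delivers 1 value point.
late = [(5, 1)] * 4

print(velocity(early))  # (20, 10) -- 20 cost points, 10 value points
print(velocity(late))   # (20, 4)  -- same "velocity", far less value
```

Measured in cost points, both sprints look identical; measured in value points, the second sprint delivered less than half as much.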

At least it would seem to encourage the right conversations...

Thursday, March 8, 2012

Does Agile process actually discourage collaboration and innovation?

Before everyone freaks out at that assertion, give me a sec to explain. :)

In the Dev SIG today, we were discussing our challenges with integrating UX into
development, and had an awesome discussion. I think Kerry will be posting some
notes. Most of the discussion, though, went to ideas and challenges with
creating and understanding requirements, and how the processes we use to scale
destroy a lot of our effectiveness. The question we all left with, via Greg
Symons, was: how do we scale our efforts while preserving this close connection
in understanding between the actual customer and those that aim to serve them?

In thinking about this, our recent discussions about backlog, and recalling past
projects, I realized we seem to have largely lost some crucial skills. In
the days of waterfall, we were actually much more effective at understanding
requirements.

My first agile project was with XP, living in Oregon, and fortunate enough to
have Kent Beck provide a little guidance on our implementation. Sitting face to
face with me, on the other side of a half wall cube, was an actual customer of
our system, who had used it and things like it for more than 20 years. I could
sit and watch how they used it, ask them questions, find out exactly what they
were trying to accomplish and exchange ideas. From this experience I came away
with a great appreciation for the power of a direct collaborative exchange
between developers and real customers.

My next project was waterfall. One of the guys on my team was wickedly smart;
his background was mainly RUP, and he just -loved- requirements process. What he
taught me were techniques for understanding, figuring out the core purpose,
figuring out the context of that purpose, and exploring alternatives to build a
deeper understanding of what a user really needs. Some of these were
documentation techniques, and others were just how you ask questions and
respond. I learned a ton. On our team, the customers would make a request, and
the developers were responsible for working with the customers to discover the
requirements.

With Scrum-esque Agile process, this understanding process is outsourced to the
product owner. As we try to scale, we use a product owner to act as a
communication proxy, and with it create a barrier of understanding between
developers and actual customers. Developers seldom really understand their
customers, and when they are given the opportunity to connect with them, the
discoveries of all the things we've been doing that could have been so much
better are astounding.

I've done agile before on a new project, sitting in the same room with our real
users, understanding their problems, taking what they asked for and figuring out
what they needed, and also having control of the architecture, design, interface
and running the team to build it - the innovation of the project and what we
built was incredible. Industry cutting-edge stuff was just spilling out of
everything we did. And it all came out of sitting in a room together and
building deep understanding of both the goals, and the possibilities. This was
agile with no PO proxy. The developers managed the backlog, but really wrote
very little down... we did 1 week releases.

Developers seldom have much skill in requirements these days, and are often
handed a specification or a problem statement that is usually still quite far
from the root problem.

In building in this understanding disconnect, and losing these skills, are we
really just building walls that prevent collaboration and tearing down our
opportunities to innovate?

Manufacturing of a Complex Thought

Imagine that the software system is a physical thing. Its shape isn't really
concretely describable; it's like a physical version of a complex thought. All
of the developers sit in a circle, poking and prodding at the physical thought -
adding new concepts, and changing existing ones.

Just like we have user interfaces for our application users, the code is the
developer's interface to this complex thought. Knowledge processes and tools
help us to manipulate and change the thought.

If I want to make a change, I need to first understand enough of the thought to
know how it would need to change. If I can easily control and manipulate parts
of the thought, and easily observe the consequences, it's easier and faster to
build the understanding I need. Once I understand, I can start changing the
ideas, and again if I misunderstood something, it would be nice to know as early
as possible what exactly my mistake was, so that I can correct my thinking. If
there were no misunderstandings, the newly modified complex thought is then
complete.

In order to collaboratively work on evolving this complex thought, we must also
maintain a shared understanding - more brains involved increases the likelihood
of misunderstandings and likewise mistakes.

So with that model of development work, then think about all of the ideas and
thinking that have to flow through the process in order to support the creation
of this physical idea. Inventory in this context is a half-baked idea, either
sitting on the shelf or currently in our minds being processed. These ideas
are what we manufacture, but since each idea has to be woven into a single
complex thought - our tools that we use to control and observe the thought, the
clarity and organization of the thought, the size of the thought, all have a
massive impact on our productivity.

The tools are not the value, the ideas that get baked into this complex thought
are. All of the tooling is just a means to an end. We should strive to do just
enough to support the manufacturing of the idea.

If you think about creating tests from the mindset of supporting the humans and
these knowledge processes, a lot of what we do with both automated and manual
testing can clearly be seen as waste. An idea that is clear to everyone, for
example, is not one likely to cause misunderstandings and mistakes. We should
first aim to clarify the idea to prevent these misunderstandings. We should
then aim for controllable and observable, as these characteristics allow us to
come to understand more quickly. And when misunderstandings and mistakes are
still likely, we should then use alarms to alert us when we've made a mistake.
False alarms quickly dilute the effectiveness of the useful ones... so
effective tests take care to point the human clearly at the mistake without
raising unnecessary false alarms.
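To make the useful-alarm vs false-alarm distinction concrete, here's a hedged sketch (the function and both tests are hypothetical, invented for illustration): the first test alarms only when the idea itself changes; the second is coupled to incidental implementation detail and will ring falsely on harmless refactors.

```python
# Hypothetical sketch: two tests for the same tiny "idea".

def normalize_name(name):
    # The idea: names are stored trimmed and title-cased.
    return name.strip().title()

# A useful alarm: checks the idea itself. It breaks only when the
# behavior we agreed on actually changes.
def test_behavior():
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"

# A false alarm waiting to happen: checks an incidental implementation
# detail (which names the function's bytecode references). Rewriting
# normalize_name with, say, a regex would "break" this test even though
# the idea is intact.
def test_implementation_detail():
    assert "strip" in normalize_name.__code__.co_names
```

Both tests pass today; only the first one will still be pointing at real mistakes a year from now.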

Now think about things like code coverage metrics in this light. This metric
tends to encourage extreme amounts of waste. We forget all about the humans and
fill our systems with ever-ringing false alarms. We tend to only think about
tests breaking in our CI loop, but their real cost is how constantly they break
while we try to understand and modify this shared complex thought.
With our test-infected mindsets, we quickly bury the changeability of our ideas
in rigidity, and lose the very agility that we are supposedly aiming for.