May 28, 2006
Transparency is the antidote to requests for big upfront planning. It offers project managers the ability to do risk management throughout the project lifetime. If you aren’t reporting cumulative flow, or burn down or burn up, or (gulp) earned value, every day to every stakeholder, then you aren’t embracing the core foundation of agile development techniques: high trust.
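To make the daily-reporting idea concrete, here is a minimal sketch of a burn-down figure that could be published to every stakeholder each day. The function name and all numbers are my own illustration, not from this post:

```python
def burn_down(total_scope, completed_per_day):
    """Remaining work at the end of each day, given daily completions."""
    remaining = total_scope
    curve = []
    for done in completed_per_day:
        remaining -= done
        curve.append(remaining)
    return curve

# 40 story points of scope, one iteration of daily completions:
print(burn_down(40, [5, 3, 6, 4, 7]))  # [35, 32, 26, 22, 15]
```

Publishing the raw curve, rather than a summary, is what keeps the reporting transparent: every stakeholder sees the same numbers.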
From my consulting experience: nakedness and transparency can work wonders. And Walking Talk and Talking Walk can replace reporting with embodied communication for many “small” systems, ranging up to on-demand systems.
And what if we are building, or making adaptations to, a big system in production, in a corporate environment, serving many different customers in an ongoing manner? That requires anticipatory stances. How can we support management in talking healthy strategies that do not stifle us in walking our active minds and creative joy?
We can’t take all these stakeholders aboard the team, as I suggested we could do in Walking Talk and Talking Walk. Not without serious loss of progress and expansive professional power: the communication graph simply gets too complex and takes up too much of our energy and time.
Don’t make unfounded assumptions
Supposing we trust management: how do we undo harmful wishful thinking in time? How do we manage the expectations of management? Management that is continuously flying high in the air all over the world, fully dressed, with only short stops on Earth for Talking here and there? What measurables and observables could work, and can we perhaps adapt to them in agile ways (while still leaving us reasonably naked)? How do we achieve some kind of transparency of the system for the head?
In natural human(e) systems,
around 80% of information goes to the head
from the body (negative feedback loops),
and only 20% of information goes from the head
to the body (positive feedback loops)
Do not repeat mistakes (make some new ones instead)
Because of an intense preoccupation in our industry with the removal of defects, we tend to discourage multiple reporting of all failure instances. Multiple reports are often resolved into a single failure type, and of course we compensate for that simplification by attaching some kind of severity rating that takes frequency into account. Now we can safely see through a single mind. Or can we?
None of these reports are actually the same, and the differences between them could give (more) clues about what is actually going on in the system. The cost of not doing multiple reporting can be considerably higher than the cost of doing it, not to mention the associated cost of lost opportunities. Get measurables and observables as congruent as possible, or the flow will stop.
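A small sketch of what the usual simplification discards. The field reports, failure names, releases, and customers below are all invented for illustration:

```python
from collections import Counter

# Hypothetical field reports: each records a failure type plus the context
# that differs between instances (release, customer).
reports = [
    {"failure": "timeout", "release": "2.1", "customer": "A"},
    {"failure": "timeout", "release": "2.1", "customer": "B"},
    {"failure": "timeout", "release": "2.2", "customer": "A"},
    {"failure": "crash",   "release": "2.2", "customer": "B"},
]

# The common simplification: one entry per failure type, with the
# frequency standing in for a severity rating.
collapsed = Counter(r["failure"] for r in reports)
print(collapsed)  # Counter({'timeout': 3, 'crash': 1})

# What the collapse discards: the timeout occurs across releases and
# customers, a clue that it may be systemic rather than release-specific.
contexts = {(r["release"], r["customer"])
            for r in reports if r["failure"] == "timeout"}
print(sorted(contexts))  # [('2.1', 'A'), ('2.1', 'B'), ('2.2', 'A')]
```

The collapsed view answers “how often?”; only the multiple reports answer “where, and under what conditions?”.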
And to undo the unnatural discouragement of multiple reporting, the following might work (feedback for contemplating more/different minimal measurements verrry welcome!):
Be impeccable with your word
- What if “continuity” could mean the “probability a system operates without interruption, under stated conditions, for a stated interval of time”?
- With the above meaning of “continuity”, could “availability” mean “the proportion of time that a system is delivering service”, and can it incorporate notions of recovery and repair time as predictive measures of outage duration?
- And perhaps something like “maintenance rate” could mean “the number of hardware and software defects detected per system in the first year of its operation”?
With the above meanings, the continuity of a system is determined both by the nature of a failure type and by the recurrence of that failure.
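The proposed meanings of “continuity” and “availability” can be sketched numerically. The formulas below assume exponentially distributed failure times and illustrative MTBF/MTTR values; both the distributional assumption and the numbers are mine, not this post’s:

```python
import math

def continuity(t_hours, mtbf_hours):
    """Probability of operating without interruption for t hours,
    assuming exponentially distributed times between failures."""
    return math.exp(-t_hours / mtbf_hours)

def availability(mtbf_hours, mttr_hours):
    """Long-run proportion of time the system is delivering service;
    repair time (MTTR) predicts the outage duration."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative values: a failure roughly once a month, four hours to repair.
print(round(continuity(24, mtbf_hours=720), 3))   # 0.967
print(round(availability(720, mttr_hours=4), 4))  # 0.9945
```

Note how repair time enters availability but not continuity: a system that fails often yet recovers instantly can be highly available while offering poor continuity.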
Make as many mistakes as necessary
- Monitoring trouble field data and representing it as the sum of the trouble rates due to all past and present system releases: multiple reporting
- Explicit filtering of raw data. When filtering trouble field data to reduce different instances of the same failure, caused by the same fault, to one single report, the filtered data effectively represents the software defect detection rate and the hardware fault detection rate.
- Monitoring defect removal rates in similar ways to software defect detection rates
- Monitoring hardware repair rates likewise.
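The first two monitoring steps above can be sketched with invented trouble data: the cumulative curve sums trouble counts over all past and present releases (multiple reporting), while explicit filtering collapses duplicates of the same fault to approximate the defect detection count. Release names, fault IDs, and all counts are illustrative:

```python
from itertools import accumulate

# Invented trouble counts per month, per release; r2 ships in the third month.
trouble_rates = {
    "r1": [9, 6, 4, 3, 2, 2],
    "r2": [0, 0, 8, 5, 3, 2],
}

# Multiple reporting: the cumulative trouble curve is the sum over all
# past and present releases.
monthly_total = [sum(rel[m] for rel in trouble_rates.values())
                 for m in range(6)]
cumulative = list(accumulate(monthly_total))
print(cumulative)  # [9, 15, 27, 35, 40, 44]

# Explicit filtering: reduce different instances of the same fault to a
# single report; what remains approximates the defect detection count.
raw_reports = ["F1", "F1", "F2", "F1", "F3", "F2"]
defects_detected = len(set(raw_reports))
print(defects_detected)  # 3
```

Keeping both views is the point: the unfiltered curve carries the multiple reporting, the filtered one the detection rate.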
When monitoring as described above, management can expect the system to always do its best and …
- Earlier releases to have lower contributions to a cumulative trouble curve than more recent ones
- Peaks in a cumulative trouble curve when a new version is released
- Peaks in a cumulative trouble curve when a system is duplicated, like for a new customer
- An overall increasing trend in a cumulative curve because of the increasing number of (managed) systems
- Data normalized to the number of systems to show decreasing trends
- Software detection curves to display a distinct skewing in time due to defect repair delays
- The ratio between raw trouble reports (Talk) and real defects (Walk) to remain constant
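Two of the expectation checks above, sketched on invented data (all counts are my illustration):

```python
# Invented monthly trouble reports and the number of managed systems.
monthly_troubles = [9, 10, 12, 10, 9, 8]
systems_in_field = [3, 4, 6, 6, 7, 8]

# Normalizing to the number of systems should yield a decreasing trend:
per_system = [t / n for t, n in zip(monthly_troubles, systems_in_field)]
print([round(x, 2) for x in per_system])  # [3.0, 2.5, 2.0, 1.67, 1.29, 1.0]

# The ratio of raw trouble reports (Talk) to real defects (Walk) should
# stay roughly constant; a drift flags the system for attention.
raw_reports  = [30, 42, 36]
real_defects = [10, 14, 12]
ratios = [r / d for r, d in zip(raw_reports, real_defects)]
print(ratios)  # [3.0, 3.0, 3.0]
```

Each check is cheap to automate, which is what makes this a *minimal* measurement system rather than big planning upfront.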
Accepting mistakes as the greatest teachers
If any of these turn out other than expected, the (development) system needs immediate management attention and further investigation.
Would this minimal measurement system help (executive) management effectively manage risks? What do you think?
Photo by cubicgarden
… it may be a start for (re)connecting corporate heads, flying loosely about, to their Woolly Mammoth bodies on the ground?