Saturday, March 17, 2012

IT buzzwords and memes of the moment: cloud & consumerization

Peter Kretzman, IT Consumerization, the Cloud and the Alleged Death of the CIO:
Let me be clear once again: this frequent linking of cloud and IT consumerization to the looming demise of the CIO and IT is not just misguided, but actually gets it completely backwards. In fact, I argue that IT consumerization and the cloud will actually elevate the importance of IT within a company, as both a service and a strategic focus.

Let’s list and then discuss some of the ways that combining these memes (IT consumerization, cloud, and the ensuing heralded death of the CIO) falls down when measured against common sense and reality:

It fails to understand the full range of what a CIO (or IT) actually provides for modern-day companies.
It fails to recognize the profound pitfalls of a decentralized and fragmented approach for company systems and technologies.
It erroneously equates IT consumerization with the BYOD trend, missing the larger important picture that underscores the strategic need for IT.
It misunderstands the interplay of commoditization and competitive strategic advantage.

Writing in Wired's Cloudline blog, sponsored by IBM.


IT is hard enough already - why do things you don't need to?

Galen Gruman in Infoworld:

I don't get why IT itself takes on so many management challenges unrelated to technology operations or strategy.


Yes, it's not a good use of limited resources. But I don't think the problem of taking on things that don't need to be done is unique to IT.

Looking at IT for an answer to this is misplaced; instead, I'd start by looking at psychology, both organizational and individual.

Close Encounters of the Collaborative Kind

Good article in this month's IEEE Computer magazine, Close Encounters of the Collaborative Kind:
The participants in a collaborative interdisciplinary project found that developing a shared, project-specific communication style helped them overcome cultural barriers, understand the nuances of each other's work, and enhance the accuracy, interpretability, and utility of their models.


Wednesday, February 29, 2012

Taming Complexity and Tesler's Law

I always have a good think when I read Don Norman. Just started reading his Living with Complexity and it's holding true to form. It's worth reading the whole book just to be reminded of Tesler's Law.
Complexity can be tamed, but it requires considerable effort to do it well. Decreasing the number of buttons and displays is not the solution. The solution is to understand the total system, to design it in a way that allows all the pieces to fit nicely together, so that initial learning as well as usage are both optimal. Years ago, Larry Tesler, then a vice president of Apple, argued that the total complexity of a system is a constant: as you make the person's interaction simpler, the hidden complexity behind the scenes increases. Make one part of the system simpler, said Tesler, and the rest of the system gets more complex. This principle is known today as Tesler's law of the conservation of complexity. Tesler described it as a tradeoff: making things easier for the user means making it more difficult for the designer or engineer. “Every application has an inherent amount of irreducible complexity. The only question is who will have to deal with it, the user or the developer.” (Tesler and Saffer, 2007) With technology, simplifications at the level of usage invariably result in added complexity of the underlying mechanism.
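Tesler's tradeoff is easy to see in a few lines of code. Here's a toy sketch (my example, not Norman's or Tesler's, with hypothetical date-parsing functions): the simpler you make the interface, the more complexity moves into the implementation.

```python
# Two interfaces to the same task: parsing "day/month/year" vs "month/day/year".
# Interface A pushes complexity onto the user: they must say which format it is.
def parse_date_explicit(text, day_first):
    parts = [int(p) for p in text.split("/")]
    day, month = (parts[0], parts[1]) if day_first else (parts[1], parts[0])
    return (parts[2], month, day)  # (year, month, day)

# Interface B is simpler for the user -- no flag to pass -- but the
# complexity doesn't vanish; it moves into the implementation as a heuristic.
def parse_date_guess(text):
    a, b, year = (int(p) for p in text.split("/"))
    if a > 12:        # first number can't be a month
        day, month = a, b
    elif b > 12:      # second number can't be a month
        day, month = b, a
    else:             # ambiguous: fall back to a documented default (month-first)
        month, day = a, b
    return (year, month, day)
```

The total complexity stayed roughly constant; the only question was who carries it, the caller or the implementer.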
If you are a Don Norman newbie, start with The Design of Everyday Things; that's a classic. I liked the first edition's title better, Psychology of Everyday Things. He called it POET for short.

I also just read an article Don wrote for core77: Act First, Do the Research Later, where he demonstrates that pragmatism matters, and there are many paths to good design.
Today we teach the importance of doing design research first, then going through a period of ideation, prototyping and iterative refinement. Lots of us like this method. I do. I teach it. But this makes no sense when practical reality dictates that we do otherwise. If there is never enough time to start with research, then why do we preach such an impractical method? We need to adjust our methods to reality, not to some highfalutin, elegant theory that only applies in the perfect world of academic dreams. We should develop alternative strategies for design.
Why it is not necessary to start with design research: Here are five very different arguments to support the practical reality of starting by designing, not through design research. First, the existence of good design that was not preceded by research. Second, the argument that experienced designers already have acquired the knowledge that would come from research. Third, the research effort of a company ought to be continually ongoing, so that results are available instantly. Fourth, and most controversial, research might inhibit creativity. And fifth, when the product is launched and the team assembled, it is already too late. 
That's particularly fun given that I'm taking a course right now which is all about design research. I do enjoy holding two opposed ideas in my head at the same time. (No one should think F. Scott Fitzgerald was literally setting this as a true, singular test of a first-rate intellect; it's a necessary quality, but not sufficient on its own.)

Hey! I just found the whole quote, and there are two more sentences to it that I hadn't seen before.
Before I go on with this short history, let me make a general observation – the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function. One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise. This philosophy fitted on to my early adult life, when I saw the improbable, the implausible, often the "impossible," come true.
Seeing that things are hopeless and yet being determined to make them otherwise. Yup. That's worth doing.

If you want to investigate more about reconciling opposing ideas, I suggest The Opposable Mind.

Sunday, February 26, 2012

Jamming for Joy with Jaco


Watching this is bringing me joy.

Jaco Pastorius is one of my favorite musicians; but I never saw him play live before his untimely death in '87. In fact, I've never even seen video of him playing. Until tonight.

I'm watching him play Montreux '82 with Randy Brecker. Streaming on Netflix, natch.

You might know Jaco as the bass player on several of Joni Mitchell's albums, like Hejira (the one with Coyote on it).

He also played on several of Pat Metheny's best albums like his debut, Bright Size Life, which some say is one of the 100 Greatest Jazz albums of all time.

Jaco was a member of Weather Report with Wayne Shorter and Joe Zawinul - listen to Birdland on Heavy Weather.


Finally, check out his masterpiece, Word of Mouth, with an all-star big band of fusion jazz greats, from Herbie Hancock to Toots Thielemans and Jack DeJohnette. I rank it with Sergeant Pepper and Uh-huh as one of my personal favorite albums.


Sunday, February 19, 2012

Business metrics as solution requirements

In my day job, my team has started using a few old-school tools in our infrastructure architecture practice. One of these is Quality Function Deployment (aka QFD, aka House of Quality) which has its roots in Six Sigma manufacturing quality practices.
QFD House of Quality graphic from iSixSigma.com
While looking for a bit of information, I stumbled across an article titled “Retiring the House of Quality.” Since we're just beginning to use QFD, I wanted to see what the problem was. Turns out the article wasn't critiquing QFD itself, but the (mis)use of QFD in “innovation processes.”

The article ties into many of the themes we’re investigating in my current graduate school class on Evidence-Based Design (aka Human-Centered Design or User-Centered Design).

I liked the distinction made between concept innovation and technical innovation; I found that quite useful.
Distinguishing initial concept innovation from downstream technical innovation - from Retiring the House of Quality
But more importantly, I really appreciated the reframing around requirements where there were existing business processes with defined success metrics.

Consider the traditional approach: assume that the technical solution team can identify certain technical requirements for the solution, and that if what is built meets those solution requirements, then the solution will address the business problem.

Instead of that approach, the authors suggest having the existing business process success metrics directly become the solution requirements.

The outcome-driven innovation methodology uses customer-defined metrics (desired outcome statements) to guide the formulation, evaluation, and selection of new product and service concepts. The resulting concepts are tied directly to the customer’s desired outcomes—and the job the customer is trying to get done—increasing the likelihood that the customer will value the new concepts’ features. Because the inputs used to guide concept innovation are tied directly to the customer’s actual inputs, no translation is required.

Taking the business process requirements as the solution requirements rather than having to invent intermediate solution requirements is a great insight.

Literally, this is disintermediation, but instead of disintermediating two parties by removing a middleman, it disintermediates a set of, well, intermediate requirements.

And because it's exactly in that translation process of generating the intermediate requirements that technical solutions so often go wrong, this approach looks very promising for improving overall business satisfaction with solutions.
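A toy sketch of the idea (hypothetical metric names, mine, not the article's): the business process's existing success metrics serve directly as the solution's acceptance criteria, with no translated layer of intermediate requirements in between.

```python
# Hypothetical business-process metrics: each target is a (direction, value)
# pair, since some metrics must stay at or below a target, others at or above.
process_targets = {
    "order_cycle_time_hours": ("<=", 24),
    "first_pass_yield_pct":   (">=", 98.0),
}

def solution_accepted(measured, targets):
    """Accept the solution iff it meets every existing business-process
    metric -- the metrics *are* the requirements, untranslated."""
    for name, (direction, target) in targets.items():
        value = measured[name]
        ok = value <= target if direction == "<=" else value >= target
        if not ok:
            return False
    return True
```

No intermediate "the database must respond in 200ms"-style requirements to invent, and nothing lost in translation: the solution is judged against the same numbers the business already uses to judge the process.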

If you teach people, they have this miraculous capability...

I greatly enjoyed reading Architecture for Humanity's book Design Like You Give A Damn. It is full of wonderful, creative responses to hairy problems, often with incredible design constraints and stakes of life and death. 

One of my favorite lines was this quote from Maurice Cox:
I have come to believe that if you teach people what their options are, they have this miraculous capability to make the decision that is in their best interest. It was amazing to watch this unfold. 
I liked DLYGAD so much, I added it to my list of (physical) architecture books for IT architects.

A second volume of DLYGAD is coming out soon. I can't wait to read it.

Thursday, October 13, 2011

Technical Debt

I saw a great Steve McConnell (author of the crucial book Code Complete) webcast on Technical Debt and wanted to get these links published: blog post, webcast replay. The webcast replay has a link to a slide deck you can download - just register for the webcast. Highly recommended. Here's Steve's opening blurb:
The term technical debt was coined by Ward Cunningham to describe the obligation that a software organization incurs when it chooses a design or construction approach that's expedient in the short term but that increases complexity and is more costly in the long term. Ward didn't develop the metaphor in very much depth. The few other people who have discussed technical debt seem to use the metaphor mainly to communicate the concept to technical staff. I agree that it's a useful metaphor for communicating with technical staff, but I'm more interested in the metaphor's incredibly rich ability to explain a critical technical concept to non-technical project stakeholders.
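As a toy illustration of the metaphor (my sketch, not McConnell's or Ward's): the expedient version is cheaper today, but every future change pays "interest" on the shortcut.

```python
# Expedient shortcut: hard-code the one currency we support today.
# Cheap now; every future currency or precision requirement pays interest
# in the form of another special case bolted onto this function.
def format_price_quick(cents):
    return "$%d.%02d" % (cents // 100, cents % 100)

# Paying down the debt: more design effort up front, but a new currency
# or precision becomes a parameter instead of a code change.
def format_price(amount, symbol="$", decimals=2):
    units = amount / (10 ** decimals)
    return "%s%.*f" % (symbol, decimals, units)
```

Neither version is wrong; the metaphor just makes the cost of the quick one visible to non-technical stakeholders as a balance that accrues interest rather than a one-time saving.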


Update Feb 20 2012: the webcast replay links above no longer work, but Construx has posted the webcast on YouTube. 

And they posted the slides on SlideShare:
Managing Technical Debt
Last but not least, Construx has the same content in whitepaper form.

Saturday, October 08, 2011

#OccupySesameStreet

99% of da wurldz cookeez r eatn by 1% of da wurldz monsterz. OM NOM NOM NOM.

Learn to be a better troubleshooter



The very best technical talents often have massive troubleshooting chops. But troubleshooting isn't inherently a technical skill; it's a set of tools to achieve clear thinking and knowledge. 


This is science!


For your consideration:  the clearest expositions of technical troubleshooting strategies and tactics since Sun Tzu did it for war.


In no particular order, here are links and a few choice excerpts.


ESR, the ninja-slicing, recursive-software-naming, Free Software advocate who is a key figure in the culture of open-source, wrote one of the foundational documents of hackerdom. As of this writing, it's at version 3.7, last updated December 2010. The beauty of it is, in telling you how to ask questions the smart way, it also teaches you troubleshooting.  




How To Ask Questions The Smart Way
Eric Steven Raymond
  • Be precise and informative about your problem
  • Describe the symptoms of your problem or bug carefully and clearly.
  • Describe the environment in which it occurs (machine, OS, application, whatever). Provide your vendor's distribution and release level (e.g.: “Fedora Core 7”, “Slackware 9.1”, etc.).
  • Describe the research you did to try and understand the problem before you asked the question.
  • Describe the diagnostic steps you took to try and pin down the problem yourself before you asked the question.
  • Describe any possibly relevant recent changes in your computer or software configuration.
  • If at all possible, provide a way to reproduce the problem in a controlled environment.
Do the best you can to anticipate the questions a hacker will ask, and answer them in advance in your request for help. 
Giving hackers the ability to reproduce the problem in a controlled environment is especially important if you are reporting something you think is a bug in code. When you do this, your odds of getting a useful answer and the speed with which you are likely to get that answer both improve tremendously.


ESR also says: 


Simon Tatham has written an excellent essay entitled How to Report Bugs Effectively. I strongly recommend that you read it.

I agree! Check it out:

  • The first aim of a bug report is to let the programmer see the failure with their own eyes. If you can't be with them to make it fail in front of them, give them detailed instructions so that they can make it fail for themselves.
  • In case the first aim doesn't succeed, and the programmer can't see it failing themselves, the second aim of a bug report is to describe what went wrong. Describe everything in detail. State what you saw, and also state what you expected to see. Write down the error messages, especially if they have numbers in.
  • When your computer does something unexpected, freeze. Do nothing until you're calm, and don't do anything that you think might be dangerous.
  • By all means try to diagnose the fault yourself if you think you can, but if you do, you should still report the symptoms as well.
  • Be ready to provide extra information if the programmer needs it. If they didn't need it, they wouldn't be asking for it. They aren't being deliberately awkward. Have version numbers at your fingertips, because they will probably be needed.
  • Write clearly. Say what you mean, and make sure it can't be misinterpreted.
  • Above all, be precise. Programmers like precision.

But the first place I send people when I want them to understand what I'd like to get as a good bug report is Joel Spolsky's story of Jane, the very, very good software tester. 

It's pretty easy to remember the rule for a good bug report. Every good bug report needs exactly three things.
  1. Steps to reproduce,
  2. What you expected to see, and
  3. What you saw instead.
Seems easy, right? Maybe not. As a programmer, people regularly assign me bugs where they left out one piece or another.
If you don't tell me how to repro the bug, I probably will have no idea what you are talking about. "The program crashed and left a smelly turd-like object on the desk." That's nice, honey. I can't do anything about it unless you tell me what you were doing.

If you don't specify what you expected to see, I may not understand why this is a bug. The splash screen has blood on it. So what? I cut my fingers when I was coding it. What did you expect? Ah, you say that the spec required no blood! Now I understand why you consider this a bug.
Part three. What you saw instead. If you don't tell me this, I don't know what the bug is. That one is kind of obvious.
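Joel's three-part rule is simple enough to encode. Here's a minimal sketch (hypothetical class and field names, mine) that treats a report as not actionable unless all three pieces are present:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    # Joel Spolsky's three required pieces of every good bug report.
    steps_to_reproduce: list   # 1. how to make it happen
    expected: str              # 2. what you expected to see
    actual: str                # 3. what you saw instead

    def is_actionable(self):
        # Missing any one piece leaves the programmer guessing.
        return bool(self.steps_to_reproduce) and bool(self.expected) and bool(self.actual)
```

A bug tracker that refused to file reports failing `is_actionable()` would save everyone a round trip.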

Check out this book: Are Your Lights On?: How to Figure Out What the Problem Really Is, by Gause and Weinberg. They wrote the book on requirements, too.



Here's some random guy's 2 minute video review of it: 



KB555375 might be Microsoft's best KB article of all time - but by all means, if you know a better one, say so in the comments.


Microsoft Support Knowledgebase Article ID: 555375 - Last Review: July 22, 2005 - Revision: 1.0
How to ask a question 
Author: Daniel Petri MVP
Good examples of questions will include information from most of the following categories:
  • What are you trying to do?
  • Why are you trying to do it?
  • What did you try already, why, and what was the result of your actions?
  • What was the exact error message that you received?
  • How long have you been experiencing this problem?
  • Have you searched the relevant forum/newsgroup archives?
  • Have you searched for any tools or KB articles or any other resources?
  • Have you recently installed or uninstalled any software or hardware?
  • What changes were made to the system between the time everything last worked and when you noticed the problem?
Don't let us assume, tell us right at the beginning.
In fact, if you know of ANY other top-notch sources of troubleshooting wisdom, put a link in the comments!


There's one I'm trying to find that I had as a mousepad - it was about 10 troubleshooting tips - one of them was something like "Problems don't just go away on their own. If you haven't fixed the problem, the problem isn't fixed." Anybody know what that's from?


(I'll try to fix the formatting on this post later, ok?)