
I’ll Take Waste For $400, Alex


The geeks are giddy at the news of Watson - IBM's multi-million dollar entrant in Jeopardy - thoroughly trouncing a couple of smart guys.  Some in the investment community are pretty jazzed up at the potential for computing that Watson's rousing victory promises, as well.  Most noteworthy is a quote from a guy named Ross Mauri, who said, "The implications of this technology in the coming years are going to be phenomenal."  What makes his observation significant is that he is IBM's Vice President for Enterprise Process Transformation.  I would suggest that the implications are phenomenally disastrous if enterprise leadership cannot distinguish trivia from useful information, and Watson has only proven to be the ultimate Rain Man of trivia.

Just about everyone is familiar with the Einstein quote that not all that can be counted counts, and vice versa.  The term for that which can be counted but does not count for much is 'trivia', and this is the area in which computers in general, and Watson in particular, excel.

Put another way, last week I was doing a One Day Assessment at an aluminum extruder and, as I walked through the plant, the engineer giving me the tour and I talked about the unhealthy supremacy of direct labor cost in management decision making.  His observation was right on the money - it is given so much weight because it is so easy to count.  The implications of Watson - potentially dangerously phenomenal implications - are an increase in reliance on things that are easy to count - but don't really count for much.

Expertise at trivia is no great accomplishment.  By the nature of my job I am pretty hard to beat at Jeopardy and Trivial Pursuit.  Just last week I learned how potatoes are planted, flag poles are made and some cool stuff about DNA processing technology I have sworn not to discuss.  I can tell you how cattle are groomed for competition, what happens inside the device that fires Tomahawk missiles, how to skin the bark off a tree at very high speed and how cartons of orange juice are formed, filled and sealed.  I know an amazing amount of such arcane stuff simply because I have been in over 75 factories in the last year.  While all of that makes me a fascinating conversationalist at cocktail parties, it would be a serious mistake to interpret any of it as evidence of wisdom or even common sense.

In Simple Excellence Adam Zak and I describe the way in which the most profitable companies we have seen distill their business down to the basics - the opposite of the complexity computers seek to master.  More significantly, we describe how those companies make very effective decisions with equal inputs of data, common sense and practical wisdom, and an ethical concern for all of their stakeholders - the things Einstein's quote described as counting even though they can't be counted.

Business is already way out of kilter with a bias toward 'fact based' management, which limits decision making to only those things that can be quantified and counted, largely disregarding the critical things that are a lot harder to neatly dump into a database for the likes of Watson to cull through.

Watson's victory is little more than a parlor trick - mild amusement for management.  To read any more into it is to head further down a destructive path management has already followed for too long.


10 Responses to "I’ll Take Waste For $400, Alex"

  • Doug Meyer
    17 February 2011 - 11:13 am

    I’m troubled by this post, Bill. I’ve been a subscriber to your newsletter for several years and I read it regularly, mainly for the wise insights I expect to find there.

    I was shocked to read this entry, though. (Perhaps you might consider auditioning for Ricky Gervais’ show?)

    First though, full disclosure. I am one of those rather universally disparaged geeks; I make my living writing software for clients who value my services enough to support me at a reasonable standard of living. And I’m a veteran (in many ways): I’ve been doing this software gig for over forty years, retreading myself at an ever-increasing rate.

    I have had to make it my business to at least understand, if not master, the foundations on which my application software must rest.

    I have to tell you unequivocally, Bill, that you have missed the point of the entire Jeopardy demonstration.

    If the show that you saw was simply about encyclopedic recall, then I would be the first to agree with you that what we saw was an exercise in excess and futility. In fact, when I first learned of the proposed show last week on Nova, that very thought was the first thing that came into my mind.

    But the salient point is this: the software that the IBM researchers built demonstrated not intelligence, but a form of semantic understanding. We have never before seen anything made of silicon and steel that could show such a capability. This product shows that software alone can be built that will enable a system with such encyclopedic recall to _make sense_ out of human language. This is the only significance of the exercise. But it is truly a profound one. It likely will, in our own lifetimes, do away with the decades-old complaint that ‘the computer did what I told it to do, not what I wanted it to do.’ To quote the Vice President, ‘This is a big f****** deal!’.

    BTW, I concur (and have been agreeing with your point for thirty years in manufacturing companies) about how management is swayed so easily by things that they can count. But that’s just accounting. It’s not management.

  • Bill Waddell
    17 February 2011 - 12:48 pm


    My understanding of IBM’s “accomplishment” with Watson is this:

    Existing applications are very good at:

    Ask: What is fact X?
    Computer: Go get fact X and give it to user

    Watson, however, can do:

    Ask: What is answer to question giving only clues?
    Computer: Follow clues to get Fact A, leading to Fact B, leading to Fact C, leading to Fact n … ultimately to Fact X … give Fact X to user as answer to question.

    That is very good. However, the point of my post is that Fact X must be known if the computer is to provide answers. Watson is no more capable of determining the answer to the question if Fact X has not previously been defined than the first abacus was. In far too many cases Fact X is not only unknown, it is unknowable. IBM and the enterprise software community seem to be under the illusion that every aspect of business can ultimately be quantified … that the task of management can eventually be boiled down to logic and data, which is an absurd proposition. The effect is that great weight is given to inputs to decisions where Fact X is known, and little weight is given to inputs that cannot be quantified. Watson is dangerous to the extent it convinces management that it will inherently improve their decision making, when it offers nothing to give weight to that which has not already been defined.
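
    The contrast Bill draws between the two retrieval modes can be sketched in a few lines of Python. The "knowledge base" and clue chain here are invented purely for illustration - the point is only that both modes dead-end when Fact X was never stored:

    ```python
    # Illustrative sketch of the two retrieval modes described above.
    # The facts and clue chain are made up for the example.
    knowledge = {
        "capital_of_france": "Paris",   # Fact X, directly keyed
        "clue_1": "clue_2",             # chained clues: A -> B -> ... -> X
        "clue_2": "capital_of_france",
    }

    def direct_lookup(key):
        """Classic application: ask for fact X by name, get fact X."""
        return knowledge.get(key)

    def follow_clues(start_key):
        """Watson-style: follow a chain of clues to a terminal fact.
        If the final fact was never stored, nothing can be returned."""
        key = start_key
        while key in knowledge:
            value = knowledge[key]
            if value not in knowledge:  # terminal value: the answer itself
                return value
            key = value                 # otherwise keep following the chain
        return None                     # Fact X was never defined: no answer

    print(direct_lookup("capital_of_france"))  # Paris
    print(follow_clues("clue_1"))              # Paris
    print(follow_clues("unquantifiable"))      # None - Bill's point exactly
    ```

    Either way, the program can only hand back something somebody already put into the base; nothing in the chain-following machinery conjures an undefined Fact X.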


  • Pete
    17 February 2011 - 4:45 pm

    Bill, so you don’t see a potential future application as a diagnostic assistant in the medical field (for example)? AI has failed at this for some time, but here is an approach that might have a chance. Doctors make errors all of the time and are constrained in their answers by their particular expertise and breadth of experience. A patient might begin by stating symptoms. The diagnostician then must start asking questions and using the answers to narrow down to a more limited set of ailments. At some point tests will need to be performed to help with the differential diagnosis, but which are the critical ones? The AI assistant could be kept current with many medical advances more readily than a human. Watson “learned” answers by “listening” to answers in the setup games. In the competition it was required to identify the most important parts of the clues in order to focus on potential answers. Sorry, but I see this as a small but important step forward in applying computing power to real, important problem solving. I’m with Doug on this one.
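
    The symptom-narrowing process described above can be sketched in a few lines; the ailment table and symptoms are entirely hypothetical:

    ```python
    # Toy sketch of differential narrowing: start from a (made-up)
    # ailment table and filter it with each answered yes/no question.
    ailments = {
        "flu":   {"fever", "cough", "aches"},
        "cold":  {"cough", "sneezing"},
        "strep": {"fever", "sore throat"},
    }

    def narrow(candidates, symptom, present):
        """Keep only ailments consistent with one yes/no answer."""
        return {name: syms for name, syms in candidates.items()
                if (symptom in syms) == present}

    candidates = dict(ailments)
    candidates = narrow(candidates, "fever", True)   # drops "cold"
    candidates = narrow(candidates, "cough", False)  # drops "flu"
    print(sorted(candidates))                        # ['strep']
    ```

    A real assistant would also have to rank which question or test cuts the candidate set fastest - that ordering problem is where the hard work lives.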

  • JM
    17 February 2011 - 5:21 pm


    Watson appears to be nothing more than a super-advanced search algorithm (actually multiple algorithms) that runs at lightning speed. There’s no deductive reasoning…it simply helps find information faster and more intelligently. Same stuff Google engineers work on day in and day out.

    I think if you carefully read the article, you will notice they use the word “help” several times when describing Watson’s potential applications. It won’t “replace” anything, especially a human’s ability to think and reason. It’ll simply help humans to make rational decisions more consistently by presenting more information almost instantaneously.

    I have to agree with Doug…you are missing the point on this one.

  • Bill Waddell
    17 February 2011 - 5:37 pm


    I don’t presume to know whether this will have some application somewhere. It may well facilitate medical analysis or be the super search engine driver. I specifically wrote about the use of this as an enterprise tool. The quote was from the IBM guy in charge of such applications. I take issue with this as turbo-charging ERP, which is already relied on far too much and leads to poor enterprise management.

  • Jim Fernandez
    18 February 2011 - 8:38 am

    All this talk is making me very nervous! We went through a Lean Kaizen event last month and discovered that the root cause of our problem is, we have no control over our capacity planning for our machine shop. I suggested using simple forecasting software where you input your current capacity and the shipping schedule. Then you can see when you need to move shipment dates, increase capacity or purchase parts from outside vendors. We have our current ERP/MRP system running on manual - we do all of our planning by hand, and we do no capacity planning at all.

    Next week we have some software people coming in to show us how to fully utilize our current ERP/MRP system. I’m scared about the prospect of letting a computer tell us how to run the company. My experience with “most” software programs is that there are constraints on how they operate. And “most” software people do not listen to what is actually needed. They are always trying to figure out how they can fit what you need into the existing framework of the computer program. As evidenced by Doug’s comment: “I have had to make it my business to at least understand, if not master, the foundations on which my application software must rest”.
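
    The simple capacity check Jim asks for - compare scheduled load against capacity period by period and flag the overloads - really is only a few lines of code. The weeks and hours below are invented for illustration:

    ```python
    # Minimal capacity-check sketch: compare scheduled machine-shop load
    # to weekly capacity and flag weeks where shipments must move,
    # capacity must grow, or work must go to outside vendors.
    # All numbers here are hypothetical.
    weekly_capacity_hours = 400  # assumed shop capacity per week

    # shipping schedule: week -> machining hours required
    schedule = {
        "2011-W08": 350,
        "2011-W09": 480,   # overloaded
        "2011-W10": 390,
        "2011-W11": 520,   # overloaded
    }

    def find_overloads(schedule, capacity):
        """Return {week: excess_hours} for weeks where load exceeds capacity."""
        return {week: load - capacity
                for week, load in schedule.items()
                if load > capacity}

    overloads = find_overloads(schedule, weekly_capacity_hours)
    for week, excess in sorted(overloads.items()):
        print(f"{week}: {excess} hours over capacity - reschedule or outsource")
    ```

    Even a spreadsheet does this; the point is that the planner, not the software, decides what to do with the flagged weeks.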

  • Stephen Hodgson
    18 February 2011 - 9:34 am

    Good quote Jim. We too are implementing a total ERP solution. I’m worried it will just layer more complexity over bad processes, but it isn’t my call so I’m along for the ride!

  • Jim Toomey
    19 February 2011 - 2:28 am

    People are missing the point on Watson. The real thing to take away is not that it could retrieve an answer. Rather it was that it could decode standard language (not computer speak) and understand what was being asked. The fact that it did it quickly enough to find the answer and beat the Jeopardy contestants to the buzzer is basically just showing off.

    Plain language communication with computers will be as revolutionary as the GUI was 30 years ago.

    People should really see the Nova program about Watson for more insight.

    Full disclosure, I am an IBMer but a hardware not a software guy and not at all involved with this project.

  • Bill Waddell
    19 February 2011 - 3:04 am


    Thanks for weighing in on this. I am old enough to remember the pre-GUI days, and if Watson represents a comparable improvement in usability and user productivity, it will certainly have earned its keep.

  • mrpinto
    22 February 2011 - 4:43 pm


    Generally a big fan, but I must echo others that you’re a bit off on the point here.

    Trivia recall is what makes the game show hard for humans. Humans are bad at recalling things.

    Computers are excellent at recalling things. Knowledge of this sort of trivia would be… trivial to program into a computer. Hook it up to Wikipedia and take an early lunch.

    Language processing is what makes the show hard for computers. Humans are great at language processing. It’s pretty much our single greatest cognitive achievement actually. Computers suck at it. Irregular constructions, puns, metaphors and all that are hard.

    Computers that can better process language are actually a pretty big deal. Computers are still pretty hard for humans to deal with. IBM (and Google) are making big progress there and I think it’s safe to assume that business will benefit in the end from developments in this area.