Throughout my forty-six-year software development career, I have tried to avoid working on products that can kill people if the software has certain kinds of bugs. I deeply sympathize with the programmer of the Three Mile Island alarm detection system, who failed to anticipate how the alarm terminal would behave when the software was overwhelmed by hundreds of concurrent alarms. (The terminal printed a stream of question marks rather than any sensible error message.) Some years later, I interviewed this fellow for a job; I think he's a good programmer and a nice guy. But an unanticipated disaster really put his work on the spot.
In the current financial crisis, I've been reading complaints about how people must have underestimated the most extreme (very unlikely but real) risks that their deals created. But I believe that most of these people understood the rare risks. They also understood that these risks were other people's risks. (I believe that most of the financiers who brought us to our current crisis are still better off than 99.9% of the US population. For them, this disaster is mostly 'other people's disaster'.)
The bottom line of this meandering introduction is that it's praiseworthy to be concerned, in your work, about risks to others that you may create; but an awful lot of people don't worry about other people's risks. That's human nature, and it's very hard, as China with its tainted milk supply is discovering, to do much about it. And now, my story:
In the 1970s I worked on a computer system, one of the first of its kind, to automate electrocardiogram (EKG) analysis. This work came to me; I didn't choose it. I worked at it carefully, always worried about the chance that my software would cause someone's EKG analysis to produce a false negative. (A cardiologist always reviewed the results of the computer analysis, but you never know: what if the doctor was distracted that day?)
One customer came to us with a special request: that we program our computer to receive EKG data from some old EKG machines the customer already owned. We did not like those machines because the data they produced was very noisy and led to many incorrect results. We pointed this out to the customer. Our customer contact felt that this was our problem. He worked for a big company that manufactured many high tech products, and he brought in the company's top trouble-shooting team to figure out why our analyses, using his machines, were poor. The troubleshooters argued convincingly that there was nothing wrong with our software. The culprit was the noisy data coming out of the old EKG machines. The customer paid us for our work, and then, holy s—t! He set up a service bureau to process EKGs using his noisy machines.
My coworkers and I felt that whatever happened would not be our fault, but the situation really bothered us. Our name, and the prestigious name of the guy who developed the analysis program, stood behind this faulty service bureau. Who was going to die relying on the quality of our work? I wondered if we had a moral obligation to try, somehow, to expose this faulty service bureau. Any such action could produce nasty litigation, and could only be a desperate last resort. But what should we do?
Our salesman nosed around the industry, talking to cardiologists and other EKG service bureaus. He reported that the poor quality of the analyses by this worrisome service bureau was obvious to everyone. They were getting business, but only of a special kind: medical schools and some hospitals used their system for training purposes, to get people used to processing remote EKG analyses. We breathed a great sigh of relief.
Thursday, October 30, 2008
1 comment:
A classic moral hazard problem. I'm happy you guys were concerned about it, because all too often it's written off as 'somebody else's problem'. Financial markets are only one example of this; politics in certain systems is another (e.g. parliamentary systems without specific jurisdictions or 'ridings').
I work in research for medical products that could conceivably kill a lot of people: elevated cancer risk is a big worry, along with a whole host of other concerns (liver/kidney failure, debilitation, persistent infections, etc.). We aren't working on life-critical systems, though (ours generally affect mobility, pain, quality of life, etc.), and we often question whether certain risks are worth it. Since we work in research, though, any commercialized products from our work completely separate the associated risks from our 'gains' (not so much money as publications, grants, etc.).
It's a very tough issue, but at least there is *somebody* who is going to feel quite a bit of risk (i.e., the company involved), which will theoretically induce them to be more careful. Strict regulation also helps, along with a general spirit of checking for safety and efficacy before working on the bottom line.