There was understandable euphoria late last century when the Soviet Union collapsed along with the notion of ideological blocs. Peace and cooperation appeared to be spreading around the globe. A number of events, however, including the terrorist attacks of 11 September 2001, burst the rosy vision of a world at peace. Since then, we have seen a complex security environment emerge. Now, Adi Shamir, a professor at the Weizmann Institute of Science in Israel and one of the world’s most prominent cryptographers, warns that bad math should be added to the list of challenges [“Adding Math to the List of Security Threats,” by John Markoff, New York Times, 17 November 2007]. Shamir insists that a math error embedded in widely used computing chips could place the security of the global electronic commerce system at risk. In a research note written for colleagues:
“He wrote that the increasing complexity of modern microprocessor chips is almost certain to lead to undetected errors. Historically, the risk has been demonstrated in incidents like the discovery of an obscure division bug in Intel’s Pentium microprocessor in 1994 and, more recently, in a multiplication bug in Microsoft’s Excel spreadsheet program, he wrote. A subtle math error would make it possible for an attacker to break the protection afforded to some electronic messages by a popular technique known as public key cryptography.”
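Markoff mentions the 1994 Pentium division bug only in passing, but it is worth seeing how small the trigger was. The snippet below is my own illustration, using the division that was widely reported at the time to expose the flaw; the “flawed” result is the commonly cited published figure, since no modern processor will actually reproduce the bug.

```python
# The widely circulated Pentium FDIV test case (my illustration, not from
# Markoff's article): on a flawed 1994 Pentium, 4195835 / 3145727 reportedly
# came back as roughly 1.333739 instead of the correct 1.333820.
numerator = 4195835.0
denominator = 3145727.0

correct = numerator / denominator          # a correct FPU returns ~1.3338204491
reported_flawed = 1.3337390689             # value widely reported for the buggy chip

# A simple consistency check: numerator - (numerator / denominator) * denominator
# should be essentially zero; on the flawed chip it famously was not.
print(correct)
print(numerator - correct * denominator)          # ~0 on a correct chip
print(numerator - reported_flawed * denominator)  # large residue with the flawed quotient
```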
Most of us think of Enigma machines or various other devices used by spies when we think of cryptography. The fact is, however, that most of us rely on encryption every day of our lives. I have written in other posts that the global economy functions mainly on trust (loans, credit cards, brokered deals, and the like). Just as Ronald Reagan used to say “trust, but verify,” encryption is used to keep unscrupulous people from getting at sensitive information. As Markoff writes:
“Using this [public key cryptography] approach, a message can be scrambled using a publicly known number and then unscrambled with a secret, privately held number. The technology makes it possible for two people who have never met to exchange information securely, and it is the basis for all kinds of electronic transactions.”
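To make that description concrete, here is a minimal sketch of the idea using textbook RSA with deliberately tiny numbers. The primes, exponents, and variable names are mine, chosen purely for illustration; real deployments use keys of 2048 bits or more plus padding schemes.

```python
# Toy sketch of public key cryptography (textbook RSA, illustrative numbers only).
p, q = 61, 53                  # two secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # used to derive the private key
e = 17                         # public exponent: the "publicly known number"
d = pow(e, -1, phi)            # private exponent: the "secret, privately held number"

message = 65                           # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)        # anyone can scramble with the public pair (e, n)
recovered = pow(ciphertext, d, n)      # only the key holder can unscramble with d

assert recovered == message
print(n, e, d, ciphertext, recovered)
```

The asymmetry is the whole point: publishing (e, n) lets strangers send you scrambled messages, while recovering d from (e, n) is believed to be as hard as factoring n.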
So what’s the problem?
“Mr. Shamir wrote that if an intelligence organization discovered a math error in a widely used chip, then security software on a PC with that chip could be ‘trivially broken with a single chosen message.’ Executing the attack would require only knowledge of the math flaw and the ability to send a ‘poisoned’ encrypted message to a protected computer, he wrote. It would then be possible to compute the value of the secret key used by the targeted system. With this approach, ‘millions of PC’s can be attacked simultaneously, without having to manipulate the operating environment of each one of them individually,’ Mr. Shamir wrote. The research note is significant, cryptographers said, in part because of Mr. Shamir’s role in designing the RSA public key algorithm, software that is widely used to protect e-commerce transactions from hackers.”
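Shamir’s note, as reported, does not spell out the mechanics, but the classic illustration of how a single bad multiplication breaks RSA is the fault attack on CRT-based private-key operations: if one chosen message is processed with an error in one half of the computation, a single gcd reveals a secret prime factor. The toy sketch below is my own reconstruction under that assumption, again with tiny illustrative numbers.

```python
from math import gcd

# Toy sketch (my illustration, not Shamir's actual note) of why one bad
# multiplication during a private-key operation can be catastrophic, using the
# well-known fault attack on RSA signing with the Chinese Remainder Theorem.
p, q = 61, 53                      # secret primes (toy sizes)
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 1234 % n                       # the attacker's "poisoned" chosen message

# Normal CRT signing: compute m^d modulo p and modulo q, then recombine.
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

# Simulate a hardware math flaw that corrupts only the mod-p half.
sp_faulty = (sp + 1) % p

def crt_combine(a_p, a_q):
    # Recombine residues mod p and mod q into a single value mod n.
    q_inv = pow(q, -1, p)
    h = (q_inv * (a_p - a_q)) % p
    return (a_q + h * q) % n

faulty_sig = crt_combine(sp_faulty, sq)

# The attacker sees only the faulty output, yet recovers a secret prime:
recovered_factor = gcd(pow(faulty_sig, e, n) - m, n)
print(recovered_factor, recovered_factor in (p, q))   # expected: 53 True
```

The point is not this specific fault model; it is that a targeted, “poisoned” input plus one incorrect arithmetic result can be enough to expose the private key, without touching the victim’s operating environment.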
As I noted in a recent post [Algorithms and Business], algorithms and automated business processes are going to play increasingly important roles in our lives. The burden this places on those generating algorithms or equations is enormous. Although Shamir’s scenario is hypothetical, clearly the unintended consequences of “bad math” could ripple through the global economic system and result in its collapse as money is lost, liability is assessed, and so on. Analysts are inclined to take Mr. Shamir seriously.
“‘The remarkable thing about this note is that Adi Shamir is saying that RSA is potentially vulnerable,’ said Jean-Jacques Quisquater, a professor and cryptographic researcher at the Université Catholique de Louvain in Belgium. Mr. Shamir is the S in RSA; he, Ronald Rivest and Leonard Adleman developed it in 1977. Because the exact workings of microprocessor chips are protected by laws governing trade secrets, it is difficult, if not impossible, to verify that they have been correctly designed, Mr. Shamir wrote. ‘Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers,’ he said, ‘there are many smaller manufacturers of microprocessors who may be less careful with their design.'”
The type of challenge that Shamir identified is not new. It was the fact that Mr. Shamir himself was raising the warning, not the warning itself, that made his concerns big news.
“The class of problem that Mr. Shamir described has been deeply explored by cryptography experts, said Paul Kocher, who is president of Cryptography Research, a consulting and design firm in San Francisco. However, he added that it illustrated how small flaws could subvert even the strongest security. An Intel spokesman noted that the flaw was a theoretical one and something that required a lot of contingencies. ‘We appreciate these and we look at everything,’ said George Alfs, an Intel spokesman. In e-mail correspondence after he sent the note, Mr. Shamir said he had no evidence that anyone is using an attack like the one he described.”
I take a personal interest in such concerns because my company, Enterra Solutions®, is in the rule set automation and information sharing business. Security and trust are critical elements of both. Like others, we trust chip manufacturers to provide us with products that contain no critical flaws. The recent recalls of toys made in China must give pause to companies that depend on microchips manufactured there. Although I’m sure that microchip manufacturing has much better oversight than toy manufacturing, standards and enforcement are essential parts of maintaining trust in the global economy. That is why the WTO sets standards, why the ISO sets standards, and why there is so much haggling over standards in most industries. Standards can be a pain to deal with, but the consequences of having no or low standards can be devastating, as Mr. Shamir’s scenario aptly demonstrates, and that makes the pain they cause tolerable.