
Gallimaufry III

July 19, 2006

Gallimaufry is a hash made from leftovers. Over the years the term has also come to mean a hodgepodge, hence the title of this post. This is the third post in the series.


Monolith or Matrix


A Fortune magazine article by Vivienne Walt [“France Sings a Different Tune,” 17 July 2006] discusses a new French digital rights bill (yet to be signed into law by President Jacques Chirac) that could require Apple to disclose the secret underlying code for its iTunes software. The thrust of the article is that if France is pushing open-source software, it could be a turning point for the rest of Europe. As Francisco Mingorance, director of public policy in Europe for the Business Software Alliance, whose members include Microsoft, Intel, and Symantec, puts it, “France is a country that is so respectful of authors. Giving software free on the Internet? That would be the end of copyright protection.”


The article highlights a growing tension of the information age and a dilemma concerning business strategies. During the dawn of the information age, companies believed that they were best served by proprietary systems that not only protected their intellectual property and sunk investments, but almost guaranteed customer loyalty, since interoperability was nearly impossible to achieve if a client jumped ship to another vendor. Apple fits pretty solidly into this category, which I call the monolith. You have to give Steve Jobs credit for sticking with that business model and making it work when everyone else believed it was dead.


Microsoft represents the matrix business model. Code is shared and interoperability is the goal. Companies that once clung fiercely to the monolith began to embrace the matrix when they realized that customer loyalty was more likely to be won by helping to make connectivity and interoperability easier, not harder. The French law is just another signpost that the monolith is not a good business model for a Resilient Enterprise. When something new comes along (in this case, something that will eventually replace the iPod), a market can be lost overnight. While the French law allows the government to force Apple and other companies to disclose their protected code, it does have an escape clause.

Under the law, those requesting underlying code will need to apply to a newly created government body, which will consider each case according to its benefit to the software’s owner. And the law includes a loophole for companies like Apple by giving copyright holders – such as record labels and musicians – the right to say they don’t want interoperable systems.

In this day and age, saying you don’t want interoperability goes against almost every developing trend.

Can iTunes and others survive the mounting challenges? Not only free-software advocates have their doubts. “Apple is living in a fool’s paradise, but then the whole media market is doing that,” says Gilles Gravier, chief technology strategist for Sun Microsystems, who believes that digital-rights management should end. Gravier says technology companies fail to realize that opening their software could create a boom for online purchases, since millions more people would be able to play downloads, increasingly on their mobile phones. “When records came out, they said it would close concert halls,” Gravier notes. “But they are still there.”

Apple is likely to find that content providers are fickle when newer (more profitable) opportunities present themselves. I’m certainly convinced that the future lies in moving from the monolith to the matrix.


Cognition is Coming


Science fiction novels are filled with stories of cognitive machines running amok and threatening the human race. I believe, however, that computer cognition is not only coming but will be a crucial building block for a new architectural framework that will make organizations much more resilient. An article in the New York Times [“Brainy Robots Start Stepping Into Daily Life,” by John Markoff, 8 July 2006] avers that in the field of artificial intelligence “reality is finally catching up to the science-fiction hype.” The article points to activities such as robot cars driving themselves across the desert, electronic eyes performing lifeguard duty in swimming pools, and virtual enemies with humanlike behavior battling video game players. The article continues:

Though most of the truly futuristic projects are probably years from the commercial market, scientists say that after a lull, artificial intelligence has rapidly grown far more sophisticated. Today some scientists are beginning to use the term cognitive computing, to distinguish their research from an earlier generation of artificial intelligence work. What sets the new researchers apart is a wealth of new biological data on how the human brain functions. “There’s definitely been a palpable upswing in methods, competence and boldness,” said Eric Horvitz, a Microsoft researcher who is president-elect of the American Association for Artificial Intelligence. “At conferences you are hearing the phrase ‘human-level A.I.,’ and people are saying that without blushing.”

Cognitive computing holds the promise of an exciting breakthrough and bears watching. We sit on the cusp of an era when automated rules can be linked with cognitive computing to create an entirely new business platform (a rough sketch of what such a linkage might look like follows the excerpt below). I’ll write more on that in what I’m going to call my first Thought Probe. How close we are to that breakthrough is not clear.

Cognitive computing is still more of a research discipline than an industry that can be measured in revenue or profits. It is pursued in various pockets of academia and the business world. And despite some of the more startling achievements, improvements in the field are measured largely in increments: voice recognition systems with decreasing failure rates, or computerized cameras that can recognize more faces and objects than before. Still, there have been rapid innovations in many areas: voice control systems are now standard features in midpriced automobiles, and advanced artificial reasoning techniques are now routinely used in inexpensive video games to make the characters’ actions more lifelike.
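
To make the earlier point about linking automated rules with cognitive computing a bit more concrete, here is a minimal sketch. Everything in it (the credit-approval setting, the thresholds, the stand-in risk model) is my own illustrative assumption, not anything described in the Times article: fixed, auditable business rules settle the clear cases, and a learned, probabilistic component handles the cases the rules leave open.

    # A minimal sketch, assuming a hypothetical credit-approval workflow.
    # Hard-coded business rules settle the clear cases; a stand-in
    # "cognitive" risk score (imagine a learned model) handles the rest.

    from typing import Optional

    def rule_engine(order_value: float, customer_years: int) -> Optional[str]:
        """Fixed, auditable rules; returns a decision or None if undecided."""
        if order_value > 1_000_000:
            return "escalate to a human"
        if customer_years >= 10 and order_value < 50_000:
            return "approve"
        return None  # the rules do not cover this case

    def risk_score(order_value: float, customer_years: int) -> float:
        """Stand-in for a learned model; returns an estimated probability of default."""
        raw = 0.3 + order_value / 5_000_000 - 0.02 * customer_years
        return max(0.0, min(1.0, raw))

    def decide(order_value: float, customer_years: int) -> str:
        decision = rule_engine(order_value, customer_years)
        if decision is not None:
            return decision                      # a rule settled it
        if risk_score(order_value, customer_years) < 0.35:
            return "approve"                     # the model is comfortable
        return "send to a human for review"      # neither rules nor model is sure

    if __name__ == "__main__":
        print(decide(order_value=40_000, customer_years=12))   # a rule fires
        print(decide(order_value=400_000, customer_years=3))   # the model decides

The point is the division of labor: the rules stay transparent and easy to audit, while the “cognitive” piece absorbs the messy middle ground.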

A related New York Times article [“Maybe We Should Leave That Up to the Computer,” by Douglas Heingartner, 18 July 2006] discusses the work of Chris Snijders of the Eindhoven University of Technology. Snijders “is convinced that computer models, by and large, can do a better job of [making decisions, as long as you have some history and some quantifiable data from past experiences]. He even issued a challenge late last year to any company willing to pit its humans against his algorithms.” Snijders backs up his bravado with two years of experimentation.

Some of Mr. Snijders’s experiments from the last two years have looked at the results that purchasing managers at more than 300 organizations got when they placed orders for computer equipment and software. Computer models given the same tasks achieved better results in categories like timeliness of delivery, adherence to the budget and accuracy of specifications.
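
As an illustration of the kind of model such an experiment implies (this is my own hypothetical sketch, not Snijders’s actual models; the field names and weights are invented), a purchasing decision can be reduced to scoring suppliers on quantifiable past performance:

    # Hypothetical sketch only; not Snijders's models. Scores each supplier
    # on quantifiable history and recommends the highest-scoring one.

    from dataclasses import dataclass

    @dataclass
    class SupplierHistory:
        name: str
        on_time_rate: float      # share of past orders delivered on schedule
        budget_adherence: float  # share of past orders that stayed on budget
        spec_accuracy: float     # share of past orders matching the ordered specs

    def score(h: SupplierHistory, weights=(0.4, 0.3, 0.3)) -> float:
        """Weighted sum of historical performance; higher is better."""
        w_time, w_budget, w_spec = weights
        return (w_time * h.on_time_rate
                + w_budget * h.budget_adherence
                + w_spec * h.spec_accuracy)

    def recommend(candidates: list) -> SupplierHistory:
        """Pick the supplier the model expects to perform best."""
        return max(candidates, key=score)

    if __name__ == "__main__":
        candidates = [
            SupplierHistory("Vendor A", on_time_rate=0.92,
                            budget_adherence=0.80, spec_accuracy=0.95),
            SupplierHistory("Vendor B", on_time_rate=0.85,
                            budget_adherence=0.95, spec_accuracy=0.90),
        ]
        best = recommend(candidates)
        print(f"Model recommends: {best.name} (score {score(best):.2f})")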

The article notes that computer models have a multitude of fans as well as critics. The critics mostly argue that not every business activity can be successfully modeled, and that even activities that can be modeled can lead to catastrophic consequences when variables exceed normal parameters. The article points, most famously, to the collapse of Long-Term Capital Management.

Many in the field of computer-assisted decision-making still refer to the debacle of Long Term Capital Management, a highflying hedge fund that counted several Nobel laureates among its founders. Its algorithms initially mastered the obscure worlds of arbitrage and derivatives with remarkable skill, until the devaluation of the Russian ruble in 1998 sent the fund into a tailspin.

“As long as the underlying conditions were in order, the computer model was almost like a money machine,” said Roger A. Pielke Jr., a professor of environmental studies at the University of Colorado whose work focuses on the relation between science and decision-making. “But when the assumptions that went into the creation of those models were violated, it led to a huge loss of money, and the potential collapse of the global financial system.” In such situations, “you can never hope to capture all of the contingencies or variables inside of a computer model,” he said. “Humans can make big mistakes also, but humans, unlike computer models, have the ability to recognize when something isn’t quite right.”

Another problem with the models is the issue of accountability. Mr. Forsythe of Schwab pointed out that “there’s no such thing as a 100 percent quantitative fund,” in part because someone has to be in charge if the unexpected happens. “If I’m making decisions,” he said, “I don’t want to give up control and say, ‘Sorry, the model told me.’ The client wants to know that somebody is behind the wheel.”

Still, some consider the continuing ascendance of models as inevitable, and recommend that people start figuring out the best way to adapt to the role reversal. Mark E. Nissen, a professor at the Naval Postgraduate School in Monterey, Calif., who has been studying computer-vs.-human procurement, sees a fundamental shift under way, with humans becoming increasingly peripheral in making routine decisions, concentrating instead on designing ever-better models.
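
The Long-Term Capital Management lesson, that a model is only trustworthy inside the conditions it was calibrated on, can be illustrated with a toy example (entirely my own; it has nothing to do with LTCM’s actual models): fit a simple line to “normal” history, and flag any input that falls outside the range the model has ever seen.

    # Toy illustration: a least-squares line fit on "normal" history, plus a
    # guard that flags inputs outside the calibration range, where the
    # extrapolation should not be trusted.

    def fit_linear(xs, ys):
        """Ordinary least-squares fit of y = a + b*x."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        a = mean_y - b * mean_x
        return a, b

    def predict(model, x, history_xs):
        """Predict y for x, and report whether x lies inside the data the model saw."""
        a, b = model
        in_range = min(history_xs) <= x <= max(history_xs)
        return a + b * x, in_range

    if __name__ == "__main__":
        xs = [1.0, 1.1, 1.2, 1.3, 1.4]   # calm, "normal" conditions (made up)
        ys = [2.1, 2.2, 2.4, 2.5, 2.7]
        model = fit_linear(xs, ys)

        for x in (1.25, 5.0):            # 5.0 is far outside anything in the history
            y, in_range = predict(model, x, xs)
            note = "" if in_range else "  <-- outside the historical range; do not trust"
            print(f"x={x:4.2f} -> predicted y={y:5.2f}{note}")

The guard is crude, but it captures Mr. Pielke’s point: the numbers keep coming out of the model whether or not its assumptions still hold, so something, human or otherwise, has to notice when the inputs no longer look like the history.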

Business Revolutions


As an entrepreneur and someone trying to usher in a new organizational paradigm, I found a New York Times article about Blackboard, Inc., very interesting [“Business Revolutionaries Learn Diplomacy’s Value,” by William C. Taylor, 16 July 2006].

Blackboard, Inc., sells software that enables colleges and universities to put all their essential activities online: course reading, homework assignments, class discussions, tests and lab projects. Blackboard’s software, now used by well over 10 million students and their professors, has all the earmarks of a “disruptive technology” — a blend of computing and Internet-based connectivity with the potential to transform how established institutions operate.

I have consistently touted the power of connectivity and, along with Tom Barnett, see connectivity as one of the most important objectives of government and corporate policies in the age of globalization. Blackboard, Inc., has all the earmarks of a Resilient Enterprise because it blends computing, connectivity, and people. Another important strategy highlighted in the article is Blackboard’s evolutionary approach to change.

“How long have people been predicting that higher education was going to experience fundamental change or fall apart?” [Michael L. Chasen, the company’s president and chief executive] asked during an interview at his company’s headquarters in Washington. “But we are teaching and learning in much the same way we have for centuries. We don’t aim to ‘replace’ the classroom. We’re not looking to ‘revolutionize’ education. We help schools deliver more effectively what they are good at already.”

The dot-com era rightfully made most investors and businesses leery of revolutionary ideas. Resilient companies build on the best of the past by convincing clients that what they have to offer makes them better, not necessarily different. Of course, once disruptive technologies are successfully implemented, organizations are different (and better), but they become better because they appreciate the evolutionary consequences of adopting new technologies. True visionaries (those who can see a future that is not simply extrapolated from the past) are rare. Most decision makers, however, can appreciate (and are willing to invest in) evolutionary improvements. The article spotlights a number of companies that have taken a conservative approach to success and generated revolutionary results. It’s worth a read.
