Two frequent topics of this blog have been connectivity and innovation. I’m in favor of both. An article in The Economist, however, raises the intriguing question: “Is the web narrowing scientists’ expertise?” [“Great minds think (too much) alike,” 19 July 2008 print edition]. Most pundits have praised the World Wide Web for making more information available to more people than at any time in history. That is what makes The Economist’s question all the more ironic and intriguing. The article admits that the Web has made research easier — and that is the crux of the problem.
“Online databases of scientific journals have made life easier for scientists as well as publishers. No more ambling down to the library, searching through the musty stacks and queuing up for the photocopier. Instead, a few clicks of a mouse can bring forth the desired papers and maybe others that the reader did not know of—the ‘long tail’ of information that the web makes available. Well, that is how it is supposed to work, but does it? James Evans, a sociologist at the University of Chicago, decided to investigate. His conclusion, published in this week’s Science, is that the opposite is happening. He has found that as more journals become available online, fewer articles are being cited in the reference lists of the research papers published within them. Moreover, those articles that do get a mention tend to have been recently published themselves. Far from growing longer, the long tail is being docked.”
The problem, it appears, reflects more a lack of effort than a lack of available data on the Web. It may also reflect an interesting twist on the socialization of science.
“Dr Evans based his analysis on data from citation indexes compiled by Thomson Scientific (part of Thomson Reuters). In a world in which researchers must publish or perish, such indexes are the firing squads. They record how often one article is cited as a source by others, and thus measure a paper’s influence. Those used by Dr Evans cover 6,000 of the most prominent academic journals, some going back to 1945. By cross-referring these to a database called Fulltext Sources Online, he was able to work out when each of these journals became available on the web—and whether a journal had posted back-issues electronically as well. The result was a set of 34m research papers, which he was able to mine in search of his answers. For each research paper he looked at, he calculated the average age of the articles cited as references. He then calculated, for each of those cited articles, the number of back-issues of the journal it had been published in which were available on the web at the time when it was cited, and averaged that too. Finally, he looked for correlations between the two averages. What he discovered was that, for every additional year of back-issues of a journal available online, the average age of the articles cited from that journal fell by a month. He also found a fall, once a journal was online, in the number of papers in it that got any citations at all. Indeed, he predicts that for the average journal today, five extra years’ worth of online availability will cause a precipitous drop in the number of articles receiving one or more citations—from 600 to 200 a year. Rather than measuring the length of the tail, then, it seems that modern science is actually focusing on a tiny bit of it.”
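The analysis the article describes boils down to two per-paper averages and a correlation between them. A minimal sketch of that idea in Python follows; this is not Evans’s actual code or data — every journal figure below is invented purely for illustration, and the Pearson helper is a plain textbook implementation.

```python
# Hypothetical sketch of the correlation described above. For each citing
# paper we assume we know (a) the ages of the articles it cites and (b) how
# many years of back-issues of each cited journal were online at citation
# time. All numbers are invented for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each entry: (ages of cited articles in years,
#              years of online back-issues for each cited journal).
papers = [
    ([2, 3, 5],    [10, 8, 12]),
    ([1, 2, 2, 4], [15, 15, 9, 11]),
    ([6, 8, 10],   [3, 2, 4]),
    ([4, 5, 7, 9], [5, 6, 4, 3]),
]

# Step 1: average age of cited articles, per citing paper.
avg_cited_age = [sum(ages) / len(ages) for ages, _ in papers]
# Step 2: average online back-issue availability, per citing paper.
avg_online_years = [sum(yrs) / len(yrs) for _, yrs in papers]
# Step 3: correlate the two averages.
r = pearson(avg_online_years, avg_cited_age)
print(f"correlation between online availability and cited-article age: {r:.2f}")
```

With data like the above, the correlation comes out negative — the pattern Evans reports, where more years online goes with younger citations — but the magnitude here reflects only the made-up inputs.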
That all seems a bit bizarre, and The Economist admits that “why this should be so remains unclear. It does not seem to have anything to do with economics. The same effect applied whether or not a journal had to be paid for.” The article surmises that when researchers were forced to work a little harder at their research, they tended to make greater use of the results of that exertion.
“One explanation could be that indexing works by titles and authors alone, as happened with printed journals, forced readers to cast at least a cursory glance at work not immediately related to their own—or even that the mere act of flicking through a paper volume may have thrown up unexpected gems. This may have led people to make broader comparisons and to integrate more past results into their research.”
I can think of a couple of alternative explanations. One involves scientific egos: by citing fewer papers in a study, a researcher reduces the number of “Google hits” that scientific rivals are likely to receive. That explanation, however, does not account for the fact that the articles that are cited are generally more recent. If egos were the primary motivation for fewer references, one would expect references to older articles to have actually increased. Another explanation may be an underlying belief that recent articles are the most relevant because science is advancing so rapidly. I know of one researcher who insisted that he could not be held accountable for anything he had written more than five years ago. The Economist concludes that it is what it is, but remains intrigued about whether the trend is good or bad.
“It is not yet clear whether this change is for good or ill. Electronic searching means that no relevant paper is likely to go unread, but narrowing the definition of ‘relevance’ risks reducing the cross-fertilisation of ideas that sometimes leads to big, unexpected advances. As a wag once put it, an expert is someone who knows more and more about less and less until, eventually, he knows everything about nothing. It would be ironic if that is the sort of expertise that the world wide web is creating.”
I couldn’t agree more. Readers of this blog know that I’m a big fan of what Frans Johansson calls the Medici Effect — the “effect” created when experts from different disciplines come together to discuss and solve cross-sector challenges. Few challenges in life are confined to a single discipline, which means the best solutions to those challenges are unlikely to emerge from within a single discipline. The Web should make collaboration easier (and it has), and there is no reason it should narrow scientific thought.