Every year the MIT Technology Review publishes a list of technologies it believes have the potential to change the world. This year is no different and the list of technologies for 2010 was recently published [“10 Emerging Technologies 2010,” May/June 2010]. The magazine notes that “the winners are chosen based on the editors’ coverage of key fields. The question that we ask is simple: is the technology likely to change the world? Some of these changes are on the largest scale possible: better biofuels, more efficient solar cells, and green concrete all aim at tackling global warming in the years ahead. Other changes will be more local and involve how we use technology: for example, 3-D screens on mobile devices, new applications for cloud computing, and social television. And new ways to implant medical electronics and develop drugs for diseases will affect us on the most intimate level of all, with the promise of making our lives healthier.” Let’s go through the list:
TR10: Real-Time Search [by Nicholas Carr] — “[Amit] Singhal is leading Google’s quest to incorporate new data into search results in real time by tracking and ranking updates to online content–particularly the thousands of messages that course through social networks every second. Real-time search is a response to a fundamental shift in the way people use the Web. People used to visit a page, click a link, and visit another page. Now they spend a lot of time monitoring streams of data–tweets, status updates, headlines–from services like Facebook and Twitter, as well as from blogs and news outlets. … What’s really hard about real-time search is figuring out the meaning and value of those fleeting bits of information. The challenge goes beyond filtering out spam, though that’s an important part of it. People who search real-time data want the same quality, authority, and relevance that they expect when they perform traditional Web searches. Nobody wants to drink straight from a fire hose. … Singhal’s view of real-time search is very much in line with Google’s strategy: distilling from a welter of data the few pieces of content that are most relevant to an individual searcher at a particular point in time. Other search providers, including Google’s arch rival, Microsoft, are taking a more radical view. Sean Suchter, who runs Microsoft’s Search Technology Center in Mountain View, CA, doesn’t like the term real-time search, which he considers too limiting. He thinks Microsoft’s Bing search engine should not just filter data flowing from social networks but become an extension of them. Ultimately, says Suchter, one-on-one conversations will take place within Bing, triggered by the keywords people enter. Real-time search, he predicts, will be so different from what came before that it will erase Google’s long-standing advantages. ‘History doesn’t matter here,’ he says. 
After a pause, he adds, ‘We’re going to wipe the floor with them.’ Amit Singhal has heard such threats before, and so far they haven’t amounted to much. But even he admits that real-time search comes as close to marking ‘a radical break’ in the history of search as anything he’s seen. Keeping Google on top in the age of chatter may prove to be Singhal’s toughest test.”
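The ranking problem Singhal describes — weighing quality, authority, and relevance against freshness — can be illustrated with a toy scorer. This is a minimal sketch of the general idea, not Google's actual algorithm; the field names, the half-life value, and the exponential recency decay are my own illustrative assumptions.

```python
import math

def realtime_score(doc, query_terms, now, half_life=3600.0):
    """Toy relevance score for a streamed update (NOT Google's algorithm).
    Combines topical match with author authority, discounted
    exponentially by the update's age in seconds."""
    words = set(doc["text"].lower().split())
    match = len(words & set(query_terms)) / len(query_terms)
    age = now - doc["timestamp"]
    recency = math.exp(-age * math.log(2) / half_life)  # halves every hour
    return match * doc["authority"] * recency

# Rank two updates on the same topic: an hour-old high-authority
# headline versus a fresher but low-authority message.
docs = [
    {"text": "earthquake hits chile coast", "timestamp": 0,    "authority": 0.9},
    {"text": "earthquake felt in chile",    "timestamp": 3000, "authority": 0.3},
]
ranked = sorted(docs,
                key=lambda d: realtime_score(d, ["earthquake", "chile"], now=3600),
                reverse=True)
```

Even with this crude decay, the older high-authority item outranks the fresher low-authority one — a small instance of the "nobody wants to drink straight from a fire hose" filtering Singhal describes.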
What Singhal and Suchter are discussing comes ever closer to what people in the past have called the semantic web or Web 3.0. For more on that subject, read my 2006 post entitled Web 3.0. The real-time search capabilities being developed rely heavily on social networks, and so does the next technology.
TR10: Social TV [by William Bulkeley] — “The viewership for live television broadcasts has generally been declining for years. But something surprising is happening: events such as the winter Olympics and the Grammys are drawing more viewers and more buzz. The rebound is happening at least in part because of new viewing habits: while people watch, they are using smart phones or laptops to swap texts, tweets, and status updates about celebrities, characters, and even commercials. Marie-José Montpetit, an invited scientist at MIT’s Research Lab for Electronics, has been working for several years on social TV–a way to seamlessly combine the social networks that are boosting TV ratings with the more passive experience of traditional TV viewing. Her goal is to make watching television something that viewers in different places can share and discuss–and to make it easier to find something to watch. Carriers, networks, and content producers hope that making it easier for viewers to link up with friends will help them hold on to their audiences rather than losing them to services like Hulu, which stream shows over the Internet. And opening TV to social networking could make it easier for companies to provide personalized programming. … Montpetit wants to unite different communication systems–especially cellular and broadband services–to create an elegant user experience. She’s been sharing ideas about that sort of system with BT, which provides broadband connections to 15 million people in the United Kingdom and Ireland, including nearly a half-million digital-TV subscribers. … Montpetit anxiously awaits U.S. deployment of social TV: her daughter, with whom she watches certain shows, heads off to college next fall. Engineering and business issues aside, she wants social TV to help friends and family stay connected, even as they move apart.”
I call this the “American Idol Effect.” Although television shows have asked people to call or write them in the past, American Idol made watching television an interactive social event. Montpetit is hoping to build on that phenomenon, but I would hardly call it world-changing. I suspect social TV will appeal more to younger generations than older ones. Many older viewers simply want people to be quiet so they can watch their favorite show! As long as we’re discussing the media, let’s move to the next technology, which is also media related.
TR10: Mobile 3-D [by Annalee Newitz] — “The Samsung B710 phone looks like a typical smart phone, but something unexpected happens when the screen is moved from a vertical to a horizontal orientation: the image jumps from 2-D to 3-D. The technology that produces this perception of depth is the work of Julien Flack, CTO of Dynamic Digital Depth, who has spent more than a decade perfecting software that can convert 2-D content to 3-D in real time. It could help solve the biggest problem with 3-D: the need for special glasses that deliver a separate image to each eye. Flack’s software synthesizes 3-D scenes from existing 2-D video by estimating the depth of objects using various cues; a band of sky at the top of a frame probably belongs in the far background, for example. It then creates pairs of slightly different images that the viewer’s brain combines to produce the sensation of depth. The technology can be used with the much-hyped 3-D televisions announced in January (which require glasses), but its biggest impact will be as a way to create content for mobile devices with auto stereoscopic 3-D displays, which work by directing light to deliver different versions of an image directly to each of a viewer’s eyes. The effect works best over a narrow range of viewing angles, so it is ill suited to television or cinema screens. But phones are generally used by one person at a time and are easily held at the optimum angle. That’s why mobile multimedia devices are likely to win the race to bring 3-D into the mainstream. … The most exciting area for Flack right now is games. Hundreds of games actually simulate 3-D spaces internally to handle mechanics such as the path of a missile, and then convert those 3-D spaces into 2-D to display to the player. With his technology, he says, the 3-D geometry ‘available inside the game itself’ can be made accessible to the display. … It’s applications like mobile games and video that will drive the widespread adoption of 3-D screens. 
And that, in turn, could lay the groundwork for a new generation of surprising interfaces and applications, just as large 2-D screens on mobile devices spawned developments such as touch-based interfaces and augmented reality.”
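The article's two-step recipe — estimate depth from monocular cues (sky at the top of the frame is far away), then synthesize a slightly shifted image pair — can be sketched with a toy example. This is an illustration of the general depth-image-based rendering idea only, not Dynamic Digital Depth's software; the row-position depth heuristic and the disparity formula are my own simplifications.

```python
def depth_from_position(height, width):
    """Crude monocular depth cue (illustrative only): rows near the top
    of the frame are treated as far (1.0), rows near the bottom as
    near (0.0)."""
    return [[1.0 - row / (height - 1) for _ in range(width)]
            for row in range(height)]

def stereo_pair(image, depth, max_disparity=3):
    """Synthesize left/right views by shifting each pixel horizontally
    in opposite directions; nearer pixels get larger disparity, which
    the viewer's brain fuses into a sensation of depth."""
    h, w = len(image), len(image[0])
    left = [row[:] for row in image]
    right = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            d = int(round((1.0 - depth[y][x]) * max_disparity))
            if x + d < w:
                left[y][x + d] = image[y][x]
            if x - d >= 0:
                right[y][x - d] = image[y][x]
    return left, right

# A tiny 3x4 "image" of pixel intensities:
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
depth = depth_from_position(3, 4)
left, right = stereo_pair(image, depth, max_disparity=1)
```

The far top row is left unshifted while the near bottom row is displaced in opposite directions in the two views — the disparity an autostereoscopic screen would deliver separately to each eye.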
I suspect this is one of those technologies that will find valuable uses far beyond those imagined by its creators. For example, doctors may be able to adapt the technology to make better long-distance diagnoses of patients in remote locations. Gaming won’t change the world, but other uses of the technology might. Since I raised the issue of health care, let’s move to three technologies that might change how doctors treat us when we get sick.
TR10: Implantable Electronics [by Katherine Bourzac] — “The next generation of implantable medical devices will rely on a high-tech material forged not in the foundry but in the belly of a worm. Tufts University biomedical engineer Fiorenzo Omenetto is using silk as the basis for implantable optical and electronic devices that will act like a combination vital-sign monitor, blood test, imaging center, and pharmacy–and will safely break down when no longer needed. Implanted electronics could provide a clearer picture of what’s going on inside the body to help monitor chronic diseases or progress after surgery, but biocompatibility issues restrict their use. Many materials commonly used in electronics cause immune reactions when implanted. And in most cases today’s implantable devices must be surgically replaced or removed at some point, so it’s only worth using an implant for critical devices such as pacemakers. Silk, however, is biodegradable and soft; it carries light like optical glass; and while it can’t be made into a transistor or an electrical wire, it can serve as a mechanical support for arrays of electrically active devices, allowing them to sit right on top of biological tissues without causing irritation. Depending on how it’s processed, silk can be made to break down inside the body almost instantly or to persist for years. And it can be used to store delicate molecules like enzymes for a long time.”
TR10: Dual-Action Antibodies [by Sabin Russell] — “At Genentech’s sprawling headquarters south of San Francisco, senior scientist Germaine Fuh has been genetically redesigning two of the company’s most lucrative cancer drugs. One, Herceptin, is a monoclonal antibody that shuts down HER2, a growth accelerator in about 20 percent of breast tumors. The other, Avastin, is an antibody that blocks a protein that stimulates the formation of tumor-feeding blood vessels. Last year the drugs had combined sales of $11 billion; a full course of Herceptin at wholesale costs about $43,000, while treating a breast cancer patient with a full course of Avastin costs about $55,000. Fuh’s goal: to show she can provide greater benefit for people fighting breast cancer by combining the action of the antibodies in one molecule. Last year, she and her coworkers showed that a modified version of the Herceptin antibody not only shut down the HER2 receptor in mice but also locked onto VEGF, Avastin’s target. Designing such ‘dual-specific’ antibodies could help solve a major problem with chemotherapy drugs: cancer cells can become resistant to them, mutating in ways that allow them to dodge the medication’s action. Doctors often mix various chemotherapy drugs in an effort to kill cancers before they can exploit this escape mechanism. Having a single drug that can hit the cancer from multiple directions would simplify treatment. A single monoclonal antibody that could do the work of two is also attractive from a business perspective. It might cost half as much to manufacture as two separate antibodies, and the path to regulatory approval might also be shorter and less expensive, involving one set of clinical trials instead of multiple trials for two separate drugs in various dosage combinations. … The implications of Fuh’s research are indeed far-reaching. 
If the concept proves successful, antibodies that stick to two targets might be used to treat infectious diseases as well as cancer–offering the promise of drugs that work better and cost less.”
TR10: Engineered Stem Cells [by Emily Singer] — “The small plastic vial in James Thomson’s hand contains more than 1.5 billion carefully coddled heart cells grown at Cellular Dynamics, a startup based in Madison, WI. They are derived from a new type of stem cell that Thomson, a cofounder of the company, hopes will improve our models of human diseases and transform the way drugs are developed and tested. Thomson, director of regenerative biology at the Morgridge Institute at the University of Wisconsin, first isolated human embryonic stem cells in 1998. Isolating these cells, which are capable of maturing into any other type of cell, marked a landmark in biology–but a controversial one, since the process destroys a human embryo. A decade later, Thomson and Junying Yu, then a Wisconsin postdoc, reached another milestone: they developed a way to make stem cells from adult cells by adding just four genes that are normally active only in embryos. (Japanese researcher Shinya Yamanaka simultaneously published a similar approach.) Dubbed induced pluripotent stem cells (iPS cells), they have the two defining characteristics of embryonic stem cells: they can reproduce themselves many times over, and they can develop into any cell type in the human body. Because no human embryos are used to create them, iPS cells solve two problems that had long plagued researchers: political protest and shortages of material. Much of the excitement over iPS cells, and stem cells in general, arises from the possibility that they could replace damaged or diseased tissue. But Thomson thinks their most important contribution will be to provide an unprecedented window on human development and disease. Scientists can create stem cells from the adult cells of people with different disorders, such as diabetes, and induce them to differentiate into the types of cells damaged by the disease. 
This could allow researchers to watch the disease as it unfolds and trace the molecular processes that have gone awry.”
I don’t think that there is much doubt that stem cell therapy holds great promise for the future. People with degenerative diseases and spinal cord injuries can’t wait for the day when damaged tissues can be replaced by new, healthy material produced by their own bodies. Another “healthy material” highlighted by the magazine is “green concrete.”
TR10: Green Concrete [by David Bradley] — “Making cement for concrete involves heating pulverized limestone, clay, and sand to 1,450 °C with a fuel such as coal or natural gas. The process generates a lot of carbon dioxide: making one metric ton of commonly used Portland cement releases 650 to 920 kilograms of it. The 2.8 billion metric tons of cement produced worldwide in 2009 contributed about 5 percent of all carbon dioxide emissions. Nikolaos Vlasopoulos, chief scientist at London-based startup Novacem, is trying to eliminate those emissions with a cement that absorbs more carbon dioxide than is released during its manufacture. It locks away as much as 100 kilograms of the greenhouse gas per ton. Vlasopoulos discovered the recipe for Novacem’s cement as a grad student at Imperial College London. … Other startups are also trying to reduce cement’s carbon footprint, including Calera in Los Gatos, CA, which has received about $50 million in venture investment. However, Calera’s cements are currently intended to be additives to Portland cement rather than a replacement like Novacem’s, says Franz-Josef Ulm, director of the Concrete Sustainability Hub at MIT. Novacem could thus have the edge in reducing emissions, but all the startups face the challenge of scaling their technology up to industrial levels.”
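The numbers quoted in the article make for a revealing back-of-the-envelope calculation. The production and emissions figures below come from the piece itself; the arithmetic combining them is mine.

```python
# Back-of-the-envelope CO2 comparison using the article's figures.
CEMENT_2009_TONNES = 2.8e9               # global cement production, metric tons
PORTLAND_KG_CO2_PER_TONNE = (650, 920)   # emitted per tonne of Portland cement
NOVACEM_KG_CO2_PER_TONNE = -100          # net absorbed per tonne (Novacem's claim)

# Total 2009 emissions from Portland cement, in billions of tonnes of CO2:
low, high = (CEMENT_2009_TONNES * kg / 1e3 / 1e9
             for kg in PORTLAND_KG_CO2_PER_TONNE)

# Replacing one tonne of Portland cement with Novacem's could swing the
# CO2 balance by up to (920 emitted + 100 absorbed) kilograms:
swing_per_tonne = PORTLAND_KG_CO2_PER_TONNE[1] - NOVACEM_KG_CO2_PER_TONNE
```

That works out to roughly 1.8 to 2.6 billion tonnes of CO2 from Portland cement in 2009 alone, and a potential swing of about a tonne of CO2 for every tonne of cement replaced — which is why the scale-up challenge Ulm mentions matters so much.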
I first discussed “green concrete” a year ago in a post entitled Go Green, Save Lives. In that post, I discussed a product called TX Active cement that neutralizes air pollutants such as benzene, carbon monoxide, and nitrogen oxide. The next technologies have to do with energy production. The first one, called solar fuel, uses the sun in a very interesting way.
TR10: Solar Fuel [by Kevin Bullis] — “When Noubar Afeyan, the CEO of Flagship Ventures in Cambridge, MA, set out to invent the ideal renewable fuel, he decided to eliminate the middleman. Biofuels ultimately come from carbon dioxide and water, so why persist in making them from biomass–corn or switchgrass or algae? ‘What we wanted to know,’ Afeyan says, ‘is could we engineer a system that could convert carbon dioxide directly into any fuel that we wanted?’ The answer seems to be yes, according to Joule Biotechnologies, the company that Afeyan founded (also in Cambridge) to design this new fuel. By manipulating and designing genes, Joule has created photosynthetic microörganisms that use sunlight to efficiently convert carbon dioxide into ethanol or diesel–the first time this has ever been done, the company says. Joule grows the microbes in photobioreactors that need no fresh water and occupy only a fraction of the land needed for biomass-based approaches. The creatures secrete fuel continuously, so it’s easy to collect. Lab tests and small trials lead Afeyan to estimate that the process will yield 100 times as much fuel per hectare as fermenting corn to produce ethanol, and 10 times as much as making it from sources such as agricultural waste. He says costs could be competitive with those of fossil fuels. If Afeyan is right, biofuels could become an alternative to petroleum on a much broader scale than has ever seemed possible.”
This technology really could be a game changer for the world. I suspect that Joule Biotechnologies won’t have too much trouble finding investment capital. Another group that is using micro-organisms to create energy is the U.S. Navy’s Office of Naval Research (ONR). Scientists there have developed a method for using microbial fuel cells (MFCs) to convert chemical energy to electrical energy. “Think of it as a battery that runs on mud,” ONR Program Manager Dr. Linda Chrisey said. [“ONR’s microbial fuel cell generates electricity from mud,” by Darren Quick, Gizmag, 20 April 2010]. Although ONR has some uses in mind for its new technique, it has nowhere near the earth-changing potential of Joule Biotechnologies’ work. The next technology promoted by Technology Review as world changing deals with a more common method of producing energy from sunlight.
TR10: Light-Trapping Photovoltaics [by Bob Johnstone] — “In 1995, finishing her undergraduate degree in physics, Kylie Catchpole decided to take a risk on a field that was nearly moribund: photovoltaics. … But her gamble paid off. In 2006 Catchpole, then a postdoc, discovered something that opened the door to making thin-film solar cells significantly more efficient at converting light into electricity. It’s an advance that could help make solar power more competitive with fossil fuels. Thin-film solar cells, which are made from semiconductor materials like amorphous silicon or cadmium telluride, are cheaper to produce than conventional solar cells, which are made from relatively thick and expensive crystalline wafers of silicon. But they are also less efficient, because if a cell is thinner than the wavelength of incoming light is long, that light is less likely to be absorbed and converted. … Thus, larger installations are required in order to produce the same amount of electricity, limiting the number of places the technology can be used. Catchpole, who is now a research fellow at the Australian National University in Canberra, began work on this problem in 2002 at the University of New South Wales in Sydney. ‘It was a case of “start at the beginning: can you think of a completely different way to make a solar cell?”‘ she says. ‘One of the things I came across was plasmonics–looking at the strange optical properties of metals.’ Plasmons are a type of wave that moves through the electrons at the surface of a metal when they are excited by incident light. Others had tried harnessing plasmonic effects to make conventional silicon photovoltaics more efficient, but no one had tried it with thin-film solar cells. Catchpole found that nanoparticles of silver she deposited on the surface of a thin-film silicon solar cell did not reflect back light that fell directly onto them, as would happen with a mirror. 
Instead, plasmons that formed at the particles’ surface deflected the photons so that they bounced back and forth within the cell, allowing longer wavelengths to be absorbed. Catchpole’s experimental devices produce 30 percent more electrical current than conventional thin-film silicon cells.”
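Why bouncing light back and forth helps can be seen from the Beer-Lambert law: the fraction of light absorbed grows with the optical path length, so a plasmonic layer that makes photons traverse the film several times absorbs much more of the long-wavelength light a thin film would otherwise miss. The sketch below models path enhancement as a simple multiplier; the numbers are illustrative assumptions, not Catchpole's measured values.

```python
import math

def absorbed_fraction(alpha, thickness, path_factor=1.0):
    """Beer-Lambert absorption 1 - exp(-alpha * L), with plasmonic
    scattering modeled crudely as multiplying the optical path length.
    Illustrative numbers only -- not Catchpole's measurements."""
    return 1.0 - math.exp(-alpha * thickness * path_factor)

# A weakly absorbed long wavelength in a thin film (alpha * d = 0.2):
flat = absorbed_fraction(alpha=0.2, thickness=1.0)
# Same film with light bounced through it four times by plasmonic scattering:
trapped = absorbed_fraction(alpha=0.2, thickness=1.0, path_factor=4.0)
```

In this toy case absorption rises from about 18 percent to about 55 percent without making the film any thicker — the same trade-off (cheap thin films, better light capture) that makes Catchpole's 30 percent current improvement significant.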
I first discussed this research last August in a post entitled Update on Solar Energy and again in March of this year in a post entitled Updates on Alternative Energy Sources, Part 6: Solar. In the latter post, I included a graphic about the process. The final technology is also one I’ve written about before: cloud computing.
TR10: Cloud Programming [by Erica Naone] — “Cloud computing offers the promise of virtually unlimited processing and storage power, courtesy of vast data centers run by companies like Amazon and Google. But programmers don’t know how best to exploit this power. Today, many developers are converting existing programs to run on clouds, rather than creating new types of applications that could work nowhere else. And they are held back by difficulties in keeping track of data and getting reliable information about what’s going on across a cloud. If programmers could solve those problems, they could start to really take advantage of what’s possible with a cloud. … At the University of California, Berkeley, Joseph Hellerstein thinks he can make it much easier to write complex cloud applications by developing software that takes over the job of tracking data and keeping tabs on what’s happening. His big idea is to modify database programming languages so that they can be used to quickly build any sort of application in the cloud–social networks, communication tools, games, and more. Such languages have been refined over the years to hide the complexities of shuffling information in and out of large databases. If one could be made cloud-friendly, programmers could just think about the results they want, rather than micromanaging data. The challenge is that these languages process data in static batches. They can’t process data that is constantly changing, such as readings from a network of sensors. The solution, Hellerstein explains, is to build into the language the notion that data can be dynamic, changing as it’s being processed. This sense of time enables a program to make provisions for data that might be arriving later–or never. The result is called Bloom. … By lowering the complexity barrier, these languages should increase the number of developers willing to tackle cloud programming, resulting in a wave of ideas for new types of powerful applications. 
Hellerstein’s group is getting Bloom ready for a release in late 2010. They and others are also working on demonstrating how the techniques can be used for real-time applications such as online multiplayer games, or to watch for the warning signs of an earthquake or tsunami.”
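The core idea Hellerstein describes — declarative, database-style rules over collections of timestamped facts, re-derived as new data arrives — can be mimicked in a few lines. This is a toy Python illustration of the concept only, not Bloom's actual syntax or runtime; the table and rule names are hypothetical.

```python
# Toy illustration of the idea behind Bloom (not its actual syntax):
# state lives in collections of timestamped facts, and results are
# re-derived declaratively whenever new facts arrive.

class StreamTable:
    """A growing collection of (timestamp, value) facts."""
    def __init__(self):
        self.facts = []

    def insert(self, t, value):
        self.facts.append((t, value))

def friends_online(status_updates, friendships, user):
    """Declarative join: friends of `user` whose latest status is 'online'.
    Time is built in -- later facts supersede earlier ones."""
    latest = {}
    for _, (who, status) in sorted(status_updates.facts):
        latest[who] = status
    return sorted(f for _, (u, f) in friendships.facts
                  if u == user and latest.get(f) == "online")

statuses, friends = StreamTable(), StreamTable()
friends.insert(0, ("alice", "bob"))
friends.insert(0, ("alice", "carol"))
statuses.insert(1, ("bob", "online"))
statuses.insert(2, ("carol", "online"))
statuses.insert(3, ("carol", "offline"))  # newer fact supersedes the old one
```

The programmer states only the result wanted (friends who are online now); the "sense of time" Hellerstein mentions shows up as later facts superseding earlier ones, rather than as hand-written bookkeeping code.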
For more about cloud computing, see my posts entitled The Coming Age of Cloud Computing and The AIR is getting Blurry with Clouds — Computing that is. In the latter post, I wrote: “I suspect that the potential savings promised by cloud computing (through the use of thin clients) and its phenomenal access to data will be offset by continued security concerns.” My feelings haven’t changed, especially in light of recent revelations that hackers penetrated the computer networks of Google and more than 30 other large companies [“Google hackers duped system administrators to penetrate networks, experts say,” by Ellen Nakashima, Washington Post, 21 April 2010]. Google admitted that “intruders had penetrated its network and compromised valuable intellectual property.” In addition, “The New York Times reported on its Web site … that the Google theft included source code for a password system that controls access to almost all of the company’s Web services.”
Not all of the technologies identified by Technology Review are likely to change the world dramatically. Once proven, however, most of them will have a positive impact on a great number of lives.