
The Big Shift: Organizations in the Age of Information

February 1, 2010


In a recent post entitled The Age of Cyberwars, I referenced an op-ed column by Thomas Friedman [“Is China an Enron? (Part 2),” New York Times, 20 January 2010]. In that column about China’s future, Friedman touched on the writings of John Hagel, a noted business writer and management consultant who, as Co-Chair of the Deloitte Center for the Edge, last year unveiled the first version of what the Center calls its groundbreaking “Shift Index.” Friedman wrote:

“John Hagel … argues in his recently released ‘Shift Index’ that we’re in the midst of ‘The Big Shift.’ We are shifting from a world where the key source of strategic advantage was in protecting and extracting value from a given set of knowledge stocks — the sum total of what we know at any point in time, which is now depreciating at an accelerating pace — into a world in which the focus of value creation is effective participation in knowledge flows, which are constantly being renewed. … Therefore, the more your company or country can connect with relevant and diverse sources to create new knowledge, the more it will thrive. And if you don’t, others will.”

I’m not sure how groundbreaking those observations are, but I do believe they are accurate. Several years ago I began talking about this “shift” as moving from the monolith to the matrix. I drew the imagery of the monolith from the writings of Albert-László Barabási, one of the seminal figures in the relatively new field of network science. In his interesting book entitled Linked: How Everything Is Connected to Everything Else and What It Means, Barabási wrote:

“Despite its pervasiveness, there are many problems with the [industrial age] corporate tree. First, information must be carefully filtered as it rises in the hierarchy. If filtering is less than ideal, the overload at the top level, where all the branches meet, could be huge. As a company expands and the tree grows, information at the top level inevitably explodes. Second, integration leads to unexpected organizational rigidity. A typical example comes from Ford’s car factories, one of the first manufacturing plants to fully implement the hierarchical organization. The problem was that they got too good at it. Ford’s assembly lines became so tightly integrated and optimized that even small modifications in automobile design required shutting down factories for weeks or months. Optimization leads to what some call Byzantine monoliths, organizations so overorganized that they are completely inflexible, unable to respond to changes in the business environment.”

Friedman was arguing that countries, like corporations, will either move from industrial age monolithic structures to the more matrixed structures required by the information age or they will start lagging behind. Other characteristics of the monolith began showing up as the information age dawned. During the infancy of the information age, companies held tightly to the belief that developing proprietary processes was the only way to protect their products (both hard and soft) from those who would steal them and, thereby, undeservedly (if not criminally) profit from the company’s hard work. In order to create the greatest possible barriers against corporate espionage and theft, they trademarked, copyrighted, and patented their products. In addition to protecting their investments, companies believed that proprietary systems would increase profitability. Since only the company was privy to the proprietary data behind the systems, only the company (or a licensee) could maintain them.

 

In other words, if something was inside the company box (the monolith), it was secure, trusted and, hopefully, profitable. The drawback, of course, was that connectivity between proprietary systems from competing companies (and sometimes proprietary systems within the company itself) presented an almost insurmountable challenge. As noted above, when it came to generating profits, companies didn’t mind this lack of connectivity – in fact, they counted on it. They knew that customers would be reluctant to change vendors knowing that proprietary software bought elsewhere would be incompatible with software they already had and that attempts to integrate proprietary systems would be either costly or impossible. Hence, the monolith created a kind of forced customer loyalty. The fact is, this strategy worked – perhaps too well. Just how well it succeeded was pointed out in an online article by computer scientist Federico Zoufaly [“Issues and Challenges Facing Legacy Systems,” 1 November 2002]. Zoufaly wrote:

“Despite the availability of more cost-effective technology, about 80% of IT systems are running on legacy platforms. International Data Corp. estimates that 200 billion lines of legacy code are still in use today on more than 10,000 large mainframe sites. The difficulty in accessing legacy applications is reflected in a December 2001 study by the Hurwitz Group that found only 10% of enterprises have fully integrated their most mission-critical business processes.”

This worked fine for manufacturers of IT-related systems until the advent of the Internet and the World Wide Web. Within a matter of years, connectivity became the lifeblood of the globalized economy, and any system that hampered connectivity became an obstacle. Companies realized that customer loyalty was more likely to be generated by making it easier, not harder, for customers to connect with others using their software. That is why Zoufaly’s statistics were so amazing. He saw organizations built around these systems in the same terms as Barabási:

“Monolithic legacy architectures are antitheses to modern distributed and layered architectures. Legacy systems execute business policies and decisions that are hardwired by rigid, predefined process flows, making integration with customer relationship management (CRM) software and Internet-based business applications torturous and sometimes impossible. In addition, IT departments find it increasingly difficult to hire developers qualified to work on applications written in languages no longer found in modern technologies.”

There are a number of internal tensions that keep companies from moving faster to replace legacy systems with more modern architectures. First, of course, is cost. Starting from scratch and trying to replace business functionality while maintaining important database content can be expensive, not to mention risky. Risk is, in fact, the second reason that organizations hesitate. If an organization’s critical processes are all driven by legacy systems, it has every reason to take a cautious approach when thinking about replacing them. Often it is difficult to fully test a system until it comes online. Finally, time becomes a significant factor. Crossover from a legacy platform to a newer platform isn’t instantaneous. The legacy system continues running while its replacement is being developed, and new data is being added to the old system even as older data is being transferred to the new system. Zoufaly notes that several different approaches have been taken to lengthen the life of legacy systems. The first approach involves screen scrapers or “frontware.” This option is probably the least expensive because frontware simply adds a graphical user interface to character-based mainframe and minicomputer applications, providing Internet access to legacy applications without requiring any changes to the underlying platform. Zoufaly points out that such solutions, because they are non-intrusive, can be deployed in days and sometimes hours, but their scalability can be an issue because most legacy systems cannot handle nearly as many users as modern Internet-based platforms.
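To make the frontware idea more concrete, here is a minimal sketch in Python of what a screen-scraping front end might look like. It assumes a purely hypothetical legacy application that answers newline-terminated commands over a plain TCP socket; the host, port, and command syntax are illustrative, not any particular product’s interface.

```python
# A minimal "frontware" sketch: wrap a character-based legacy service in a
# tiny web front end without touching the legacy code itself.
# Assumptions (hypothetical): the legacy app listens on LEGACY_HOST:LEGACY_PORT,
# accepts a command such as "INQUIRE <account>", and replies with text screens.

import socket
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

LEGACY_HOST = "mainframe.example.internal"   # hypothetical host
LEGACY_PORT = 7001                           # hypothetical port


def scrape_legacy_screen(command: str) -> str:
    """Send one command to the legacy system and return its text screen."""
    with socket.create_connection((LEGACY_HOST, LEGACY_PORT), timeout=10) as conn:
        conn.sendall((command + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")


class FrontwareHandler(BaseHTTPRequestHandler):
    """Expose the scraped screen as a bare-bones HTML page."""

    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        account = query.get("account", [""])[0]
        screen = scrape_legacy_screen(f"INQUIRE {account}")
        body = f"<html><body><pre>{screen}</pre></body></html>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FrontwareHandler).serve_forever()
```

Note that each web request in this sketch ties up a session with the legacy host, which illustrates Zoufaly’s warning: the scalability of frontware is ultimately limited by how many users the legacy platform itself can handle.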

 

Another non-intrusive approach Zoufaly notes is legacy wrapping. This technique builds callable Application Programming Interfaces (APIs) around legacy transactions, providing an integration point with other systems. APIs are calling conventions that define how a service is invoked through software; an API enables programs to communicate with certain vendor-supplied software. Zoufaly notes that among the shortcomings of this approach is the fact that legacy wrapping fails to provide a way to fundamentally change the hardwired structure of the legacy system. As a result, it often becomes a piece of a larger integration method that uses Enterprise Application Integration (EAI) frameworks. The benefit of an EAI approach is that it “moves away from rigid application-to-application connectivity to more loosely connected message- or event-based approaches. The middleware also includes data translation and transformation, rules- and content-based routing, and connectors (often called adapters) to packaged applications. … EAI tools are considered the state-of-the-art of loosely coupled modern architectures.”
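As a rough illustration of the difference between a legacy wrapper and the looser, message-based hand-off that EAI middleware formalizes, consider the following Python sketch. The transaction names, message fields, and routing rule are hypothetical, and the gateway is only a stand-in for whatever mechanism would actually invoke the legacy transaction.

```python
# A sketch contrasting a legacy "wrapper" API with a loosely coupled,
# message-based hand-off of the kind EAI middleware provides.
# All names (BAL01, balance_inquiry, etc.) are hypothetical illustrations.

import json
import queue


class LegacyGateway:
    """Stand-in for whatever actually invokes the legacy transaction
    (a TP monitor call, a stored procedure, screen scraping, etc.)."""

    def run_transaction(self, tran_id: str, payload: dict) -> dict:
        # A real wrapper would call the mainframe here; this stub just echoes.
        return {"tran": tran_id, "status": "OK", "echo": payload}


class CustomerAPI:
    """The wrapper: a stable, callable interface over a rigid legacy flow."""

    def __init__(self, gateway: LegacyGateway) -> None:
        self.gateway = gateway

    def get_balance(self, account_id: str) -> dict:
        return self.gateway.run_transaction("BAL01", {"account": account_id})


# The EAI-style piece: other systems drop messages on a queue, and a router
# decides by content which wrapped transaction should handle each one.
message_bus: "queue.Queue[str]" = queue.Queue()


def route(message: str, api: CustomerAPI) -> dict:
    event = json.loads(message)
    if event["type"] == "balance_inquiry":      # content-based routing
        return api.get_balance(event["account_id"])
    raise ValueError(f"no route for event type {event['type']!r}")


if __name__ == "__main__":
    api = CustomerAPI(LegacyGateway())
    message_bus.put(json.dumps({"type": "balance_inquiry", "account_id": "42"}))
    print(route(message_bus.get(), api))
```

The wrapper gives other systems a stable, callable interface over the hardwired legacy flow, while the content-based router shows in miniature how EAI decouples the sender of a message from the application that ultimately handles it.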

 

While non-intrusive methods of dealing with legacy systems are less expensive than the methods discussed below, they ultimately may prove costly as the expenses associated with legacy system upkeep increase. The next approach that Zoufaly discusses is Enterprise Resource Planning (ERP) software offered by companies like Oracle and SAP. Some offerings, like SAP’s, are costly and complicated, requiring extensive operator training and certification. Zoufaly notes that this approach makes sense when the code quality of the legacy system is poor, but the downside is that an organization either has to customize the ERP software to match its business processes or conform its business processes to those already embedded in the ERP software. Given all these factors, it is little wonder that organizations are looking for a new information-age architecture even while clinging to legacy systems that have limited future utility. Even the best Enterprise Resource Planning software is going to have a difficult time keeping legacy systems useful, especially since some of them were developed using computer languages that are about as dead as Latin. Companies that have more than just an eye on the future are already starting to install service- and process-oriented architectures. They are doing all of this in advance of moving to grid computing and, eventually, becoming part of the matrix. Or, as Hagel puts it, the shift is on.

 

So what does the matrix entail? Emerging connected grids are what make up the matrix. The matrix is the combination of the global information highway (bandwidth), connected grids sometimes referred to collectively as the global information grid (GIG), and internal organizational grids established for security and trust. The matrix requires pervasive data sharing in order for companies to succeed and grow, and such pervasive data sharing requires open architecture, commercial off-the-shelf (COTS) software, and lots of bandwidth. Grids are different from networks. Computer scientist Ian Foster explains why [“The Grid: A New Infrastructure for 21st Century Science,” Physics Today, February 2002].

“What many term the ‘Grid’ offers a potential means of surmounting these obstacles [i.e., lack of speed and processing power] to progress. Built on the Internet and the World Wide Web, the Grid is a new class of infrastructure. By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, the Grid promises to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed groups to work together in ways that were previously impossible.”

Notice how often people now describe these capabilities as things that were never possible before. Instead of grids, today companies talk about cloud computing. I first wrote about cloud computing two years ago in a post entitled The Coming Age of Cloud Computing. Foster underscores the fact that the Grid (or cloud) goes beyond sharing and distributing data and computing resources. He continues:

“Grid architecture can be thought of as a series of layers of different widths. At the center are the resource and connectivity layers, which contain a relatively small number of key protocols and application programming interfaces that must be implemented everywhere. The surrounding layers can, in principle, contain any number of components.”

In past posts I’ve discussed architectural layers and the importance of service-oriented architectures. Grids build on this notion of layering to achieve their power. Foster notes that successful grid operations require smooth and efficient authentication and authorization of requests. In fact, such authentications and authorizations are the sine qua non of the grid. The only way to generate the necessary trust without diminishing the speed and power of the grid is to use embedded rule sets. Moving from the monolith to the matrix turns the challenges faced in the early days of the computer age on their heads. As the challenges of connectivity decrease, challenges involving trust increase. Each node represents a potential danger, so a system must be developed that determines when nodes and the connections between them can be trusted. In other words, information assurance within the matrix becomes the principal concern.
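What an “embedded rule set” might look like in practice is suggested by the following minimal Python sketch, in which every request from a node is authenticated and then checked against a small set of authorization rules before it is trusted. The rules, fields, and node names are hypothetical illustrations, not any particular grid middleware’s API.

```python
# A minimal sketch of an embedded rule set deciding whether a request from one
# grid node to another should be trusted, so authorization happens in software
# rather than through a human in the loop. All policies here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Request:
    node_id: str          # identity of the requesting node
    certified: bool       # did it present a valid credential?
    resource: str         # what it wants to touch
    action: str           # "read" or "write"


Rule = Callable[[Request], bool]

RULES: List[Rule] = [
    lambda r: r.certified,                                  # must authenticate
    lambda r: not (r.resource.startswith("finance/")        # writes to finance
                   and r.action == "write"                  # data are reserved
                   and not r.node_id.startswith("hq-")),    # for HQ nodes
]


def authorize(request: Request) -> bool:
    """A request is trusted only if every rule in the set allows it."""
    return all(rule(request) for rule in RULES)


if __name__ == "__main__":
    print(authorize(Request("edge-17", True, "finance/ledger", "read")))    # True
    print(authorize(Request("edge-17", True, "finance/ledger", "write")))   # False
```

Because the rules are just data and code, they can be evaluated at machine speed on every request, which is the point of embedding them: trust decisions keep pace with the grid instead of slowing it down.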

 

The challenges don’t end there. Even if you get connectivity right, you must still worry about the software used on the grid, the people who create it, and the people who use it. One continuing worry, according to John Markoff and Ashlee Vance, is that programmers are building Trojan horses into the software that we all use and rely on [“Fearing Hackers Who Leave No Trace,” New York Times, 20 January 2010]. Hagel is correct that we haven’t fully made the shift from the monolith to the matrix, but we’ve started the journey. We have already passed some of the markers along the way, such as the Internet, the World Wide Web, architectural layers, parallel processing, and the genesis of grids and clouds. Barabási notes that the secret of grids lies as much in their structure and topology as in an organization’s ability to navigate them. He writes: “A string of recent breathtaking discoveries has forced us to acknowledge that amazingly simple and far-reaching natural laws govern the structure and evolution of all the complex networks that surround us.”

 

The matrix is not about living in some virtual environment like the one depicted in the movie The Matrix; it’s about a better way to structure enterprises and connect all parts of an organization with the information and nodes necessary for their success. For more about why keeping information in organizational silos is bad, read my post entitled The Curse of Silo Thinking. The matrix harmonizes human endeavor rather than replaces it. Howard Rheingold looks around the world and sees components of the matrix everywhere, then concludes: “When you piece together these different technological, economic, and social components, the result is an infrastructure that makes certain kinds of human actions possible that were never possible before” [Smart Mobs: The Next Social Revolution (New York: Perseus Publishing, 2002), p. xii]. Welcome to the matrix.
