For the past year or so, the term peer-to-peer (P2P) has become synonymous with Napster, the controversial file-sharing program created by a 20-year-old software whiz called Shawn Fanning and now the subject of numerous lawsuits. Napster, much like its close cousin, Gnutella, allows users to transfer music files among themselves, circumventing many legal controls over copyright and creating a massive network of music libraries scattered about the Internet. Napster is a clever twist on a time-worn architecture dating back to the early days of the Internet. Now, a number of start-up firms are hoping to harness the same technology in the corporate world, promising to use the computing architecture to empower workers, unleash their creativity and solve communication problems.
As Napster struggles to switch to a subscription-based business model, companies such as Groove Networks, NextPage and XDegrees are trying to introduce the peer-to-peer style of computing into applications that allow workers to collaborate on joint projects (groupware), swap information, and share network resources such as storage space and other costly bits of equipment. Other firms, including Entropia and United Devices, are developing supercomputing applications that use the idle power of computers hooked to networks (see “Computing Power on Tap”). All of them believe fervently that the corporate world is fertile ground for P2P computing. The shift will, they say, do for computing in the 2000s what the PC did in the 1980s.
Sceptics have dismissed peer-to-peer as pure hype, pointing to unanswered questions about security and reliability that continue to dog the architecture, as well as the number of P2P start-ups that have failed in recent months. At a time when the corporate world is tightening its belt and sticking to tried-and-true ways of doing business, peer-to-peer computing is going to be a tough sell.
Despite attempts to distance themselves from Napster, the P2P start-ups have much to thank the embattled music service for. Napster gave credibility to the technology, proving that large-scale P2P networking using lowly personal computers is not only possible but can also be extremely powerful.
Actually, the concept is far from new. When the Internet’s original architects built the “network of networks”, computers were connected in peer fashion. Many of the services that made the Internet what it is today — eg, the Domain Name System (DNS) directory, Usenet newsgroups and countless other features — were based on peer-to-peer architectures. But in those days, computers were hulking mainframes and there were far fewer of them tethered together.
The return of peer-to-peer came by way of a messaging platform. In 1996, a young Israeli firm called Mirabilis launched its popular ICQ (“I seek you”) instant-messenger service using a peer-to-peer architecture to send messages between PCs connected to the Internet. Napster went several steps further, thanks to the faster microprocessors, greater storage capacity and faster connection speeds that had then become available, tying PCs together to share files over the Internet.
The most important lesson of Napster is that, surprisingly, people are willing to open their computers to, and share files with, complete strangers when they see value in doing so. In the process, they have shown how very large computer networks can be created rapidly through the piecemeal contribution of millions of individual PCs, each of which functions as a server as well as a client. Looked at this way, Napster has, in effect, 40m servers and systems administrators, keeping the network’s management and infrastructure costs to a minimum.
Compared with corporate software, however, Napster is a simple program that does one thing — share music files. Nor is it a pure P2P system. It uses a central server to link computers together, avoiding the complexity of purer peer-to-peer programs such as Gnutella, a file-sharing program popular among the high-tech community. Compared with Napster, Gnutella is nowhere near as easy to use, and it illustrates the dangers of file sharing, among them viruses that can spread quickly throughout a network.
But what exactly is peer-to-peer computing? Like many other networking ideas, peer-to-peer is not a single concept, but a wide array of technologies. For some, peer-to-peer means that the PC and the server are one and the same thing. For others, it means that each PC’s resources are shared with all other PCs. And for still others, it means that the network is itself the “computer”. Purists define peer-to-peer as an architecture devoid of any form of centralisation. Pragmatists say it is simply a computer architecture that uses a central server but with peers that are all independent.
Perhaps the most articulate definition is offered by Clay Shirky, a venture capitalist and leading light in the peer-to-peer community. Mr Shirky links peer-to-peer’s rise to the constraints of the DNS, the database that translates names into addresses for computers connected to the Internet. At its most basic, says Mr Shirky, peer-to-peer is a swathe of applications that harness resources at the far edges of the Internet, where the machines have complete freedom (or, at least, significant autonomy) from the central servers. However, because such resources — PCs today, but mobile phones, PDAs (personal digital assistants) and other appliances in the near future — are often attached to modems that are allocated new addresses each time they dial their Internet service providers, Mr Shirky insists that P2P systems must use something other than DNS to link their machines together.
Under this definition, Napster is peer-to-peer because of the alternative DNS arrangement it employs (essentially the user’s login name) and because of the independence of the computers involved. So, too, are instant-messenger services such as ICQ and America Online’s Instant Messenger (AOL acquired ICQ in 1998), both of which transcend the traditional DNS architecture and devolve connection management to the individual machines. E-mail, by contrast, is not a P2P system because it relies on a central server.
Four Applications in One
Altogether, peer-to-peer encompasses four separate activities: collaboration among users, interaction between software applications, efficient use of network resources, and supercomputing. The most prominent among these are the collaboration systems being developed by newcomers such as Groove Networks of Boston, Massachusetts, Endeavors Technology of Irvine, California, and Ikimbo of Herndon, Virginia. These systems combine Napster’s file-sharing abilities with the instant-messaging capability of ICQ, all in a secure environment. Their main attraction lies in encouraging ad hoc file sharing and communications among work-groups.
With Groove, for instance, workers can connect to colleagues in “virtual environments” to pursue all manner of collaborative work, ranging from brainstorming and event-planning to sharing documents and surfing the Internet together. Groove’s system, like others, informs each user when colleagues (“buddies”) are online, identifies them, and allows a user to connect to them from anywhere. Most important for the business world, once the software is launched, the service creates a secure space for users to communicate — be they on the Internet or on a private intranet behind a corporate firewall. There is no need to involve the company’s service engineers; no need to establish some form of central organisation; and little concern about strangers gaining access to the corporate network.
Most of the other start-up firms developing collaborative P2P software offer variations on the same theme. What differentiates the likes of Groove and OpenCola of San Mateo, California, is their emphasis on building a peer-to-peer platform, upon which other software developers can add further functions. One way of doing this is to use the powerful XML (extensible mark-up language) protocol that allows developers to specify not only the layout of web-pages but also the nature of their content.
Despite all the attention given to collaboration, systems that allow software applications to interact with one another in a P2P fashion may be among the most promising. Systems from Oculus Technologies, NextPage and OpenDesign tie distributed data together for e-business, product design or knowledge management. Such programs use peer-to-peer as a means for sending data inputs and outputs from one application to another, or for linking countless machines into one giant database, maintaining the original producer’s ownership of, say, research data or price lists while allowing contractors to use them.
Software-interaction technology permits companies to break down complex problems into smaller, more manageable ones, says Chris Williams, the chief executive of Oculus in Boston. The best part about such systems is that the process of collating data and ensuring that they are current is handled by those who produce them, which ensures that they are accurate and up-to-date. That makes it ideal for such applications as online exchanges and stock trading. Liquidnet of New York and WorldStreet of Boston are both building buy-side systems aimed at Wall Street. The former aims to deliver a peer-to-peer trading system for a large (anonymous) pool of buyers, while the latter is developing tools for portfolio managers, analysts and traders.
Another area where interaction software’s ability to deliver up-to-date information counts is searching the Internet. Typical search engines in use today deliver content that is at best 24 hours old. Even then, most search engines dredge up only a fraction of the information that is available on the Internet. OpenCola and Infrasearch, formerly in San Mateo but recently acquired by Sun Microsystems, are developing the next generation of search engines. These will use peer-to-peer techniques to deliver more timely and comprehensive information for media groups and other large content owners.
Meanwhile, a number of young firms, including MojoNation of Mountain View, California, are working to create resource-utilisation programs that harness P2P’s ability to store files, distribute content and share the processing power of other machines. The goal here is partly to cut costs on such hardware as storage, servers and other equipment, but also to help manage traffic on the network. Of all the potential services offered by P2P, this could be the hardest sell. There may be too many problems associated with security and complexity — not to mention the plummeting cost of storage and servers — to make such peer-to-peer services practical. Also, such services appear to be going up against the likes of Akamai, a firm whose caching technology helps speed the performance of websites. As the sceptics point out, offering savings on things that are only marginal costs anyway is hardly a viable business model.
Finally, there are the distributed computing services that deliver supercomputing power to companies needing massive number-crunching capacity occasionally but unwilling to pay millions of dollars for it. Essentially, the technology being developed by firms such as United Devices of Austin, Texas; Entropia of San Diego, California; and Applied Meta of Cambridge, Massachusetts, breaks down large computations into small parcels that can be distributed among computers tethered to a network. Each PC simultaneously computes the data and returns the results to a central computer that assembles the parts into a whole.
The process can be used, for instance, to farm out individual frames of digital animation to different PCs for simultaneous rendering, and then to recombine the rendered frames into a fluid sequence. Once you add up the thousands or even millions of computers that can be roped in to do such calculations, the result is a parallel supercomputer with many teraflops (trillions of “floating point” operations per second) for a fraction of the cost of a supercomputer such as IBM’s chess-playing champion, Deep Blue, or its forthcoming protein-folding colossus, Blue Gene.
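The scatter-and-gather process described above can be sketched in a few lines of Python; `render_frame` is a hypothetical stand-in for the real per-frame work, and a local thread pool stands in for the thousands of remote PCs a real distributed-computing service would enlist:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number: int) -> str:
    """Hypothetical stand-in for a costly per-frame computation."""
    return f"frame-{frame_number:04d}"

def render_sequence(frame_count: int, workers: int = 4) -> list[str]:
    # Scatter: split the job into independent parcels, one per frame,
    # and farm them out to a pool of workers (a real system would
    # despatch them to remote machines rather than local threads).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Gather: map preserves input order, so the central coordinator
        # can reassemble the rendered frames into a fluid sequence.
        return list(pool.map(render_frame, range(frame_count)))
```

The pattern works only because each parcel is independent of the others; that is why the technique suits brute-force, embarrassingly parallel jobs such as rendering or Monte Carlo simulation, and not tightly coupled computations.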
The supercomputing start-up firms reckon that there are fortunes to be made from selling this kind of service to companies. The applications could be anything from genetic research to financial Monte Carlo probability simulations. Most of the firms in question are addressing problems that require brute force number-crunching — such as digital rendering, engineering design, pharmaceutical research and financial modelling.
Ironically, one of their bigger advantages is that distributed computing can be made to work with a wide variety of Unix and Windows programs that are available off the shelf, opening the door to far more potential customers. Supercomputers, by contrast, require specialised, custom-made software that can take years to write and cost millions of dollars.
Still, questions remain about whether companies are prepared to trust anonymous computer users around the world who may accidentally or maliciously tamper with their precious data. Not surprisingly, most of the young P2P firms are designing their bread-and-butter applications for enterprises that are large enough to do their distributed processing in-house. Most of the fledgling firms offering distributed supercomputing claim to have eager customers lining up. But analysts question how large this particular market can really become. Indeed, the closure of one start-up, Popular Power, in March, hints at the hardships that lie ahead. However, the distributed-computing folk insist that today’s venture-capital market should not be taken as a measure of their future prospects.
As a business proposition, peer-to-peer could not have arrived at a worse time. Venture capital has dried up and companies have been less willing to spend on such projects. Despite the recent meltdown in high-tech investment, some $300m has poured into peer-to-peer companies, reckons Larry Cheng, a partner at Battery Ventures, a venture-capital firm based in Wellesley, Massachusetts. The best funded so far have been Groove, which has already garnered $60m; Entropia, which has collected $29m; NextPage, which has received $20m; and United Devices, which has $13m. Apart from these four companies, there is going to be little fresh capital for many of the other peer-to-peer ventures.
Dropping Like Flies
Would that signal the end of commercial P2P? Not necessarily. Even as the first wave of peer-to-peer companies drops like flies, the know-how will permeate all manner of applications developed for large enterprises by Microsoft, Oracle, Sun Microsystems and others. Those behemoths have worked to emphasise their backing of the technology in recent months. Intel, the first big company to back peer-to-peer, has thrown its weight behind the technology, investing in several start-ups and helping to guide the peer-to-peer working group that is developing standards. Meanwhile, in April Microsoft unveiled its Project Hailstorm, part of its software-as-service initiative (“.Net”), which features peer-to-peer services prominently. Not to be outdone, Sun Microsystems unveiled its JXTA (pronounced “Juxta”) set of standards for an open-source P2P platform. Sun has also begun investing in the area — most recently by acquiring Infrasearch for a reported $10m.
But for all the problems associated with the technology and financing, it is the psychology of the modern workplace that will make peer-to-peer a force to be reckoned with. The past decade has seen a dramatic shift in the nature of work. Company boundaries have grown wider, tying customers and suppliers ever closer, and increasing their reliance on temporary workers and consultants, while depending ever more on ad hoc work-groups. According to Daniel Pink, the author of “Free Agent Nation”, such a workplace is increasingly devoid of fixed structures and clearly defined social and professional roles. Peer-to-peer matches the behaviour of modern corporations rather well.
Bonnie Nardi, an anthropologist at Hewlett-Packard in San Jose, California, who has studied the way people work today, has identified the rise of ad hoc networks as one of the most fundamental innovations in the workforce in recent years. These networks, which Ms Nardi calls “intensional networks”, because of their intentional nature and inherent tension with other structures, have become a significant extension of work — and, increasingly, of the corporation itself. In turn, such self-managed messaging systems as ICQ and other peer-to-peer tools for collaboration become the centrepiece of such networks, allowing users to contact peers freely, easily, and on their own terms.
Preliminary case studies of peer-to-peer applications confirm such trends in the business world. For example, United Technologies of Hartford, Connecticut, has used Oculus’s interaction system to tie its engineering and design systems together, linking numerous databases and allowing its disparate design and development teams to communicate better. In the process, the system reduced the flow of wrong or out-dated information among the design teams, saving millions of dollars. In a similar manner, Ford Motor Company of Dearborn, Michigan, has used the Oculus system to speed up its evaluation of design changes needed for improving fuel efficiency. The ability to link information across all of its applications and systems has allowed Ford to perform its design iterations much faster, saving the company anywhere from $5m to $15m per vehicle design programme.
Does all this mark the death of the corporate server? Far from it. Computer scientists note that, in the commercial world, peer-to-peer is appropriate only for applications that require direct communication. Indeed, the server will continue to reign supreme on the company’s network for managing personnel and payrolls, enterprise planning and much more. In future, however, the server will provide higher level services instead of such menial chores as simply doling out files. After all, peer-to-peer is not a business model but a computer architecture — in short, a way of thinking. In the scramble to cash in on the P2P phenomenon, many seem to have forgotten that fact.
Copyright © 2001 The Economist Newspaper and The Economist Group. All rights reserved.