When CFO began publishing, back in the primordial ooze of 1985, each issue contained a sizable amount of technology coverage. The editorial slant made sense. The arrival of IBM’s original personal computer just a few years earlier, and the subsequent release of Lotus 1-2-3, had turned the finance function on its head. Suddenly liberated from the drudgery of manually tabulating figures, controllers and finance chiefs found they could close the books in days, not lunar cycles. Moreover, groundbreaking new programs like Quicken and Hardisk Accounting made rolling up columns into the general ledger a snap.
Not surprisingly, many, if not most, of the products we covered two decades ago seem quaint today. For instance, a mobile telephone that barely fits in the trunk of a car hardly qualifies as mobile now; likewise, a 28-pound Compaq Portable computer isn’t all that portable. Nevertheless, a number of the first-generation products we’ve reported on over the years — accounting software, laptop computers, and, later, E-mail and enterprise resource planning software — have become standard operating equipment in the office of the 21st century.
What will be the revolutionary technologies of the next 20 years? As any futurist will admit, there’s simply no clear answer. Experts say exponentially faster processors, coupled with a vastly improved communications network, could usher in the era of pervasive computing. It could just as easily usher in an era of pervasive irritability, as information overload becomes commonplace. Wild cards such as nanotechnology and phenotropic software may take things in completely unexpected directions.
That said, we decided to read the tea leaves and predict which innovations will radically transform commerce over the next two decades. Of course, we also consulted with analysts, scientists, and CIOs. While all had differing opinions on what the next big things will be, a few technologies kept coming up in our conversations, and we settled on those. Not content merely to identify the technologies, we also forecast the years when they will be widely adopted. (If we’re wrong, talk to us in 2025.)
[Adopted 2008-2010] When we first encountered solid-object printing (eCFO, Summer 2001), it was still early days for the manufacturing process. At the time, 3-D printers were expensive (upward of $800,000), error prone, and still something of a novelty. Companies mostly used the machines to create prototypes of things like casting tools or next year’s big toy.
A lot has changed since then. Solid-object printers now cost as little as $25,000, and a number of businesses have begun to deploy the machines to manufacture actual end-use products. One such company, Boeing subsidiary On-Demand Manufacturing, prints out tubes that go directly into jet fighters.
The fact that 3-D printing has, in five short years, moved from toy planes to real planes speaks volumes about where the technology is headed. An offshoot of rapid prototyping and computer-aided design, solid-object printing dramatically shortens the time it takes to build prototypes of new products or, in some cases, the products themselves. The process is similar to ink-jet printing, except the ink is replaced by polymers or powders or metal filaments. Typically about the size of a high-end copy machine, a 3-D printer deposits layer after layer of the chosen material. After each pass, heat is applied to bind and strengthen the layers (a process called sintering). In time, a solid object emerges.
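The layer-by-layer idea is simple enough to sketch in a few lines of code. The toy slicer below (our illustration, not any vendor’s software) voxelizes a sphere and “deposits” it one horizontal layer at a time, mimicking a printer’s bottom-up passes:

```python
# Toy sketch of solid-object printing: slice a 3-D shape into horizontal
# layers, then deposit each layer from the bottom up. (Illustrative only.)
def slice_sphere(radius: int):
    """Yield each layer of a voxelized sphere as a set of (x, y) cells."""
    for z in range(-radius, radius + 1):
        layer = {(x, y)
                 for x in range(-radius, radius + 1)
                 for y in range(-radius, radius + 1)
                 if x * x + y * y + z * z <= radius * radius}
        yield layer

for height, layer in enumerate(slice_sphere(3)):
    print(f"layer {height}: depositing {len(layer)} cells, then sintering")
```

Each yielded layer corresponds to one deposition pass, after which heat would bind the layer to the one beneath it.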
Admittedly, the approach is far from perfect. Solid-object printing loses its cost advantage during long production runs, so it’s not likely to replace extrusion or casting for the manufacturing of high-volume, low-margin products. And the technology needs some refining. “A lot of material goes where you don’t want,” says manufacturing and technology guru Richard Morley.
Still, the potential of 3-D printing is mind-boggling. Already, Renault’s Formula 1 racing team uses sintering machines to produce parts for wind-tunnel testing. And the U.S. Army is reportedly building mobile 3-D printing units that can fabricate replacement parts on the spot. Some experts believe that within the next 20 years, consumers will be able to go to a specialty store and get individual items built while they wait (“Your doorknob is printing, sir”).
What’s a Qubit?
[Adopted 2020-2022] For more than four decades now, advances in microprocessing have followed the script set out by Intel co-founder Gordon Moore. Moore’s Law, perhaps the best-known postulate of the Digital Age, posits that the number of transistors per integrated circuit doubles every 18 months. The cramming of ever-tinier components can’t go on forever, though. As transistors approach the size of atoms, computer designers will have to take account of quantum-mechanical effects; the quirky behavior of subatomic particles could cause errors.
But that behavior will also be the source of unimaginable computing power. In quantum computing, atoms, not transistors, will store information. Thanks to the phenomenon of superposition, a quantum bit — or qubit — will simultaneously represent the binary numbers 0 and 1 (unlike conventional bits, which must be either 0 or 1). And storage will scale up exponentially: two qubits will store four binary numbers, three qubits eight, and so on. One hundred qubits will store 2^100 numbers simultaneously. Quantum computers, in short, will be far more powerful than today’s fastest supercomputers.
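The exponential scaling is easy to check with a back-of-the-envelope calculation (our illustration, not from any quantum lab): the number of binary values an n-qubit register can hold in superposition — and, equivalently, the number of amplitudes a classical machine would need to track to simulate it — is 2^n.

```python
# The classical cost of tracking a quantum register grows as 2**n:
# one value per basis state of the n-qubit superposition.
def basis_states(n_qubits: int) -> int:
    """Number of binary values an n-qubit register holds in superposition."""
    return 2 ** n_qubits

for n in (1, 2, 3, 100):
    print(f"{n} qubits -> {basis_states(n)} simultaneous values")
```

At 100 qubits the count is a 31-digit number, which is why no conventional supercomputer can simulate such a machine by brute force.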
Thus, while factoring extremely large numbers into their primes is an impossible task for a conventional computer, a quantum computer will be able to do it in seconds. That poses an obvious risk to current “uncrackable” public-key encryption codes, like the RSA cryptosystem, which is based on the difficulty of factoring lengthy composite integers.
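To see why factoring matters, consider a toy RSA walkthrough with deliberately tiny primes (our illustration; real keys use integers hundreds of digits long). Everything below is public except p, q, and d — and recovering p and q from the public modulus n is exactly the factoring problem a quantum computer would trivialize.

```python
# Toy RSA with textbook-small numbers (illustrative only).
p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

message = 65
cipher = pow(message, e, n)   # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
assert plain == message
```

Anyone who can split 3233 back into 61 and 53 can compute d and read the traffic; with 1,024-bit moduli that split is hopeless classically, but not for Shor’s algorithm on a large enough quantum machine.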
Corporate interest in quantum computers has also grown exponentially over the past 10 years. So far, very rudimentary quantum processors have been constructed in laboratories. To date, such computers are based on a few qubits. Using current technology, the upper limit of a working quantum computer would likely be about 10 qubits. But quantum machines with 50 to 100 qubits of processing power could revolutionize computing. Says Timothy Spiller, head of Hewlett-Packard’s quantum information processing group in Bristol, England: “There may be things we can do with quantum processors that we don’t even know yet.”
[Adopted 2025] Few science fantasies fan the imagination more than three-dimensional, computer-generated universes, à la the Holodeck on “Star Trek: The Next Generation.” There’s just one catch: virtually replicating the real world is really, really hard. As Henry Fuchs, Federico Gil Professor of Computer Science at the University of North Carolina, notes, “The initial excitement [about 3-D collaboration] has passed without successful results.”
He should know. Fuchs, along with scientists at Brown University and the University of Pennsylvania, has been struggling for years to bring the third dimension to communications, a technology broadly known as teleimmersion. At first blush, the hurdles appear insurmountable. With existing technology, conducting a 3-D conference requires compressing dozens of data-laden video streams and processing the images over at least 1,000 processors. Even Internet2 (I2), the next-generation Internet, can’t accommodate such a flood of information.
Despite the barriers, experts are confident that teleimmersion will eventually come to pass. Unlike virtual reality, teleimmersion doesn’t involve the handling of computer-generated objects. Hence, challenging technologies like haptics are not required. Moreover, advances in quantum information processing and fiber-optics will undoubtedly speed the 3-D plow.
So, too, will demand for the service. Certainly, the prospect of such supervideoconferencing, in which employees see hologramlike projections of other employees, holds tremendous appeal. Some experts believe 2-D videophones, likely to be in widespread commercial use in 10 years, will pave the way for teleimmersion. In the interim, wide-field-of-view screens will satisfy the need for face-to-face meetings.
Then, somewhere around 2025, rudimentary 3-D conference rooms should start appearing in corporate offices. “It’s difficult to predict,” admits Fuchs. “But it’s hard to imagine that we’ll still be looking at only TV pictures in 20 years.”
Signs of Life
[Adopted 2008-2010] Great Duck Island possesses one of the most sophisticated wireless networks on the planet. The tiny island off the coast of Maine boasts an unplugged mesh network composed of hundreds of palm-size nodes, each one featuring microcontrollers, memory, low-powered radios, and batteries. Some of the devices transmit real-time weather data; others cull information from sensors buried in the rocky soil. All in all, the windswept island is probably the best-connected 220 acres anywhere.
Too bad nobody lives there. Great Duck Island’s mesh network (a research project co-sponsored by the Intel Research Laboratory at Berkeley and the College of the Atlantic) is designed to monitor the habitat of the elusive storm petrels that nest on the island. Researchers believe data culled from the sensors will help them better protect the endangered seabirds.
Increasingly, experts believe the concept can work in the business world as well as the natural one. They say that sensory networks, which marry analog measuring devices (or in some cases ID tags) with wireless relays, will alter how companies protect assets and manage supplies in coming years. While much has been written about passive RFID chips for supply-chain management, advances in mesh networks and microprocessing will usher in a new era of intelligent objects, predicts Jackie Fenn, vice president at IT consultancy Gartner. As sensors get smarter, inventory will begin to track and route itself.
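One simple way such low-powered nodes can cooperate is greedy geographic routing: each node hands a reading to whichever in-range neighbor sits closest to the base station, so no single node ever needs a long-range radio. The sketch below is our own illustration with made-up node coordinates, not the Great Duck Island protocol.

```python
import math

# Toy greedy geographic routing over a mesh of short-range sensor nodes.
# Node names and positions are invented for illustration.
nodes = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (5, 3), "base": (6, 1)}
radio_range = 3.0  # maximum hop distance for the low-powered radios

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route(start, goal="base"):
    """Forward hop by hop, always to the in-range neighbor nearest the goal."""
    path, here = [start], start
    while here != goal:
        neighbors = [m for m in nodes
                     if m != here and dist(nodes[m], nodes[here]) <= radio_range]
        here = min(neighbors, key=lambda m: dist(nodes[m], nodes[goal]))
        path.append(here)
    return path

print(route("A"))  # -> ['A', 'B', 'C', 'base']
```

Real mesh stacks add acknowledgment, retry, and sleep scheduling to stretch battery life, but the forwarding idea is the same: many cheap short hops instead of one expensive long one.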
The real promise of sensory networks, however, may lie in the monitoring of high-value assets. BP, for one, already keeps tabs on some of its railcars and pipelines using sensory devices. And recently, the energy giant completed a pilot application of 150 wireless vibration sensors on one of its tanker ships.
[Adopted 2015-2017] A digital world generates a vast amount of detritus. According to a survey conducted by the School of Information Management and Systems at the University of California at Berkeley, the world’s stored information grew at an annual clip of 30 percent between 1999 and 2002. The business world contributed mightily to that tally. Customer analytics, demand-driven supply chains, and regulatory requirements all require companies to store ever-increasing amounts of data.
Housing all this information presents problems. Tape drives are fine for archiving material, but they’re slow. Disk drives are faster, but they become unstable as more information is crammed into less space.
The likely solution? In the long run, holographic storage. First conceived in the 1940s, holographic storage exited the realm of science fiction when lasers arrived on the scene in the 1960s, and the concept finally took flight with the advent of cheap, mass-produced lasers for CD players and other gizmos. In January, Longmont, Colorado-based InPhase Technologies announced the manufacture of a prototype holographic storage drive. A rival, Aprilis Inc., in Maynard, Massachusetts, hopes to have a holographic drive on the market in two years that will hold 200 gigabytes (a standard DVD holds just 4.7 gigabytes). Another vendor, Colossal Storage, intends to unveil a 3-D disk by 2009 that holds a staggering 100 terabytes.
Terabytes aside, the transfer rates of holographic storage devices may prove to be the real plus for data miners. The Aprilis drive is expected to retrieve data at 75 megabytes per second. Experts believe transfer speeds for holographic drives may one day reach 10 gigabytes per second. At that rate, a data search that would tie up a conventional engine for 40 seconds or longer will take just a third of a millisecond to complete. A whole millisecond would be fast enough for us.
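A quick back-of-the-envelope calculation (ours, assuming 1 gigabyte = 1,000 megabytes) shows what those transfer rates mean for reading a full 200-gigabyte disc end to end:

```python
# Time to read a disc of a given capacity at a given sustained transfer rate.
def read_time_s(capacity_gb: float, rate_mb_per_s: float) -> float:
    """Seconds to read capacity_gb gigabytes at rate_mb_per_s megabytes/sec."""
    return capacity_gb * 1000 / rate_mb_per_s

today = read_time_s(200, 75)        # Aprilis's expected 75 MB/s: ~44 minutes
future = read_time_s(200, 10_000)   # a hoped-for 10 GB/s: 20 seconds
print(f"{today:.0f} s at 75 MB/s vs {future:.0f} s at 10 GB/s")
```

Sub-millisecond search times would depend on more than raw streaming speed — holographic media can read out whole data pages in parallel — but even the sequential numbers make the appeal to data miners obvious.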
John Goff is technology editor of CFO.