Building the Data Highway

Many of the technologies and players needed to construct the information infrastructure are already in place. But the precise definition of the data highway is in the eye of the beholder. Who builds it could dramatically affect how it works--and how it's used.

By Andy Reinhardt
BYTE

March 1994

One fact must be made clear about the national information infrastructure: The government is not planning to dig a trench from New York to San Francisco, fill it with fiber-optic cables, and call it a data highway. Rather, the information highway will be privately built, owned, and operated; the Feds will encourage its development only through research funding, standards efforts, and changes in regulations.

In fact, much of the data highway already exists in the vast web of fiber-optic strands, coaxial cables, radio waves, satellites, and lowly copper wires now spanning the globe. What's needed now are better on- and off-ramps--that is, better and faster links from businesses, schools, and homes to the communications backbone--as well as new vehicles, more destinations, and better guidebooks on how to get there. Hundreds of billions of private and public dollars will be required over the next decade to weave together the world's communications systems and create these new software and hardware navigation tools.

What will be the benefit of all this investment? For business users, the data highway represents the holy grail of connectivity: a ubiquitous internetwork that lets them connect easily and inexpensively with customers and suppliers, improve communications among employees, and gather competitive data. Applications facilitated by the highway, such as videoconferencing, document sharing, and multimedia E-mail, could reduce travel spending and encourage telecommuting. Businesses might also reap big savings in health care if the data highway improves the distribution of medical records and enables new techniques such as remote diagnostics. "We're very excited about it," says Ward Keever, coauthor of a report on the data highway from SIM (the Society for Information Management) and senior vice president for information services at the Medical Center of Delaware in Wilmington.

There's little disagreement over the grand vision of the data highway. It will be, as U.S. vice president Al Gore calls it, "a network of networks," a massive client/server and peer-to-peer mesh capable of carrying gigabits, and eventually terabits, of data per second on its trunk lines. The back-end servers, networking technologies, client devices, and software applications will be utterly heterogeneous--the most eclectic network ever constructed. And if it succeeds as envisioned, the data highway could help businesses find information more easily, open up new modes of research and education, and give consumers a wide choice of services.

It's in the details that opinions start to diverge, and these differences could have a profound effect on how the information infrastructure is designed and used. "Every technology company out there can define the information highway for you," James Abrahamson, chairman of the board at Oracle, recently joked. "[It's] the strategic vision for whatever the company happens to sell."

The parties vying to create the data highway--telephone companies, cable distributors, computer makers, content providers (e.g., publishers, studios, and on-line services), and the worldwide Internet community--bring to the table different technologies and points of view. Forecasting the ultimate form and function of the data highway requires examining these conflicting technical perspectives. For instance, cable companies tend to see the data highway as a distribution vehicle for video and audio; were they solely responsible for linking users to the backbone, their data highway might favor information delivery over two-way communication.

Others, including Mitch Kapor, founder of Lotus and now chairman of the Washington, D.C.-based Electronic Frontier Foundation, see the creation of the data highway as an opportunity to give citizens access to a vast wealth of information. Kapor's data highway might be less commercial- or entertainment-oriented, and its architecture would encourage individuals to become information creators, not just consumers.

In interviews with nearly 100 industry executives, engineers, analysts, users, and policymakers, BYTE has explored how the national and international information infrastructure is likely to be built. Below is a summary of those competing views, along with our own opinions of the optimal direction for the data highway of the future.

What Is It?

Oracle's Abrahamson contends that the highway is simply the logical conclusion of today's convergence of hardware, software, and networking technologies. The driving force behind this convergence is the increasing digitization of data; as Nicholas Negroponte, the head of MIT's Media Laboratory, says, "Bits are bits." Service providers are fighting over how to build the data highway, but once video, speech, or geological data becomes strings of 1s and 0s, users won't care which pipes those bits traverse to get from one computer to another.

The data highway's backbone will use every wide-area communication technology now known, including fiber, satellites, and microwaves, and the on- and off-ramps connecting users to the backbone will be fiber, coaxial cable, copper, and wireless. Data servers will be supercomputers, mainframes, minicomputers, microcomputers, and massively parallel machines, while a great diversity of clients will populate the end points of the network: conventional PCs, palmtops and PDAs, smart phones, set-top boxes, and TVs. Software used on the network will include operating systems, networking protocols and services, user interfaces, databases, data sources (or content), and a new generation of smart middleware (e.g., General Magic's agent-based Telescript) that will help users navigate the network.

Unresolved technical arguments about the data highway's architecture boil down to two main categories: protocols and bandwidth. The protocol problem concerns the ultimate role of TCP/IP, the lingua franca of the Internet and Unix-based LANs. Buttressed by the engineering resources of the IETF (Internet Engineering Task Force), TCP/IP has continuously evolved. But it suffers drawbacks for real-time use that could threaten its position as an internetworking standard when multimedia traffic plays a greater role on the data highway. An emerging alternative is ATM (Asynchronous Transfer Mode), a hybrid circuit-switched and packet-switched networking scheme that performs well in real-time applications but lacks TCP/IP's software base. One potential solution is to run TCP/IP over ATM.

Bandwidth is data transmission capacity. Conventional telephones need very little, while HDTV needs large amounts--20 Mbps or more per channel. How much bandwidth is needed to connect businesses, homes, schools, and governmental bodies to the data superhighway will depend on the applications they end up using: On-ramps will need far more bandwidth if users demand interactive digital video than if they use the highway to send E-mail. A subtler problem is how to allocate bandwidth into and out of customer sites: A system biased toward data delivery--i.e., with a high ratio of downstream to upstream bandwidth--implies information consumption, whereas one with symmetrical or dynamically assigned capacity implies communication.
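
To make the arithmetic concrete, here is a minimal Python sketch of the allocation question. The 20-Mbps HDTV figure comes from the text; the 45-Mbps (T3-size) on-ramp and the 14.4-Kbps modem-style session are our assumptions, chosen only for contrast.

    # Back-of-the-envelope arithmetic for the bandwidth question above.
    PIPE_MBPS  = 45.0      # hypothetical T3-size access pipe (assumption)
    HDTV_MBPS  = 20.0      # per-channel HDTV requirement cited in the text
    EMAIL_MBPS = 0.0144    # one 14.4-Kbps modem-style session (assumption)

    def streams(pipe_mbps, per_stream_mbps):
        """How many simultaneous streams of a given size fit in the pipe."""
        return int(pipe_mbps // per_stream_mbps)

    print(streams(PIPE_MBPS, HDTV_MBPS))    # 2 HDTV channels fill the pipe
    print(streams(PIPE_MBPS, EMAIL_MBPS))   # 3125 e-mail sessions fit

The same arithmetic drives the symmetry question: a 9:1 downstream-to-upstream split of that pipe leaves only 4.5 Mbps for traffic heading out of the customer site.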

The Players

To meet the needs of society, the data highway has to be ubiquitous, affordable, easy to use, secure, multipurpose, information rich, and open. If it's to be economically viable, service providers have to be able to bill customers for the time they spend on the network or for the data they use. Each of the precursors of the data highway meets these criteria with varying success. The different heritages of the players are reflected in how they define the information infrastructure.

Cable companies. Steeped in broadcasting analog video through a wire, cable companies see the data highway largely as synonymous with enhanced entertainment services. They want to layer onto the video stream new consumer offerings such as interactive TV (e.g., video-on-demand, home shopping, viewer polling, and information-on-demand). But the cable companies want to provide important business services, too, such as voice telephony, data communications, and access to on-line services. Most of all, they see the data highway as a chance to exploit their primary asset: broadband coaxial cables stretching into an estimated 60 million U.S. homes and millions more around the world.

A major challenge for cable companies is that their systems tend to be proprietary and not interconnected. Constructing a nationwide network will require adopting common standards, installing giant gateways, and leasing backbone capacity from long-distance carriers (or spending big money to lay their own digital fiber trunk lines).

Telephone companies. Where the cable companies are weak, phone companies--both local and long-distance--are strong. Cable has traditionally used a one-to-many, trunk-and-branch topology with little or no provision for "upstream," or return, communications. The phone system was designed for point-to-point communications and has evolved into the world's largest switched, distributed network, capable of handling millions of phone calls simultaneously, tracking each one, and billing customers precisely for their usage. The phone system's legacy as a public utility has given it a degree of reliability and openness unmatched in the cable world.

The phone companies want to send data, especially video, over their vast networks. But the phone system suffers a bandwidth shortage: Although the trunk lines crisscrossing the country are of high-capacity fiber, the local loops into businesses and homes are typically two- or four-wire unshielded copper with limited bandwidth.

The Internet. Riding on the shoulders of the phone system is a remarkable worldwide computer cooperative, the Internet, a government-subsidized experiment in distributed computing, electronic community, and controlled chaos. The Internet doesn't own the pipes it passes through, and nobody owns the Internet, but it is growing by as many as 150,000 new users per month. If the wires and cables of the communications industry are the data highway's foundation, the Internet may provide its language, culture, and customs.

A unification of phone and cable systems could bypass the Internet and threaten its relevance, but given the Internet's rich human and informational capital, the more likely scenario is that its technology will be harnessed for the highway. A number of companies are working to make the resources of the Internet, which has a notoriously arcane interface and command structure, more accessible to businesses and individuals.

Policymakers. Washington is working to resolve policy issues concerning the data highway. Proposed legislation to ease regulations on cable and phone companies has the support of the Clinton administration. The most significant remaining challenge is how to ensure universal access to the information infrastructure. A principle enshrined in phone regulation since the 1934 Communications Act, universal service requires telecommunications companies (or telcos) to cross-subsidize the cost of serving poor, rural, or otherwise less-profitable customers with revenue from higher-margin clients such as downtown businesses. If the data highway is to become a national asset and the basis for an information society, access to it must be affordable to all (see the text box "Government Policy on the Data Highway").

Meanwhile, efforts are under way in countries outside the U.S. to build similar national networks. Canada, Germany, and Japan have major projects (see the text boxes "Data Highway Lags in Japan" and "Europe's Many Data Highways"). Many U.S. firms are rushing into foreign markets to gobble up newly privatized telecommunications franchises or to conduct technical trials of systems they hope to replicate back home. Connecting all these regional initiatives, many argue, will be the existing, de facto international information infrastructure, the Internet.

User Views

The data highway "will have a tremendous effect on how we work and do business with banks and our customers and suppliers worldwide," says Barry Coleman, senior economist for Texaco's alternate energy and resources department. "The dollars saved using electronic data interchange will be tremendous," he adds.

Yet despite the potential improvements in productivity and communications promised by the data highway, some corporate users remain wary. Bruce Smith, MIS manager for ENSR Consulting and Engineering, says he is worried about security. "I'm nervous about things like that; it's a double-edged sword." (See the text box "Highway Safety: The Key Is Encryption.")

For some users, security is of such paramount concern that they don't even outsource their communications needs today to private network providers, much less anticipate using a public backbone to exchange sensitive information. "A public information highway doesn't mean a thing to me," says Ben Fishman, LAN manager for the Wells Fargo Bank in San Francisco, California. But Fishman acknowledges that on the customer side of the bank's business, on-line banking services over the data highway could be an attractive time- and money-saver.

A SIM report examines nine areas of concern for business users of the data highway: standards for connectivity and interoperability, competitive versus regulatory forces, access, protection of rights, pace of development, funding of research, transport media, speed of data transfer, and the role of the Internet. As a rule, SIM advocates openness and competition wherever possible, but with adequate legal protections to guard against monopolies and ensure equal access. As for the Internet, Keever of the Delaware Medical Center says: "We're concerned about its ability to scale up, both technically and administratively, as well as to provide appropriate security to end users."

Long-Distance View

Long-distance carriers provide the high-speed long lines that now interconnect regional and national phone systems--and that form the backbone of the Internet--but they have higher aspirations for their role in the information infrastructure: They are starting to get into the local-access business, and they plan to offer services (e.g., mail, directories, and information) and hardware/software products for cruising the data highway.

According to the terms of the 1984 breakup of Ma Bell, the seven regional Bell operating companies, or RBOCs (i.e., Ameritech, Bell Atlantic, BellSouth, Nynex, Pacific Telesis, Southwestern Bell, and US West), provide regulated local service and are concerned with the local loop--how data, voice, and video services get in and out of customer sites. Long-distance (or interexchange) carriers, such as AT&T, MCI, and Sprint, focus on the backbone and on value-added services. These roles, however, are starting to blur.

One job now performed by long-distance firms will remain the same in the data-highway era: They will provide the trunks, or long lines, that carry telephone traffic across the boundaries separating local service areas in the U.S. and into other countries. These lines are almost entirely fiber now, and most use SONET (Synchronous Optical Network), a CCITT/ITU standard that defines various levels of digital telephony service over fiber.

Trunk lines range in capacity from T1 rates (1.544 Mbps) up to OC-48 (2.4 Gbps) and beyond. Today's Internet backbone, for instance, is built on T3 (45 Mbps) lines operated by MCI. Local access, from users to hosts and from hosts to the backbone, occurs at rates ranging from 2400 bps to 19.2 Kbps for dial-up, or via leased lines at multiples of 56 Kbps or 64 Kbps up to the T1 rate of 1.544 Mbps.
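
What those rates mean in practice is easiest to see as transfer times. Here is a short Python sketch; the 10-MB file is an arbitrary example of ours, while the line rates are the ones cited above.

    # Time to move a 10-MB file at the line rates cited in the text.
    rates_bps = {
        "9600-bps dial-up": 9_600,
        "64-Kbps leased":   64_000,
        "T1 (1.544 Mbps)":  1_544_000,
        "T3 (45 Mbps)":     45_000_000,
        "OC-48 (2.4 Gbps)": 2_400_000_000,
    }
    FILE_BITS = 10 * 8_000_000   # 10 MB, expressed in bits

    for name, bps in rates_bps.items():
        print(f"{name}: {FILE_BITS / bps:,.2f} seconds")
    # Dial-up: ~8333 s (2.3 hours); T1: ~52 s; T3: ~1.8 s; OC-48: ~0.03 s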

The complex telco regulatory structure permits other roles for long-distance carriers as well. They can manufacture equipment, which RBOCs cannot do. Long-distance companies also offer value-added services, such as formatted data handling; both AT&T and Sprint, for instance, have begun to sell ATM backbone service directly to customers who install private lines from their facilities to nearby long-distance points of presence, or POPs.

Most important, long-distance companies are not enjoined from offering local phone service in competition with the RBOCs. The biggest news in this context is AT&T's pending $12.6 billion acquisition of McCaw Cellular Communications; wireless technology provides a means to bypass the RBOCs and connect customers directly to long-distance POPs. To the extent that wireless communications become a key on-ramp to the data highway, the long-distance carriers--as well as wireless providers such as RAM Mobile Data and the IBM/Motorola Ardis joint venture--want a piece of the action.

AT&T has yet more irons in the fire. It now owns the Eo and Go technologies for pen-based computing, signifying its intention to compete in the market for mobile end-user devices. And it has obtained agent-based communications software from General Magic, which forms the basis for an advanced messaging service called PersonaLink that AT&T unveiled in early January.

The current regulatory environment works to the advantage of interexchange carriers, because they are freer than the RBOCs to move into new services. But with pending policy changes in Washington, telecommunications competition could turn into a free-for-all. The result for customers could be fierce price competition and an explosion of service options.

RBOC Realignment

The local phone companies are the vanguard of data-highway construction; their lines into homes and businesses are the access ramps to the backbone. Yet their perspective is different from that of the interexchange carriers because RBOCs have, in effect, been treated as utilities for the last decade.

Two critical restrictions imposed on the RBOCs by the 1984 breakup of the Bell system were that they could not own information services and could not deliver video content within their designated service areas. The judicial, executive, and legislative branches of the federal government are now racing to see who can lift these provisions fastest: Last August, a federal court decision on behalf of Bell Atlantic wiped out the video restriction, pending appeal; and both White House initiatives and Congressional legislation have been introduced to ease regulation.

Until new regulatory structures are in place, the RBOCs are growing by buying cable properties outside their regions; the best-known deal is the pending $25 billion merger of Bell Atlantic and Tele-Communications, Inc., or TCI, the nation's largest cable provider. At the same time, to protect themselves from expected competition from other RBOC/cable partnerships, they are retrofitting their local systems to support video.

Several approaches are being used. Bell Atlantic has conducted trials in northern Virginia and central New Jersey. The Virginia test harnesses ADSL (Asymmetrical Digital Subscriber Line), a new technology that lets conventional copper wires carry up to 1.54 Mbps of data--enough to deliver one channel of precompressed movies to a single user. The data is sent through the switched phone network to a set-top box that decompresses it and converts it back to NTSC analog video for delivery to the TV. The Virginia trial, which began with a few Bell Atlantic employees, is evolving into a market test of some 2000 consumers in northern Virginia.

ADSL is a quick-and-dirty way to pump digital video over the existing copper plant. It's no match for 50 channels of cable, but with a pair of set-top boxes and an A-B switch, customers could receive video feeds from both their cable company and their phone company. By late 1994, says Bell Atlantic vice president of technology John Seazholtz, ADSL is expected to support up to 6 Mbps of video plus ancillary services, as well as multiple users per premises and, when real-time compression arrives in 1995, live TV.

ADSL can also play a role in nonvideo applications. For instance, Bell Atlantic is considering bundling together Internet access software, ADSL compression, and ISDN service, to give customers easy, high-speed access to the Internet. The 1.54-Mbps downstream data rate would make downloading image files hundreds of times faster than over a modem, while upstream data transfers, at ISDN speeds, would be upwards of 25 times faster. This raises the interesting possibility of CompuServe or Internet ftp sites becoming multimedia service providers.
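
The speedup claims are easy to check with a little Python; the modem baselines below are our assumptions, drawn from the dial-up rates (2400 bps to 19.2 Kbps) cited earlier in this article.

    # Checking the ADSL-bundle speedup claims above.
    ADSL_DOWN = 1_540_000   # 1.54-Mbps ADSL downstream rate
    ISDN_UP   = 128_000     # two-B-channel ISDN upstream (2 x 64 Kbps)

    for modem_bps in (2_400, 9_600):
        print(ADSL_DOWN / modem_bps, ISDN_UP / modem_bps)
    # vs. 2400 bps: ~642x downstream, ~53x upstream
    # vs. 9600 bps: ~160x downstream, ~13x upstream
    # "Hundreds of times faster" downstream and roughly "25 times faster"
    # upstream both fall within these ranges, depending on the modem assumed.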

Bell Atlantic's New Jersey trial uses a much more ambitious and expensive approach, based on technology from BroadBand Technologies (BBT) of Durham, North Carolina. BBT's system consists of several pieces. A host digital terminal combines telephony feeds from central phone offices and digital video feeds from cable headends and sends them over a single paired-fiber cable (for two-way transmission) to an optical network unit. (A cable headend is the central point at which TV signals downlinked from satellites and supplied by local stations are modulated onto the cable. A BBT device converts this analog video to digital.) The optical network unit, located at or near the customer site, then splits the signal back into digital video and analog telephony components and sends them, respectively, via coaxial cable to a digital set-top box and via copper wire to a standard phone. Returning signals follow the reverse path. BBT's set-top box is being developed with Philips Consumer Electronics and Compression Laboratories.

Although BBT's architecture requires installing fiber nearly to the customer site, most telecommunications and cable companies are already doing this anyway. BBT's advantage is that it uses only a single fiber for voice and video data. Furthermore, it adds star-topology switching to the video distribution system, providing customers with guaranteed but asymmetrical downstream and upstream bandwidth.

Bell Atlantic's BBT trial builds on a basic premise: that there will be two wires reaching into the home--the copper and coaxial cable already found in over 60 percent of U.S. households. Coaxial cable has ample bandwidth to support the applications envisioned so far for the data highway, especially if the backchannel is provided through the switched phone system.

Others foresee only one wire: a coaxial cable or a single fiber. Cable companies like TCI hope to provide both video and voice/data service over a single coaxial cable. Pulling fiber everywhere is too expensive to justify (estimates range as high as $400 billion to wire every business, home, and school in the U.S.) until demand for broadband services is better understood and content offerings have matured.

Besides, as Seazholtz points out, all-fiber connections pose a tricky technical problem: how to keep phone service alive during a power failure. The lasers used to drive fiber optics in the office or home would have to draw AC current and thus would fail during an outage. BBT's hybrid architecture, in which active optics stay at the curb while buildings remain domains of passive electronics, appears to be a safer solution for customers.

On the opposite coast of the U.S., Pacific Bell has announced an ambitious multibillion-dollar plan to go it alone, without a cable partner, and rewire California with fiber and coaxial cable. It aims to provide not just a "video dial tone" (its right to do so relying on the Bell Atlantic precedent or new FCC regulations), but video telephony and data access as well. Says Keith Cambron, Pac Bell's director of systems engineering for consumer broadband: "Video telephony, because it's symmetrical and point-to-point, requires more of a telephony model than a CATV model."

Pac Bell's design presumes, as a starting point, a heterogeneous mix of end-user devices. Cambron identifies a minimum of eight: standard analog phones; standard cellular phones; home computers linked to the network by analog modems or digital ISDN ports; RF modems that attach CATV to an individual PC (via an add-in card) or a network of PCs (via an RF-to-Ethernet converter); conventional analog set-top boxes; advanced digital set-top boxes; and plain old cable-ready TVs and VCRs. In other words, the network envisioned by Pac Bell doesn't make existing equipment obsolete, and it adds new digital services incrementally.

The architecture is similar to that of other switched systems. Central office switches communicate by digital fiber to neighborhood nodes that serve roughly 500 customers. From the fiber nodes to the customer site is shared coaxial cable, which terminates at an NIU (network interface unit) attached to the side of the building. From there, separate signals are fed by coaxial cable and copper to video and telephone devices. Cambron contends that there is enough upstream bandwidth in this design to permit video telephony.

At the back end is where Pac Bell's legacy as a common carrier becomes most evident. The central switch communicates with a video gateway, which according to Cambron still needs development. This gateway provides the user's first-level menu selections; the second-level menus are for each particular service provider. All these interfaces are open and work cooperatively. "We want to encourage as many suppliers as possible to get onto our network with gateways to their video services," Cambron says.

From Trunks to Stars

Cable companies already have the most bandwidth into American homes, but they haven't wired up many schools or businesses. They also have the most to gain from retrofitting their networks to become data-highway access roads: While holding onto their video delivery business, they could unseat the RBOCs by providing local access to long-distance carriers. The key technical need is to push fiber closer to the final delivery point. TCI and others are doing that, following models similar to the Bell Atlantic/BBT and Pac Bell projects. Time Warner is trying a more radical approach that employs ATM.

Cable systems are moving in roughly the same direction as RBOCs, which is why their partnerships seem so logical. TCI, for instance, announced in 1993 that it would spend $2 billion over the next few years to upgrade its system with fiber nodes and support for video compression. The upgrade was widely misreported to mean that TCI would supply 500 channels of cable; the truth, according to Bruce Ravenel, vice president of TCI Technology (TCI's technology subsidiary), is that TCI will have enough capacity for 500 channels, which it will apply to a variety of business and consumer services, including traditional broadcasts, pay-per-view or video-on-demand, videoconferencing, voice telephony, and on-line access.

Traditional cable systems use a distributive architecture antithetical to two-way communications. Area headends, or cable programming distribution points that serve thousands of subscribers, receive programming via satellite or feeds from local broadcasters and shunt it onto coaxial cables that run into neighborhoods, with cable drops to individual homes. Channels are broadcast in 6-MHz bands between the frequencies of 50 MHz and 450 MHz, although newer systems can go up to 750 MHz or higher.

The two biggest changes since cable emerged 40 years ago were the development of addressable channel selectors (i.e., set-top boxes with individual IDs that can accept messages broadcast through the cable system) and the discovery of a way to modulate analog video over fiber media. Cable systems have dramatically improved picture quality by shipping source programming around on interference-free fiber instead of coaxial cable. Their potential for two-way communications, however, is still constrained by an analog trunk-and-branch topology in which the source signal passes through 30 to 50 cascaded amplifiers before reaching each destination.

Unlike in the switched phone system, all the nodes in a given cable area share a common cable, as on a shared-media LAN such as Ethernet. In theory, with a contention access scheme like Ethernet's CSMA/CD, customers ought to be able to vie for upstream bandwidth and send messages to each other or to a server; unfortunately, each drop on the cable produces electrical noise (i.e., noise ingress), which is progressively magnified on its way back up the cascade to the point where it overwhelms the signal. Even without the noise problem, the thousands of customers on each cable segment would rapidly deplete the available upstream bandwidth, as the sketch below suggests.
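
A toy contention model in Python makes the depletion problem vivid. This is a slotted model of our own devising, not Ethernet's actual CSMA/CD algorithm: a time slot carries useful data only when exactly one station transmits in it.

    # Probability that a slot carries exactly one transmission when n
    # stations each transmit with probability p -- i.e., channel efficiency.
    def success_rate(n, p):
        return n * p * (1 - p) ** (n - 1)

    print(success_rate(10, 0.1))     # ~0.39: a small segment works fine
    print(success_rate(2000, 0.1))   # ~1e-89: the shared channel collapses

Cutting the number of drops per segment, as described next, attacks exactly this problem.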

According to Mario Vecchi, vice president of network design and architecture for Cable Television Laboratories (CableLabs)--the cable industry's equivalent of Bellcore, the joint research facility of the RBOCs--the solution to this problem, now being implemented by cable providers, is to reduce the number of amplifiers in the cascade and the number of drops on each segment. By pushing fiber further into a given area, each segment can serve up to 500 customers, and only 3 or 4 amplifiers come between the fiber node and the user. At the same time, new super headends connected by SONET rings will serve 100,000 or more customers. This standards-based architecture will let cable systems interconnect more easily with telcos and data services. "Point-to-point links just to serve our own needs are no longer possible," says CableLabs' Vecchi.

Existing frequencies between 50 MHz and 750 MHz will be used for downstream broadcast, while the subsplit frequencies from 5 MHz to 42 MHz are available for upstream data. At 6 MHz per channel, this translates into six channels of full upstream video, or many more subchannels of text or other data. The local loop will still share media, and the contention access protocol hasn't been determined, although one likely contender is DQDB (Distributed Queue Dual Bus), the IEEE 802.6 standard used in MANs (metropolitan-area networks).
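
The channel arithmetic behind that frequency plan is simple enough to verify directly:

    # Channel counts for the cable frequency plan described above.
    DOWNSTREAM = (50e6, 750e6)   # downstream broadcast band, in Hz
    UPSTREAM   = (5e6, 42e6)     # subsplit upstream band, in Hz
    CHANNEL_HZ = 6e6             # one 6-MHz channel

    def channels(band):
        low, high = band
        return int((high - low) // CHANNEL_HZ)

    print(channels(DOWNSTREAM))  # 116 downstream channels (before compression)
    print(channels(UPSTREAM))    # 6 upstream channels, as the text notes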

TCI Technology's Ravenel says TCI is implementing a scheme like the one Vecchi describes, with an added twist: The company will actually install two coaxial cables to each feeder, one of which will be "dark" until sometime in the future. The primary wire will be configured with asymmetrical bandwidth: downstream from 50 MHz to 750 MHz or higher, and upstream in the subsplit frequencies. Ravenel asserts--and many agree--that in the near term, information and consumer services will be heavily biased in favor of data delivery. The upstream bandwidth available on the first cable will be enough to support voice phones, two-way data, PCS (personal communications services, the new wireless spectra that will be auctioned off by the FCC this year), and video telephony, which needs 384 Kbps to 1.54 Mbps of bandwidth, Ravenel says.

TCI's second cable will be midsplit, with the "free" portions of 500 MHz of bandwidth allocated in each direction (the second cable is digital-only and unamplified, though still subject to RF ingress). When activated, this cable would allow TCI to provide more than just telephony: It would empower subscribers to become originators, not just consumers, of content--in effect, to become broadcasters in their own right. "You're more likely to be enfranchised as a user if your cable service allows you to communicate with other users," says Allison Mankin, a researcher at the U.S. Naval Research Laboratory and a cochair of the IETF's "IP: next generation" project.

Cable companies are serious about using upstream bandwidth to compete with the RBOCs in local phone access. The top five cable firms--TCI, Time Warner Cable, Continental Cablevision, Cox Cable Communications, and Comcast--jointly own TCG (Teleport Communications Group), the leading alternative access provider, which connects local users to interexchange-carrier POPs. Large corporate customers are already contracting with TCG for primary or backup access from their phone systems directly to long-distance carriers. Now cable companies want to offer the same option to smaller businesses and individuals. Access to the long-distance network through the cable system could lower costs for customers and give RBOCs a run for their money.

Future Networks

A technology that may bring cable and phone companies even closer together is ATM, which spans the gap between packet-switched and circuit-switched technologies by using elements of each. ATM splits data into small chunks, like a packet service, but the cells are all of equal size. Then, instead of routing each cell individually, ATM sets up a virtual circuit and streams them across the network. Aside from its scalability and ultrafast switching performance (622-Mbps ATM products are available now), what makes ATM so attractive for video applications is its ability to allocate bandwidth on demand and assign priority levels to cell streams. This means ATM can guarantee nearly real-time delivery of digital video data. "ATM is the right kind of switching technology for interactive video," says TCI's Ravenel.
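
The fixed cell size is what makes ATM's switching so fast and its bandwidth allocation so predictable. A quick Python sketch shows the cell arithmetic; the 53-byte cell (5-byte header plus 48-byte payload) is the standard ATM cell format, a detail the article doesn't spell out, and the OC-3 rate is chosen as an example.

    # Cell arithmetic for one 155-Mbps (OC-3) ATM link.
    CELL_BYTES, HEADER_BYTES = 53, 5           # standard ATM cell format
    PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES  # 48 bytes of user data

    LINK_BPS = 155_000_000
    cells_per_sec = LINK_BPS / (CELL_BYTES * 8)
    print(int(cells_per_sec))                       # ~365,566 cells/s
    print(int(cells_per_sec * PAYLOAD_BYTES * 8))   # ~140 Mbps of payload

Because every cell is the same size, a switch can reserve a fixed number of cell slots per second for a video stream, which is precisely the bandwidth-on-demand guarantee described above.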

To test this hypothesis and to push ATM technology to its limit, Time Warner Cable is conducting an interactive TV and video-on-demand trial in Orlando, Florida, together with AT&T, Silicon Graphics, and others (see the text box "The Tools for New TV"). This trial uses ATM end-to-end, from the massive video servers at the back end to the set-top boxes in subscriber homes. It's a costlier solution than the ones TCI and Pac Bell are trying, but it may be more forward-looking. Ravenel says he's not sure ATM to the home is necessary, but he's grateful to Time Warner for trying it out.

Some are even less certain about ATM, cautioning that it has been overhyped. This view is especially common in the Internet community, which tends to be skeptical about one-size-fits-all technologies. "We don't have to have a single, universal architecture," says Tony Rutkowski, an Internet pioneer and director of technology assessment at Sprint. "It's nonsense to think that everything will run over ATM."

Craig Partridge, a senior scientist at Bolt Beranek and Newman (BBN) and author of Gigabit Networking (Addison-Wesley, 1994), says that although enterprise ATM switches from Fore Systems and Lightstream (a joint venture of Ungermann-Bass and BBN) can deliver their rated 155-Mbps speeds, some high-end 622-Mbps switches have failed from a lack of adequate flow control. "ATM switches are really designed for steady, not bursty, traffic. But data communications is bursty, so they drop bits all over the floor," he says.

ATM also does not yet support multicasting, which means that all transactions are point-to-point. This could prove very inefficient for broadcast video content, such as live news or sports events, when millions of users are watching the same source and don't need to be individually addressed. Still, cable and telco executives almost unanimously conclude that ATM will be a vital backbone technology for the data highway, weaving together the cable, telco, and data service providers.

Linking with Content

Another problem that both cable companies and telcos confront in promoting their visions of the data highway is that neither owns meaningful data content. The cable companies bring to the party lots of entertainment properties and Hollywood relationships, but these are a far cry from business-oriented information sources such as demographic databases or parts catalogs.

To gain access to this information, cable and phone companies are forging links with on-line services. In late 1993, Comcast and Viacom announced they would test delivery of Prodigy and America Online over cable systems in Castro Valley, California, and elsewhere. Continental Cablevision began a trial last December in Exeter, New Hampshire, that lets some home and business cable customers connect directly to CompuServe; and in early 1994, Continental will offer cable subscribers in Cambridge, Massachusetts, a direct link to Performance Systems International, or PSI (Herndon, VA), a leading commercial Internet provider.

In all these arrangements, customers hook an RF modem up to their cable and from there connect to a PC, Mac, workstation, or network. The result of using broadband coaxial cable instead of twisted-pair copper for data delivery is a thousand-fold increase in speed. Says Daniel F. Akerson, chairman and CEO of set-top box maker General Instrument, "A high-resolution image that now takes 15 minutes to download over a conventional modem will be accessible within a few seconds."
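
Akerson's claim survives a back-of-the-envelope check in Python. The 14.4-Kbps modem and the 10-Mbps cable rate are our assumptions; 10 Mbps is the speed quoted for the cable-data products described later in this section.

    # Sanity-checking the 15-minutes-to-seconds claim.
    MODEM_BPS, CABLE_BPS = 14_400, 10_000_000

    image_bits = 15 * 60 * MODEM_BPS    # whatever fits a 15-minute download
    print(image_bits / 8 / 1e6)         # ~1.6 MB image
    print(image_bits / CABLE_BPS)       # ~1.3 seconds over 10-Mbps cable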

Zenith Electronics (Glenview, IL) makes one such interface, called HomeWorks, that sells for $495 and consists of an external cable modem (with Ethernet output) and a PC bus card. HomeWorks is being used for the Continental/CompuServe trial in Exeter, and Zenith is also partnering with Spry (Seattle, WA) to promote cable-based Internet access via Spry's Air Navigator, a $149 Windows-based software package.

DEC is getting into the act with ChannelWorks, a bridge that permits interconnection of Ethernet LANs over CATV systems, supporting 10-Mbps speeds at distances of up to 70 miles. According to a Datapro Information Services report, "Data over CATV," DEC has partnered with TCI, Continental Cablevision, Times Mirror Cable, and others, and is targeting ChannelWorks at businesses, hospitals, state and local governments, and educational institutions. Both the DEC and Zenith modems require two-way cable systems and support symmetrical data rates.

Start-up Hybrid Networks (Mountain View, CA) is taking a different approach with an asymmetrical architecture that uses both cable and phone lines. Its product uses RF modem technology for 10-Mbps downstream data over subsplit frequencies and sends return data over conventional phone lines at 9600 bps to 19.2 Kbps.

Hybrid has also demonstrated upstream communications over ISDN, using two B channels (2 x 64 Kbps, or 128 Kbps). President Howard Strachman notes that only 15 percent of U.S. cable systems are now two-way, and of those, most suffer noise ingress that makes provision of even a single 6-MHz upstream channel nearly impossible. Hybrid's product will thus serve the millions of users lacking advanced cable service.

But looking forward to the time when fiber nodes move closer to end users, Hybrid is also working with Intel and General Instrument on a device that communicates in both directions over cable. This two-way RF modem, slated to be used for the Viacom and Comcast trials this year, will be implemented on a PC add-in card, not in an external unit like HomeWorks. Strachman says the specifications for the final card aren't set but could include NTSC-to-VGA conversion; if so, he says, a single board would "cable-enable" a PC for both video reception and data I/O. Although the initial product is for the ISA bus, future versions could support other I/O buses, such as PCI (Peripheral Component Interconnect), or could be built directly into a set-top box.

Services such as CompuServe and Prodigy were designed around the presumption of scarce bandwidth, but with connection rates orders of magnitude higher, they will be able to offer richly detailed multimedia user interfaces and new capabilities such as interactive imaging, store-and-forward audio and video communication, and "rentals" of massive data sets. "Some believe that a good part of the required transport infrastructure [for the data highway] already exists in the cable plant," says Datapro analyst Lance Lindstrom. By adding support for data communications, he says, cable companies can leverage their bandwidth advantage over the telcos and assume the role they see for themselves as "important contributors" in building the information highway.

The Data Web

People often confuse the Internet's physical structure--its web of T3 trunks linking IBM RS/6000-based routers and its various dial-up and dedicated access lines and services--with its true identity: a set of software protocols and tools, thousands of open data servers, a community of more than 20 million users, and a process for engineering upgrades. But in theory, this identity could be divorced from today's Internet hardware and the "Net" would still exist; indeed, if the data highway turns into a melding of cable and telco infrastructures, the Internet could live on as a set of services and standards.

Whether you connect to the data highway by copper, coaxial cable, fiber, or radio, the key unanswered questions are how you will interact with the giant network and what you will find there. Being linked to everybody and everything in the world won't do much good if you can't use the system or locate services you need--or if there's no data on-line that you care about.

This is where the Internet comes in. Unlike experimental networks such as Time Warner's ATM trial in Orlando, the Internet is in place today, running the battle-tested TCP/IP protocol, offering global remote log-in and file transfer (telnet and ftp, respectively) to and from thousands of data servers, and supporting public domain networking standards such as SNMP, SMTP, and SLIP. To help users navigate this vast, interconnected mesh, the Internet community has created innovative searching and indexing schemes, such as Gopher, Archie, WAIS, and the hypertextual World Wide Web.

The physical Internet is quickly evolving: According to Steven Wolff, director of the networking division of the NSF (National Science Foundation), which oversees the Internet's core, "the NSFnet backbone is going away" in the next few years, to be replaced by a combination of linked commercial subnetworks and a restricted-access research backbone. One immediate effect of this is that the Acceptable Use Policy, which prohibited commercial data traffic across the NSFnet, will become even more moot than it already is.

Instead of providing universities and public institutions with free access to a government-sponsored network, Wolff says, the government will get out of the network business and offer these users vouchers or grants to buy access to commercial Internet providers. There are now nearly 50 of these regional mid-level network providers in North America (including PSInet, BARRnet, CERFnet, NEARnet, and NYSERnet), most linked under an umbrella called CIX, or the Commercial Internet Exchange. Many may merge or be acquired by telcos, cable companies, or on-line service providers. "We won't be buying our bandwidth from people like CERFnet and PSI 20 years from now," predicts Noel Chiappa, a member of the IETF and an independent networking researcher based in Vermont.

To carry forward the Internet's original mandate as a research tool, the government will create a new backbone under the auspices of the 1991 NREN (National Research and Education Network) Act, which was sponsored in Congress by then-Senator Al Gore. This new backbone will operate at speeds of 155 Mbps (OC-3) and will not carry routine mail or file-transfer traffic; it will exist, says Wolff, to support research on protocols, RPCs (remote procedure calls), large file transfers, and other advanced applications. NSF is also cofunding research on even higher-speed networking via so-called Gigabit Testbeds.

As the Internet's backbone is changing, so are its on-ramps. Programs such as Continental Cablevision's link to PSI are opening up the Internet to a new class of users and bringing it into the same devices people will use to view videos or make phone calls. Thus, distinctions among these services will blur. The thousands of Internet data servers and news groups, offering virtual community and free information ranging from government statistics to satellite images to crop studies, will be available from your office or living room through the same user interface you use to conduct a videoconference or order a pizza.

One major problem that could hold back growth of the Internet as a commercial venue is that no provisions exist today for usage-based billing. If you log into a server with anonymous ftp, nobody charges you (by access time, packets downloaded, records passed, or some other scheme) for the data that you obtain. By comparison, proprietary on-line services such as CompuServe were designed from the beginning to track usage.

"The thought of billing is a nightmare to 90 percent of the providers on the Internet," says Susan Estrada, former managing director of CERFnet and now president of start-up Aldea (Carlsbad, CA). Unlike commercial services, she adds, many Internet nodes, such as government servers, are required to publish their data at no charge.

Some people also bemoan the inevitable change that expansion of the Internet will wreak on its unique subculture. Says Dave Farber, an Internet founder and professor at the University of Pennsylvania (Philadelphia): "Internet people believe in free goods to everybody: Give each user a straw and let him sip on the pool of wisdom." If Internet access is no longer free and users have to pay to download data sources, the Internet could lose much of its communal spirit.

Protocol Quandary

Aside from its highly evolved tools and wealth of data sources, the Internet's greatest contribution to the data highway may be the TCP/IP protocol. But this is debatable, because TCP/IP wasn't originally designed for real-time data delivery, which is necessary to support any meaningful volume of audio or video traffic. "Its optimal use is to hang together a great many apps, nets, and operating systems around the world," says Sprint's Rutkowski.

TCP/IP is a routed, connectionless, datagram (or packet) protocol, which means it divides network traffic into unequally sized, individually addressed chunks that are routed through the network over a dynamically assigned path. (The Internet uses several algorithms to determine the best route at any given time.) This is analogous to sending a friend a postcard every day for a month: The cards may arrive out of order or take different routes to get there, but the friend can sort them out at the other end. By contrast, connection-oriented schemes, such as voice telephony or ATM, establish a circuit between the source and destination and send all signals or packets in sequence along the same path. Each approach has its strengths and weaknesses.
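
The postcard analogy translates directly into code. Here is a minimal Python sketch of our own, not the actual TCP state machine: sequence numbers let the receiver restore the original order no matter how the network scrambles the datagrams.

    # Out-of-order delivery and reassembly, postcard-style.
    import random

    message = "THE DATA HIGHWAY"
    packets = [(seq, ch) for seq, ch in enumerate(message)]

    random.shuffle(packets)   # the network delivers in arbitrary order
    reassembled = "".join(ch for _, ch in sorted(packets))
    assert reassembled == message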

TCP/IP's greatest competitor--if, indeed, the two must be at odds rather than complementary--is ATM. But ATM spans different levels of the network stack: It includes link-layer specifications that are outside TCP/IP's purview, and it lacks an equivalent of TCP/IP's "reliable" layer, which ensures end-to-end error checking. An IETF group is working to implement IP over ATM (this requires splitting IP's variable-length packets across ATM's fixed-length cells and then reassembling them on the other side), on the premise that IP can and should remain an internetworking standard even if the underlying transport changes from connectionless datagrams to cell switching.
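
The segmentation-and-reassembly job that the IETF group faces can be sketched in a few lines of Python. This is a simplification: real ATM adaptation-layer framing also carries length and checksum fields, which we omit.

    # Chop a variable-length IP packet into fixed 48-byte ATM payloads,
    # padding the tail cell, then reassemble on the far side.
    PAYLOAD = 48

    def segment(ip_packet: bytes):
        cells = [ip_packet[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")
                 for i in range(0, len(ip_packet), PAYLOAD)]
        return cells, len(ip_packet)

    def reassemble(cells, length):
        return b"".join(cells)[:length]

    packet = b"x" * 1500                 # a typical Ethernet-size IP packet
    cells, n = segment(packet)
    assert reassemble(cells, n) == packet
    print(len(cells))                    # 32 cells to carry one such packet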

But if ATM does so well with video and other real-time data, why muck around with TCP/IP at all for the data highway? The answer is heterogeneity, says Scott Bradner of the Harvard University Office of Information Technology, and cochair of the IETF's IPng committee. "Just because a large chunk of the network of the future will be running over ATM doesn't mean it will all be," he says.

TCP/IP is widely supported in applications and routers and is unmatched in universality and reliability--significant advantages in building the information infrastructure. It has also been upgraded in recent years, through the efforts of the IETF, to support multicasting, or one-to-many packet broadcasting, which ATM does not support. Multicasting reduces the burden on routers by mapping packets into predefined distribution groups.
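
Multicasting's one-to-many mapping is easy to picture in miniature. The sketch below is illustrative Python, with invented group addresses and host names; the 224.x.x.x range shown is the real class D address space that IP reserves for multicast groups.

    # One send, many deliveries: the source never addresses receivers
    # individually; routers fan the packet out to group members.
    groups = {"224.2.0.1": {"host-a", "host-b", "host-c"}}

    def multicast(group_addr, packet, deliver):
        for member in groups.get(group_addr, ()):
            deliver(member, packet)

    multicast("224.2.0.1", b"video frame",
              lambda host, pkt: print(f"{host} <- {len(pkt)} bytes"))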

Among other things, multicasting is now used to distribute live digital video of IETF meetings across the Internet through a program called the Multicast Backbone, or MBone. The program demonstrates a potential use for the Internet, but it's a far cry from the architecture required for widespread video-on-demand or video telephony. "The way they do MBone now is to shoot the packets through the net and pray for the best," says CableLabs' Vecchi.

Despite its shortcomings, the MBone's real significance is symbolic: The Internet's ability to grow and adapt to changing requirements should never be underestimated. A case in point is the current effort to expand the IP address space. Concern that the explosive growth in Internet use would exhaust the remaining IP addresses has prompted a creative two-pronged effort to plan for the future. If the efforts of the IPng task force are successful, there will be enough IP addresses to give one to every computer, telephone, fax machine, and set-top box in the world, with billions more to spare.

The IP address field is now specified at 32 bits, which theoretically ought to permit 4 billion addresses. But when the Internet was set up, addresses were divided into three classes based on the size of the attached network, and now there is a shortage of the most popular (Class B) type. The first step to combat the address crunch is to eliminate classes with new technology called CIDR (Classless Inter-Domain Routing). Coupled with more aggressive efforts to reclaim unused blocks of addresses, this may buy the Internet as many as five to 10 years of breathing room, depending on growth rates, says network researcher Chiappa.
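
A few lines of Python show both why the classes waste addresses and what classless allocation buys. The 4000-host site is a hypothetical example of ours.

    # Usable host addresses under a routing prefix of a given length
    # (two addresses per network are reserved and unusable by hosts).
    def hosts(prefix_len):
        return 2 ** (32 - prefix_len) - 2

    print(hosts(24))   # Class C-size block:    254 hosts -- too small
    print(hosts(16))   # Class B-size block: 65,534 hosts -- 94% wasted
    print(hosts(20))   # CIDR /20 block:      4,094 hosts -- a snug fit

Under the classful scheme, the imaginary 4000-host site outgrows a Class C and must take a whole Class B; CIDR lets a registry hand out a right-sized /20 instead.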

For the longer term, the IPng task force will consider proposals to modify IP to support at least 1 billion networks and 1 trillion nodes. Three proposals have been formulated so far, but contenders could still emerge or drop out; all three include 64-bit address fields (allowing essentially unlimited addresses) and tackle the problem of autoconfiguration, or how to support mobile devices that join and leave the network at will. The task force may also examine schemes for adding resource allocation and pseudo-guaranteed packet delivery (a "good enough" solution) to better support video over TCP/IP.

The purpose of all these efforts is to ensure TCP/IP's position as the universal internetworking protocol of the data highway. This doesn't mean it will be adopted tomorrow by makers of set-top boxes and cellular phones; most of these devices will remain analog for some time, and those that are digital could use proprietary protocols on top of ISDN or ATM. But TCP/IP protocol stacks may show up in some unusual places, such as plug-and-play "cable-enabled" PCs.

IP could also face competition from experimental "lightweight" protocols, such as XTP (the Xpress Transfer Protocol), that are designed to reduce switching overhead on the backbone. TCP/IP was written to cope with an older, more unreliable network infrastructure, and it places heavy emphasis on error control and retransmission, says William Stallings of Comp-Comm Consulting (Brewster, MA). In the context of a fiber infrastructure running fast transports like ATM, IP may be too "muscle-bound to cope," he says.

According to Stallings, XTP establishes connections more efficiently than TCP/IP, supports different priority levels and multicasting, offers greater flexibility in checksums, and is the only protocol that permits selective retransmission of missing packets. By combining functions of TCP and IP into a single, streamlined protocol, XTP manages to be both reliable and fast, he says.
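
Selective retransmission, the feature Stallings singles out, is easy to contrast with the older go-back-N approach in Python. The packet counts here are invented for illustration.

    # Resend only what was lost, rather than everything after the first gap.
    sent     = set(range(100))        # sequence numbers 0..99 transmitted
    received = sent - {17, 42, 43}    # three packets lost in transit

    missing = sorted(sent - received)
    print(missing)                    # [17, 42, 43]: resend just these 3

    go_back_n = [n for n in sent if n >= min(missing)]
    print(len(go_back_n))             # 83 packets a go-back-N sender resends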

A Window Seat

What will be your view onto the data highway? This is the ultimate client-side battleground, pitting against one another companies that pride themselves on their user-interface design skills, such as Apple, Microsoft, and General Magic, as well as makers of devices ranging from palmtops to smart TVs. In all likelihood, no single user interface will prevail, but standards have to be developed so that set-top boxes are interchangeable among different delivery systems and software applications run the same around the country.

Microsoft aims to be a major player in defining the user interface for interactive services, whether delivered over cable or wireless, onto desktop PCs or TVs. Its first-generation product, Modular Windows, was derived from the same API as Win16 (and was promoted to developers as an easy way to leverage multimedia PC development onto TV-like devices), but it fizzled after being adopted only by Tandy for its VIS player. Microsoft is trying again with new non-Windows technologies, due to appear in pilots and trials this year, says Craig Mundie, vice president of advanced consumer technologies at Microsoft.

These consumer technologies--a separate initiative from the Microsoft At Work environment for office equipment slated to be rolled out this year--will cover the gamut from back-end servers through broadband networking to end-user products, Mundie says. "Our public position now is that we don't intend to use the Windows user interface for consumer devices."

Windows will play a key role as a portal to the data highway. For example, the new Internet-In-A-Box from O'Reilly & Associates (Sebastopol, CA) runs under Windows, providing a TCP/IP stack, automatic network registration, Internet services, and navigation tools. This product and other Windows-based interfaces to on-line services (e.g., CompuServe Navigator or the America Online front end) will operate in conventional setups, such as a desktop or notebook PC connected via the phone system, or in emerging schemes, such as PCs connected via RF modems through the cable system.

For radically different devices, such as PDAs or set-top boxes, different interfaces will be created. Apple has already invested heavily in the Newton, which, with the addition of needed communications capabilities, could become a hand-held data highway navigator. General Magic's Magic Cap interface, slated for use in devices from Motorola, Sony, Philips, and Matsushita, may also show up under the name MagicTV in set-top boxes. Silicon Graphics is carrying its Indigo Magic media interface from the Indy desktop into the set-top boxes that it is designing for the Time Warner Orlando trial. And Eon (Reston, VA), formerly TV Answer, has spent more than five years developing and testing a user interface for its interactive TV system, which will be designed into a set-top box from Hewlett-Packard.

To make these environments truly useful to users, software tools have to run deeper than just the surface of the screen. For instance, support for General Magic's Telescript communications language is built into Magic Cap, which means that when the user selects services by pointing at screen icons, smart agents are automatically dispatched across the network to obtain those services. Similar capabilities can be imagined for any of the set-top box interfaces, which will offer home and business users a kick-off point for cruising the data highway.

One of the finest examples of data highway middleware is Mosaic, developed at the National Center for Supercomputing Applications in Illinois. Mosaic runs under Windows, under the X Window System, and on the Mac, acting as a client-side browser for World Wide Web servers on the Internet. The software makes visible and easily navigable the hypertext links implicit in the World Wide Web. Thus, you could click on an icon to learn about northwestern conifers and get connected to a server in Vancouver, and then hop a link in pursuit of details on pinecones and be transparently logged into a server in Oslo. "Mosaic is the most intuitive, user-friendly, attractive user interface I've ever seen," says Sprint's Rutkowski. "It's the Internet's killer app."
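
The conifers-to-pinecones hop is, at bottom, a walk over a graph of links. Here is a tiny Python sketch, with pages and hosts invented for illustration:

    # Hypertext in miniature: each document names the documents it links
    # to, and the browser simply follows edges from server to server.
    web = {
        "conifers@vancouver": ["pinecones@oslo"],
        "pinecones@oslo":     [],
    }

    def browse(page, depth=0):
        print("  " * depth + page)
        for link in web.get(page, []):   # one click, another server
            browse(link, depth + 1)

    browse("conifers@vancouver")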

Learning to best exploit these tools will be a major challenge facing businesses in the era of the data highway. "Information access is going to be a commodity," says Joe Correira, vice president of applied technology for The Travelers insurance company (Hartford, CT). "Everyone will have access to the information, [but] those with the experience will profit from the information."

Haves and Have-Nots

All these snazzy devices and rich tools will be meaningless to average citizens if accessing the data highway is too expensive or difficult. If proper precautions are not taken, the highway could become the province of the educated and economically privileged, pushing the U.S. even further toward being a land of information haves and have-nots.

Lowering regulatory barriers between telcos and cable companies, and the rise of yet more gigantic media empires to fill the wires with content-for-hire, could lead to fierce competition over services and prices--or to new monopolies. Will all citizens be guaranteed access to the national information infrastructure? San Francisco consultant Evelyn Pine, former managing director of Computer Professionals for Social Responsibility, observes that "people take phone service for granted," yet "universal service has brought about great economic and political advantages." But, she says, "it's hard to visualize universal [computer] access that's not a high-priced solution." Paying $17 per month for service (an amount more in line with the cost of discretionary cable than with basic lifeline phone service) may seem like peanuts to people in the computer business, but it could be a burden for those with lower incomes, she notes.

Pine also poses the question whether cash-starved libraries and community colleges should be charged with providing universal access to the data highway (i.e., through public centers offering subsidized accounts), or whether new public institutions should be established for granting access. While the latter approach would spare colleges and libraries a burden, it could also "siphon money away [from them]," she adds.

The cable and telco businesses are racing so far ahead of regulators, judges, and legislators that their wish to invade each other's territories may be granted before proper protections are in place for the public. But if the information highway is to fulfill the grand civic vision outlined for it by President Clinton, the government must set in stone rules that carry forth the spirit, if not the substance, of the 1934 Communications Act that bound Ma Bell to be a common carrier and provide universal service. RBOCs can't be left to carry that burden alone while the cable companies and alternative carriers skim off the best customers. Perhaps the best solution is one currently floating around Washington: to create a public trust fund into which all providers pay and from which subsidies are drawn.

Like the transcontinental railroads and interstate freeways, the data highway will profoundly alter society--perhaps in ways we can't even anticipate today. No matter who controls the wires or airwaves that reach into homes and businesses in the U.S. and around the world, major technical, legal, and economic challenges remain before the data highway is as unremarkable as telephones and TVs. But when these hurdles are surmounted, enormous opportunities will be unleashed for all providers and consumers of information. Vice President Gore put it most succinctly: "Better communication has almost always led to greater freedom and greater economic growth."

BYTE's Recommendations for the Data Highway

SHORT TERM (1994-1996)

INFRASTRUCTURE
-- Backbone: existing standards (T1/T3, X.25, basic-rate ISDN); increased use
   of frame relay, SMDS, and SONET; limited penetration of ATM; TCP/IP
   for internetworking
-- Local loop: separate phone (POTS, ISDN, cellular) and cable (analog,
   pay-per-view) delivery systems; emergence of packet radio and PCS wireless
-- Trials of radio- and cable-based interactive analog TV
-- Trials of ADSL video-over-copper
-- Trials of switched digital video
-- Increasing insensitivity of "smart" backbone to end-point devices
-- Increasing commercialization of the Internet
-- Continued growth of commercial and business on-line services and
   entry of new parties
-- Proliferation of easy-to-use retrieval tools and information agents


POLICY
-- Lift restrictions on RBOC delivery of video content in and out of
   region, but retain restrictions against in-region ownership of cable
   companies by RBOCs.
-- See more competition at local loop from interexchange carriers and
   alternative-access carriers; maintain restrictions on RBOCs' providing
   long-distance service.
-- Encourage two wires into each home--coaxial cable and copper--and multiple
   services to business locations.
-- Encourage greater penetration and tariffing of ISDN to business and
   residential users.
-- Encourage interactive TV tests to homes and businesses.
-- Encourage universal access to data-highway on-ramps: open platform,
   common carrier, user subsidies.
-- Ensure data security and privacy without trapdoors.
-- Encourage upstream/downstream bandwidth symmetry.
-- Maintain multiplicity of free data sources on the Internet.


MEDIUM TERM (1996-1998)


INFRASTRUCTURE
-- Backbone: primary-rate ISDN, frame relay, SMDS, SONET, and ATM; TCP/IP.
-- Local loop: fiber-to-the-node, dual services (phone and cable on copper and
   coaxial cable) or single-provider services (via coaxial). Mix of POTS and
   ISDN, analog and digital cable. Some fiber to the home. Widely installed
   wireless.
-- Bandwidth allocation remains skewed to downstream delivery.
-- Greater use of analog interactive TV.
-- Serious investment in switched digital video, ATM to the node.
-- Early use of HDTV.
-- Widespread use of personal communicators.


POLICY
-- Lift restrictions on in-region cross-ownership between cable and telcos.
-- Lift restrictions on RBOCs providing long-distance service.
-- Increase investment in fiber to the home and desktop.
-- Maintain antitrust vigilance toward infrastructure and content providers.


LONG TERM (1998-2001)


INFRASTRUCTURE
-- Backbone: ATM over SONET, Broadband ISDN, SMDS over ATM, IPng
-- Local loop: single coaxial cable or fiber to the home, running
   end-to-end ATM
-- Ubiquitous bandwidth symmetry
-- Switched digital video infrastructure in place
-- Wide adoption of HDTV and early use of 3-D/virtual reality services

Government Policy on the Data Highway


JUDICIARY
-- MODIFIED FINAL JUDGMENT, JANUARY 1984 
   Judge Harold Greene's decision broke up the AT&T Bell System and created
   the seven RBOCs. It forbade local telcos from manufacturing equipment,
   providing long-distance service, delivering video, or owning content.
-- INFORMATION SERVICES RESTRICTIONS EASED, OCTOBER 1991
   Responding to an appeals court order, Judge Greene lifted restrictions
   against RBOCs' providing information services, allowing them to own news,
   sports, weather, and other data services distributed over their phone lines.
-- BELL ATLANTIC V. U.S., AUGUST 1993 
   U.S. District Court Judge T. S. Ellis ruled unconstitutional the provision
   of the 1984 Cable Act preventing local phone companies from providing TV
   programming in their service territories. Now on appeal. Ruling applies
   only to Bell Atlantic.


LEGISLATIVE
-- HIGH PERFORMANCE COMPUTING ACT OF 1991
   Sen. Gore's bill authorized creation of the NREN (National Research
   and Education Network) and funded research on high-speed networking
   hardware and software.
-- H.R. 1757, NATIONAL INFORMATION INFRASTRUCTURE ACT OF 1993 
   Author: Boucher (D-VA). Status: Passed House of Representatives September
   1993; no Senate equivalent, but portions are found in S.4, which is pending.
   Would expand on the High Performance Computing Act of 1991, providing a
   coordinated federal program to develop and disseminate applications for
   high-performance computing and high-speed networking in education,
   libraries, health care, and the provision of government information.
-- H.R. 3636, NATIONAL COMMUNICATIONS COMPETITION AND INFORMATION
   INFRASTRUCTURE ACT OF 1993 
   Sponsors: Markey (D-MA), Fields (R-TX), Boucher (D-VA), and Oxley (R-OH).
   Status: Introduced Nov. 22, 1993. Major restructuring of the 1934
   Communications Act; would permit telcos to deliver video, open the local
   telco market to competition, provide for an open platform, and ensure
   universal service.
-- H.R. 3626, ANTITRUST REFORM ACT OF 1993 
   Sponsors: Brooks (D-TX) and Dingell (D-MI). Status: Introduced Nov. 23,
   1993. Would phase out limitations placed on the RBOCs by the Modified
   Final Judgment, the 1982 consent decree breaking up AT&T. Would let the
   U.S. Attorney General and
   FCC grant RBOCs the right to offer interstate and interexchange services,
   manufacture equipment, offer burglar alarm services, and own a partial
   interest (up to 50 percent or 80 percent, depending on conditions) in
   electronic publishing ventures.


EXECUTIVE
-- THE NATIONAL INFORMATION INFRASTRUCTURE: AGENDA FOR ACTION,
   SEPTEMBER 15, 1993 
   The Clinton administration proposed forming IITF (Information Infrastructure
   Task Force), composed of federal officials, and "U.S. Advisory Council on
   the NII," composed of 25 public- and private-sector appointees. Among its
   goals: to promote private-sector investment; reform communications
   regulation; ensure universal service; promote applications in education,
   health care, manufacturing, and government information; promote standards
   for seamless networking; ensure security and reliability; protect
   intellectual property rights; and improve management of the frequency
   spectrum.
-- VICE PRESIDENT GORE'S ADDRESS, JANUARY 11, 1994
   The administration voiced support for the Brooks/Dingell bill (H.R. 3626),
   which would allow competition between local and long-distance phone
   companies, and proposed creating a new, optional class of regulation for
   broadband interactive services, called Title VII. Three principles are
   paramount: private investment, fair competition, and open access.
   Legislation proposed by Clinton will aim to ensure universal service and
   open access. The administration will also support other NII measures,
   including networking research, applications development, and electronic
   delivery of government services.

 

Federally Funded Pilot Projects


VISTANET


Participants     University of North Carolina at Chapel Hill, North Carolina
                 State University, Bell South, GTE, and MCNC


Technical        Uses data-intensive files such as CAT scans and MRIs to drive
details          tests of protocols and network performance analysis
                 using HIPPI (High Performance Parallel Interface), ATM
                 (Asynchronous Transfer Mode), SONET (Synchronous Optical
                 Network), and broadband circuit switching


Goals            To examine medical uses for gigabit networking, concentrating
                 on medical imaging


AURORA


Participants     IBM, Bellcore, MIT, University of Pennsylvania, MCI, Nynex,
                 Bell Atlantic, University of Arizona.


Technical        Uses 2.4-Gbps channels from MCI to link the computer labs of
details          the other participants. The switches are configured
                 for both standard ATM and an alternative known as PTM
                 (Packet Transfer Mode).


Goals            To test the differences between the switching schemes and
                 to explore the implications of hooking up a data firehose
                 to a desktop workstation.


Comments         To handle these blistering interface speeds, future desktop
                 computers will need specialized I/O controllers with DMA and
                 direct access to video, as well as operating-system
                 improvements to greatly reduce context-switching time.


NECTAR


Participants     Carnegie Mellon University, Bellcore, Bell Atlantic, and the
                 Pittsburgh Supercomputing Center.


Technical        Uses ATM as an intermediate layer between SONET backbone
details          links and HIPPI interfaces on the computers.


Goals            Scalability; to link gigabit LANs and WANs to one another and
                 to supercomputers. To overcome current I/O bottlenecks and to
                 develop a dedicated network coprocessor to off-load protocol
                 handling from system bus.


Comments         Adding a new computer to the network only requires connecting
                 it to the ATM switch, whereas the Casa network needs a direct
                 SONET line between every pair of communicating computers.
                 Nectar's approach is much closer to the architecture that will
                 be used to link homes and businesses to the data superhighway.
                 Since most people will want to make connections with many
                 different computers in a day, they'll require the flexibility
                 of a switched network. But point-to-point, switchless
                 approaches like Casa's could be used to establish permanent
                 links between a small number of sites, such as when a company
                 connects computers in neighboring plants.


MAGIC


Participants     Earth Resources Observation System Data Center, Lawrence
                 Berkeley Laboratory, Minnesota Supercomputer Center, SRI
                 International, University of Kansas, MITRE, Army
                 High-Performance Computing Research Center, Army Battle
                 Command Battle Laboratory, DEC, Northern Telecom, Southwestern
                 Bell, Splitrock Telecom, Sprint, U.S. West


Technical        Will use SONET links and ATM to create a gigabit WAN 
details          interconnecting three high-speed ATM LANs and one HIPPI 
                 LAN, providing trunk speeds of 2.4 Gbps and access
                 speeds of 622 Mbps


Goals            Will use a military-terrain visualization application to study
                 real-time interactive data exchange among diverse,
                 geographically distributed computing and networking devices


CASA


Participants     Caltech, The San Diego Supercomputing Center, The Jet
                 Propulsion Laboratory, MCI, Pacific Bell, UCLA, US West, and
                 Los Alamos National Laboratory.


Technical        Uses SONET fiber-optic lines running at 622 Mbps to link 
details          the different sites, but doesn't use ATM because ATM's 53-byte
                 cell size proved inefficient for moving supercomputer-size
                 blocks of data that routinely grow to 64 KB or greater.


Goals            To explore methods for synchronizing massive distributed
                 simulations running on supercomputers hundreds of miles apart.


Comments         Even at gigabit speeds, the propagation delay introduced by
                 networking throws off these applications, which concern
                 problems such as modeling the global climate system.


BLANCA


Participants     Lawrence Berkeley Lab, NCSA, University of 
                 California-Berkeley, University of Illinois, University of
                 Wisconsin, AT&T, Ameritech, Astronautics, Bell Atlantic,
                 and Pacific Bell.


Technical        The test-bed links local FDDI (Fiber Distributed Data
details          Interface) LANs with SONET-based ATM switches.



Goals            To study how voice, data, and video flow in networks.


Comments         Voice is at once more forgiving and more demanding than
                 regular data. People using a phone won't even notice if one
                 packet containing a few milliseconds of conversation is lost;
                 the ear and brain fill in the gap. However, a long pause is
                 unacceptable. Data connections, on the other hand, are
                 time-insensitive but cannot tolerate even a single lost
                 packet. From a user standpoint, delayed data means sluggish
                 performance, but not gibberish. Put another way, voice traffic
                 is predictable and steady, whereas data communications occur
                 in unpredictable bursts that can overflow switch buffers
                 designed for voice traffic.
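
The buffering problem is easy to demonstrate. In the toy simulation below (all numbers are arbitrary, chosen only for illustration), a switch buffer that comfortably absorbs steady voice-style traffic drops packets as soon as the same average load arrives in data-style bursts:

# Toy simulation: a switch buffer sized for steady voice traffic
# versus the same average load arriving in bursts. All numbers
# are illustrative only.

import random
random.seed(1)

def dropped_packets(arrivals, buffer_size=10, drain_per_tick=5,
                    ticks=1000):
    queued = dropped = 0
    for _ in range(ticks):
        arriving = arrivals()
        room = buffer_size - queued
        queued += min(arriving, room)
        dropped += max(arriving - room, 0)
        queued = max(queued - drain_per_tick, 0)
    return dropped

# Voice: a steady 5 packets per tick, exactly matching the drain.
voice = dropped_packets(lambda: 5)

# Data: the same average (5 per tick), but as bursts of 50 packets
# arriving one tick in ten.
data = dropped_packets(lambda: 50 if random.random() < 0.1 else 0)

print("dropped -- voice: %d, data: %d" % (voice, data))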

 

Commercially Funded Pilot Projects


NIIT (The National Information Infrastructure Testbed)


Participants     AT&T, Department of Energy, DEC, Ellery Systems, Essential
                 Communications, Hewlett-Packard, Network Systems, Novell, Ohio
                 State University, Oregon State University, Pacific Bell,
                 Sandia Labs, Smithsonian, Sprint, Sun Microsystems, SynOptics,
                 UC-Berkeley, University of New Hampshire.


Technical        Prototyping data-highway concepts using the Internet, FDDI, 
details          frame relay, and ATM.


Goals            To create real-world demonstration projects using
                 currently available products.


Comments         The first test, called Earth Data Sciences, distributed
                 environmental data over disparate systems in a collaborative
                 multimedia framework. The second test will be on medical
                 imaging.


SMART VALLEY


Participants     3Com; Hewlett-Packard; Pacific Bell; Silicon Graphics; Network
                 General; Stanford University; Regis McKenna; Mohr, Davidow
                 Ventures; and others


Goals            To promote the development of the data superhighway through
                 brokering public and private partnerships and by supporting
                 applications development


XIWT (The Cross-Industry Working Team)


Participants     Apple Computer, AT&T, Bellcore, Bell South, CableLabs,
                 Citicorp, DEC, GTE, Hewlett-Packard, IBM, Intel, MCI, McCaw
                 Cellular, Motorola, Nynex, Pacific Bell, Silicon Graphics,
                 Southwestern Bell, Sun Microsystems, and others.


Goals            To hammer out technical issues involved in bringing gigabit
                 technology to homes and business desktops. XIWT has four
                 working groups--architecture, services, portability, and
                 applications--and has assigned each to examine pertinent
                 issues and produce white papers. XIWT's overall goals for 
                 the data highway are that it be ubiquitous, affordable,
                 flexible, and easy to use.


COLLABORATORY ON INFORMATION INFRASTRUCTURE


Participants     Bellcore, all the regional Bell operating companies, Capital
                 Cities/ABC, DEC, Hewlett-Packard, JCPenney, Los Alamos
                 National Laboratory, MIT Media Lab, Microware Systems,
                 Northern Telecom, WilTel


Goals            To find solutions to practical problems such as user interface
                 and network navigation

 

Illustration: Data Highway Report Card Today's telephone system comes closest to meeting the criteria for a data superhighway, but its copper wiring can't currently support multiple channels of video or other high-bandwidth data. The Internet is hard to use, doesn't support billing or widespread distribution of real-time data, and can be both expensive and difficult to access. This is changing, however, with the rise of commercial Internet providers and new tools. The cable system falls in the middle, but two-way capabilities are only now being added.

Illustration: Today's Telco and Cable Systems Today's phone and cable companies use different topologies and technologies to deliver their services. The phone system is switched, symmetrical, and interactive. Its backbone or "trunk" lines are typically digital fiber; analog copper wires deliver service into homes and businesses. The cable system is unswitched and distributive, built on a backbone of analog fiber and satellites, with analog coaxial cables into customer sites. In the future (far right), their local architectures will be nearly identical: Interconnected signal collection and routing points feed services via fiber to the neighborhood or the curb. From these nodes, data enters homes and businesses on a mix of coaxial cable, copper wire, and fiber to reach set-top boxes, computers, and phones. Both systems are switched and two-way, though not necessarily symmetrical or entirely digital.

Table: TOPOLOGIES AND PROTOCOLS Telephone and cable systems use dramatically different communications architectures and standards. If the RBOCs, interexchange carriers, and cable companies merge into the data highway, their systems will evolve to encompass each other's advantages. (This table is not available electronically. Please see March, 1994, issue.)

Illustration: What ADSL and DMT Provide ADSL attempts to use existing copper phone wire for broadband interactive video and other high-speed digital services. An experimental variation on ADSL, known as DMT (Discrete Multi-Tone), squeezes four one-way video channels onto ordinary twisted-pair wiring, along with a two-way interactive backchannel and two ISDN channels--still leaving room for regular analog telephone service.

Illustration: O'Reilly & Associates' Internet-In-A-Box makes use of the Mosaic data browser, developed at the National Center for Supercomputing Applications.

Contributors

Senior news editor Tom Halfhill, news editor Ed Perratore, news editor Dave Andrews, consulting editor Peter Wayner, and freelancer Frank Hayes provided additional reporting for this story. Andy Reinhardt is BYTE's West Coast bureau chief. He can be reached on MCI Mail at 536-9124 or on the Internet or BIX at areinhardt@bix.com.


Data Highway Lags in Japan

By Asao Ishizuka
BYTE

March 1994

The data highway hasn't yet come to Japan. NTT (Nippon Telegraph & Telephone), Japan's largest common carrier, has a backbone that is already 65 percent fiber, and corporations are using this fiber for intra- and intercity communications. ISDN is also available--there are more than 230,000 basic-rate ISDN circuits (64 Kbps) and 3100 primary-rate circuits (1.5 Mbps).

Implementing fiber to the home, or even fiber to the curb (also known as the Next Generation Communications Infrastructure), will be a long, tough road. NTT estimates that the cost to develop the new infrastructure will be $410 billion; if $18 billion is allocated annually for this, the new infrastructure will be built by 2015.

Japan's cable business has also developed very slowly. This is due in part to widespread coverage by broadcast TV, a large number of video rental shops, and the availability of alternative entertainment sources, such as Direct Broadcast Satellite, which now dishes out NTSC and HDTV signals to nearly 6.3 million subscribers.

However, some Japanese multimedia network researchers think that the real use of the information highway will be for professional and business applications, not for the home, because of its cost. The Ministry of Posts and Telecommunications decided in December 1993 to deregulate CATV and boost the integration of broadcasting and communications by repositioning cable as a core medium. Under the new rules, cable businesses will be able to provide communications services in addition to broadcasting, and foreign carriers will be able to enter the Japanese cable business. Nynex is already getting ready for experimental CATV service in Yokohama with Japanese partners, starting in the spring of 1994. And TCI is starting an advanced CATV service in Tokyo with Suginami CATV, beginning in October 1994.

Asao Ishizuka is a senior writer in the PC Bureau of Nikkei Business Publications, Inc. (Tokyo, Japan). He can be reached on the Internet at asao@farnsworth.mit.edu, on CompuServe at 74120,1663, or on BIX as "asaoi."


Europe's Many Data Highways

By Bernd Steinbrink
BYTE

March 1994

Having built a uniform standard for Euro-ISDN that was accepted by 26 network companies in 20 countries, France Telecom and Deutsche Telekom are now trying to establish a European standard for the next generation of high-speed networks. In cooperation with British Telecom, Spanish Telefonica, Italian STET/ASST, and Swedish Telia, the companies will build a Europe-wide, high-speed digital fiber network called the Global European Network, or GEN, that should be the precursor of a future ATM (Asynchronous Transfer Mode) network. In the mid-1990s, GEN is expected to be absorbed into METRAN (the Managed European Transmission Network), which will support data transmission at rates of up to 155 Mbps across Europe.

AT&T now cooperates with most of these state companies on national ATM projects, as well as on PEAN (Pan European ATM Network), a pilot project set up by 18 European operators to test a broad palette of communication services. By mid-1994, PEAN will have nodes in Austria, Belgium, Denmark, Finland, France, Germany, the Netherlands, Norway, Spain, Sweden, and perhaps other countries; interoperability tests scheduled for then will allow transmission of video and image data across the high-speed network. PEAN members have agreed to purchase and install ATM cross-connections that meet standards and recommendations from CCITT/ITU and ETSI (the European Telecommunications Standards Institute), as well as specifications from Heidelberg-based Eurescom.

France Telecom has started yet another project with Telecom PTT Switzerland called Betel (Broadband Exchange over Trans-European Links), which began trials in September 1993 with the interconnection of several research facilities in France and Switzerland. Applications running on the Betel network include distance learning via videoconferencing and sharing supercomputers for scientific computing tasks. The platform consists of 34-Mbps fiber-optic circuits, and the different sites are equipped with FDDI (Fiber Distributed Data Interface) LANs linked to the ATM platform. Starting this year, cost-effective LAN interconnections at very high speeds via ATM networks will be offered.

Another France Telecom ATM project is Brehat, a complete communication system for videoconferencing, video transmission, LAN interconnection, and circuit emulation. The first segments of this network will be deployed in the cities of Lannion and Rennes and at several sites in the Paris region this year. Full-scale commercial launch is planned for 1995. By then, about 17,000 kilometers of fiber-optic lines will be installed in France.

Britain is a special case because it liberalized its telecommunications in 1991, allowing TV and telephone on the same network and making investment in fiber optics profitable for private companies. One of the most unusual projects involves a company called Energis, which is owned by the 12 regional electric companies in England and Wales. Energis is planning a nationwide fiber-optic network that piggybacks on the power grid. The company was granted a full telecommunications operating license last May and since then has installed 1200 km of fiber by wrapping it around the wires of overhead electrical lines. By the spring of this year, Energis's services--voice, data, image, and multimedia--will link 20 of the country's largest cities and be available to businesses and residential customers. By January 1995, Energis will extend the network to all major towns in the country.

Germany already has one of the most extensive fiber-optic networks in the world. Deutsche Telekom has installed fiber in about 80 large cities and linked them to one another. This is the basis for a network called VBN (Vermittelndes Breitbandnetz), which was first launched in February 1989. VBN allows data transfers at up to 140 Mbps for videoconferencing and is connected via satellite to international videoconferencing networks.

VBN will be the foundation for a fiber-optic network leading into customer homes. One pilot project, BERKOM (Berlin Kommunikation), has already been installed in Berlin for applications such as telepublishing, telemedicine, and city information systems.

In western Germany, the fiber-optic network will be built up through introduction of broadband communications services. A pilot ATM project called Broadband ISDN is scheduled for early this year, starting in Berlin, Hamburg, and the Bonn/Koln (Cologne) region. By 1996, the network will be made available for general use.

Group Effort

The three best-known Pan-European initiatives are RACE, ESPRIT, and IMPACT, all started by the EC (European Community) in the 1980s. RACE (Research and Development in Advanced Communications Technologies in Europe) is focused on integrated broadband communications and image/data communications. ESPRIT (European Strategic Program for Research in Information Technology) began in 1984 and is now in its third phase. Its focus is information technology, and it includes an Office and Business Systems subprogram, slated to run from 1991 to 1994, that deals with image compression techniques for interactive media.

IMPACT 1 (Information Market Policy Actions) ran from 1988 to 1990; in December 1991 the EC adopted its successor, IMPACT 2, to establish an information services market in two key areas: interactive multimedia and geographical information. At the end of 1993, another effort, Info Euro Access, was established to develop the European market for information services, especially those using broadband communications and Euro-ISDN.

Most European ATM networks will remain pilot projects until the middle of the 1990s and will likely be used for business communication afterward.

Bernd Steinbrink is a freelance journalist based in Oldenburg, Germany. He can be reached on CompuServe at 100277,3444 or on BIX c/o "editors."


Highway Safety: The Key Is Encryption

By Paulina Borsook
BYTE

March 1994

How will information sent over the data superhighway be kept safe and secure, ensuring privacy for individuals and commercial operators? This question is far from resolved, and it has provoked heated controversy about encryption regulations.

Data encryption is vital because it's the only way to ensure that data is kept strictly private--especially as communication shifts more and more to wireless pathways. Other security measures, such as requiring passwords or physically restricting access to a network, are less reliable. According to Stephen Crocker, vice president at Trusted Information Systems (Glenwood, MD) and security area director for the IETF (Internet Engineering Task Force), encryption implemented in hardware will be able to keep up perfectly well with gigabit speeds, but hardware implementations may prove too costly in component prices, space, or power consumption for inexpensive consumer devices such as set-top boxes or cellular phones. On the other hand, software encryption may not be able to keep up with very high-speed applications.

At the technical level, the routine use of encryption has yet to be worked out. Yet it's essential: To feel comfortable using the data highway, consumers must be sure that information about their tastes and habits is kept private unless they authorize its release. Crocker points out that while DES, the most common U.S. encryption technology, has been recertified by NIST (National Institute of Standards and Technology) for another five years, increasingly powerful computers may soon "have enough brute force to break yesterday's code," meaning the years-old DES technology.
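
The arithmetic behind that worry is simple. DES uses a 56-bit key, so an exhaustive search has a fixed size, and the time it takes falls in direct proportion to hardware speed. The key-testing rates in this Python sketch are hypothetical, chosen only to show the trend:

# Brute-force arithmetic for DES's 56-bit keyspace. The
# keys-per-second figures are hypothetical illustrations.

KEYSPACE = 2 ** 56            # about 7.2 x 10^16 possible keys
SECONDS_PER_YEAR = 3600 * 24 * 365

for label, keys_per_sec in [("1 million keys/sec", 1e6),
                            ("1 billion keys/sec", 1e9),
                            ("1 trillion keys/sec", 1e12)]:
    years = KEYSPACE / keys_per_sec / SECONDS_PER_YEAR
    print("%-20s %12.4f years to try every key" % (label, years))

# On average the right key turns up after searching half the space,
# so the expected time is half of each figure above.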

More secure schemes exist, and this has introduced a new kink in the encryption debate: how law enforcement agencies should deal with virtually uncrackable new public-key and compound encryption techniques, such as PGP (Pretty Good Privacy). These schemes can protect people from malicious industrial competitors--or stymie law-enforcement agencies on the trail of a criminal money-laundering scheme.
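
What makes the public-key approach so hard to crack is the mathematics underneath it: anyone can encrypt with the published key, but decrypting requires a private value derived from factors that are computationally infeasible to recover. A toy RSA example in Python makes the shape of it concrete (the primes here are absurdly small for clarity; real keys use primes hundreds of digits long):

# Toy RSA, the best-known public-key scheme (PGP pairs RSA with a
# conventional cipher for bulk data). Tiny primes for clarity only.

p, q = 61, 53                 # two secret primes
n = p * q                     # 3233, published with the key
phi = (p - 1) * (q - 1)       # 3120, kept secret
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (Python 3.8+)

message = 1234                # any number smaller than n
ciphertext = pow(message, e, n)    # anyone can do this with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it

assert recovered == message
print("ciphertext: %d  decrypted: %d" % (ciphertext, recovered))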

The Clipper chip, proposed by the U.S. government largely for telephone-based communications, uses an encryption technology that provides a "back door" accessible to government agencies authorized for a wiretap. The proposal has been met with a storm of legal and technological controversy, although the government has said it is considering alternatives.

Despite Clipper, "it's not a big trick for criminals to encrypt conversations," says Crocker; they can, for instance, obtain foreign DES products. So unless the U.S. government makes Clipper mandatory on all telecommunications gear and, in Crocker's words, "outlaws stray cryptography"--two actions it has repeatedly said it will not take--there is no reason society's bad elements would use products that give law enforcement a means to entrap them.

San Francisco-based writer Paulina Borsook wrote about security in the May 1993 issue of BYTE. She can be reached on the Internet at loris@well.sf.ca.us or on BIX c/o "editors."


The Tools for New TV

By Tom R. Halfhill
BYTE

March 1994

Think of it as the world's largest WAN (wide-area network) with the world's largest database servers at one end and the world's largest number of clients at the other: That's the vision for broadband ITV (interactive TV).

The clients, of course, are ordinary TV sets, augmented by a new generation of digital set-top boxes that will rival the processing power of today's PCs and workstations. The servers are likewise a new breed of computers that not only have enough storage for vast libraries of movies, TV shows, and multimedia applications, but also are capable of feeding that data downstream to millions of users--in real time, on demand. In between, tying everything together, is the nationwide broadband network that flawlessly switches all this traffic while ensuring that every transaction is billed to the appropriate user.

The whole system is far larger and more complex than anything that exists today. Its nearest relative is the public telephone system--a low-bandwidth network that terminates in relatively simple analog devices and is designed to deliver communications instead of content. ITV is so new that critical pieces of the hardware and software technologies are still being invented. It won't come cheap, it won't come easy, and it probably won't come as quickly as some people are predicting.

Will it come at all? No question. ITV definitely isn't a technology in search of a problem. In fact, it's the technology that's the problem.

Consider, for example, the servers that will form the hub of this great network. Most of today's databases store relatively simple data (e.g., names and addresses), and their I/O model is transactional, so minor delays in accessing records are tolerable. But headend servers on the ITV network must store full-motion video, stereo sound, and other rich data types. These "video servers" must also achieve real-time or near real-time throughput, because even brief delays will cause visible glitches on home TV screens.

Oracle (Redwood Shores, CA), which hopes to become a key player in this field, says that most of its current database customers manage 100 to 150 GB of data. Oracle's biggest customer, a credit-history company, has a database approaching 1 TB (1024 GB). But the ITV network of the future will store the world's entire movie library, estimated at 65,000 films. Each film requires 1.5 GB or more of storage when compressed in MPEG-2 format. That adds up to about 95 TB. Now add all the historical news footage and popular TV shows that will eventually be stored, too. And don't forget the other content, such as electronic catalogs and interactive encyclopedias, and things yet to be imagined.
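
The arithmetic checks out, as a quick back-of-the-envelope calculation (in Python, using the figures above) confirms:

# The movie-library arithmetic, using the figures quoted above.

films = 65000
gb_per_film = 1.5             # one MPEG-2-compressed feature film
total_gb = films * gb_per_film
total_tb = total_gb / 1024    # 1 TB = 1024 GB, as above

print("%.0f GB, or about %.0f TB" % (total_gb, total_tb))
# -> 97500 GB, or about 95 TB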

The bulk of this material will be archived in near-line storage: banks of automated jukeboxes that can mount tape cartridges or optical disks on the video server within seconds of a user's request. The server will copy the video onto its local mass storage, probably striping the data across arrays of hard disks for redundancy and faster access. Then it will buffer the data in RAM while pumping it downstream to the user's set-top box. Frequently accessed material, such as the most popular movies and games, may be permanently maintained on local storage. Special software will track viewing habits, automatically loading It's a Wonderful Life in December.
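
Striping is conceptually simple: consecutive blocks of a file are dealt round-robin across the drives in the array, so a long sequential read draws on every spindle at once. A simplified sketch:

# Simplified RAID-style striping: consecutive blocks are dealt
# round-robin across the array, so a sequential read keeps every
# spindle busy. (Real arrays add parity blocks for redundancy.)

def stripe(num_blocks, num_drives):
    """Return, per drive, the list of block numbers it holds."""
    drives = [[] for _ in range(num_drives)]
    for block in range(num_blocks):
        drives[block % num_drives].append(block)
    return drives

# A tiny 12-block file across a 4-drive array:
for drive, held in enumerate(stripe(12, 4)):
    print("drive %d holds blocks %s" % (drive, held))
# Reading blocks 0-3 touches four different drives in parallel.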

Consumers will expect the same reliability from the ITV network that they do from the public phone system, so video servers will need careful maintenance. An array of 1000 hard disks will lose an average of one drive per day, according to MTBF (mean time between failure) statistics. Technicians will patrol rooms of servers and jukeboxes, hot-swapping failed drives on the spot, just as their predecessors kept ENIAC running by constantly replacing blown vacuum tubes.
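
That failure rate follows directly from the MTBF arithmetic. A thousand drives accumulate 24,000 drive-hours every day, so a per-drive MTBF of about 24,000 hours (an assumption chosen here to match the one-drive-per-day figure; vendors' quoted numbers vary widely) yields roughly one failure daily:

# Failure-rate arithmetic for a large disk array. The MTBF value
# is an assumption chosen to match the one-per-day figure above;
# quoted MTBFs vary widely by vendor and era.

drives = 1000
mtbf_hours = 24000            # assumed per-drive MTBF

drive_hours_per_day = drives * 24
failures_per_day = drive_hours_per_day / mtbf_hours
print("expected failures per day: %.1f" % failures_per_day)  # 1.0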

If the storage requirements of video servers seem daunting, the I/O is nightmarish. During peak hours in major cities, thousands of people may be requesting videos. Today's broadcast model is synchronous: One "copy" of a movie is sent over cable or the airwaves to a mass audience, and everyone watches it at once. Pure video-on-demand is asynchronous: If 5000 people on Saturday night want to watch the latest hit film, only a few will punch in their orders at the same moment. Thus, the server must stream the same video to thousands of destinations according to different time bases, some only seconds apart.

To complicate matters still further, the video server will provide virtual VCR functions, such as pause, rewind, fast-forward, slow motion, and frame advance. So it has to update thousands of file pointers to keep pace with frequently shifting viewing patterns.
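
In effect, the server keeps an independent playback cursor into the same file for every viewer, and the virtual VCR functions do nothing more than manipulate that cursor. A skeletal sketch of the bookkeeping in Python (greatly simplified, and purely illustrative):

# Skeletal bookkeeping for video-on-demand: one shared movie file,
# one independent playback cursor per viewer. Greatly simplified.

class Stream:
    def __init__(self, movie_frames):
        self.length = movie_frames
        self.frame = 0            # this viewer's playback cursor
        self.playing = True

    def tick(self):
        """One frame interval of real time; returns the next block
        the server must fetch for this viewer."""
        if self.playing:
            self.frame = min(self.frame + 1, self.length)
        return self.frame

    # Virtual-VCR functions just move the cursor:
    def pause(self):
        self.playing = False

    def resume(self):
        self.playing = True

    def rewind(self, n):
        self.frame = max(self.frame - n, 0)

    def fast_forward(self, n):
        self.frame = min(self.frame + n, self.length)

# Two viewers watching the same movie moments apart:
a, b = Stream(180000), Stream(180000)
for _ in range(60):
    a.tick()                  # viewer A started 60 frames earlier
a.pause()                     # A pauses; B keeps streaming
print("A at frame %d (paused), B at frame %d" % (a.frame, b.tick()))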

What kind of computer can do all this? "I think 'computer' may be the wrong word," says Greg Hoberg, marketing manager of the video communications division at Hewlett-Packard (Santa Clara, CA); "it's really an I/O machine." Hoberg says the problem is not computational and therefore requires an entirely new approach to hardware design. "We're trying to come up with the architecture that is appropriate to this problem. It's a problem of I/O and mass storage, not a problem of MIPS."

HP's video server, dubbed the Video Engine, is expected to be ready in about a year. Hoberg says it will be a highly scalable machine that fits into HP's vision of numerous servers distributed across a hierarchical network. Local servers will supply the most popular videos, while remote machines that serve many localities will store less-popular content. This topology could minimize headend costs without compromising access.

HP isn't alone in the scramble to gain a foothold in the high-stakes video-server market. IBM and DEC see video servers as a potential use--even a savior--for large minicomputers and mainframes. Microsoft, Intel, AT&T, Silicon Graphics, Motorola, nCube, and Oracle are a few of the other companies working on hardware and software. As with anything new, different approaches are emerging.

Unlike HP, Oracle and nCube (Foster City, CA) think video servers do need great computational power. They're designing servers using nCube's massively parallel computers and Hypercube architecture. In their view, symmetric multiprocessor systems have too much hardware overhead and will quickly fall victim to bus saturation if applied to large-scale video-on-demand.

To boost the server's I/O bandwidth, nCube interconnects large numbers of proprietary microprocessors comparable to a 386 but optimized for throughput. Oracle, which is writing the software, says an nCube-based video server with 1024 processors could supply video to 7000 homes. A larger nCube-2 computer supports up to 4096 processors and could serve 30,000 homes.

"No one knows for sure how these machines will be used," says Benjamin Linder, director of technical marketing for Oracle's Media Server project. "So Oracle is designing a system that's as general as possible. We're trying to create servers to act as living libraries on the data superhighway."

Linder says that the massively parallel approach is overkill for video I/O, even on this scale, but nevertheless makes sense because the server could handle tasks that otherwise would be shunted downstream to the user's set-top box.

This is a key point. New computers are needed for both ends of the ITV network--clients as well as servers. Digital set-top boxes are much more than simple tuners or descramblers, yet their cost must be driven down to about $300 before broadband ITV is economical.

Consider what a typical box might contain. Start with a powerful CPU, such as a 486, PowerPC, or MIPS R4000. Add 1 to 3 MB of RAM; a high-speed graphics chip for screen overlays and video games; a display chip; a 1-GHz RF tuner; a demodulator; an error-correction chip; an MPEG-2 decoder; logic to strip the audio soundtrack from the incoming video; a Dolby decoder; two 16-bit audio D/A converters; a video RGB converter; an RF modulator; an infrared interface for remote control; flash ROM for the operating system; a security chip to prevent theft of service; and a switching power supply.

"People have this interactive TV vision, but if the set-top box costs $1000, it's not going to be worth it," notes Roger Kozlowski, vice president and technical director for the consumer segment of Motorola (Phoenix, AZ). That's why Oracle and nCube are designing video servers that can shoulder part of the computational burden. They seem to be in the minority, however. Other companies are betting the set-top technology will be affordable by 1995 or 1996--and it'll be at least that long before the network infrastructure is ready to support it.

Illustration: Broadband Interactive TV To provide video-on-demand, multimedia encyclopedias, and other new services, ITV networks will need high-speed servers with vast amounts of mass storage. Material will be stored on digital tapes or optical disks in automated jukeboxes. When the network receives a user's request via the upstream backchannel, the server will retrieve the appropriate file from the jukebox and copy it to a hard disk array, from which the compressed video will be spooled downstream to the user's digital set-top box. The box will decode and decompress the video and then modulate an analog signal for the TV.

Tom R. Halfhill is a BYTE senior news editor. You can reach him on the Internet or BIX at thalfhill@bix.com.

 

Copyright © 1994 CMP Media LLC