
High Performance Computing and Networking for Science

September 1989


Recommended Citation:

U.S. Congress, Office of Technology Assessment, High Performance Computing and Networking for Science—Background Paper, OTA-BP-CIT-59 (Washington, DC: U.S. Government Printing Office, September 1989).

Library of Congress Catalog Card Number 89-600758

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402-9325

(Order form can be found in the back of this report.)


Foreword

Information technology is fundamental to today's research and development: high performance computers for solving complex problems; high-speed data communication networks for exchanging scientific and engineering information; very large electronic archives for storing scientific and technical data; and new display technologies for visualizing the results of analyses.

This background paper explores key issues concerning the Federal role in supporting national high performance computing facilities and in developing a national research and education network. It is the first publication from our assessment, Information Technology and Research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation.

OTA gratefully acknowledges the contributions of the many experts, within and outside the government, who served as panelists, workshop participants, contractors, reviewers, detailees, and advisers for this document. As with all OTA reports, however, the content is solely the responsibility of OTA and does not necessarily constitute the consensus or endorsement of the advisory panel, workshop participants, or the Technology Assessment Board.

Director



High Performance Computing and Networking for Science Advisory Panel

John P. (Pat) Crecine, Chairman
President, Georgia Institute of Technology

Charles Bender
Director
Ohio Supercomputer Center

Charles DeLisi
Chairman
Department of Biomathematical Science
Mount Sinai School of Medicine

Deborah L. Estrin
Assistant Professor
Computer Science Department
University of Southern California

Robert Ewald
Vice President, Software
Cray Research, Inc.

Kenneth Flamm
Senior Fellow
The Brookings Institution

Malcolm Getz
Associate Provost
Information Services & Technology
Vanderbilt University

Ira Goldstein
Vice President, Research
Open Software Foundation

Robert E. Kraut
Manager
Interpersonal Communications Group
Bell Communications Research

Lawrence Landweber
Chairman
Computer Science Department
University of Wisconsin-Madison

Carl Ledbetter
President/CEO
ETA Systems

Donald Marsh
Vice President, Technology
Contel Corp.

Michael J. McGill
Vice President
Technical Assessment & Development
OCLC Online Computer Library Center, Inc.

Kenneth W. Neves
Manager
Research & Development Program
Boeing Computer Services

Bernard O'Lear
Manager of Systems
National Center for Atmospheric Research

William Poduska
Chairman of the Board
Stellar Computer, Inc.

Elaine Rich
Director
Artificial Intelligence Lab
Microelectronics and Computer Technology Corp.

Sharon J. Rogers
University Librarian
Gelman Library
The George Washington University

William Schrader
President
NYSERNET

Kenneth Toy
Post-Graduate Research Geophysicist
Scripps Institution of Oceanography

Keith Uncapher
Vice President
Corporation for the National Research Initiatives

Al Weis
Vice President
Engineering & Scientific Computer Data Systems Division
IBM Corp.

NOTE: OTA is grateful for the valuable assistance and thoughtful critiques provided by the advisory panel. The views expressed in this OTA background paper, however, are the sole responsibility of the Office of Technology Assessment.


List of Reviewers (continued)

Doyle Knight
President
John von Neumann National Supercomputer Center
Consortium for Scientific Computing

Mike Levine
Co-director of the Pittsburgh Supercomputing Center
Carnegie Mellon University

George E. Lindamood
Program Director
Industry Service
Gartner Group, Inc.

M. Stuart Lynn
Vice President for Information Technologies
Cornell University

Ikuo Makino
Director
Electrical Machinery & Consumer Electronics Division
Ministry of International Trade and Industry

Richard Mandelbaum
Vice Provost for Computing
University of Rochester

Martin Massengale
Chancellor
University of Nebraska-Lincoln

Gerald W. May
President
University of New Mexico

Yoshiro Miki
Director, Policy Research Division
Science and Technology Policy Bureau
Science and Technology Agency

Takeo Miura
Senior Executive Managing Director
Hitachi, Ltd.

J. Gerald Morgan
Dean of Engineering
New Mexico State University

V. Rama Murthy
Vice Provost for Academic Affairs
University of Minnesota

Shoichi Ninomiya
Executive Director
Fujitsu Limited

Bernard O'Lear
Manager of Systems
National Center for Atmospheric Research

Ronald Orcutt
Executive Director
Project Athena
MIT

Tad Pinkerton
Director
Office of Information Technology
University of Wisconsin-Madison

Harold J. Raveche
President
Stevens Institute of Technology

Ann Redelf
Manager of Information Services
Cornell Theory Center
Cornell University

Glenn Ricart
Director
Computer Science Center
University of Maryland in College Park

Ira Richer
Program Manager
DARPA/ISTO

John Riganati
Director of Systems Research
Supercomputer Research Center
Institute for Defense Analyses

Mike Roberts
Vice President
EDUCOM

David Roselle
President
University of Kentucky

Nora Sabelli
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign

Steven Sample
President
SUNY, Buffalo

John Sell
President
Minnesota Supercomputer Center

Hiroshi Shima
Deputy Director-General for Technology Affairs
Agency of Industrial Science and Technology, MITI

Yoshio Shimamoto
Senior Scientist (Retired)
Applied Mathematics Department
Brookhaven National Laboratory

Charles Sorber
Dean, School of Engineering
University of Pittsburgh

Harvey Stone
Special Assistant to the President
University of Delaware

Dan Sulzbach
Manager, User Services
San Diego Supercomputer Center

Tatsuo Tanaka
Executive Director
Interoperability Technology Association for Information Processing, Japan

Ray Toland
President
Alabama Supercomputing Network Authority

Kenneth Tolo
Vice Provost
University of Texas at Austin

Kenneth Toy
Post-Graduate Research Geophysicist
Scripps Institution of Oceanography

August B. Turnbull, III
Provost & Vice President, Academic Affairs
Florida State University


List of Reviewers (continued)

Gerald Turner
Chancellor
University of Mississippi

Douglas Van Houweling
Vice Provost for Information & Technology
University of Michigan

Anthony Villasenor
Program Manager
Science Networks
Office of Space Science and Applications
National Aeronautics and Space Administration

Hugh Walsh
Data Systems Division
IBM

Richard West
Assistant Vice President, IS&AS
University of California

Steve Wolff
Program Director for Networking
Computing & Information Science & Engineering
National Science Foundation

James Woodward
Chancellor
University of North Carolina at Charlotte

Akihiro Yoshikawa
Research Director
BRIE/IIS
University of California, Berkeley



Contents

Chapter 1: Introduction and Overview
    Observations
    RESEARCH AND INFORMATION TECHNOLOGY—A FUTURE SCENARIO
    MAJOR ISSUES AND PROBLEMS
    NATIONAL IMPORTANCE—THE NEED FOR ACTION
        Economic Importance
        Scientific Importance
        Timing
        Users
        Collaborators
        Service Providers

Chapter 2: High Performance Computers
    WHAT IS A HIGH PERFORMANCE COMPUTER?
    HOW FAST IS FAST?
    THE NATIONAL SUPERCOMPUTER CENTERS
        The Cornell Theory Center
        The National Center for Supercomputing Applications
        Pittsburgh Supercomputing Center
        San Diego Supercomputer Center
        John von Neumann National Supercomputer Center
    OTHER HPC FACILITIES
        Minnesota Supercomputer Center
        The Ohio Supercomputer Center
        Center for High Performance Computing, Texas (CHPC)
        Alabama Supercomputer Network
        Commercial Labs
        Federal Centers
    CHANGING ENVIRONMENT
    REVIEW AND RENEWAL OF THE NSF CENTERS
    THE INTERNATIONAL ENVIRONMENT
        Japan
        Europe
        Other Nations

Chapter 3: Networks
    THE NATIONAL RESEARCH AND EDUCATION NETWORK (NREN)
        The Origins of Research Networking
        The Growing Demand for Capability and Connectivity
        The Present NREN
        Research Networking as a Strategic High Technology Infrastructure
        Federal Coordination of the Evolving Internet
        Players in the NREN
        The NREN in the International Telecommunications Environment
        Policy Issues
        Planning Amidst Uncertainty
        Network Scope and Access
        Policy and Management Structure
        Financing and Cost Recovery
        Network Use
        Longer-Term Science Policy Issues
        Technical Questions
        Federal Agency Plans: FCCSET/FRICC
        NREN Management Desiderata

Figures
    1-1. An Information Infrastructure for Research
    1-2. Distribution of Federal Supercomputers

Tables
    2-1. Some Key Academic High Performance Computer Installations
    3-1. Principal Policy Issues in Network Development
    3-2. Proposed NREN Budget


Chapter 1

Introduction and Overview

Observations

The Office of Technology Assessment is conducting an assessment of the effects of new information technologies—including high performance computing, data networking, and mass data archiving—on research and development. This background paper offers a midcourse view of the issues and discusses their implications for current Federal supercomputer initiatives and for legislative proposals concerning a national data communication network.

Our observations to date emphasize the critical importance of advanced information technology to research and development in the United States, the interconnection of these technologies into a national system (and, as a result, the tighter coupling of policy choices regarding them), and the need for immediate and coordinated Federal action to bring into being an advanced information technology infrastructure to support U.S. research, engineering, and education.

RESEARCH AND INFORMATION TECHNOLOGY—A FUTURE SCENARIO

Within the next decade, the desks and laboratory benches of most scientists and engineers will be entry points to a complex electronic web of information technologies, resources and information services, connected together by high-speed data communication networks (see figure 1-1). These technologies will be critical to pursuing research in most fields. Through powerful workstation computers on their desks, researchers will access a wide variety of resources, such as:

• an interconnected assortment of local campus, State and regional, national, and even international data communication networks that link users worldwide;
• specialized and general-purpose computers including supercomputers, minisupercomputers, mainframes, and a wide variety of special architectures tailored to specific applications;
• collections of application programs and software tools to help users find, modify, or develop programs to support their research;
• archival storage systems that contain specialized research databases;
• experimental apparatus—such as telescopes, environmental monitoring devices, seismographs, and so on—designed to be set up and operated remotely;
• services that support scientific communication, including electronic mail, computer conferencing systems, bulletin boards, and electronic journals;
• a "digital library" containing reference material, books, journals, pictures, sound recordings, films, software, and other types of information in electronic form; and
• specialized output facilities for displaying the results of experiments or calculations in more readily understandable and visualizable ways.

Many of these resources are already used in some form by some scientists. Thus, the scenario that is drawn is a straightforward extension of current usage. Its importance for the scientific community and for government policy stems from three trends: 1) the rapidly and continually increasing capability of the technologies; 2) the integration of these technologies into what we will refer to as an "information infrastructure"; and 3) the diffusion of information technology into the work of most scientific disciplines.

Few scientists would use all the resources and facilities listed, at least on a daily basis; and the particular choice of resources eventually made available on the network will depend on how the tastes and needs of research users evolve. However, the basic form, high-speed data networks connecting user workstations with a worldwide assortment of information technologies and services, is becoming a crucial foundation for scientific research in most disciplines.

MAJOR ISSUES AND PROBLEMS

Developing this system to its full potential will require considerable thought and effort on the part of government at all levels, industry, research institutions, and the scientific community itself. It will present policy makers with some difficult questions and decisions.


Figure 1-1—An Information Infrastructure for Research

[Figure: a diagram of networks linking workstations, mainframes, on-line experiments, electronic mail, bulletin boards, digital electronic libraries, data archives, and associated services.]

Scientific applications are very demanding on technological capability. A substantial R&D component will need to accompany programs intended to advance R&D use of information technology. To realize the potential benefits of this new infrastructure, research users need advances in such areas as:

• more powerful computer designs;
• more powerful and efficient computational techniques and software;
• very high-speed switched data communications;
• improved technologies for visualizing data results and interacting with computers; and
• new methods for storing and accessing information from very large data archives.

An important characteristic of this system is that different parts of it will be funded and operated by different entities and made available to users in different ways. For example, databases could be operated by government agencies, professional societies, non-profit journals, or commercial firms. Computer facilities could similarly be operated by government, industry, or universities. The network, itself, already is an assemblage of pieces funded or operated by various agencies in the Federal Government; by States and regional authorities; and by local agencies, firms, and educational institutions. Keeping these components interconnected technologically and allowing users to move smoothly among the resources they need will present difficult management and policy problems.

Furthermore, the system will require significant capital investment to build and maintain, as well as specialized technical expertise to manage. How the various components are to be funded, how costs are to be allocated, and how the key components such as the network will be managed over the long term will be important questions.

Since this system as envisioned would be so widespread and fundamental to the process of research, access to it would be crucial to participation in science. Questions of access and participation are crucial to planning, management, and policymaking for the network and for many of the services attached to it.

Changes in information law brought about by the electronic revolution will create problems and conflicts for the scientific community and may influence how and by whom these technologies are used. The resolution of broader information issues such as security and privacy, intellectual property protection, access controls on sensitive information, and government dissemination practices could affect whether and how information technologies will be used by researchers and who may use them.

Finally, to the extent that, over the long run, modern information technology becomes so fundamental to the research process, it will transform the very nature of that process and the institutions—libraries, laboratories, universities, and so on—that serve it. These basic changes in science would affect government both in the operation of its own laboratories and in its broader relationship as a supporter and consumer of research. Conflicts may also arise to the extent that government becomes centrally involved, both through funding and through management, with the traditionally independent and uncontrolled communication channels of science.

NATIONAL IMPORTANCE—THE NEED FOR ACTION

Over the last 5 years, Congress has become increasingly concerned about information technology and research. The National Science Foundation (NSF) has been authorized to establish supercomputer centers and a science network. Bills (S. 1067 and H.R. 3131) are being considered in the Congress to authorize a major effort to plan and develop a national research and education network and to stimulate information technology use in science and education. Interest in the role information technology could play in research and education has stemmed, first, from the government's major role as a funder, user, and participant in research and, secondly, from concern for ensuring the strength and competitiveness of the U.S. economy.

Observation 1: The Federal Government needs to establish its commitment to the advanced information technology infrastructure necessary for furthering U.S. science and education. This need stems directly from the importance of science and technology to economic growth, the importance of information technology to research and development, and the critical timing for certain policy decisions.

  Economic Importance

A strong national effort in science and technology is critical to the long-term economic competitiveness, national security, and social well-being of the United States. That, in the modern international economy, technological innovation is concomitant with social and economic growth is a basic assumption held in most political and economic systems in the world these days; and we will take it here as a basic premise. It has been a basic finding in many OTA studies.¹ (This observation is not to suggest that technology is a panacea for all social problems, nor that serious policy problems are not often raised by its use.) Benefits from this infrastructure are expected to flow into the economy in three ways:

First, the information technology industry can benefit directly. Scientific use has always been a major source of innovation in computers and communications technology. Packet-switched data communication, now a widely used commercial offering, was first developed by the Defense Advanced Research Projects Agency (DARPA) to support its research community. Department of Energy (DOE) national laboratories have, for many years, made contributions to supercomputer hardware and software. New initiatives to develop higher speed computers and a national science network could similarly feed new concepts back to the computer and communications industry as well as to providers of information services.

¹ For example, U.S. Congress, Office of Technology Assessment, Technology and the American Economic Transition, OTA-TET-283 (Washington, DC: U.S. Government Printing Office, May 1988) and Information Technology R&D: Critical Trends and Issues, OTA-CIT-268 (Washington, DC: U.S. Government Printing Office, February 1985).

Secondly, by improving the tools and methodologies for R&D, the infrastructure will impact the research process in many critical high technology industries, such as pharmaceuticals, airframes, chemicals, consumer electronics, and many others. Innovation and, hence, international competitiveness in these key R&D-intensive sectors can be improved.

The economy as a whole stands to benefit from increased technological capabilities of information systems and improved understanding of how to use them. A National Research and Education Network could be the precursor to a much broader high capacity network serving the United States, and many research applications developed for high performance computers result in techniques much more broadly applicable to commercial firms.

Scientific Importance

Research and development is, inherently, an information activity. Researchers generate, organize, and interpret information, build models, communicate, and archive results. Not surprisingly, then, they are now dependent on information technology to assist them in these tasks. Many major studies by many scientific and policy organizations over the years—as far back as the President's Science Advisory Committee (PSAC) in the middle 1960s, and as recently as a report by COSEPUP of the National Research Council published in 1988²—have noted these trends and analyzed the implications for science support. The key points are as follows:

• Scientific and technical information is increasingly being generated, stored, and distributed in electronic form;
• Computer-based communications and data handling are becoming essential for accessing, manipulating, analyzing, and communicating data and research results; and,
• In many computationally intensive R&D areas, from climate research to groundwater modeling to airframe design, major advances will depend upon pushing the state of the art in high performance computing, very large databases, visualization, and other related information technologies. Some of these applications have been labeled "Grand Challenges." These projects hold promise of great social benefit, such as designing new vaccines and drugs, understanding global warming, or modeling the world economy. However, for that promise to be realized in those fields, researchers require major advances in available computational power.
• Many proposed and ongoing "big science" projects, from particle accelerators and large array radio telescopes to the NASA EOS satellite project, will create vast streams of new data that must be captured, analyzed, archived, and made available to the research community. These new demands could well overtax the capability of currently available resources.

Timing

Government decisions being made now and in the near future will shape the long-term utility and effectiveness of the information technology infrastructure for science. For example:

• NSF is renewing its multi-year commitments to all or most of the existing National Supercomputing Centers.
• Executive agencies, under the informal auspices of the Federal Research Internet Coordinating Committee (FRICC), are developing a national "backbone" network for science. Decisions made now will have long term influence on the nature of the network, its technical characteristics, its cost, its management, serv-

² Panel on Information Technology and the Conduct of Research, Committee on Science, Engineering, and Public Policy, Information Technology and the Conduct of Research: The User's View (Washington, DC: National Academy Press, 1989).


providers will require access to make their products available to scientific users. The service providers may include government agencies (which provide access to government scientific databases, for example), libraries and library utilities, journal and textbook publishers, professional societies, and private software and database providers.

Observation 3: Several information policy issues will be raised in managing and using the network. Depending on how they are resolved, they could sharply restrict the utility and scope of network use in the scientific community.

Security and privacy have already become of major concern and will pose a problem. In general, users will want the network and the services on it to be as open as possible; however, they will also want the networks and services to be as robust and dependable as possible, free from deliberate or accidental disruption. Furthermore, different resources will require different levels of security. Some bulletin boards and electronic mail services may want to be as open and public as possible; others may require a high level of privacy. Some databases may be unique and vital resources that will need a very high level of protection; others may not be so critical. Maintaining an open, easily accessible network while protecting privacy and valuable resources will require careful balancing of legal and technological controls.

Intellectual property protection in an electronic environment may pose difficult problems. Providers will be concerned that electronic databases, software, and even electronic formats of printed journals and other writings will not be adequately protected. In some cases, the product itself may not be well protected under existing law. In other cases, electronic formats coupled with a communications network erode the ability to control restrictions on copying and disseminating.

Access controls may be called for on material that is deemed to be sensitive (although unclassified) for reasons of national security or economic competitiveness. Yet, the networks will be accessible worldwide and the ability to identify and control users may be limited.

The above observations have been broad, looking at the overall collection of information technology resources for science as an integrated system and at the questions raised by it. The remaining portion of this paper will deal specifically with high performance computers and networking.


Chapter 2

High Performance Computers

An important set of issues has been raised during the last 5 years around the topic of high performance computing (HPC). These issues stem from a growing concern in both the executive branch and in Congress that U.S. science is impeded significantly by lack of access to HPC¹ and by concerns over the competitiveness implications of new foreign technology initiatives, such as the Japanese "Fifth Generation Project." In response to these concerns, policies have been developed and promoted with three goals in mind.

1. To advance vital research applications currently hampered by lack of access to very high speed computers.

2. To accelerate the development of new HPC technology, providing enhanced tools for research and stimulating the competitiveness of the U.S. computer industry.

3. To improve software tools and techniques for using HPC, thereby enhancing their contribution to general U.S. economic competitiveness.

In 1984, the National Science Foundation (NSF) initiated a group of programs intended to improve the availability and use of high performance computers in scientific research. As the centerpiece of its initiative, after an initial phase of buying and distributing time at existing supercomputer centers, NSF established five National Supercomputer Centers.

Over the course of this and the next year, the initial multiyear contracts with the National Centers are coming to an end, which has provoked a debate about whether and, if so, in what form they should be renewed. NSF undertook an elaborate review and renewal process and announced that, depending on agency funding, it is prepared to proceed with renewing at least four of the centers.² In thinking about the next steps in the evolution of the advanced computing program, the science agencies and Congress have asked some basic questions. Have our perceptions of the needs of research for HPC changed since the centers were started? If so, how?

Have we learned anything about the effectiveness of the National Centers approach? Should the goals of the Advanced Scientific Computing (ASC) and other related Federal programs be refined or redefined? Should alternative approaches be considered, either to replace or to supplement the contributions of the centers?

OTA is presently engaged in a broad assessment of the impacts of information technology on research, and as part of that inquiry, is examining the question of scientific computational resources. It has been asked by the requesting committees for an interim paper that might help shed some light on the above questions. The full assessment will not be completed for several months, however; so this paper must confine itself to some tentative observations.

WHAT IS A HIGH PERFORMANCE COMPUTER?

The term "supercomputer" is commonly used in the press, but it is not necessarily useful for policy. In the first place, the definition of power in a computer is highly inexact and depends on many factors including processor speed, memory size, and so on. Secondly, there is not a clear lower boundary of supercomputer power. IBM 3090 computers come in a wide range of configurations, some of the largest of which are the basis of supercomputer centers at institutions such as Cornell and the Universities of Utah and Kentucky. Finally, technology is changing rapidly and with it our conceptions of power and capability of various types of machines. We use the more general term, "high performance computers," a term that includes a variety of machine types.

One class of HPC consists of very large, powerful machines, principally designed for very large numerical applications such as those encountered in science. These computers are the ones often referred to as "supercomputers." They are expensive, costing up to several million dollars each.

1 Peter D. Lax, Report of the Panel on Large-Scale Computing in Science and Engineering (Washington, DC: National Science Foundation, 1982).

2 One of the five centers, the John von Neumann National Supercomputer Center, has been based on ETA-10 technology. The Center has been asked to resubmit a proposal showing revised plans in reaction to the withdrawal of that machine from the market.


A large-scale computer's power comes from a combination of very high-speed electronic components and specialized architecture (a term used by computer designers to describe the overall logical arrangement of the computer). Most designs use a combination of "vector processing" and "parallelism." A vector processor is an arithmetic unit of the computer that performs a series of similar calculations in an overlapping, assembly-line fashion. (Many scientific calculations can be set up in this way.)

Parallelism uses several processors, assuming that a problem can be broken into large independent pieces that can be computed on separate processors. Currently, large mainframe HPCs such as those offered by Cray and IBM are only modestly parallel, having as few as two and as many as eight processors.3 The trend is toward more parallel processors on these large systems. Some experts anticipate machines with as many as 512 processors appearing in the near future. The key problem to date has been understanding how problems can be set up to take advantage of the potential speed of larger scale parallelism.

Several machines are now on the market that are based on the structure and logic of a large supercomputer, but use cheaper, slower electronic components. These systems make some sacrifice in speed, but cost much less to manufacture. Thus, an application that is demanding, but that does not necessarily require the resources of a full-size supercomputer, may be much more cost effective to run on such a "minisuper."

Other types of specialized systems have also appeared on the market and in the research laboratory. These machines represent attempts to obtain major gains in computation speed by means of fundamentally different architectures. They are known by colorful names such as "Hypercubes," "Connection Machines," "Dataflow Processors," "Butterfly Machines," "Neural Nets," or "Fuzzy Logic Computers." Although they differ in detail, many of these systems are based on large-scale parallelism. That is, their designers attempt to get increases in processing speed by hooking together in some way a large number (hundreds or even thousands) of simpler, slower, and, hence, cheaper processors. The problem is that computational mathematicians have not yet developed a good theoretical or experiential framework for understanding in general how to arrange applications to take full advantage of these massively parallel systems. Hence, they are still, by and large, experimental, even though some are now on the market and users have already developed applications software for them. Experimental as these systems may seem now, many experts think that any significantly large increase in computational power eventually must grow out of experimental systems such as these or from some other form of massively parallel architecture.

Finally, "workstations," the descendants of personal desktop computers, are increasing in power; new chips now in development will offer computing power nearly equivalent to that of a Cray 1 supercomputer of the late 1970s. Thus, although top-end HPCs will be correspondingly more powerful, scientists who wish to do serious computing will have a much wider selection of options in the near future.

A few policy-related conclusions flow from this discussion:

- The term "supercomputer" is a fluid one, potentially covering a wide variety of machine types, and the "supercomputer industry" is similarly increasingly difficult to identify clearly.
- Scientists need access to a wide range of high performance computers, ranging from desktop workstations to full-scale supercomputers, and they need to move smoothly among these machines as their research needs dictate.
- Hence, government policy needs to be flexible and broadly based, not overly focused on narrowly defined classes of machines.

HOW FAST IS FAST?

Popular comparisons of supercomputer speeds are usually based on processing speed, the measure being "FLOPS," or "Floating Point Operations Per Second." The term "floating point" refers to a particular format for numbers within the computer that is used for scientific calculation; and a floating

3 To distinguish between this modest level and the larger scale parallelism found on some more experimental machines, some experts refer to this limited parallelism as "multiprocessing."


in May 1987. In October 1988 a second 3090/600E was added. The Cornell center also operates several other smaller parallel systems, including an Intel iPSC/2, a Transtech NT 1000, and a Topologix T1000. Some 50 percent of the resources of the Northeast Parallel Architectures Center, which include two Connection Machines, an Encore, and an Alliant FX/80, are accessed by the Cornell facility.

Until October of 1988, all IBM computers were "on loan" to Cornell for as long as Cornell retained its NSF funding. The second IBM 3090/600, procured in October, will be paid for by an NSF grant. Over the past 4 years, corporate support for the Cornell facility accounted for 48 percent of the operating costs. During those same years, NSF and New York State accounted for 37 percent and 5 percent, respectively, of the facility's budget. This funding has allowed the center to maintain a staff of about 100.

The National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) is operated by the University of Illinois at Urbana-Champaign. The Center has over 2,500 academic users from about 82 academic affiliates. Each affiliate receives a block grant of time on the Cray X-MP/48, training for the Cray, and help using the network to access the Cray.

The NCSA received its Cray X-MP/24 in October 1985. That machine was upgraded to a Cray X-MP/48 in 1987. In October 1988 a Cray-2S/4-128 was installed, giving the center two Cray machines. This computer is the only Cray-2 now at an NSF national center. The center also houses a Connection Machine 2, an Alliant FX/80 and FX/8, and over 30 graphics workstations.

In addition to NSF funding, NCSA has solicited industrial support. Amoco, Eastman Kodak, Eli Lilly, FMC Corp., Dow Chemical, and Motorola have each contributed around $3 million over a 3-year period to the NCSA. In fiscal year 1989 corporate support has amounted to 11 percent of NCSA's funding. About 32 percent of NCSA's budget came from NSF, while the State of Illinois and the University of Illinois accounted for the remaining 27 percent of the center's $21.5 million budget. The center has a full-time staff of 198.

  Pittsburgh Supercomputing Center

The Pittsburgh Supercomputing Center (PSC) is run jointly by the University of Pittsburgh, Carnegie-Mellon University, and Westinghouse Electric Corp. More than 1,400 users from 44 States utilize the center. Twenty-seven universities are affiliated with PSC.

The center received a Cray X-MP/48 in March of 1986. In December of 1988 PSC became the first non-Federal laboratory to possess a Cray Y-MP. Both machines were being used simultaneously for a short time; however, the center has phased out the Cray X-MP. The center's graphics hardware includes a Pixar image computer, an Ardent Titan, and a Silicon Graphics IRIS workstation.

The operating projection at PSC for fiscal year 1990, a "typical year," has NSF supporting 58 percent of the center's budget, while industry and vendors account for 22 percent of the costs. The Commonwealth of Pennsylvania and the National Institutes of Health both support PSC, accounting for 8 percent and 4 percent of the budget, respectively. Excluding working students, the center has a staff of around 65.

San Diego Supercomputer Center

The San Diego Supercomputer Center (SDSC) is located on the campus of the University of California at San Diego and is operated by General Atomics. SDSC is linked to 25 consortium members but has a user base in 44 States. At the end of 1988, over 2,700 users were accessing the center. SDSC has 48 industrial partners who use the facility's hardware, software, and support staff.

A Cray X-MP/48 was installed in December 1985. SDSC's first upgrade, a Y-MP8/864, is planned for December 1989. In addition to the Cray, SDSC has 5 Sun workstations, two IRIS workstations, an Evans and Sutherland terminal, 5 Apollo workstations, a Pixar, an Ardent Titan, an SCS-40 minisupercomputer, a Supertek S-1 minisupercomputer, and two Symbolics machines.

The University of California at San Diego spends more than $250,000 a year on utilities and services for SDSC. For fiscal year 1990 the SDSC believes NSF will account for 47 percent of the center's operating budget. The State of California currently


provides $1.25 million per year to the center and, in 1988, approved funding of $6 million over 3 years to SDSC for research in scientific visualization. For fiscal year 1990 the State is projected to support 10 percent of the center's costs. Industrial support, which has given the center $12.6 million in donations and in-kind services, is projected to provide 15 percent of the total costs of SDSC in fiscal year 1990.

John von Neumann National Supercomputer Center

The John von Neumann National Supercomputer Center (JvNC), located in Princeton, New Jersey, is managed by the Consortium for Scientific Computing Inc., an organization of 13 institutions from New Jersey, Pennsylvania, Massachusetts, New York, Rhode Island, Colorado, and Arizona. Currently there are over 1,400 researchers from 100 institutes accessing the center. Eight industrial corporations utilize the JvNC facilities.

At present there are two Cyber 205s and two ETA-10s in use at the JvNC. The first ETA-10 was installed, after a 1-year delay, in March of 1988. In addition to these machines there are a Pixar II, two Silicon Graphics IRIS workstations, and video animation capabilities.

When the center was established in 1985 by NSF, the New Jersey Commission on Science and Technology committed $12.1 million to the center over a 5-year period. An additional $13.1 million has been set aside for the center by the New Jersey Commission for fiscal years 1991-1995. Direct funding from the State of New Jersey and university sources constitutes 15 percent of the center's budget for fiscal years 1991-1995. NSF will account for 60 percent of the budget. Projected industry revenue and cost sharing account for 25 percent of costs. Since the announcement by CDC to close its ETA subsidiary, the future of JvNC is uncertain. Plans have been proposed to NSF by JvNC to purchase a Cray Research Y-MP, eventually upgrading to a C-90. NSF is reviewing the plan, and a decision on renewal is expected in October of 1989.

OTHER HPC FACILITIES

Before 1984 only three universities operated supercomputers: Purdue University, the University of Minnesota, and Colorado State University. The NSF supercomputing initiative established five new supercomputer centers that were nationally accessible. States and universities began funding their own supercomputer centers, both in response to growing needs on campus and to an increased feeling on the part of State leaders that supercomputer facilities could be important stimuli to local R&D and, therefore, to economic development. Now, many State and university centers offer access to high performance computers;4 and the NSF centers are only part of a much larger HPC environment, including nearly 70 Federal installations (see table 2-1).

Supercomputer center operators perceive their roles in different ways. Some want to be a proactive force in the research community, leading the way by helping develop new applications, training users, and so on. Others are content to follow in the path that the NSF National Centers create. These differences in goals and missions lead to varied services and computer systems. Some centers are "cycle shops," offering computing time but minimal support staff. Other centers maintain a large support staff and offer consulting, training sessions, and even assistance with software development. Four representative centers are described below:

 Minnesota Supercomputer Center

The Minnesota Supercomputer Center, originally part of the University of Minnesota, is a for-profit computer center owned by the University. Currently, several thousand researchers use the center, more than 700 of whom are from the University of Minnesota. The Minnesota Supercomputing Institute, an academic unit of the University, channels university usage by providing grants to the students through a peer review process.

The Minnesota Supercomputer Center received its first machine, a Cray 1A, in September 1981. In mid-1985, it installed a Cyber 205; and in the latter part of that year, two Cray 2 computers were installed within 3 months of each other. Minnesota

4 The number cannot be estimated exactly. First, it depends on the definition of supercomputer one uses. Secondly, the number keeps changing as States announce new plans for centers and as large research universities purchase their own HPCs.


Table 2-1—Federal Unclassified Supercomputer Installations

Department of Energy: Los Alamos National Lab; Livermore National Lab, NMFECC; Livermore National Lab; Sandia National Lab, Livermore; Sandia National Lab, Albuquerque; Oak Ridge National Lab; Idaho Falls National Engineering; Argonne National Lab; Knolls Atomic Power Lab; Bettis Atomic Power Lab; Savannah/DOE; Richland/DOE; Schenectady Naval Reactors/DOE; Pittsburgh Naval Reactors/DOE

Department of Defense: Naval Research Lab; Naval Ship R&D Center; Fleet Numerical Oceanography; Naval Underwater System Command; Naval Weapons Center; Martin Marietta/NTB; Air Force Weapons Lab; Air Force Global Weather; Arnold Engineering and Development; Wright Patterson AFB; Aerospace Corp.; Army Ballistic Research Lab; Army/Tacom; Army/Huntsville; Army/Kwajalein; Army/WES (on order); Army/Warren; Defense Nuclear Agency

NASA: Ames; Goddard; Lewis; Langley; Marshall

Department of Commerce: National Inst. of Standards and Technology; National Oceanic & Atmospheric Administration

Environmental Protection Agency: Raleigh, North Carolina

Department of Health and Human Services: National Institutes of Health; National Cancer Institute

SOURCE: Office of Technology Assessment estimate.

bought its third Cray 2, the only one in use now, at the end of 1988, just after it installed its ETA-10. The ETA-10 has recently been decommissioned due to the closure of ETA. A Cray X-MP has been added, giving them a total of two supercomputers. The Minnesota Supercomputer Center has acquired more supercomputers than anyone outside the Federal Government.

The Minnesota State Legislature provides funds to the University for the purchase of supercomputer time. Although the University buys a substantial portion of supercomputing time, the center has many industrial clients whose identities are proprietary; they include representatives of the auto, aerospace, petroleum, and electronic industries. They are charged a fee for the use of the facility.

The Ohio Supercomputer Center

The Ohio Supercomputer Center (OSC) originated from a coalition of scientists in the State. The center, located on Ohio State University's campus, is connected to 20 other Ohio universities via the Ohio Academic Research Network (OARNET). As of January 1989, three private firms were using the Center's resources.

In August 1987, OSC installed a Cray X-MP/24, which was upgraded to a Cray X-MP/28 a year later. The center replaced the X-MP in August 1989 with a Cray Research Y-MP. In addition to Cray hardware, there are 40 Sun graphics workstations, a Pixar II, a Stellar graphics machine, a Silicon Graphics workstation, and an Abekas Still Store machine. The Center maintains a staff of about 35 people.

The Ohio General Assembly began funding the center in the summer of 1987, appropriating $7.5 million. In March of 1988, the Assembly allocated $22 million for the acquisition of a Cray Y-MP. Ohio State University has pledged $8.2 million to augment the center's budget. As of February 1989 the State has spent $37.7 million in funding.5 OSC's annual budget is around $6 million (not including the purchase/leasing of their Cray).

Center for High Performance Computing, Texas (CHPC)

The Center for High Performance Computing is located at The University of Texas at Austin. CHPC serves all 14 institutions, 8 academic institutions and 6 health-related organizations, in the University of Texas System.

5 J. Wuc, "Ohio: Blazing Computer," Ohio, February 1989, p. 12.


The University of Texas installed a Cray X-MP/24 in March 1986, and a Cray 14se in November of 1988. The X-MP is used primarily for research. For the time being, the Cray 14se is being used as a vehicle for the conversion of users to the Unix system. About 40 people staff the center.

Original funding for the center and the Cray X-MP came from bonds and endowments from both The University of Texas System and The University of Texas at Austin. The annual budget of CHPC is about $3 million. About 95 percent of the center's operating budget comes from State funding and endowments. Five percent of the costs are recovered from selling CPU time.

 Alabama Supercomputer Network

The George C. Wallace Supercomputer Center, located in Huntsville, Alabama, serves the needs of researchers throughout Alabama. Through the Alabama Supercomputer Network, 13 Alabama institutions, university and government sites, are connected to the center. Under contract to the State, Boeing Computer Services provides the support staff and technical skills to operate the center. Support staff are located at each of the nodes to help facilitate the use of the supercomputer from remote sites.

A Cray X-MP/24 arrived in 1987 and became operational in early 1988. In 1987 the State of Alabama agreed to finance the center. The State allocated $2.2 million for the center and $38 million to Boeing Computer Services for the initial 5 years. The average yearly budget is $7 million. The center has a support staff of about 25.

Alabama universities are guaranteed 60 percent of the available time at no cost, while commercial researchers are charged a user fee. The impetus for the State to create a supercomputer center has been stated as the technical superiority a supercomputer would bring, which would draw high-tech industry to the State, enhance interaction between industry and the universities, and promote research and the associated educational programs within the university.

Commercial Labs

A few corporations, such as Boeing Computer Services, have been selling high performance computer time for some time. Boeing operates a Cray X-MP/24. Other commercial sellers of high performance computing time include the Houston Area Research Center (HARC). HARC operates the only Japanese supercomputer in America, the NEC SX2. The center offers remote services.

Computer Sciences Corp. (CSC), located in Falls Church, Virginia, has a 16-processor FLEX/32 from Flexible Computer Corp., a Convex 120 from Convex Computer Corp., and a DAP 210 from Active Memory Technology. Federal agencies comprise two-thirds of CSC's customers.6 Power Computing Co., located in Dallas, Texas, offers time on a Cray X-MP/24. Situated in Houston, Texas, Supercomputing Technology sells time on its Cray X-MP/28. Opticom Corp., of San Jose, California, offers time on a Cray X-MP/24, Cray 1-M, Convex C220, and cl XP.

 Federal Centers

In an informal poll of Federal agencies, OTA identified 70 unclassified installations that operate supercomputers, confirming the commonly expressed view that the Federal Government still represents a major part of the market for HPC in the United States (see figure 2-1). Many of these centers serve the research needs of government scientists and engineers and are, thus, part of the total research computing environment. Some are available to non-Federal scientists; others are closed.

CHANGING ENVIRONMENT

The scientific computing environment has changed in important ways during the few years that NSF's Advanced Scientific Computing Programs have existed. Some of these changes are as follows:

The ASC programs, themselves, have not evolved as originally planned. The original NSF planning document for the ASC program proposed to establish 10 supercomputer centers over a 3-year period; only 5 were funded. Center managers have also expressed the strong opinion that NSF has not met many of its original commitments for

6 Norris Parker Smith, "More Than Just Buying Cycles," Supercomputer Review, April 1989.


14

Figure 2-1—Distribution of Federal Suparcomputers development of the Cray 3, a machine based on

Supercomputers gallium arsenide electronics,4 0 

3 5

3 0 

2 5 

2 0 

15

10

33

19 

10

2 1

DOE DoD NASA Commerce HSS EPA

Agencies

SOURCE: office of Technology Assessment, 1989.

funding in successive years of the contracts, forcingthe centers to change their operational priorities andsearch for support in other directions.

Technology has changed. There has been a burstof innovation in the HPC industry. At the top of theline, Cray Research developed two lines of ma-chines, the Cray 2 and the Cray X-MP (and itssuccessor, the Y-MP) that are much more powerfulthan the Cray 1, which was considered the leadingedge of supercomputing for several years by themid- 1980s. IBM has delivered several 3090s equippedwith multiple vector processors and has also becomea partner in a project to develop a new supercom-puter in a joint venture with SSI, a firm started bySteve Chen, a noted supercomputer architect previ-ously with Cray Research.

More recently, major changes have occurred inthe industry. Control Data has closed down ETA, itssupercomputer operation. Cray Research has beenbroken into two parts-Cray Computer Corp. andCray Research. Each will develop and market adifferent line of supercomputers. Cray Researchwill, initially, at least, concentrate on the Y-MPmodels, the upcoming C-90 machines, and theirlonger term successors. Cray Computer Corp., under

the leadership of Seymour Cray, will concentrate on

At the middle and lower end, the HPC industry has introduced several new so-called "minisupercomputers," many of them based on radically different system concepts, such as massive parallelism, and many designed for specific applications, such as high-speed graphics. New chips promise very high-speed desktop workstations in the near future.

Finally, three Japanese manufacturers, NEC, Fujitsu, and Hitachi, have been successfully building and marketing supercomputers that are reportedly competitive in performance with U.S. machines.7 While these machines have not yet penetrated the U.S. computer market, they indicate the potential competitiveness of the Japanese computer industry in international HPC markets, and raise questions for U.S. policy.

Many universities and State systems have established "supercomputer centers" to serve the needs of their researchers.8 Many of these centers have only recently been formed, and some have not yet installed their systems, so their operational experience is, at best, limited to date. Furthermore, some other centers operate systems that, while very powerful scientific machines, are not considered by all experts to be supercomputers. Nevertheless, these centers provide high performance scientific computing to the research community, and create new demands for Federal support for computer time.

Individual scientists and research teams are also getting Federal and private support from their sponsors to buy their own "minisupercomputers." In some cases, these systems are used to develop and check out software eventually destined to run on larger machines; in other cases, researchers seem to find these machines adequate for their needs. In either mode of use, these departmental or laboratory systems expand the range of possible sources researchers turn to for high performance computing. Soon, desktop workstations will have performance equivalent to that of supercomputers of a

decade ago at a significantly lower cost.

7 Since, as noted above, comparing the power and performance of supercomputers is a complex and arcane field, OTA will refrain from comparing or ranking systems in any absolute sense.

8 See National Association of State Universities and Land-Grant Colleges, Supercomputing for the 1990's: A Shared Responsibility (Washington, DC: January 1989).


Finally, some important changes have occurred in national objectives or perceptions of issues. For example, the development of a very high capacity national science network (or "internet") has taken on a much greater significance. Originally conceived of in the narrow context of tying together supercomputer centers and providing regional access to them, the science network has now come to be thought of by its proponents as a basic infrastructure, potentially extending throughout (and, perhaps, even beyond) the entire scientific, technical, and educational community.

Science policy is also changing, as important and costly new projects have been started or are being seriously considered. Projects such as the supercollider, the space station, NASA's Earth Observing System (EOS) program, and human genome mapping may seem at first glance to compete for funding with science networks and supercomputers. However, they will create formidable new demands for computation, data communications, and data storage facilities and, hence, constitute additional arguments for investments in an information technology infrastructure.

Finally, some of the research areas in the so-called "Grand Challenges"9 have attained even greater social importance, such as fluid flow modeling, which will help the design of faster and more fuel efficient planes and ships; climate modeling, to help understand long term weather patterns; and the structural analysis of proteins, to help understand diseases and design vaccines and drugs to fight them.

REVIEW AND RENEWAL OF THE NSF CENTERS

Based on the recent review, NSF has concluded that the centers, by and large, have been successful and are operating smoothly. That is, their systems are being fully used, they have trained many new users, and they are producing good science. In light of that conclusion, NSF has tentatively agreed to renewal for the three Cray-based centers and the IBM-based Cornell Center. The John von Neumann Center in Princeton has been based on ETA-10 computers. Since ETA was closed down, NSF put the review of the JvNC on hold pending review of a revised plan that has now been submitted. A decision is expected soon.

Due to the environmental changes noted above, if the centers are to continue in their present status as special NSF-sponsored facilities, the National Supercomputer Centers will need to sharply define their roles in terms of: 1) the users they intend to serve, 2) the types of applications they serve, and 3) the appropriate balance between service, education, and research.

The NSF centers are only a few of a growing number of facilities that provide access to HPC resources. Assuming that NSF's basic objective is to assure researchers access to the most appropriate computing for their work, it will be under increasing pressure to justify dedicating funds to one limited group of facilities. Five years ago, few U.S. academic supercomputer centers existed. When scientific demand was less, managerial attention was focused on the immediate problems of getting equipment installed and of developing an experienced user community. Under those circumstances, some ambiguity of purpose may have been acceptable and understandable. However, in light of the proliferation of alternative technologies and centers, as well as growing demand by researchers, unless the purposes of the National Centers are more clearly delineated, the facilities are at risk of being asked to serve too many roles and, as a result, serving none well.

Some examples of possible choices are as follows:

1. Provide Access to HPC

- Provide access to the most powerful, leading edge supercomputers available.
- Serve the HPC requirements for research projects of critical importance to the Federal Government, for example, the "Grand Challenge" topics.
- Serve the needs of all NSF-funded researchers for HPC.
- Serve the needs of the (academic, educational, and/or industrial) scientific community for HPC.

9 "Grand Challenge" research topics are questions of major scientific importance that require for progress substantially greater computing resources than are currently available. The term was first coined by Nobel Laureate physicist Kenneth Wilson.


academic Europe, supplying over 14 supercomputers to European universities.

The United Kingdom began implementing a high performance computing plan in 1985. The Joint Working Party on Advanced Research Computing's report of June 1985, "Future Facilities for Advanced Research Computing," recommended a national facility for advanced research computing. This center would have the most powerful supercomputer available; upgrade the United Kingdom's networking system, JANET, to ensure communications to remote users; and house a national organization for advanced research computing to promote collaboration with foreign countries and with industry, ensuring the effective use of these resources.12 Following this report, a Cray X-MP/48 was installed at the Atlas Computer Center in Rutherford. A Cray 1s was installed at the University of London. Between 1986 and 1989, some $11.5 million was spent on upgrading and enhancing JANET.13

Alvey was the United Kingdom's key information technology R&D program. The program promoted projects in information technology undertaken jointly by industry and academics. The United Kingdom began funding the Alvey program in 1983. During the first 5 years, 350 million pounds were allocated to the Alvey program. The program was eliminated at the end of 1988. Some research was picked up by other agencies, and many of the projects that were sponsored by Alvey are now submitting proposals to Esprit (see below).

The European Community began funding the European Strategic Programme for Research in Information Technology (Esprit) program in 1984, partly as a reaction to the poor performance of the European Economic Community in the market of information technology and partly as a response to MITI's 1981 computer programs. The program, funded by the European Community (EC), intends to "provide the European IT industry with the key components of technology it needs to be competitive on the world markets within a decade."14 The EC has designed a program that forces collaboration between nations, develops recognizable standards in the information technology industry, and promotes pre-competitive R&D. The R&D focuses on five main areas: microelectronics, software development, office systems, computer integrated manufacturing, and advanced information.

Phase I of Esprit, the first 5 years, received $3.88 billion in funding.15 The funding was split 50-50 by the EC and its participants. This was considered the catch-up phase. Emphasis was placed on basic research, on the expectation that marketable goods would follow. Many of the companies that participated in Phase I were small experimental companies.

Phase II, which begins in late 1989, is called commercialization. Marketable goods will be the major emphasis of Phase II. This implies that the larger firms will be the main industrial participants, since they have the capital needed to put a product on the market. The amount of funds for Phase II will be determined by the world environment in information technology and the results of Phase I, but has been estimated at around $4.14 billion.16

Almost all of the high performance computer technologies emerging from Europe have been based on massively parallel architectures. Some of Europe's parallel machines incorporate the transputer. Transputer technology (basically a computer on a chip) is based on high density VLSI (very large-scale integration) chips. The T800, Inmos's transputer, has the same power as Intel's 80386/80387 chip, the difference being in size and price. The transputer is about one-third the size and price of Intel's chip.17 The transputer, created by the Inmos company, had its initial R&D funded by the British government. Eventually Thorn EMI bought Inmos and the rights to the transputer. Thorn EMI recently sold Inmos to a French-Italian joint venture company, SGS-Thomson, just as it was beginning to be profitable.

12 "Future Facilities for Advanced Research Computing," the report of a Joint Working Party on Advanced Research Computing, United Kingdom, July 1985.
13 "Decision Paper on Supercomputers in Australia," Department of Industry, Technology and Commerce, April 1988, pp. 14-15.
14 "Esprit," Commission of the European Communities, p. 5.
15 "Esprit," Commission of the European Communities, p. 21.
16 Simon Perry, "European Esprit Effort Breaks Ground in Software Standards," Electronic Business, Aug. 15, 1988, pp. 90-91.
17 Graham K. Ellis, "Transputers Advance Parallel Processing," Research and Development, March 1989, p. 50.


Some of the more notable high performance computer products and R&D in Europe include:

• T.Node, formerly called Supernode P1085, is one of the more successful endeavors of the Esprit program. T.Node is a massively parallel machine that exploits the Inmos T800 transputer. A single node is composed of 16 transputers connected by two NEC VLSI chips and two additional transputers. The participants in the project are the University of Southampton, the Royal Signals and Radar Establishment, Thorn-EMI (all British), and the French firm Telemat. The prototype of the French T.Node, Marie, a massively parallel MIMD (multiple instruction, multiple data) computer, was delivered in April of 1988. The product is now being marketed in America.
• Project 415 is also funded by Esprit. Its project leader is Philips, the Dutch electronics group. This project, which consists of six groups, focuses on symbolic computation (artificial intelligence), rather than "number crunching" (mathematical operations by conventional supercomputers). Using parallel architecture, the project is developing operating systems and languages that they hope will be available in 5 years for the office environment.18
• The Flagship project, originally sponsored by the Alvey program, has created a prototype parallel machine using 15 processors. Its original participants were ICL, Imperial College, and the University of Manchester. Other Alvey projects worked with the Flagship project in designing operating systems and languages for the computer. By 1992 the project hopes to have a marketable product. Since cancellation of the Alvey program, Flagship has gained sponsorship from the Esprit program.
• The Supernum Project of West Germany, with the help of the French Isis program, currently is creating machinery with massively parallel architecture. The parallelism, based on Intel's 80386 microprocessors, is one of Esprit's more controversial and ambitious projects. Originally the project was sponsored by the West German government in their supercomputing program. A computer prototype was recently shown at the industry fair in Hanover. It will be marketed in Germany by the end of the year for around $14 million.
• The Supercluster, produced and manufactured by Parsytec GmbH, a small private company, exemplifies Silicon Valley initiative occurring in West Germany. Parsytec has received some financial backing from the West German government for their venture. This start-up firm sells a massively parallel machine that rivals superminicomputers or low-end supercomputers. The Supercluster architecture exploits the 32-bit transputer from Inmos, the T800. Sixteen transputer-based processors in clusters of four are linked together. This architecture is less costly than conventional machines, costing between $230,000 and $320,000.19 Parsytec has just begun to market its product in America.

Other Nations

The Australian National University recently purchased a Fujitsu VP-100. A private service bureau in Australia, Leading Edge, possesses a Cray Research computer. At least two sites in India have supercomputers, one at the Indian Meteorological Centre and one at ISC University. Two Middle Eastern petroleum companies house supercomputers, and Korea and Singapore both have research institutes with supercomputers.

Over half a dozen Canadian universities have high performance computers from CDC, Cray Research, or IBM. Canada's private sector has also invested in supercomputers. Around 10 firms possess high performance computers. The Alberta government, aside from purchasing a supercomputer and supporting associated services, has helped finance Myrias Computer Corp. A wholly owned U.S. subsidiary, Myrias Research Corp., manufactures the SP-2, a minisupercomputer.

One newly industrialized country is reported to be

developing a minisupercomputer of its own. The

18 Supercomputing Review, "European Transputer-based Projects a Challenge to U.S. Supercomputing Supremacy," November-December 1988, pp. 8-9.
19 John Gosch, "A New Transputer Design From West German Startup," Electronics, Mar. 3, 1988, pp. 71-72.


first Brazilian minisupercomputer, claimed to be capable of 150 mips, is planned to be available by the end of 1989. The prototype is a parallel machine with 64 processors, each with 32-bit capacity. The machine will sell for $2.5 million. The Funding Authority of Studies and Projects (FINEP) financed the project, with annual investment around $1 million.


Chapter 3

Networks

Information is the lifeblood of science; communication of that information is crucial to the advance of research and its applications. Data communication networks enable scientists to talk with each other, access unique experimental data, share results and publications, and run models on remote supercomputers, all with a speed, capacity, and ease that makes possible the posing of new questions and the prospect for new answers. Networks ease research collaboration by removing geographic barriers. They have become an invaluable research tool, opening up new channels of communication and increasing access to research equipment and facilities. Most important, networking is becoming the indispensable foundation for all other use of information technology in research.

Research networking is also pushing the frontiers of data communications and network technologies. Like electric power, highways, and the telephone, data communications is an infrastructure that will be crucial to all sectors of the economy. Businesses demand on-line transaction processing, and financial markets run on globally networked electronic trading. The evolution of telephony to digital technology allows merging of voice, data, and information services networking, although voice circuits still dominate the deployment of the technology. Promoting scientific research networking, which deals with data-intense outputs like satellite imaging and supercomputer modeling, should push networking technology that will find application far outside of science.

Policy action is needed, if Congress wishes to see the evolution of a full-scale national research and education network. The existing "internet" of scientific networks is a fledgling. As this conglomeration of networks evolves from an R&D enterprise to an operational network, users will demand round-the-clock, high-quality service. Academics, policy makers, and researchers around the world agree on the pressing need to transform it into a permanent infrastructure. This will entail grappling with difficult issues of public and private roles in funding, management, pricing/cost recovery, access, security, and international coordination, as well as assuring adequate funding to carry out initiatives that are set by Congress.

Research networking faces two particular policy complications. First, since the network in its broadest form serves most disciplines, agencies, and many different groups of users, it has no obvious lead champion. As a common resource, its potential sponsors may each be pleased to use it but unlikely to give it the priority and funding required to bring it to its full potential. There is a need for clear central leadership, as well as coordination of governments, the private sector, and universities. A second complication is a mismatch between the concept of a national research network and the traditionally decentralized, subsidized, mixed public-private nature of higher education and science. The processes and priorities of mission agency-based Federal support may need some redesigning, as they are oriented towards supporting ongoing mission-oriented and basic research, and may work less well at fostering large-scale scientific facilities and infrastructure that cut across disciplines and agency missions.

In the near term, the most important step is getting a widely connected, operational network in place. But the "bare bones" networks are a small part of the picture. Information that flows over the network, and the scientific resources and data available through the network, are the important payoffs. Key long-term issues for the research community will be those that affect the sort of information available over the network, who has access to it, and how much it costs. The main issue areas for scientific data networking are outlined below:

• research: to develop the technology required to transmit and switch data at very high rates;
• private sector participation: the role of the common carriers and telecommunication companies in developing and managing the network, and of private information firms in offering services;
• scope: who the network is designed to serve will drive its structure and management;
• access: balancing open use against security and information control, and determining who will be able to gain access to the network for what purpose;
• standards: the role of government, industry, users, and international organizations in setting and maintaining technical standards;
• management: public and private roles; degree of decentralization;
• funding: an operational network will require significant, stable, continuing investment; the financial responsibilities demarcated must reflect the interests of various players, from individual colleges through States and the Federal Government, in their stake in network operations and policies;
• economics: pricing and cost recovery for network use, central to the evolution and management of any infrastructure. Economics will drive the use of the network;
• information services: who will decide what types of services are to be allowed over the network, who is allowed to offer them; and who will resolve information issues such as privacy, intellectual property, fair competition, and security;
• long-term science policy issues: the networks' impacts on the process of science, and on access to and dissemination of valuable scientific and technical information.

THE NATIONAL RESEARCH AND EDUCATION NETWORK (NREN)

"A universal communications network connected to national and international networks enables electronic communication among scholars anywhere in the world, as well as access to worldwide information sources, special experimental instruments, and computing resources. The network has sufficient bandwidth for scholarly resources to appear to be attached to a world local area network."
EDUCOM, 1988.

"... a national research network to provide a distributed computing capability that links the government, industry, and higher education communities."
OSTP, 1987.

"The goal of the National Research and Education Network is to enhance national competitiveness and productivity through a high-speed, high-quality network infrastructure which supports a broad set of applications and network services for the research and instructional community."
EDUCOM/NTTF, March 1989.

"The NREN will provide high-speed communication access to over 1300 institutions across the United States within five years. It will offer sufficient capacity, performance, and functionality so that the physical distance between institutions is no longer a barrier to effective collaboration. It will support access to high-performance computing facilities and services ... and advanced information sharing and exchange, including national file systems and online libraries ... the NREN will evolve toward fully supported commercial facilities that support a broad range of applications and services."
FRICC, Program Plan for the NREN, May 23, 1989.

This chapter of the background paper reviews the status of and issues surrounding data networking for science, in particular the proposed NREN. It describes current Federal activities and plans, and identifies issues to be examined in the full report, to be completed in summer 1990.

The existing array of scientific networks consists of a hierarchy of local, regional, and national networks, linked into a whole. In this paper, "NREN" will be used to describe the next generation of the national "backbone" that ties them together. The term "Internet" is used to describe a more specific set of interconnected major networks, all of which use the same data transmission protocols. The most important are NSFNET and its major regional subnetworks, ARPANET, and several other federally initiated networks such as ESNET and NASNET. The term internet is used fairly loosely; at its broadest, the more generic term internet can be used to describe the international conglomeration of networks, with a variety of protocols and capabilities, which have a gateway into the Internet, and which could include such things as BITNET and MCI Mail.

The Origins of Research Networking

Research users were among the first to link computers into networks, to share information and broaden remote access to computing resources. DARPA created ARPANET in the 1960s for two purposes: to advance networking and data communications R&D, and to develop a robust communications network that would support the data-rich conversations of computer scientists. Building on


the resulting packet-switched network technology, other agencies developed specialized networks for their research communities (e.g., ESNET, CSNET, NSFNET). Telecommunications and electronics industries provided technology and capacity for these networks, but they were not policy leaders or innovators of new systems. Meanwhile, other research-oriented networks, such as BITNET and Usenet, were developed in parallel by academic and industry users who, not being grantees or contractors of Federal agencies, were not served by the agency-sponsored networks. These university and lab-based networks serve a relatively small number of specialized scientific users, a market that has been ignored by the traditional telecommunications industry. The networks sprang from the efforts of users (academic and other research scientists) and the Federal managers who were supporting them.1

The Growing Demand for Capability and Connectivity

Today there are thousands of computer networks in the United States. These networks range from temporary linkages between modem-equipped2 desktop computers linked via common carriers, to institution-wide area networks, to regional and national networks. Network traffic moves through different media, including copper wire and optical cables, signal processors and switches, satellites, and the vast common carrier system developed for voice communication. Much of this hodgepodge of networks has been linked (at least in terms of ability to interconnect) into the internet. The ability of any two systems to interconnect depends on their ability to recognize and deal with the form information flows take in each. These "protocols" are sets of technical standards that, in a sense, are the "languages" of communication systems. Networks with different protocols can often be linked together by computer-based "gateways" that translate the protocols between the networks.
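The role of a gateway can be illustrated with a small sketch. The two message formats below are invented purely for illustration (they do not correspond to any real network protocol); the point is that a gateway that understands both framings can pass traffic between otherwise incompatible networks.

```python
# Toy illustration of a protocol "gateway." Two networks frame the same
# message differently; a gateway that speaks both "languages" rewrites
# one framing into the other without touching the message content.
# All formats and field names here are invented for illustration.

def net_a_encode(sender, recipient, body):
    # Network A frames a message as a single delimited string.
    return f"A|{sender}:{recipient}:{body}"

def net_b_encode(sender, recipient, body):
    # Network B frames the same information as a keyed record.
    return {"proto": "B", "from": sender, "to": recipient, "payload": body}

def gateway_a_to_b(frame_a):
    # The gateway parses an A-format frame and re-emits it in B format.
    tag, rest = frame_a.split("|", 1)
    assert tag == "A", "not an A-network frame"
    sender, recipient, body = rest.split(":", 2)
    return net_b_encode(sender, recipient, body)

frame = net_a_encode("scientist@lab", "colleague@campus", "new results attached")
translated = gateway_a_to_b(frame)
print(translated["payload"])  # the message body survives translation unchanged
```

A production gateway must also reconcile addressing, error handling, and timing between the two networks; translating the framing, as above, is only the simplest part of the job.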

National networks have partially coalesced, where technology allows cost savings without losing connectivity. Over the past years, several agencies have pooled funds and plans to support a shared national backbone. The primary driver for this interconnecting and coalescing of networks has been the need for connectivity among users. The power of the whole is vastly greater than the sum of the pieces. Substantial costs are saved by extending connectivity while reducing duplication of network coverage. The real payoff is in connecting people, information, and resources. Linking brings users in reach of each other. Just as telephones would be of little use if only a few people had them, a research and education network's connectivity is central to its usefulness, and this connectivity comes both from the ability of each network to reach the desks, labs, and homes of its users and the extent to which various networks are, themselves, interconnected.

The Present NREN 

The national research and education network can be viewed as four levels of increasingly complex and flexible capability:

• physical wire/fiber optic common carrier "highways";
• user-defined, packet-switched networks;
• basic network operations and services; and
• research, education, database, and information services accessible to network users.

In a fully developed NREN, all of these levels of service must be integrated. Each level involves different technologies, services, research opportunities, engineering requirements, clientele, providers, regulators, and policy issues. A more detailed look at the policy problems can be drawn by separating the NREN into its major components.

Level 1: Physical wire/fiber optic common carrier highways

The foundation of the network is the physical conduits that carry digital signals. These telephone wires, optical fibers, microwave links, and satellites are the physical highways and byways of data transit. They are invisible to the network user. To provide the physical skeleton for the internet, government, industry, and university network managers lease circuits from public switched common carriers, such as AT&T, MCI, GTE, and NTN. In doing so they take advantage of the large system of circuits already laid in place by the telecommunications common carriers for other telephony and data markets. A key issue at this level is to what extent broader Federal agency and national telecommunications policies will promote, discourage, or divert the evolution of a research-oriented data network.

1 John S. Quarterman and Josiah C. Hoskins, "Notable Computer Networks," Communications of the ACM, vol. 29, No. 10, October 1986, pp. 932-971; John S. Quarterman, The Matrix: Networks Around the World, Digital Press, August 1989.
2 A "modem" converts information in a computer to a form a communication system can carry, and vice versa. It also automates some simple functions, such as dialing and answering the phone, and detecting and correcting transmission errors.

Level 2: User-defined subnetworks

The internet is a conglomeration of smaller foreign, regional, State, local, topical, private, government, and agency networks. Generally, these separately managed networks, such as SURANET, BARRNET, BITNET, and EARN, evolved along naturally occurring geographic, topical, or user lines, or mission agency needs. Most of these logical networks emerged from Federal research agency (including the Department of Defense) initiatives. In addition, there are more and more commercial, State and private, regional, and university networks (such as Accunet, Telenet, and Usenet), at once specialized and interlinked. Many have since linked through the Internet, while keeping to some extent their own technical and socioeconomic identity. This division into small, focused networks offers the advantage of keeping network management close to its users, but it demands standardization and some central coordination to realize the benefits of interconnection.

Networks at this level of operations are distinguished by independent management and technical boundaries. Networks often have different standards and protocols, hardware, and software. They carry information of different sensitivity and value. The diversity of these logical subnetworks matters to institutional subscribers (who must choose among network offerings), to regional and national network managers (who must manage and coordinate these networks into an internet), and to users (who can find the variety of alternatives confusing and difficult to deal with). A key issue is the management relationship among these diverse networks: to what extent is standardization and centralization desirable?

Level 3: Basic network operations and services

A small number of basic maintenance tools keeps the network running and accessible by diverse, distributed users. These basic services are software-based, provided for the users by network operators and computer manufacturers in operating systems. They include software for password recognition, electronic mail, and file transfer. These are core services necessary to the operation of any network. These basic services are not consistent across the current range of computers used by researchers. A key issue is to what extent these services should be standardized, and, as important, who should make those decisions.

Level 4: Value-added superstructure: links to research, education, and information services

The utility of the network lies in the information, services, and people that the user can access through the network. These value-added services provide specialized tools, information, and data for research and education. Today they include specialized computers and software, library catalogs and publication databases, archives of research data, conferencing systems, and electronic bulletin boards and publishing services that provide access to colleagues in the United States and abroad. These information resources are provided by volunteer scientists and by non-profit, for-profit, international, and government organizations. Some are amateur, poorly maintained bulletin boards; others are mature information organizations with well-developed services. Some are "free"; others recover costs through user charges.

Core policy issues are the appropriate roles for various information providers on the network. If the network is viewed as public infrastructure, what is "fair" use of this infrastructure? If the network eases access to sensitive scientific data (whether raw research data or government regulatory databases), how will this stress the policies that govern the relationships of industry, regulators, lobbyists, and experts? Should profit-seeking companies be allowed to market their services? How can we ensure that technologies needed for network maintenance, cost accounting, and monitoring will not be used inappropriately or intrusively? Who should set prices for various users and services? How will intellectual property rights be structured for electronically available information? Who is responsible


for the quality and integrity of the data provided and used by researchers on the network?

Research Networking as a Strategic High Technology Infrastructure

Research networking has dual roles. First, networking is a strategic, high technology infrastructure for science. More broadly applied, data networking enables research, education, business, and manufacturing, and improves the Nation's knowledge competitiveness. Second, networking technologies and applications are themselves a substantial growth area, meriting focused R&D.

Knowledge is the commerce of education and research. Today networks are the highways for information and ideas. They expand access to computing, data, instruments, the research community, and the knowledge they create. Data are expensive (relative to computing hardware) and are increasingly created in many widely distributed locations, by specialized instruments and enterprises, and then shared among many separate users. The more effectively that research information is disseminated to other researchers and to industry, the more effective is scientific progress and social application of technological knowledge. An internet of networks has become a strategic infrastructure for research.

The research networks are also a testbed for data communications technology. Technologies developed through the research networks are likely to enhance the productivity of all economic sectors, not just university research. The federally supported Internet has not only sponsored frontier-breaking network research, but has pulled data-networking technology with it. ARPANET catalyzed the development of packet-switching technology, which has expanded rapidly from R&D networking to multibillion-dollar data handling for business and financial transactions. The generic technologies developed for the Internet, both hardware (such as high-speed switches) and software for network management, routing, and user interface, will transfer readily into general data-networking applications. Government support for applied research can catalyze and integrate R&D, decrease risk, create markets for network technologies and services, transcend economic and regulatory barriers, and accelerate early technology development and deployment. This would not only bolster U.S. science and education, but would fuel industry R&D and help support the market and competitiveness of the U.S. network and information services industry.
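The packet-switching idea that ARPANET catalyzed can be sketched in miniature. This is a simplified illustration, not ARPANET's actual mechanism; the packet size and header fields below are invented for the example.

```python
# Minimal sketch of packet switching: a message is cut into small,
# individually addressed packets; each packet may travel (and arrive)
# independently, so the receiver uses sequence numbers to reassemble
# the original message. Sizes and field names are illustrative only.
import random

PACKET_SIZE = 8  # characters of payload per packet (arbitrarily small)

def packetize(message, dest):
    # Cut the message into numbered packets, each carrying its own address.
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"dest": dest, "seq": n, "data": c} for n, c in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive in any order; sequence numbers restore the stream.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Information is the lifeblood of science.", dest="remote-host")
random.shuffle(packets)  # simulate out-of-order delivery across the network
print(reassemble(packets))  # the original sentence is recovered intact
```

Because no circuit is held open for the whole conversation, many users' packets can share the same physical lines, which is what made the technique attractive for bursty computer traffic.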

Governments and private industries the world over are developing research networks, to enhance R&D productivity and to create testbeds for highly advanced communications services and technologies. Federal involvement in infrastructure is motivated by the need for coordination and nationally oriented investment, to spread financial burdens, and to promote social policy goals (such as furthering basic research).3 Nations that develop markets in network-based technologies and services will create information industry-based productivity growth.

 Federal Coordination of the Evolving Internet

NREN plans have evolved rapidly. Congressional interest has grown; in 1986, Congress requested the Office of Science and Technology Policy (OSTP) to report on options for networking for research and supercomputing.4 The resulting report, completed in 1987 by the interagency Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), called for a new Federal program to create an advanced national research network by the year 2000.5 This vision incorporated two objectives: 1) providing vital computer-communications network services for the Nation's academic research community, and 2) stimulating networking and communications R&D which would fuel U.S. industrial technology and commerce in the growing global data communications market.

The 1987 FCCSET report, building on ongoing Federal activities, addressed near-term questions over the national network scope, purposes, agency authority, performance targets, and budget. It did not resolve issues surrounding the long-term operation of a network, the role of commercial services in

3 Congressional Budget Office, New Directions for the Nation's Public Works, September 1988, p. xiii; CBO, Federal Policies for Infrastructure Management, June 1986.
4 P.L. 99-383, Aug. 21, 1986.
5 OSTP, A Research and Development Strategy for High Performance Computing, Nov. 20, 1987.


providing network operations and services, or interface with broader telecommunications policies.

A 1988 National Research Council report praised ongoing activities, and emphasized the need for coordination, stable funding, broadened goals and design criteria, integrated management, and increased private sector involvement.6

FCCSET's Subcommittee on Networking has since issued a plan to upgrade and expand the network.7 In developing this plan, agencies have worked together to improve and interconnect several existing networks. Most regional networks were joint creations of NSF and regional consortia, and have been part of the NSFNET world since their inception. Other quasi-private, State, and regional networks (such as CICNET, Inc., and CERFNET) have been started.

Recently, legislation has been reintroduced to authorize and coordinate a national research network.8 As now proposed, a National Research and Education Network would link universities, national laboratories, non-profit institutions and government research organizations, private companies doing government-supported research and education, and facilities such as supercomputers, experimental instruments, databases, and research libraries. Network research, as a joint endeavor with industry, would create and transfer technology for eventual commercial exploitation, and serve the data-networking needs of research and higher education into the next century.

 Players in the NREN 

The current Internet has been created by Federal leadership and funding, pulling together a wide base of university commitment, national lab and academic expertise, and industry interest and technology. The NREN involves many public and private actors, whose roles must be better delineated for effective policy. Each of these actors has vested interests and spheres of capabilities. Key players are:

• universities, which house most end users;
• the networking industry: the telecommunications, data communications, computer, and information service companies that provide networking technologies and services;
• State enterprises devoted to economic development, research, and education;
• industrial R&D labs (network users); and
• the Federal Government, primarily the national labs and research-funding agencies.

Federal funding and policy have stimulated the development of the Internet. Federal initiatives have been well complemented by States (through funding State networking and State universities' institutional and regional networking), universities (by funding campus networking), and industry (by contributing networking technology and physical circuits at sharply reduced rates). End users have experienced a highly subsidized service during this "experimental" stage. As the network moves to a bigger, more expensive, more established operation, how might these relative roles change?

Universities

Academic institutions house teachers, researchers, and students in all fields. Over the past few decades universities have invested heavily in libraries, local computing, campus networks, and regional network consortia. The money invested in campus networking far outweighs the investment in the NSFNET backbone. In general, academics view the NREN as fulfillment of a longstanding ambition to build a national system for the transport of information for research and education. EDUCOM has long labored from the "bottom" up, bringing together researchers and educators who used networks (or believed they could use them) for both research and teaching.

Networking Industry

There is no simple unified view of the NREN in the fragmented telecommunications "industry." The long-distance telecommunications common carriers generally see the academic market as too specialized and risky to offer much of a profit opportunity.

6 National Research Council, Toward a National Research Network (Washington, DC: National Academy Press, 1988), especially pp. 25-37.

7 FCCSET, or Federal Coordinating Council for Science, Engineering, and Technology, The Federal High Performance Computing Program (Washington, DC: OSTP, Sept. 8, 1989).

8 S. 1067, "The National High-Performance Computer Technology Act of 1989," introduced by Mr. Gore in May 1989; hearings were held on June 21, 1989. H.R. 3131, "The National High-Performance Computer Technology Act of 1989," introduced by Mr. Walgren.


However, companies have gained early experience with new technologies and applications by participating in university R&D; it is for this reason that industry has jointly funded the creation and development of NSFNET.

Various specialized value-added common carriers offer packet-switched services. They could in principle provide some of the same services that the NREN would provide, such as electronic mail. They are not, however, designed to meet the capacity requirements of researchers, such as transferring vast files of supercomputer-generated visualizations of weather systems, simulated airplane test flights, or econometric models. Nor can common carriers provide the "reach" to all carriers.

States

The interests of States in research, education, and economic development parallel Federal concerns. Some States have also invested in information infrastructure development. Many States have invested heavily in education and research networking, usually based in the State university system and encompassing, to varying degrees, private universities, State government, and industry. The State is a "natural" political boundary for network financing. In some States, such as Alabama, New York, North Carolina, and Texas, special initiatives have helped create statewide networks.

Industry Users

There are relatively few industry users of the Internet; most are very large R&D-intensive companies such as IBM and DEC, or small high-technology companies. Many large companies have internal business and research networks which link their offices and laboratories within the United States and overseas; many also subscribe to commercial services such as MCI Mail. However, these proprietary and commercial networks do not provide the Internet's connectivity to scientists or the high bandwidth and services so useful for research communications. Like universities and national labs, companies are a part of the Nation's R&D endeavor; and being part of the research community today includes being "on" the Internet. Appropriate industry use of the NREN should encourage interaction of industry, university, and government researchers, and foster technology transfer. Industry Internet users bring with them their own set of concerns, such as cost accounting, proper network use, and information security. Other non-R&D companies, such as business analysts, also are likely to seek direct network connectivity to universities, government laboratories, and R&D-intensive companies.

Federal

Three strong rationales drive a substantial, albeit changing, Federal involvement: support of mission and basic science, coordination of a strategic national infrastructure, and promotion of data-networking technology and industrial productivity. Another more modest goal is to rationalize duplication of effort by integrating, extending, and modernizing existing research networks. That is in itself quite important in the present Federal budgetary environment. The international nature of the network also demands a coherent national voice in international telecommunications standardization. The Internet's integration with foreign networks also justifies Federal concern over the international flow of militarily or economically sensitive technical information. The same university-government-industry linkages on a domestic scale drive Federal interests in the flow of information.

Federal R&D agencies' interest in research networking is to enhance their external research support missions. (Research networking is a small, specialized part of agency telecommunications. It is designed to meet the needs of the research community, rather than the agency operations and administrative telecommunications that are addressed in FTS2000.) The hardware and software communications technologies involved should be of broad commercial importance. The NREN plans reflect national interest in bolstering a serious R&D base and a competitive industry in advanced computer communications.

The dominance of the Federal Government in network development means that Federal agency interests have strongly influenced its form and shape. Policies can reflect Federal biases; for instance, the limitation of access to the early ARPANET to ARPA contractors left out many academics, who consequently created their own grass-roots, lower-capability BITNET.


International actors are also important. As with the telephone system, the Internet is inherently international. These links require coordination, for example on connectivity standards, higher-level network management, and security. This requirement implies the need for Federal-level management and policy.

The NREN in the International Telecommunications Environment

The nature and economics of an NREN will depend on the international telecommunications context in which it develops. Research networks are a leading edge of digital network technologies, but are only a tiny part of the communications and information services markets.

The 1990s will be a predominantly digital world; historically different computing, telephony, and business communications technologies are evolving into new information-intensive systems. Digital technologies are promoting systems and market integration. Telecommunications in the 1990s will revolve around flexible, powerful, "intelligent" networks. However, regulatory change and uncertainty, market turbulence, international competition, the explosion in information services, and significant changes in foreign telecommunications policies are all making telecommunications services more turbulent. This will cloud the research network's long-term planning.

High-bandwidth, packet-switched networking is at present a young market in comparison to commercial telecommunications. Voice overwhelmingly dominates other services (e.g., fax, e-mail, on-line data retrieval). While flexible, hybrid voice-data services are being introduced in response to business demand for data services, the technology base is optimized for voice telephony.

Voice communications brings to the world of computer telecommunications complex regulatory and economic baggage. Divestiture of the AT&T regulated monopoly opened the telecommunications market to new entrants, who have slowly gained long-haul market share and offered new technologies and information services. In general, however, the post-divestiture telecommunications industry remains dominated by the descendants of the old AT&T, and most of the impetus for service innovations comes from the voice market. One reason is uncertainty about the legal limits, for providing information services, imposed on the newly divested companies. (In comparison, the computer industry has been unregulated. With the infancy of the technology, and open markets, computer R&D has been exceptionally productive.) A crucial concern for long-range NREN planning is that scientific and educational needs might be ignored among the regulations, technology priorities, and economics of a telecommunications market geared toward the vast telephone customer base.

POLICY ISSUES

The goal is clear; but the environment is complex, and the details will be debated as the network evolves.

There is substantial agreement in the scientific and higher education community about the pressing national need for a broad-reaching, broad-bandwidth, state-of-the-art research network. The existing Internet provides vital communication, research, and information services, in addition to its concomitant role in pushing networking and data-handling technology. Increasing demand on network capacity has quickly saturated each network upgrade. In addition, the fast-growing demand is overburdening the current informal administrative arrangements for running the Internet. Expanded capability and connectivity will require substantial budget increases. The current network is adequate for broad e-mail service and for more restricted file transfer, remote logon, and other sophisticated uses. Moving to gigabit bandwidth, with appropriate network services, will demand substantial technological innovation as well as investment.

There are areas of disagreement and even broader areas of uncertainty in planning the future national research network. There are several reasons for this: the immaturity of data network technology, services, and markets; the Internet's nature as strategic infrastructure for diverse users and institutions; and the uncertainties and complexities of overriding telecommunications policy and economics.

First, the current Internet is, to an extent, an experiment in progress, similar to the early days of the telephone system. Technologies, uses, and potential markets for network services are still nascent.


on the computers that attach users to the network, than by the network itself. Study is needed on whether and how access can be controlled by technical fixes within the network, by computer centers attached to the network, by informal codes of behavior, or by laws.

Another approach is not to limit access, but to minimize the vulnerability of the network (and its information resources and users) to accidents or malice. In comparison, essentially anyone who has a modest amount of money can install a phone, or use a public phone, or use a friend's phone, and access the national phone system. However, criminal, fraudulent, and harassing uses of the phone system are illegal. Access is unrestricted, but use is governed.

Controlling International Linkages

Science, business, and industry are international; their networks are inherently international. It is difficult to block private telecommunications links with foreign entities, and public telecommunications is already international. However, there is a fundamental conflict between the desire to capture information for national or corporate economic gain, and the inherent openness of a network. Scientists generally argue that open network access fosters scientifically valuable knowledge exchange, which in turn leads to commercially valuable innovation.

Hierarchy of Network Capability

Investment in expanded network access must be balanced continually with the upgrading of network performance. As the network is a significant competitive advantage in research and higher education, access to the "best" network possible is important. There are also technological considerations in linking networks of various performance levels and various architectures. There is already a consensus that there should be a separate testbed or research network for developing and testing new network technologies and services, which will truly be at the cutting edge (and therefore also have the weaknesses of cutting-edge technology, particularly unreliability and difficulty of use).

 Policy and Management Structure

Possible management models include federally chartered nonprofit corporations, a single lead agency, an interagency consortium, government-owned contractor operations, and commercial operations; analogues include the Tennessee Valley Authority, the Atomic Energy Commission, the NSF Antarctic Program, and Fannie Mae. What are the implications of various scenarios for the nature of traffic and users?

Degree of Centralization

What is the value of centralized, federally accountable management for network access control, traffic management and monitoring, and security, compared to the value of decentralized operations, open access, and traffic? There are two key technical questions here: To what extent does network technology limit the amount of control that can be exerted over access and traffic content? To what extent does technology affect the strengths and weaknesses of centralized and decentralized management?

Mechanisms for Interagency Coordination

Interagency coordination has worked well so far, but with the scaling up of the network, more formal mechanisms are needed to deal with larger budgets and to more tightly coordinate further development.

Coordination With Other Networks

National-level resource allocation and planning must be coordinated with interdependent institutional and mid-level networking (the other two legs of networking).

Mechanisms for Standard Setting

Who should set standards, when should they be set, and how overarching should they be? Standards at some common-denominator level are absolutely necessary to make networks work. But excessive standardization may deter innovation in network technology, applications and services, and other standards.

Any one set of standards usually is optimal for some applications or users, but not for others. There are well-established international mechanisms for formal standards-setting, as well as strong international involvement in more informal standards development. These mechanisms have worked well, albeit slowly. Early standard-setting by agencies and their advisers accelerated the development of U.S. networks. In many cases the early established standards have become, with some modification, de facto national and even international standards. This is proving to be the case with ARPANET's protocol suite, TCP/IP. However, many have complained that agencies' relatively precipitous and closed standards determination has resulted in less-than-satisfactory standards. NREN policy should embrace standards-setting. Should it, however, encourage wider participation, especially by industry, than has been the case? U.S. policy must balance the need for international compatibility with the furthering of national interests.

 Financing and Cost Recovery

How can the capital and operating costs of the NREN be met? Issues include subsidies, user or access charges, cost recovery policies, and cost accounting. As an infrastructure that spans disciplines and sectors, the NREN is outside the traditional grant mechanisms of science policy. How might NREN economics be structured to meet costs and achieve various policy goals, such as encouraging widespread yet efficient use, ensuring equity of access, pushing technological development while maintaining needed standards, protecting intellectual property and sensitive information while encouraging open communication, and attracting U.S. commercial involvement and third-party information services?

Creating a Market

One of the key issues centers on the extent to which deliberate creation of a market should be built into network policy, and into the surrounding science policy system. Some believe it is important that the delivery of network access and services to academics eventually become a commercial operation, and that the current Federal subsidy and apparently "free" services will so accustom academics to free services that there will never be a market. How do you gradually create an information market for networks, or for network-accessible value-added services?

Funding and Charge Structures

Financing issues are akin to those in more traditional infrastructures, such as highways and waterways. These issues, which continue to dominate infrastructure debates, are Federal and private sector roles and the structure of Federal subsidies and incentives (usually to restructure payments and access to infrastructure services). Is there a continuing role for Federal subsidies? How can university accounting, OMB Circular A-21, and cost recovery practices be accommodated?

User fees for network access are currently charged as membership/access fees to institutions. End users generally are not charged. In the future, user fees may combine access/connectivity fees and use-related fees. They may be secured via a trust fund (as is the case with national highways, inland waterways, and airports), or be returned directly to operating authorities. A few regional networks (e.g., CICNET, Inc.) have set membership/connectivity fees to recover full costs. Many fear that user fees are not adequate for full funding/cost recovery.

Industry Participation

Industry has had a substantial financial role in network development. Industry participation has been motivated by a desire to stay abreast of data-networking technology as well as a desire to develop a niche in potential markets for research networking. It is thus desirable to have significant industry participation in the development of the NREN. Industry participation does several things: industry cost sharing makes the projects financially feasible; industry has the installed long-haul telecommunications base to build on; and industry involvement in R&D should foster technology transfer and, generally, the competitiveness of the U.S. telecommunications industry. Industry in-kind contributions to NSFNET, primarily from MCI and IBM, are estimated at $40 million to $50 million, compared to NSF's 5-year, $14 million budget.10 It is anticipated that the value of industry cost sharing (e.g., donated switches, lines, or software) for the NREN would be on the order of hundreds of millions of dollars.

10 "NSF Opens High-Speed Computer Network," Science, p. 22.


 Network Use

Network service offerings (e.g., databases and database-searching services, news, publication, and software) will need some policy treatment. There need to be incentives to encourage development of and access to network services, yet not unduly subsidize such services or compete with private business, while maintaining quality control. Many network services used by scientists have been "free" to the end user.

Economic and legal policies will need to be clarified for reference services, the commercial information industry, Federal data banks, university data resources, libraries, publishers, and generally all potential services offered over the network.11 These policies should be designed to encourage use of services, while allowing developers to capture the potential benefits of network services and ensuring legal and economic incentives to develop and market network services.

 Longer Term Science Policy Issues

The near-term technical implementation of the NREN is well laid out. However, longer-term policy issues will arise as the national network more deeply affects the conduct of science, such as:

• patterns of collaboration, communication and information transfer, education, and apprenticeship;
• intellectual property, and the value and ownership of information;
• export control of scientific information;
• publishing of research results;
• the "productivity" of research and attempts to measure it;
• communication among scientists, particularly across disciplines and between university, government, and industry scientists;
• potential economic and national security risks of international scientific networking, collaboration, and scientific communication;
• equity of access to scientific resources, such as facilities, equipment, databases, research grants, conferences, and other scientists. (Will a fully implemented NREN change the concentration of academic science and Federal funding in a limited number of departments and research universities, and of corporate science in a few large, rich corporations? What might be the impacts of networks on traditional routes to scientific priority and prestige?)
• controlling scientific information flow. What technologies and authority exist to control network-resident scientific information? How might these controls affect misconduct, quality control, economic and corporate proprietary protection, national security, and preliminary release of tentative or confidential research information that is scientifically or medically sensitive?
• cost and capitalization of doing research; to what extent might networking reduce the need for facilities or equipment?
• oversight and regulation of science, such as quality control, investigations of misconduct, research monitoring, awarding and auditing of government grants and contracts, data collection, accountability, and regulation of research procedures.12 Might national networking enable or encourage new oversight roles for governments?
• the access of various publics to scientists and research information;
• the dissemination of scientific information, from raw data, research results, and drafts of papers through finished research reports and reviews; might some scientific journals be replaced by electronic reports?
• legal issues, data privacy, ownership of data, and copyright. How might national networking interact with trends already underway in the scientific enterprise, such as changes in the nature of collaboration, sharing of data, and impacts of commercial potential on scientific research? Academic science traditionally has emphasized open and early communication, but some argue that pressures from competition for research grants and increasing potential for commercial value from basic research have dampened free communication. Might networks counter, or strengthen, this trend?

11 OMB Circular A-130, 50 Federal Register 52730 (Dec. 24, 1985); also H.R. 2381, the Information Policy Act of 1989, which restates the role of OMB and policies on government information dissemination.

12 U.S. Congress, Office of Technology Assessment, The Regulatory Environment for Science, OTA-TM-SET-34 (Washington, DC: U.S. Government Printing Office, February 1986).

Technical Questions

Several unresolved technical challenges are important to policy because they will help determine who has access to the network for what purposes. Such technical challenges include:

• standards for networks and network-accessible information services;
• requirements for interfaces to common carriers (local through international);
• requirements for interoperability across many different computers;
• improving user interfaces;
• reliability and bandwidth requirements;
• methods for measuring access and usage, and for charging users, which will determine who is most likely to pay for network operating costs; and
• methods to promote security, which will affect the balance between network and information vulnerability, privacy, and open access.

 Federal Agency Plans: FCCSET/FRICC 

A recently released plan by the Federal Research Internet Coordinating Committee (FRICC) outlines a technical and management plan for the NREN.13 This plan has been incorporated into the broader FCCSET implementation plan. The technical plan is well thought through and represents further refinement of the NREN concept. The key stages are:

Stage 1: upgrade and interconnect existing agency networks into a jointly funded and managed T1 (1.5 Mb/s) National Networking Testbed.14
Stage 2: integrate national networks into a T3 (45 Mb/s) backbone by 1993.
Stage 3: push a technological leap to a multigigabit NREN starting in the mid-1990s.

The proposal identifies two parts of an NREN: an operational network and networking R&D. A service network would connect about 1,500 labs and universities by 1995, providing reliable service and rapid transfer of very large data streams, such as are found in interactive computer graphics, in apparent real time. The currently operating agency networks would be integrated under this proposal, to create a shared 45 Mb/s service net by 1992. The second part of the NREN would be R&D on a gigabit network, to be deployed in the latter 1990s. The first part is primarily an organizational and financial initiative, requiring little new technology. The second involves major new research activity in government and industry.

The "service" initiative extends present activities of Federal agencies, adding a governance structure which includes the non-Federal participants (regional and local networking institutions and industry) in a national networking council. It formalizes what are now ad hoc arrangements of the FRICC, and expands its scale and scope. Under this effort, virtually all of the Nation's research and higher education communities will be interconnected. Traffic and traffic congestion will be managed via priority routing, with service for participating agencies guaranteed via "policy" routing techniques. The benefits will be in improving productivity for researchers and educators, and in creating and demonstrating the demand for networks and network services to the computing, telecommunications, and information industries.

The research initiative (called Stage 3 in the FCCSET reports) is more ambitious, seeking support for new research on communications technologies capable of supporting a network that is at least a thousand times faster than the 45 Mb/s net. Such a net could use the currently unused capabilities of optical fibers to vastly increase effective capability and capacity, which are congested by today's technology for switching and routing, and support the next generation of computers and communications applications. This effort would require a substantial Federal investment, but could invigorate the national communication technology base, and boost the long-term economic competitiveness of

13 FRICC, Program Plan for the National Research and Education Network, May 23, 1989. FRICC has members from DHHS, DOE, DARPA, USGS, NASA, NSF, and NOAA, and observers from the Internet Activities Board. FRICC is an informal committee that grew out of agencies' shared interest in coordinating related network activities and avoiding duplication of resources. As the de facto interagency coordination forum, FRICC was asked by NSF to prepare the NREN program plan.

14 See, e.g., NYSERNet NOTE, vol. 1, No. 1, Feb. 6, 1989. NYSERNet has been awarded a multimillion-dollar contract from DARPA to develop the National Networking Testbed.


the telecommunications and computing industries. The gigabit network demonstration can be considered similar to the Apollo project for communications technologies, albeit on a smaller and less spectacular scale. Technical research needed would involve media, switches, network design and control software, operating systems in connected computers, and applications.

There are several areas where the FRICC management plan, like other plans, is unclear. It calls for, but does not detail, any transition to commercial operations. It does not outline potential structures for long-term financing or cost recovery. And the national network council's formal area of responsibility is limited to Federal agency operations. While this scope is appropriate for a Federal entity, and the private sector has participated influentially in past Federal FRICC plans, the proposed council does not encompass all the policy actors that need to participate in a coordinated national network. The growth of non-Federal networks demonstrates that some interests, such as smaller universities on the fringes of Federally supported R&D, have not been served. The FRICC/FCCSET implementation plan for networking research focuses on the more near-term management problems of coordinated planning and management of the NREN. It does not deal with two extremely important and complex interfaces. At the most fundamental level, the common carriers, the network is part of the larger telecommunications labyrinth with all its attendant regulations, vested interests, and powerful policy combatants. At the top level, the network is a gateway into a global information supermarket. This marketplace of information services is immensely complex as well as potentially immensely profitable, and policy and regulation have not kept up with the many new opportunities created by technology.

The importance of institutional and mid-level networking to the performance of a national network, and the continuing fragmentation and regulatory and economic uncertainty of lower-level networking, signal a need for significant policy attention to coordinating and advancing lower-level networking. While there is a formal advisory role for universities, industry, and other users in the FRICC plan, it is difficult to say how, and how well, their interests would be represented in practice. It is not clear what form this may take, or whether it will necessitate some formal policy authority, but there is a need to accommodate the interests of universities (or some set of universities), industry research labs, and States in parallel to a Federal effort. The concerns of universities and the private sector about their role in the national network are reflected in EDUCOM's proposal for an overarching Federal-private nonprofit corporation, and to a lesser extent in NRI's vision. The FRICC plan does not exclude such a broader policy-setting body, but the current plan stops with Federal agency coordination.

Funding for the FRICC NREN, based on the analysis that went into the FCCSET report, is proposed at $400 million over 5 years, as shown below. This includes all Federal spending on national backbone hardware, software, and research, which would be funneled through DARPA and NSF and overseen by an interagency council. It includes some continued support for mid-level or institutional networking, but not the value of any cost sharing by industry or specialized network R&D by various agencies. This budget is generally regarded as reasonable and, if anything, modest considering the potential benefits (see table 3-2).


NREN Management Desiderata

All proposed initiatives share the policy goal of increasing the nation's research productivity and creating new opportunities for scientific collaboration. As a technological catalyst, an explicit national NREN initiative would reduce unacceptably high levels of risk for industry and help create new markets for advanced computer-communications services and technologies. What is needed now is a sustained Federal commitment to consolidate and fortify agency plans, and to catalyze broader national involvement. The relationship between science-oriented data networking and the broader telecommunications world will need to be better sorted out before the NREN can be made into a partly or fully commercial operation. As the engineering challenge of building a fully national data network is surmounted, management and user issues of economics, access, and control of scientific information will rise in importance.

15 For example, National Research Council, Toward a National Research Network (Washington, DC: National Academy Press, 1988).


Table 3-2-Proposed NREN Budget ($ millions)

                                                   FY90   FY91   FY92   FY93   FY94
FCCSET Stage 1 & 2 (upgrade; NSF) . . . . . . . .    14     23     55     50     50
FCCSET Stage 3 (gigabit+; DARPA) . . . . . . . . .   16     27     40     55     60
   Total . . . . . . . . . . . . . . . . . . . . .   30     50     95     95    110
S. 1067 authorization . . . . . . . . . . . . . .    50     50    100    100    100
H.R. 3131 authorization . . . . . . . . . . . . .    50     50    100    100    100

SOURCE: Office of Technology Assessment, 1989.

The NREN is a strategic, complex infrastructure which requires long-term planning. Consequently, network management should be stable (insulated from excessive politics and budget vagaries), yet allow for accountability, feedback, and course correction. It should be able to leverage funding, maximize cost efficiency, and create incentives for commercial networks. Currently, there is no single entity that is big enough, risk-protected enough, and free enough of regulation to make a proper national network happen. While there is a need to formalize current policy and management, there is concern that setting a strong federally focused structure in place might prevent a move to a more desirable, effective, and appropriate management system in the long run.

There is need for greater stability in NREN policy. The primary vehicle has been a voluntary coordinating group, the FRICC, consisting of program officers from research-oriented agencies, working within agency missions with loose policy guidance from the FCCSET. The remarkable cooperation and progress made so far depend on a complex set of agency priorities and budget fortunes, and continued progress must be considered uncertain.

The pace of the resolution of these issues will be controlled initially by the Federal budget of each participating agency. While the bulk of the overall investment rests with mid-level and campus networks, it cannot be integrated without strong central coordination, given present national telecommunications policies and market conditions for the required network technology. The relatively modest investment proposed by the initiative can have major impact by providing a forum for public-private cooperation in the creation of new knowledge, and a robust and willing experimental market to test new ideas and technologies.

For the short term there is a clear need to maintain the Federal initiative, to sustain the present momentum, to improve the technology, and to coordinate the expanding networks. The initiative should accelerate the aggregation of a sustainable domestic market for new information technologies and services. These goals are consistent with a primary purpose of improving the data communications infrastructure for U.S. science and engineering.