
Page 1: Overview of Research at NPS using High Speed Networks and High Performance Computers

Overview of Research at NPS using High Speed Networks and High Performance Computers

CENIC 08
March 10, 2008

Jeffrey L. Haferman, Ph.D.

Page 2: Overview of Research at NPS using High Speed Networks and High Performance Computers


NPS 1955: CDC 102A

This is one of the earliest pictures of the first computer ("electronic automatic digital computer") used for academic instruction at NPS, the Control Data Corporation 102A. This picture was taken 25 March 1955.

Page 3: Overview of Research at NPS using High Speed Networks and High Performance Computers


NPS 1960: Navy’s First Supercomputer

"World's first all-solid-state computer -- Model 1, Serial No. 1 of Control Data Corporation's CDC1604 -- designed, built and personally certified in the lobby of Spanagel Hall by the legendary Seymour Cray."

Page 4: Overview of Research at NPS using High Speed Networks and High Performance Computers


2008: NPS HPCC Overview

• High performance computing includes scientific workstations, supercomputer systems, high-speed networks, special-purpose and experimental systems, the new generation of large-scale parallel systems, and application and systems software, with all components well integrated and linked over a high-speed network.

NPS HPC Mission: "Promote scientific computing at NPS by providing support to researchers and departments who wish to engage in scientific computing, and establish NPS as a nationally recognized HPC Center of Excellence."

• Website: http://www.nps.edu/hpc/

Page 5: Overview of Research at NPS using High Speed Networks and High Performance Computers


CENIC / CalREN

Corporation for Education Network Initiatives in California (CENIC)
California Research and Education Network (CalREN)

CalREN-HPR connects to the Internet2 Abilene network.

Page 6: Overview of Research at NPS using High Speed Networks and High Performance Computers


NPS HPC Resources

Department | OS | Hardware | Status | Uses
---------- | -- | -------- | ------ | ----
HPC Center | AIX 5.2 | IBM Power4+, 128 1.7 GHz processors, 128 GB memory | Sum 07 | Open
HPC Center | AIX 5.2 | IBM Power4, 8 1.3 GHz processors, 32 GB memory | Up | Open
ECE | Linux | SRC-6 reconfigurable computer, 2 dual processors + dual MAP | Up | VLSI, ECE
ECE | Linux | ecenet, 5 nodes, each with 4 single-core processors (20 total) | Up | Teaching
ECE | Linux (Rocks) | ecekvm, 5 nodes, each with 4 dual-core processors (40 total) | Up | Teaching
Mech Engineering | Linux (Rocks) | 22 PE (11 nodes w/ 2 2.6 GHz 252 Opterons/node) | Up | Combustion, dynamics
Mech Engineering | Dual-boot Linux/XP | 132 PE (33 nodes w/ 4 2.0 GHz 270 Opterons/node) | Up | Teaching
Mech Engineering | Linux | 132 PE (33 nodes w/ 4 2.2 GHz AMD Opterons/node) | Sum 07 | Shock & vibe
Meteorology | Linux | 16 PE (8 nodes w/ dual 2.0 GHz 246 Opterons/node) | Up | MM5 weather
Meteorology | IRIX | 8 PE SGI shared-memory Octane | Up | Visualization
MOVES | Linux (Rocks) | 10 PE (5 nodes w/ 2.4 GHz dual-core Intel Xeons/node) | Up | Experimental
MOVES | Linux (Rocks) | 16 PE (Sun cluster w/ 8 dual-core Opterons/node) | Up | Experimental
Oceanography | Linux (Fedora) | 32 PE (4 nodes w/ quad dual-core 1.8 GHz 865 Opterons/node) | Up | Ocean models
Mathematics | OS X (BSD) | 32 PE (4 dual quad-core Apple Clovertown Xserves) | Fall 07 | Research
Operations Research | NT | 12 PE (6 Dell PCs w/ dual-core processors) | Up | Marine Corps
Operations Research | OS X (BSD) | 8 PE (4 dual-processor Apple G5 Xserves) | Up | Data mining
Physics | OS X (BSD) | 128 PE (64 dual-processor Apple G5 Xserves) | Up | Laser physics
Sys Eng & Analysis | Windows | 24 3.2 GHz Pentiums (WinXP) + 6 dual-processor (Win2K) | Up | SAE apps

Page 7: Overview of Research at NPS using High Speed Networks and High Performance Computers


CHEETAH

Mechanical and Astronautical Engineering

• 33 nodes (4 processors per node) = 132 PE*
• 4 dual core 2.0 GHz AMD 270 Opterons / node
• 2 GB of PC3200 DDR RAM / head node
• 4 GB of PC3200 DDR RAM / slave node
• 1.66 TB accessible RAID 5 storage
• Gigabit 10/100/1000 BaseT NIC
• Fedora Linux (Core 4, kernel 2.6.17-1)
• Built by PSSC systems
• Fluid flow problems, teaching

*processor elements

Page 8: Overview of Research at NPS using High Speed Networks and High Performance Computers


CFD: Flow through a cross-flow fan

Flow visualization at 3,000 rpm [figure panels: peak efficiency, near stall]

Prof. Garth Hobson, NPS

Page 9: Overview of Research at NPS using High Speed Networks and High Performance Computers


WIPEOUT

MOVES Institute

• 8 nodes (4 processors per node) = 32 PE*
• 2 dual core 1.0 GHz AMD 270 Opterons / node
• 8 GB of RAM / node
• CentOS Linux 4.2
• Node 8 off due to power requirements
• Sun Sunfire V20z servers
• Modeling of Virtual Environments and Simulation

*processor elements

Page 10: Overview of Research at NPS using High Speed Networks and High Performance Computers


MOVES: Collaborations

Stanford

UCSB

Page 11: Overview of Research at NPS using High Speed Networks and High Performance Computers


ZEUS

HPC Center – IBM p690

• 1 32-processor node with 128 GB RAM
• 32 1.7 GHz Power4+ processors
• Theoretical peak speed of 870 GFlops
• AIX 5.2
• 6.0 TB storage
• General-purpose scientific computing


Page 12: Overview of Research at NPS using High Speed Networks and High Performance Computers


Distributed Learning

Webcast-in-a-box: classes via streaming video

Citrix: virtualization for application delivery

Elluminate: web-based collaboration system

Page 13: Overview of Research at NPS using High Speed Networks and High Performance Computers


ANASTASIA

Oceanography (Timour Radko, Assistant Professor)

• 5 nodes (8 processors per node) = 40 PE*
• 4 dual core 1.8 GHz AMD 865 Opterons / node
• 1 dual core 2.2 GHz AMD 875 Opteron (node n04)
• 20 GB of PC3200 DDR RAM / node
• 1.15 TB accessible RAID 5 storage
• Gigabit 10/100/1000 BaseT NIC
• 11-node capacity (6 more nodes would give 88 PE)
• Fedora Linux (Core 3, kernel 2.6.12-1)
• Built by PSSC systems
• Temperature / salinity distribution in the ocean (important for vessel buoyancy)

*processor elements

Page 14: Overview of Research at NPS using High Speed Networks and High Performance Computers


Example: salt fingers

Salt fingering contributes to vertical mixing in the oceans. Such mixing helps regulate the gradual overturning circulation of the ocean, which strongly affects climate.

• Important for vessel buoyancy
• Occurs near the poles, the Mediterranean, and the Gulf of Arabia
• Collaborating with UCSC and the University of Washington

Page 15: Overview of Research at NPS using High Speed Networks and High Performance Computers


APPLE CLUSTER

Physics (Bill Colson, Distinguished Professor)

• 64 nodes (2 processors per node) = 128 PE*
• Apple G5 Xserves (dual 2.3 GHz PowerPC)
• Laser physics
• OS X (BSD)
• Run time: 1 day versus 30 days

*processor elements

Page 16: Overview of Research at NPS using High Speed Networks and High Performance Computers


Example: FEL (free-electron laser)

UCLA

LLNL

Page 17: Overview of Research at NPS using High Speed Networks and High Performance Computers


RIEMANN

Applied Mathematics (Frank Giraldo, Associate Professor)

• 4 nodes (8 processors per node) = 32 PE*
• Dual quad-core Intel Clovertown processors / node
• NSEAMS (atmospheric model with a special grid)
• OS X (BSD)
• Installed 10/15/2007

*processor elements

Page 18: Overview of Research at NPS using High Speed Networks and High Performance Computers


Example: weather forecasting

Problem: compute
f(latitude, longitude, elevation, time) → temperature, pressure, humidity, wind velocity

Approach:
• Discretize the domain, e.g., a measurement point every 10 km
• Devise an algorithm to predict the weather at time t+1 given time t (see the sketch below)

Uses:
• Predict El Niño
• Routing of Navy ships and planes

Source: http://www.epm.ornl.gov/chammp/chammp.html
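A minimal Python sketch of this discretize-and-step idea follows. The grid size and the simple neighbor-averaging update are illustrative assumptions, not the model actually used at NPS; a real forecast model solves the full atmospheric dynamics at each step.

```python
# Sketch: predict the state at time t+1 from the state at time t on a
# discretized grid. Grid size and update rule are illustrative assumptions.
import numpy as np

nlat, nlon, nlev = 100, 100, 20              # hypothetical grid (~10 km spacing)
temp = np.full((nlat, nlon, nlev), 288.0)    # temperature field at time t (K)

def step(field):
    """Advance one timestep by relaxing each point toward the average of
    its horizontal neighbors -- a stand-in for the real physics."""
    neighbors = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                 np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 4.0
    return field + 0.1 * (neighbors - field)

temp = step(temp)                            # field at time t+1
```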

Page 19: Overview of Research at NPS using High Speed Networks and High Performance Computers


Weather Forecasting Requirements

• One piece is modeling the fluid flow in the atmosphere (a Navier-Stokes solve)
• Roughly 100 flops per grid point with a 1-minute timestep
• Computational requirements (checked in the sketch below):
  – To match real time, need 5 × 10^11 flops in 60 seconds ≈ 8 Gflop/s
  – Weather prediction (7 days in 24 hours): 56 Gflop/s
  – Climate prediction (50 years in 30 days): 4.8 Tflop/s
  – For use in policy negotiations (50 years in 12 hours): 288 Tflop/s
• Doubling the grid resolution multiplies the computation by more than 8×
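These sustained rates follow from simple arithmetic: one timestep per simulated minute at 5 × 10^11 flops per step. A short Python check follows; the per-step cost is taken from the slide, and small differences from the quoted figures are rounding:

```python
# Back-of-envelope check of the sustained flop rates quoted above.
FLOPS_PER_STEP = 5e11        # ~100 flops per grid point, from the slide
SIM_SECONDS_PER_STEP = 60    # one timestep per simulated minute

def required_rate(sim_seconds, wall_seconds):
    """Sustained flop/s needed to simulate sim_seconds in wall_seconds."""
    steps = sim_seconds / SIM_SECONDS_PER_STEP
    return steps * FLOPS_PER_STEP / wall_seconds

day, year = 86400.0, 365 * 86400.0
print(required_rate(60, 60) / 1e9, "Gflop/s")                 # real time: ~8.3
print(required_rate(7 * day, day) / 1e9, "Gflop/s")           # ~58 (slide: 56)
print(required_rate(50 * year, 30 * day) / 1e12, "Tflop/s")   # ~5.1 (slide: 4.8)
print(required_rate(50 * year, 0.5 * day) / 1e12, "Tflop/s")  # ~304 (slide: 288)
```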

Page 20: Overview of Research at NPS using High Speed Networks and High Performance Computers


HPC motivation

• Memory and processor capacity
• Example (worked in the sketch below):
  – A 3D weather simulation over Monterey Bay at 1-meter resolution
  – Say we consider a volume 2 km × 2 km × 1 km over the bay
  – Each 1-meter zone is characterized by temperature, wind direction, wind velocity, air pressure, and air moisture, for a total of (1+3+1+1+1) × 8 = 56 bytes, times 2000 × 2000 × 1000 zones (4 billion cubic meters)
  – Therefore about 224 GB of memory is needed to hold the data
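The same arithmetic in Python (the variable names are mine, for illustration):

```python
# Worked version of the memory estimate above.
VALUES_PER_ZONE = 1 + 3 + 1 + 1 + 1   # temperature, wind direction (3 components),
                                      # wind velocity, air pressure, air moisture
BYTES_PER_VALUE = 8                   # one double-precision float per value
bytes_per_zone = VALUES_PER_ZONE * BYTES_PER_VALUE   # 56 bytes
zones = 2000 * 2000 * 1000            # 1-meter zones in a 2 km x 2 km x 1 km volume
total_bytes = zones * bytes_per_zone
print(total_bytes / 1e9, "GB")        # -> 224.0 GB
```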

Page 21: Overview of Research at NPS using High Speed Networks and High Performance Computers


PIPS (ice model)

W. Maslowski, NPS, Monterey, CA; sponsors: NSF / ONR

Snapshots of (a) sea ice area (%) and drift (m/s), (b) divergence (1.e3/s), (c) shear (1.e3/s), and (d) vorticity (1.e3/s) for August 01, 1979 – stand-alone PIPS 3.0 model spinup

Issues:
1. Several GBs of output per day
2. Satellite data to initialize the model will soon be on the order of TB/day
3. Would like to increase resolution
4. Need fast networks to share data with other researchers
5. Need fast interconnects to solve the problem

Global warming implications
1/12 degree (9 km) resolution, 45 levels (1280 × 720 × 45 grid)
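To see why output runs to several GB per day, consider the size of a single snapshot on this grid. The field count and 8-byte precision below are illustrative assumptions, not taken from the model itself:

```python
# Rough size of one snapshot on the 1280 x 720 x 45 grid.
nx, ny, nz = 1280, 720, 45
fields = 4              # e.g., the four quantities in the caption (assumed 3-D)
bytes_per_value = 8     # double precision, assumed
snapshot_bytes = nx * ny * nz * fields * bytes_per_value
print(snapshot_bytes / 1e9, "GB per snapshot")   # ~1.3 GB; a few per day adds up
```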

Page 22: Overview of Research at NPS using High Speed Networks and High Performance Computers

Naval Postgraduate School Arctic Modeling Effort

A snapshot of sea surface height (cm)

Page 23: Overview of Research at NPS using High Speed Networks and High Performance Computers


Page 24: Overview of Research at NPS using High Speed Networks and High Performance Computers


GRID COMPUTING

NPS – UCSB – UCSC (Globus)

Page 25: Overview of Research at NPS using High Speed Networks and High Performance Computers


VISUALIZATION

• MOVES

• Weather Models  

• UC-Davis

Page 26: Overview of Research at NPS using High Speed Networks and High Performance Computers


Summary

• NPS is connected to the tier-2 CalREN backbone (HPR)

• We have just selected Foundry to perform the upgrades that will allow NPS to take advantage of 10 GigE

• The local "Institutional Network" (I-NET) includes NPS, CSUMB, the City of Monterey, MBARI, MPUSD, MCOE, DLI, and others.

• Teaching and research at NPS will benefit from the collaborative initiatives made possible by our relationship with CENIC.