

Optimization Techniques at the I/O Forwarding Layer

Kazuki Ohta (presenter): Preferred Infrastructure, Inc., University of Tokyo

Dries Kimpe, Jason Cope, Kamil Iskra, Robert Ross: Argonne National Laboratory

Yutaka Ishikawa: University of Tokyo

Contact: [email protected]

Cluster 2010 Presentation


Background: Compute and Storage Imbalance

• Leadership-class computational scale:

• 100,000+ processes

• Advanced Multi-core architectures, Compute node OSs

• Leadership-class storage scale:

• 100+ servers

• Commercial storage hardware, Cluster file system

• Current leadership-class machines supply only 1 GB/s of storage throughput for every 10 TF of compute performance. This gap has grown by a factor of 10 in recent years.

• Bridging this imbalance between compute and storage is a critical problem for large-scale computation.


Previous Studies: Current I/O Software Stack

The current I/O software stack, from top to bottom:

• Parallel/Serial Applications

• High-Level I/O Libraries: storage abstraction, data portability (HDF5, NetCDF, ADIOS)

• POSIX I/O, MPI-IO: organizing accesses from many clients (ROMIO)

• VFS, FUSE

• Parallel File Systems: logical file system over many storage devices (PVFS2, Lustre, GPFS, PanFS, Ceph, etc.)

• Storage Devices


Challenge: Millions of Concurrent Clients

• 1,000,000+ concurrent clients present a challenge to the current I/O stack

• e.g., metadata performance, locking, the network incast problem, etc.

• An I/O forwarding layer is introduced to address this.

• All I/O requests are delegated to dedicated I/O forwarder processes.

• The I/O forwarder reduces the number of clients seen by the file system for all applications, without requiring collective I/O (see the sketch after the diagram below).

[Diagram: many compute processes on compute nodes send their I/O along the I/O path through a small number of I/O forwarders, which issue the requests to the parallel file system (PVFS2 servers and their disks).]
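
A minimal sketch of the delegation idea (illustrative only, not IOFSL code; the queue, file path, and rank count are assumptions): many compute-side clients enqueue their I/O requests to one forwarder, and only the forwarder touches the file system.

```python
# Many clients hand requests to one forwarder; only the forwarder opens and
# writes files, so the file system sees a single client instead of thousands.
import os
import queue
import threading

requests = queue.Queue()          # filled by the compute-side processes

def client(rank, path):
    """A compute process hands its write to the forwarder instead of the FS."""
    data = bytes([rank % 256]) * 4096
    requests.put((path, rank * len(data), data))

def forwarder():
    """The only process that opens files on the parallel file system."""
    fds = {}
    while True:
        path, offset, data = requests.get()
        if path not in fds:
            fds[path] = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
        os.pwrite(fds[path], data, offset)
        requests.task_done()

threading.Thread(target=forwarder, daemon=True).start()
for rank in range(128):           # stand-in for 128 compute processes
    client(rank, "/tmp/shared_output")
requests.join()                   # wait until the forwarder has issued all I/O
```

Because only the forwarder opens files, the file system here sees one client instead of 128, which is the effect the forwarding layer provides at scale.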


I/O Software Stack with I/O Forwarding

The I/O software stack with an I/O forwarding layer inserted, from top to bottom:

• Parallel/Serial Applications

• High-Level I/O Libraries

• POSIX I/O, MPI-IO

• VFS, FUSE

• I/O Forwarding: bridge between compute processes and the storage system (IBM ciod, Cray DVS, IOFSL)

• Parallel File Systems

• Storage Devices


Example I/O System: Blue Gene/P Architecture


I/O Forwarding Challenges

• Large Requests

• Forwarding latency

• Memory limits at the I/O node

• Variation in back-end file system node performance

• Small Requests

• The current I/O forwarding mechanism reduces the number of clients, but does not reduce the number of requests.

• Request-processing overhead at the file systems

• We propose two optimization techniques for the I/O forwarding layer:

• Out-Of-Order I/O Pipelining, for large requests.

• I/O Request Scheduler, for small requests.


Out-Of-Order I/O Pipelining

• Split large I/O requests into small fixed-size chunks.

• The chunks are forwarded out of order (see the sketch after the diagram below).

• Good points

• Reduces forwarding latency by overlapping file system I/O with the network transfer.

• I/O sizes are not limited by the memory size at the forwarding node.

• Little impact from the slowest file system node.

[Diagram: request timelines between client, IOFSL, and file system, comparing no pipelining with out-of-order pipelining across multiple server threads.]
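
A minimal sketch of the chunked, out-of-order forwarding idea (illustrative only, not the IOFSL implementation; the chunk size, thread count, and receive_chunk() helper are assumptions):

```python
# A large client write is received as fixed-size chunks in whatever order they
# arrive, and a pool of worker threads writes each chunk at its file offset as
# soon as its data is in.
import os
import threading
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 8 * 1024 * 1024        # assumed pipeline unit (8 MiB)
NUM_THREADS = 16                    # assumed forwarder worker threads
in_flight = threading.Semaphore(NUM_THREADS)   # caps buffered chunks

def write_chunk(fd, offset, data):
    try:
        # Chunks may complete in any order; pwrite places each at its offset.
        os.pwrite(fd, data, offset)
    finally:
        in_flight.release()

def forward_large_request(fd, total_size, receive_chunk):
    """Overlap the network receive with file system writes, chunk by chunk.

    receive_chunk() is a hypothetical helper returning (offset, data) for the
    next chunk that has arrived from the client, in arbitrary order.
    """
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        received = 0
        while received < total_size:
            in_flight.acquire()     # at most NUM_THREADS chunks buffered at once
            offset, data = receive_chunk()
            pool.submit(write_chunk, fd, offset, data)
            received += len(data)
    # Memory use is bounded by roughly CHUNK_SIZE * NUM_THREADS, independent of
    # the total request size, and slow writes overlap with the ongoing receive.
```

The IOFSL server itself services requests from a pool of threads (16- and 32-thread configurations are compared later); the sketch only illustrates the overlap and the bounded buffering.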


I/O Request Scheduler

• Schedule and merge the small requests at the forwarder

• Reduces the number of seeks

• Reduces the number of requests the file system actually sees

• Scheduling overhead must be minimal

• A Handle-Based Round-Robin (HBRR) algorithm provides fairness between files

• Request ranges are managed in an interval tree

• Contiguous requests are merged (see the sketch after the diagram below)

[Diagram: per-file-handle (H) queues of read and write requests; the scheduler picks N requests round-robin across handles and issues the I/O.]
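
A minimal sketch of handle-based round-robin scheduling with contiguous-range merging (illustrative only; the class and field names are assumptions, and a simple sort-and-merge stands in for the interval tree used in IOFSL):

```python
# HBRR sketch: one queue per file handle, one request taken per handle per
# round-robin turn, then touching byte ranges on the same handle are merged.
from collections import deque, namedtuple

Request = namedtuple("Request", "handle op offset length")  # op: 'read' or 'write'

class HBRRScheduler:
    def __init__(self):
        self.queues = {}        # file handle -> deque of pending Requests
        self.rr = deque()       # round-robin order of handles with work

    def submit(self, req):
        if req.handle not in self.queues:
            self.queues[req.handle] = deque()
            self.rr.append(req.handle)
        self.queues[req.handle].append(req)

    def pick(self, n):
        """Take up to n requests, one per handle per turn, then merge them."""
        batch = []
        while len(batch) < n and self.rr:
            handle = self.rr.popleft()
            q = self.queues[handle]
            batch.append(q.popleft())
            if q:
                self.rr.append(handle)   # handle still has work: requeue it
            else:
                del self.queues[handle]
        return merge_contiguous(batch)

def merge_contiguous(batch):
    """Coalesce same-handle, same-op requests whose byte ranges touch."""
    merged = []
    for r in sorted(batch, key=lambda r: (r.handle, r.op, r.offset)):
        last = merged[-1] if merged else None
        if (last and last.handle == r.handle and last.op == r.op
                and last.offset + last.length == r.offset):
            merged[-1] = last._replace(length=last.length + r.length)
        else:
            merged.append(r)
    return merged
```

For example, picking three queued 4 KiB writes at offsets 0, 4096, and 8192 on one handle returns a single merged 12 KiB write, while requests on other handles still get their round-robin turn.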


I/O Forwarding and Scalability Layer (IOFSL)

• IOFSL Project [Nawab 2009]

• Open-source I/O forwarding implementation

• http://www.iofsl.org/

• Portable to most HPC environments

• Network independent

• All network communication is done by BMI [Carns 2005]

• TCP/IP, InfiniBand, Myrinet, Blue Gene/P tree, Portals, etc.

• File system independent

• MPI-IO (ROMIO) / FUSE clients


IOFSL Software Stack

[Diagram: on the client side, applications use either the MPI-IO interface (ROMIO-ZOIDFS) or the POSIX interface (FUSE); both sit on the ZOIDFS client API over BMI. ZOIDFS protocol I/O requests travel over TCP, InfiniBand, Myrinet, etc. to the IOFSL server, where BMI hands them to the file system dispatcher backed by PVFS2 or libsysio.]


• Out-Of-Order I/O Pipelining and the I/O request scheduler have been implemented in IOFSL and evaluated in two environments:

• T2K Tokyo (Linux cluster) and ANL Surveyor (Blue Gene/P)


Evaluation on T2K: Spec

• T2K Open Supercomputer, Tokyo site

• http://www.open-supercomputer.org/

• 32-node research cluster

• 16 cores per node: 4 × 2.3 GHz quad-core Opterons

• 32 GB memory

• 10 Gb/s Myrinet network

• SATA disk (read: 49.52 MB/s, write: 39.76 MB/s)

• One IOFSL server, four PVFS2 servers, 128 MPI processes

• Software

• MPICH2 1.1.1p1

• PVFS2 CVS snapshot (approximately 2.8.2)


Evaluation on T2K: IOR Benchmark

• Each process issues the same amount of I/O

• Gradually increase the message size and observe the bandwidth change (a sketch of the access pattern follows)

• Note: IOR was modified to call fsync() for MPI-IO
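
A minimal mpi4py sketch of this IOR-like access pattern (illustrative, not the actual benchmark; the file name, per-process block size, and message-size sweep are assumptions):

```python
# Each rank writes the same amount of data contiguously at its own offset, the
# message (transfer) size is swept upward, and the file is synced so the data
# actually reaches the servers, as in the modified IOR runs.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
BLOCK = 64 * 1024 * 1024                          # bytes per process (assumed)

for msg_size in (4 * 2**10, 64 * 2**10, 1 * 2**20, 16 * 2**20):
    fh = MPI.File.Open(comm, "ior_like.dat",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    buf = bytearray(msg_size)
    base = rank * BLOCK                           # contiguous region per rank
    t0 = MPI.Wtime()
    for off in range(0, BLOCK, msg_size):
        fh.Write_at(base + off, buf)              # independent MPI-IO writes
    fh.Sync()                                     # the added fsync()-style flush
    fh.Close()
    elapsed = comm.allreduce(MPI.Wtime() - t0, op=MPI.MAX)
    if rank == 0:
        print(msg_size, nprocs * BLOCK / elapsed / 2**20, "MiB/s")
```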


Evaluation on T2K: IOR Benchmark, 128 procs

[Chart: bandwidth (MB/s, 0-120) vs. message size (4 KB to 65536 KB) for ROMIO PVFS2, IOFSL none, and IOFSL hbrr. Out-Of-Order Pipelining improvements: ~29.5%. I/O scheduler improvement: ~40.0%.]


Evaluation on Blue Gene/P: Spec

• Argonne National Laboratory BG/P “Surveyor”

• Blue Gene/P platform for research and development

• 1,024 nodes, 4,096 cores

• Four PVFS2 servers

• DataDirect Networks S2A9550 SAN

• 256 compute nodes with 4 I/O nodes were used.

Node card: 4 cores; node board: 128 cores; rack: 4,096 cores.


Evaluation on BG/P: BMI PingPong

[Chart: BMI PingPong bandwidth (MB/s, 0-900) vs. buffer size (1 KB to 4096 KB), comparing BMI TCP/IP and BMI ZOID over the CNK BG/P tree network.]


Evaluation on BG/P: IOR Benchmark, 256 nodes

[Chart: bandwidth (MiB/s, 0-800) vs. message size (1 KB to 65536 KB) for CIOD and IOFSL FIFO (32 threads). IOFSL shows performance improvements of ~42.0% at some message sizes and a performance drop of ~ -38.5% at others.]


Evaluation on BG/P: Thread Count Effect

[Chart: bandwidth (MiB/s, 0-800) vs. message size (1 KB to 65536 KB) for IOFSL FIFO with 16 threads vs. 32 threads; the 16-thread configuration outperforms the 32-thread one.]


Related Work

• Computational Plant Project @ Sandia National Laboratory

• First introduced the I/O forwarding layer

• IBM Blue Gene/L, Blue Gene/P

• All I/O requests are forwarded to I/O nodes

• The compute-node OS can be stripped down to minimal functionality, reducing OS noise

• ZOID: I/O Forwarding Project [Kamil 2008]

• Only on Blue Gene

• Lustre Network Request Scheduler (NRS) [Qian 2009]

• Request scheduler at the parallel file system nodes

• Only simulation results


Future Work

• Event-driven server architecture

• Reduced thread contention

• Collaborative Caching at the I/O forwarding layer

• Multiple I/O forwarders work collaboratively to cache both data and metadata

• Hints from MPI-IO

• Better cooperation with collective I/O

• Evaluation on other leadership scale machines

• ORNL Jaguar, Cray XT4, XT5 systems


Conclusions

• Implementation and evaluation of two optimization techniques at the I/O Forwarding Layer

• I/O pipelining that overlaps file system requests with network communication.

• An I/O scheduler that reduces the number of independent, non-contiguous file system accesses.

• Demonstrated a portable I/O forwarding layer and compared its performance with the existing HPC I/O software stack.

• Two Environments

• T2K Tokyo Linux cluster

• ANL Blue Gene/P Surveyor

• First I/O forwarding evaluations on a Linux cluster and Blue Gene/P

• First comparison of the BG/P IBM stack with an open-source stack


Thanks!

Kazuki Ohta (presenter): Preferred Infrastructure, Inc., University of Tokyo

Dries Kimpe, Jason Cope, Kamil Iskra, Robert Ross: Argonne National Laboratory

Yutaka Ishikawa: University of Tokyo

Contact: [email protected]
