Designing HPC, Big Data and Deep Learning Middleware for Exascale Systems: Challenges and Opportunities
Dhabaleswar K. (DK) Panda
The Ohio State University
E‐mail: [email protected]‐state.edu
http://www.cse.ohio‐state.edu/~panda
Talk at Tsinghua (October 2016)
HPCAC‐China (Oct ‘16) 2Network Based Computing Laboratory
High-End Computing (HEC): ExaFlop & ExaByte
• ExaFlop & HPC: 100-200 PFlops in 2016-2018; 1 EFlops in 2021-2024?
• ExaByte & BigData: 10K-20K EBytes in 2016-2018; 40K EBytes in 2020?
HPCAC‐China (Oct ‘16) 3Network Based Computing Laboratory
Drivers of Modern HPC Cluster Architectures
(Example systems: Tianhe-2, Titan, Stampede, Tianhe-1A)
• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD
• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
Key building blocks: accelerators/coprocessors with high compute density and high performance/watt (>1 TFlop DP on a chip); high-performance interconnects such as InfiniBand (<1 usec latency, 100 Gbps bandwidth); multi-core processors; SSD, NVMe-SSD, and NVRAM.
HPCAC‐China (Oct ‘16) 4Network Based Computing Laboratory
Three Major Computing Categories
• Scientific Computing
– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model
– Many discussions towards Partitioned Global Address Space (PGAS): UPC, OpenSHMEM, CAF, UPC++, etc.
– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Big Data/Enterprise/Commercial Computing
– Focuses on large data and data analysis
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
HPCAC‐China (Oct ‘16) 5Network Based Computing Laboratory
Designing Communication Middleware for Multi-Petaflop and Exaflop Systems: Challenges
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication library or runtime for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path); multi/many-core architectures; accelerators (NVIDIA and MIC)
Middleware co-design opportunities and challenges span all of these layers: performance, scalability, and fault-resilience.
HPCAC‐China (Oct ‘16) 6Network Based Computing Laboratory
Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
• Scalable collective communication
– Offload
– Non-blocking (see the sketch after this list)
– Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for GPGPUs and accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, ...)
• Virtualization
• Energy-awareness
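The non-blocking collectives mentioned above are exposed through the MPI-3 "I"-prefixed calls. A minimal sketch (illustrative only; the independent local work is a placeholder) of overlapping an MPI_Iallreduce with computation:

```c
/* Sketch: overlapping computation with a non-blocking MPI-3 collective. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local = 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction, then keep the CPU busy while the runtime
     * (or an offload-capable HCA) makes progress on the collective. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent local computation would go here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) printf("sum = %f\n", global);

    MPI_Finalize();
    return 0;
}
```

Offload-capable networks can progress such a collective in hardware while the host computes, which is what the "Offload" bullet above refers to.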
HPCAC‐China (Oct ‘16) 7Network Based Computing Laboratory
Overview of the MVAPICH2 Project
• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0); started in 2001, first version available in 2002
– MVAPICH2‐X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2‐GDR) and MIC (MVAPICH2‐MIC), Available since 2014
– Support for Virtualization (MVAPICH2‐Virt), Available since 2015
– Support for Energy‐Awareness (MVAPICH2‐EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,675 organizations in 83 countries
– More than 399,000 (> 0.39 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Jun '16 ranking)
• 12th-ranked 519,640-core cluster (Stampede) at TACC
• 15th ranked 185,344‐core cluster (Pleiades) at NASA
• 31st ranked 76,032‐core cluster (Tsubame 2.5) at Tokyo Institute of Technology and many others
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio‐state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) to
– Stampede at TACC (12th in Jun '16, 462,462 cores, 5.168 PFlops)
HPCAC‐China (Oct ‘16) 8Network Based Computing Laboratory
MVAPICH/MVAPICH2 Release Timeline and Downloads
[Chart: cumulative number of downloads from the OSU site, Sep 2004 through May 2016, rising toward ~400,000, annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 2.1, MV2-GDR 2.2rc1, MV2-MIC 2.0, MV2-Virt 2.1rc2, and MV2-X.]
HPCAC‐China (Oct ‘16) 9Network Based Computing Laboratory
Architecture of MVAPICH2 Software Family
• High-performance parallel programming models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection and analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi (MIC, KNL), NVIDIA GPGPU)
• Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail
• Transport mechanisms: shared memory, CMA, IVSHMEM; modern features: MCDRAM*, NVLink*, CAPI* (* upcoming)
HPCAC‐China (Oct ‘16) 10Network Based Computing Laboratory
MVAPICH2 Software Family
High-Performance Parallel Programming Libraries
• MVAPICH2 – Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
• MVAPICH2-X – Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs
• MVAPICH2-Virt – High-performance and scalable MPI for hypervisor- and container-based HPC cloud
• MVAPICH2-EA – Energy-aware and high-performance MPI
• MVAPICH2-MIC – Optimized MPI for clusters with Intel KNC
Microbenchmarks
• OMB – Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
Tools
• OSU INAM – Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
• OEMT – Utility to measure the energy consumption of MPI applications
HPCAC‐China (Oct ‘16) 11Network Based Computing Laboratory
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication
– Support for advanced IB mechanisms (UMR and ODP)
– Extremely minimal memory footprint (DCT)
– Dynamic and adaptive tag matching
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
• InfiniBand network analysis and monitoring (INAM)
• Virtualization and HPC cloud
HPCAC‐China (Oct ‘16) 12Network Based Computing Laboratory
One-way Latency: MPI over IB with MVAPICH2
[Charts: small- and large-message one-way latency across five fabrics (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, Omni-Path); small-message latencies fall in the 1.01-1.19 us range.]
Platforms: TrueScale-QDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path on 3.1 GHz deca-core (Haswell) Intel nodes with PCIe Gen3; ConnectX-3-FDR on 2.8 GHz deca-core (IvyBridge) Intel nodes with PCIe Gen3; each with the corresponding IB or Omni-Path switch.
HPCAC‐China (Oct ‘16) 13Network Based Computing Laboratory
Bandwidth: MPI over IB with MVAPICH2
[Charts: unidirectional and bidirectional bandwidth for the same five fabrics; unidirectional bandwidth ranges from about 3,373 MB/s up to 12,590 MB/s, and bidirectional bandwidth from about 6,228 MB/s up to 24,136 MB/s, across the adapters.]
Platforms: same Haswell/IvyBridge PCIe Gen3 nodes and switches as on the previous slide.
HPCAC‐China (Oct ‘16) 14Network Based Computing Laboratory
MVAPICH2 Two-Sided Intra-Node Performance
(Shared memory and kernel-based zero-copy support (LiMIC and CMA))
Latest MVAPICH2 2.2, Intel Ivy-bridge
[Charts: intra-node latency (0-1K bytes) and bandwidth, comparing CMA, shared-memory, and LiMIC channels. Latency: 0.18 us intra-socket, 0.45 us inter-socket. Peak bandwidth: about 14,250 MB/s intra-socket and 13,749 MB/s inter-socket.]
HPCAC‐China (Oct ‘16) 15Network Based Computing Laboratory
User-mode Memory Registration (UMR)
• Introduced by Mellanox to support direct local and remote non-contiguous memory access
– Avoids packing at the sender and unpacking at the receiver (see the datatype sketch after this slide)
• Available since MVAPICH2-X 2.2b
[Charts: MPI datatype latency, UMR-based design vs. default, for small/medium messages (4K-1M) and large messages (2M-16M).]
Connect‐IB (54 Gbps): 2.8 GHz Dual Ten‐core (IvyBridge) Intel PCI Gen3 with Mellanox IB FDR switch
M. Li, H. Subramoni, K. Hamidouche, X. Lu and D. K. Panda, High Performance MPI Datatype Support with User‐mode Memory Registration: Challenges, Designs and Benefits, CLUSTER, 2015
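For context, the kind of transfer that UMR accelerates is an ordinary MPI derived-datatype send. A minimal sketch (matrix dimensions and peer ranks are illustrative) of a non-contiguous column transfer that a UMR-capable runtime can perform without intermediate packing:

```c
/* Sketch: sending a non-contiguous matrix column with a derived datatype.
 * The MPI code is unchanged whether the runtime packs or uses UMR. */
#include <mpi.h>

#define ROWS 1024
#define COLS 512

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    static double a[ROWS][COLS];
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One column: ROWS blocks of 1 double, separated by a stride of COLS. */
    MPI_Datatype column;
    MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0)
        MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&a[0][0], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```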
HPCAC‐China (Oct ‘16) 16Network Based Computing Laboratory
Minimizing Memory Footprint by Direct Connect (DC) Transport
[Diagram: four nodes (P0-P7) attached to an IB network, all communicating through DC.]
• Constant connection cost (one QP for any peer)
• Full feature set (RDMA, atomics, etc.)
• Separate objects for send (DC Initiator) and receive (DC Target)
– DC Target identified by a "DCT Number"
– Messages routed with (DCT Number, LID)
– Requires the same "DC Key" to enable communication
• Available since MVAPICH2-X 2.2a
[Charts: NAMD Apoa1 (large data set) normalized execution time at 160-620 processes, and Alltoall connection memory at 80-640 processes, for RC, DC-Pool, UD, and XRC; DC-Pool stays close to UD's constant footprint (1-13 KB range) while connection-oriented transports grow with process count (up to 1,022-4,797 KB in this experiment).]
H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)
HPCAC‐China (Oct ‘16) 17Network Based Computing Laboratory
On-Demand Paging (ODP)
• Introduced by Mellanox to avoid pinning the pages of registered memory regions (see the registration sketch after the chart below)
• An ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance
M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, “Designing MPI Library with On‐Demand Paging (ODP) of InfiniBand: Challenges and Benefits”, SC, 2016
[Chart: application execution time (64 processes) with the pin-down-based design vs. the ODP-based design.]
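At the verbs level, ODP is requested at registration time. A minimal sketch, assuming a libibverbs build and HCA with ODP support (the IBV_ACCESS_ON_DEMAND flag); device selection and error handling are omitted:

```c
/* Sketch: registering a buffer with On-Demand Paging via libibverbs.
 * With IBV_ACCESS_ON_DEMAND the HCA faults pages in as they are touched
 * instead of pinning them up front. */
#include <infiniband/verbs.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx  = ibv_open_device(devs[0]);   /* first HCA, illustrative */
    struct ibv_pd      *pd   = ibv_alloc_pd(ctx);

    size_t len = 1 << 20;
    void  *buf = malloc(len);

    /* ODP registration: no page pinning, reduced memory footprint. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_ON_DEMAND);

    /* ... post RDMA operations that reference mr->lkey / mr->rkey ... */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```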
HPCAC‐China (Oct ‘16) 18Network Based Computing Laboratory
Dynamic and Adaptive Tag Matching
• Challenge: tag matching is a significant overhead for receivers; existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.
• Solution: a new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one.
• Results: better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption. Will be available in future MVAPICH2 releases.
[Charts: normalized total tag matching time and normalized memory overhead per process at 512 processes, relative to the default scheme (lower is better).]
Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]
HPCAC‐China (Oct ‘16) 19Network Based Computing Laboratory
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
• InfiniBand network analysis and monitoring (INAM)
• Virtualization and HPC cloud
HPCAC‐China (Oct ‘16) 20Network Based Computing Laboratory
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current model: separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• MVAPICH2-X: unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, and CAF (a hybrid sketch follows this slide)
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
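A minimal sketch of the hybrid model the unified runtime targets, mixing one-sided OpenSHMEM puts with an MPI collective in one program. Initialization ordering and supported OpenSHMEM versions are implementation-specific; consult the MVAPICH2-X user guide.

```c
/* Sketch: hybrid MPI + OpenSHMEM program on a unified runtime. */
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    static long src, dst;            /* symmetric: same address on every PE */

    MPI_Init(&argc, &argv);
    shmem_init();                    /* older OpenSHMEM versions use start_pes(0) */

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* One-sided PGAS transfer: put my rank into my right neighbor. */
    src = me;
    shmem_long_put(&dst, &src, 1, (me + 1) % npes);
    shmem_barrier_all();

    /* Two-sided MPI collective over the same processes. */
    long sum = 0;
    MPI_Allreduce(&dst, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    if (me == 0) printf("sum of ranks = %ld\n", sum);

    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```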
HPCAC‐China (Oct ‘16) 21Network Based Computing Laboratory
UPC++ Support in MVAPICH2-X
[Diagram: an MPI + UPC++ application runs either on the standard UPC++ runtime over GASNet (MPI conduit), or on MVAPICH2-X, where both the UPC++ interface and the MPI interfaces sit on the Unified Communication Runtime (UCR) over the network.]
• Full and native support for hybrid MPI + UPC++ applications
• Better performance compared to IBV and MPI conduits
• OSU Micro‐benchmarks (OMB) support for UPC++
• Available since MVAPICH2‐X (2.2rc1)
[Chart: inter-node broadcast time on 64 nodes (1 process per node) for 1K-1M byte messages; the MVAPICH2-X (MV2-X) runtime is up to 14x faster than the GASNet MPI and IBV conduits.]
HPCAC‐China (Oct ‘16) 22Network Based Computing Laboratory
Application-Level Performance with Graph500 and Sort
Graph500 execution time
[Chart: Graph500 execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM) designs.]
• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
Sort execution time
[Chart: Sort execution time for 500 GB-4 TB inputs on 512-4K processes, MPI vs. Hybrid.]
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
– 4,096 processes, 4 TB input size: MPI 2,408 s (0.16 TB/min); Hybrid 1,172 s (0.36 TB/min); 51% improvement over the MPI design
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013
HPCAC‐China (Oct ‘16) 23Network Based Computing Laboratory
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
– CUDA-aware MPI
– GPUDirect RDMA (GDR) support
– Control flow decoupling through GPUDirect Async
• Energy-awareness
• InfiniBand network analysis and monitoring (INAM)
• Virtualization and HPC cloud
HPCAC‐China (Oct ‘16) 24Network Based Computing Laboratory
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• Standard MPI interfaces used for unified data movement (a complete sketch follows this slide)
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High performance and high productivity
At the sender: MPI_Send(s_devbuf, size, …);
At the receiver: MPI_Recv(r_devbuf, size, …);
(data movement handled inside MVAPICH2)
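Putting the two calls above into a complete, illustrative program: device pointers obtained from cudaMalloc are passed directly to MPI_Send/MPI_Recv, and the CUDA-aware library handles staging or GPUDirect RDMA internally. Launch-time settings such as MV2_USE_CUDA=1 may be required; check the MVAPICH2-GDR user guide.

```c
/* Sketch: CUDA-aware point-to-point transfer with device buffers. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                       /* 1M floats, illustrative */
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));

    if (rank == 0)
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);        /* s_devbuf */
    else if (rank == 1)
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                   /* r_devbuf */

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```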
HPCAC‐China (Oct ‘16) 25Network Based Computing Laboratory
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory (see the managed-memory sketch after this list)
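The unified-memory bullet means managed allocations can also be handed straight to MPI. A hedged sketch, assuming a CUDA-aware MPI build with unified-memory support:

```c
/* Sketch: managed (unified) memory buffer used directly in MPI calls. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 16;                     /* illustrative size */
    double *buf;
    cudaMallocManaged((void **)&buf, n * sizeof(double), cudaMemAttachGlobal);

    if (rank == 0) {
        for (int i = 0; i < n; i++) buf[i] = i;     /* fill on the host */
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* A GPU kernel could consume buf here without explicit copies. */
    }

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```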
HPCAC‐China (Oct ‘16) 26Network Based Computing Laboratory
Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)
[Charts: GPU-GPU inter-node latency, bandwidth, and bidirectional bandwidth for MV2-GDR 2.2rc1, MV2-GDR 2.0b, and MV2 without GDR. Small-message latency drops to 2.18 us; annotated improvements of roughly 2x-3x over MV2-GDR 2.0b and 10x-11x over MV2 without GDR appear across the three panels.]
Platform: MVAPICH2-GDR 2.2rc1, Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 7.5, Mellanox OFED 3.0 with GPUDirect RDMA.
HPCAC‐China (Oct ‘16) 27Network Based Computing Laboratory
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
[Charts: normalized execution time on the CSCS GPU cluster (16-96 GPUs) and the Wilkes GPU cluster (4-32 GPUs) for the default, callback-based, and event-based designs.]
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non‐Contiguous Data Movement Processing on Modern GPU‐enabled Systems, IPDPS’16
On‐going collaboration with CSCS and MeteoSwiss (Switzerland) in co‐designing MV2‐GDR and Cosmo Application
Cosmo model: http://www2.cosmo‐model.org/content/tasks/operational/meteoSwiss/
HPCAC‐China (Oct ‘16) 28Network Based Computing Laboratory
MVAPICH2-GDS: Preliminary Results
[Charts: latency-oriented results (Send+kernel and Recv+kernel) for 8-512 byte messages and throughput-oriented (back-to-back) results for 16-1024 byte messages, Default vs. Enhanced-GDS.]
• Latency oriented: able to hide the kernel launch overhead
– 25% improvement at 256 bytes compared to the default behavior
• Throughput oriented: communication and computation tasks are offloaded asynchronously to the queue
– 14% improvement at 1KB message size
Platform: Intel Sandy Bridge, NVIDIA K20, and Mellanox FDR HCA. Will be available in a public release soon.
HPCAC‐China (Oct ‘16) 29Network Based Computing Laboratory
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
• InfiniBand network analysis and monitoring (INAM)
• Virtualization and HPC cloud
HPCAC‐China (Oct ‘16) 30Network Based Computing Laboratory
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio‐state.edu/tools/osu‐inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• OSU INAM v0.9.1 released on 05/13/16
• Significant enhancements to user interface to enable scaling to clusters with thousands of nodes
• Improve database insert times by using 'bulk inserts'
• Capability to look up list of nodes communicating through a network link
• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with MVAPICH2‐X 2.2rc1
• "Best practices" guidelines for deploying OSU INAM on different clusters
• Capability to analyze and profile node‐level, job‐level and process‐level activities for MPI communication– Point‐to‐Point, Collectives and RMA
• Ability to filter data based on type of counters using “drop down” list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2‐X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
HPCAC‐China (Oct ‘16) 31Network Based Computing Laboratory
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links / links in error state
• See the history unfold: play back historical state of the network
[Screenshots: Comet@SDSC clustered view (1,879 nodes, 212 switches, 4,377 network links); finding routes between nodes.]
HPCAC‐China (Oct ‘16) 32Network Based Computing Laboratory
OSU INAM Features (Cont.)
• Job-level view (screenshot: visualizing a 5-node job)
– Show different network metrics (load, error, etc.) for any live job
– Play back historical data for completed jobs to identify bottlenecks
• Node-level view: details per process or per node
– CPU utilization for each rank/node
– Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
– Network metrics (e.g. XmitDiscard, RcvError) per rank/node
• Estimated link utilization view (screenshot: estimated process-level link utilization)
– Classify data flowing over a network link at job-level and process-level granularity in conjunction with MVAPICH2-X 2.2rc1
HPCAC‐China (Oct ‘16) 33Network Based Computing Laboratory
Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
• InfiniBand network analysis and monitoring (INAM)
• Virtualization and HPC cloud
HPCAC‐China (Oct ‘16) 34Network Based Computing Laboratory
Can HPC and Virtualization be Combined?
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.1 (with and without OpenStack) is publicly available
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar '14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid '15
HPCAC‐China (Oct ‘16) 35Network Based Computing Laboratory
MVAPICH2-Virt with SR-IOV and IVSHMEM over OpenStack
• OpenStack is one of the most popular open-source solutions to build clouds and manage virtual machines
• Deployment with OpenStack
– Supporting SR-IOV configuration
– Supporting IVSHMEM configuration
– Virtual-machine-aware design of MVAPICH2 with SR-IOV
• An efficient approach to build HPC clouds with MVAPICH2-Virt and OpenStack
J. Zhang, X. Lu, M. Arnold, D. K. Panda. MVAPICH2 over OpenStack with SR‐IOV: An Efficient Approach to Build HPC Clouds. CCGrid, 2015.
HPCAC‐China (Oct ‘16) 36Network Based Computing Laboratory
Application-Level Performance on Chameleon
[Charts: SPEC MPI2007 execution time (milc, leslie3d, pop2, GAPgeofem, zeusmp2, lu) and Graph500 execution time (problem sizes 22,20 through 26,16) for MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native.]
• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI2007 with 128 processes
HPCAC‐China (Oct ‘16) 37Network Based Computing Laboratory
NSF Chameleon Cloud: A Powerful and Flexible Experimental Instrument
• Large-scale instrument
– Targeting Big Data, Big Compute, Big Instrument research
– ~650 nodes (~14,500 cores), 5 PB disk over two sites, 2 sites connected with a 100G network
• Reconfigurable instrument
– Bare-metal reconfiguration, operated as a single instrument, graduated approach for ease of use
• Connected instrument
– Workload and Trace Archive
– Partnerships with production clouds: CERN, OSDC, Rackspace, Google, and others
– Partnerships with users
• Complementary instrument
– Complementing GENI, Grid'5000, and other testbeds
• Sustainable instrument
– Industry connections
http://www.chameleoncloud.org/
HPCAC‐China (Oct ‘16) 38Network Based Computing Laboratory
Containers Support: Application-Level Performance on Chameleon
[Charts: Graph500 execution time (problem sizes 22,16 through 26,20) and NAS benchmarks (MG.D, FT.D, EP.D, LU.D, CG.D) for Container-Def, Container-Opt, and Native.]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 16% execution time reduction for NAS and Graph 500
• Compared to native, less than 9% and 4% overhead for NAS and Graph 500
• Optimized container support will be available with the upcoming release of MVAPICH2-Virt
HPCAC‐China (Oct ‘16) 39Network Based Computing Laboratory
MVAPICH2 – Plans for Exascale
• Performance and memory scalability toward 1M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, ...)
• MPI + Task*
• Enhanced optimization for GPU support and accelerators
• Taking advantage of advanced features of Mellanox InfiniBand
– Switch-IB2 SHArP*
– GID-based support*
• Enhanced communication schemes for upcoming architectures
– Knights Landing with MCDRAM*
– NVLink*
– CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for the MPI Tools Interface (as in MPI 3.1; see the sketch after this list)
• Extended checkpoint-restart and migration support with SCR
• Support for * features will be available in future MVAPICH2 releases
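For reference, the MPI Tools Interface mentioned above is the standard MPI_T API. A minimal sketch that enumerates the control variables an MPI library exposes (which variables exist is implementation-specific):

```c
/* Sketch: listing MPI_T control variables. Every rank prints here;
 * restrict to rank 0 in real code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, ncvars;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvars);

    for (int i = 0; i < ncvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("cvar %d: %s\n", i, name);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```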
HPCAC‐China (Oct ‘16) 40Network Based Computing Laboratory
Three Major Computing Categories
• Scientific Computing
– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model
– Many discussions towards Partitioned Global Address Space (PGAS): UPC, OpenSHMEM, CAF, UPC++, etc.
– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Big Data/Enterprise/Commercial Computing
– Focuses on large data and data analysis
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
HPCAC‐China (Oct ‘16) 41Network Based Computing Laboratory
Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (on the order of megabytes)
– Most communication is based on GPU buffers
• How to address these newer requirements?
– GPU-specific communication libraries (NCCL): NVIDIA's NCCL library provides inter-GPU communication
– CUDA-aware MPI (MVAPICH2-GDR): provides support for GPU-based communication
• Can we exploit CUDA-aware MPI and NCCL to support Deep Learning applications?
[Diagram: hierarchical communication combining a knomial inter-node stage with an NCCL ring for intra-node GPU-to-GPU transfers across the CPU/PLX topology.]
HPCAC‐China (Oct ‘16) 42Network Based Computing Laboratory
Efficient Broadcast: MVAPICH2-GDR and NCCL
• NCCL has some limitations
– Only works within a single node, so no scale-out across multiple nodes
– Degradation across the IOH (socket) for scale-up within a node
• We propose an optimized MPI_Bcast (see the hierarchical sketch after the citation below)
– Communication of very large GPU buffers (on the order of megabytes)
– Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits:
– CUDA-aware MPI_Bcast in MV2-GDR
– the NCCL broadcast primitive
Performance benefits, OSU micro-benchmarks: [chart] broadcast latency from 1 byte to 128 MB, MV2-GDR vs. MV2-GDR-Opt, with up to 100x improvement at large sizes.
Performance benefits, Microsoft CNTK DL framework: [chart] training time on 2-64 GPUs, MV2-GDR vs. MV2-GDR-Opt (25% average improvement).
Efficient Large Message Broadcast using NCCL and CUDA‐Aware MPI for Deep Learning, A. Awan , K. Hamidouche , A. Venkatesh , and D. K. Panda, The 23rd European MPI Users' Group Meeting (EuroMPI 16), Sep 2016 [Best Paper Runner‐Up]
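The hierarchical idea can be sketched at the application level as well. This is an illustration of the scheme, not the MVAPICH2-GDR implementation: an MPI broadcast among per-node leader ranks, followed by an NCCL ring broadcast inside each node. It assumes one GPU per MPI rank and a CUDA-aware MPI library; for brevity the NCCL communicator is created per call, whereas real code would cache it.

```c
#include <mpi.h>
#include <nccl.h>
#include <cuda_runtime.h>

/* Broadcast count floats in d_buf from global rank 0 to all GPUs. */
static void hierarchical_bcast(float *d_buf, int count, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    /* Per-node communicator; local rank 0 acts as the node leader. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, rank, MPI_INFO_NULL, &node_comm);
    int lrank, lsize;
    MPI_Comm_rank(node_comm, &lrank);
    MPI_Comm_size(node_comm, &lsize);

    /* Leader-only communicator for the inter-node stage. */
    MPI_Comm leader_comm;
    MPI_Comm_split(comm, lrank == 0 ? 0 : MPI_UNDEFINED, rank, &leader_comm);

    /* Stage 1: CUDA-aware MPI_Bcast among node leaders. */
    if (lrank == 0)
        MPI_Bcast(d_buf, count, MPI_FLOAT, 0, leader_comm);

    /* Stage 2: NCCL ring broadcast from the leader to the other local GPUs. */
    ncclUniqueId id;
    if (lrank == 0) ncclGetUniqueId(&id);
    MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, node_comm);

    ncclComm_t nccl_comm;
    ncclCommInitRank(&nccl_comm, lsize, id, lrank);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    ncclBcast(d_buf, count, ncclFloat, 0, nccl_comm, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    ncclCommDestroy(nccl_comm);
    if (leader_comm != MPI_COMM_NULL) MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Select one GPU per rank based on the local (per-node) rank. */
    MPI_Comm tmp;
    int lrank;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &tmp);
    MPI_Comm_rank(tmp, &lrank);
    MPI_Comm_free(&tmp);
    cudaSetDevice(lrank);

    const int count = 16 << 20;                 /* ~64 MB of floats, illustrative */
    float *d_buf;
    cudaMalloc((void **)&d_buf, count * sizeof(float));

    hierarchical_bcast(d_buf, count, MPI_COMM_WORLD);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```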
HPCAC‐China (Oct ‘16) 43Network Based Computing Laboratory
Efficient Reduce: MVAPICH2-GDR
• Can we optimize MVAPICH2-GDR to efficiently support DL frameworks?
– We need to design large-scale reductions using CUDA-awareness
– The GPU performs the reduction using kernels
– Overlap of computation and communication
– Hierarchical designs
• Proposed designs achieve 2.5x speedup over MVAPICH2-GDR and 133x over OpenMPI
[Chart: optimized large-size reduction latency (log scale) for 4M-128M messages, MV2-GDR vs. OpenMPI vs. MV2-GDR-Opt.]
(A CUDA-aware allreduce sketch for gradient aggregation follows this chart.)
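The target pattern for these large reductions is data-parallel gradient aggregation. A minimal sketch (gradient size and update step are illustrative) using a CUDA-aware MPI_Allreduce directly on a device buffer:

```c
/* Sketch: gradient aggregation with a CUDA-aware MPI_Allreduce. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const size_t n = 64UL << 20;          /* 64M floats: one gradient copy */
    float *d_grad;
    cudaMalloc((void **)&d_grad, n * sizeof(float));

    /* ... backpropagation kernel fills d_grad on this GPU ... */

    /* Sum gradients across all ranks directly from device memory. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, (int)n, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);

    /* ... scale by 1/nprocs and apply the optimizer update on the GPU ... */

    cudaFree(d_grad);
    MPI_Finalize();
    return 0;
}
```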
HPCAC‐China (Oct ‘16) 44Network Based Computing Laboratory
Large Message Optimized Collectives for Deep Learning
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large numbers of GPUs
[Charts: Reduce on 192 GPUs (2-128 MB) and Reduce of 64 MB on 128-192 GPUs; Allreduce on 64 GPUs (2-128 MB) and Allreduce of 128 MB on 16-64 GPUs; Bcast on 64 GPUs (2-128 MB) and Bcast of 128 MB on 16-64 GPUs.]
HPCAC‐China (Oct ‘16) 45Network Based Computing Laboratory
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based parallel training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
– Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
[Chart: GoogLeNet (ImageNet) training time on 8-128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); default Caffe at the larger GPU counts is an invalid use case.]
OSU‐Caffe will be publicly available soon
HPCAC‐China (Oct ‘16) 46Network Based Computing Laboratory
Three Major Computing Categories
• Scientific Computing
– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model
– Many discussions towards Partitioned Global Address Space (PGAS): UPC, OpenSHMEM, CAF, etc.
– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Big Data/Enterprise/Commercial Computing
– Focuses on large data and data analysis
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
HPCAC‐China (Oct ‘16) 47Network Based Computing Laboratory
How Can HPC Clusters with High-Performance Interconnect and Storage Architectures Benefit Big Data Applications?
Bring HPC and Big Data processing into a "convergent trajectory"!
• What are the major bottlenecks in current Big Data processing middleware (e.g. Hadoop, Spark, and Memcached)?
• Can the bottlenecks be alleviated with new designs by taking advantage of HPC technologies?
• Can RDMA-enabled high-performance interconnects benefit Big Data processing?
• Can HPC clusters with high-performance storage systems (e.g. SSD, parallel file systems) benefit Big Data applications?
• How much performance benefit can be achieved through enhanced designs?
• How to design benchmarks for evaluating the performance of Big Data middleware on HPC clusters?
HPCAC‐China (Oct ‘16) 48Network Based Computing Laboratory
Designing Communication and I/O Libraries for Big Data Systems: Challenges
• Applications
• Big Data middleware (HDFS, MapReduce, HBase, Spark, and Memcached) — upper-level changes?
• Programming models (Sockets) — other protocols?
• Communication and I/O library: point-to-point communication, threaded models and synchronization, QoS, fault-tolerance, I/O and file systems, virtualization, benchmarks
• Networking technologies (InfiniBand, 1/10/40/100 GigE and intelligent NICs)
• Commodity computing system architectures (multi- and many-core architectures and accelerators)
• Storage technologies (HDD, SSD, and NVMe-SSD)
HPCAC‐China (Oct ‘16) 49Network Based Computing Laboratory
• RDMA for Apache Spark
• RDMA for Apache Hadoop 2.x (RDMA‐Hadoop‐2.x)
– Plugins for Apache, Hortonworks (HDP) and Cloudera (CDH) Hadoop distributions
• RDMA for Apache HBase
• RDMA for Memcached (RDMA‐Memcached)
• RDMA for Apache Hadoop 1.x (RDMA‐Hadoop)
• OSU HiBD‐Benchmarks (OHB)
– HDFS, Memcached, and HBase Micro‐benchmarks
• http://hibd.cse.ohio‐state.edu
• User base: 190 organizations from 26 countries
• More than 18,300 downloads from the project site
• RDMA for Impala (upcoming)
The High‐Performance Big Data (HiBD) Project
Available for InfiniBand and RoCE
HPCAC‐China (Oct ‘16) 50Network Based Computing Laboratory
Different Modes of RDMA for Apache Hadoop 2.x
• HHH: Heterogeneous storage devices with hybrid replication schemes are supported in this mode of operation for better fault-tolerance as well as performance. This mode is enabled by default in the package.
• HHH-M: A high-performance in-memory based setup has been introduced in this package that can be utilized to perform all I/O operations in-memory and obtain as much performance benefit as possible.
• HHH-L: With parallel file systems integrated, HHH-L mode can take advantage of the Lustre available in the cluster.
• HHH-L-BB: This mode deploys a Memcached-based burst buffer system to reduce the bandwidth bottleneck of shared file system access. The burst buffer design is hosted by Memcached servers, each of which has a local SSD.
• MapReduce over Lustre, with/without local disks: Besides the HDFS-based solutions, this package also provides support to run MapReduce jobs on top of Lustre alone. Here, two different modes are introduced: with local disks and without local disks.
• Running with Slurm and PBS: Supports deploying RDMA for Apache Hadoop 2.x with Slurm and PBS in the different running modes (HHH, HHH-M, HHH-L, and MapReduce over Lustre).
HPCAC‐China (Oct ‘16) 51Network Based Computing Laboratory
Acceleration Case Studies and Performance Evaluation
• RDMA-based designs and performance evaluation
– HDFS
– MapReduce
– Spark
HPCAC‐China (Oct ‘16) 52Network Based Computing Laboratory
Design Overview of HDFS with RDMA
• Enables high-performance RDMA communication while supporting the traditional socket interface
• A JNI layer bridges the Java-based HDFS with the communication library written in native code (a minimal JNI sketch follows the citations below)
• Design features
– RDMA-based HDFS write
– RDMA-based HDFS replication
– Parallel replication support
– On-demand connection setup
– InfiniBand/RoCE support
[Diagram: HDFS applications use either the Java socket interface over 1/10/40/100 GigE / IPoIB networks or, for writes and replication, the OSU design through JNI and verbs over RDMA-capable networks (IB, iWARP, RoCE).]
N. S. Islam, M. W. Rahman, J. Jose, R. Rajachandrasekar, H. Wang, H. Subramoni, C. Murthy and D. K. Panda , High Performance RDMA‐Based Design of HDFS over InfiniBand , Supercomputing (SC), Nov 2012
N. Islam, X. Lu, W. Rahman, and D. K. Panda, SOR‐HDFS: A SEDA‐based Approach to Maximize Overlapping in RDMA‐Enhanced HDFS, HPDC '14, June 2014
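A minimal sketch of the JNI bridging idea referenced above. All class, package, and function names here are hypothetical, not the actual RDMA-for-Hadoop sources: Java hands a direct ByteBuffer to native code, which can post it to an RDMA engine without copying through the JVM heap.

```c
#include <jni.h>

/* Java side (hypothetical):
 *   package osu.example;
 *   class RdmaWriter { native int rdmaWrite(java.nio.ByteBuffer buf, int len); }
 */
JNIEXPORT jint JNICALL
Java_osu_example_RdmaWriter_rdmaWrite(JNIEnv *env, jobject obj,
                                      jobject byte_buffer, jint len)
{
    /* Direct buffers live outside the Java heap, so their address is
     * stable and can be registered with / posted to the verbs layer. */
    void *addr = (*env)->GetDirectBufferAddress(env, byte_buffer);
    if (addr == NULL)
        return -1;

    /* ... hand (addr, len) to the native RDMA engine, e.g. post a send
     * on a pre-registered memory region ... */
    (void)obj;
    return len;
}
```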
HPCAC‐China (Oct ‘16) 53Network Based Computing Laboratory
Enhanced HDFS with In-Memory and Heterogeneous Storage (Triple-H)
• Design features
– Three modes: Default (HHH), In-Memory (HHH-M), Lustre-Integrated (HHH-L)
– Policies to efficiently utilize the heterogeneous storage devices (RAM, SSD, HDD, Lustre)
– Eviction/promotion based on data usage pattern
– Hybrid replication
– Lustre-Integrated mode: Lustre-based fault-tolerance
[Diagram: applications sit on Triple-H, which applies data placement policies, eviction/promotion, and hybrid replication across RAM disk, SSD, HDD, and Lustre.]
N. Islam, X. Lu, M. W. Rahman, D. Shankar, and D. K. Panda, Triple-H: A Hybrid Approach to Accelerate HDFS on HPC Clusters with Heterogeneous Storage Architecture, CCGrid '15, May 2015
HPCAC‐China (Oct ‘16) 54Network Based Computing Laboratory
Design Overview of MapReduce with RDMA
• Enables high-performance RDMA communication while supporting the traditional socket interface
• A JNI layer bridges the Java-based MapReduce with the communication library written in native code
• Design features
– RDMA-based shuffle
– Prefetching and caching of map output
– Efficient shuffle algorithms
– In-memory merge
– On-demand shuffle adjustment
– Advanced overlapping: map, shuffle, and merge; shuffle, merge, and reduce
– On-demand connection setup
– InfiniBand/RoCE support
[Diagram: MapReduce (Job Tracker, Task Tracker, Map, Reduce) uses either the Java socket interface over 1/10/40/100 GigE / IPoIB networks or the OSU design through JNI and verbs over RDMA-capable networks (IB, iWARP, RoCE).]
M. W. Rahman, X. Lu, N. S. Islam, and D. K. Panda, HOMR: A Hybrid Approach to Exploit Maximum Overlapping in MapReduce over High Performance Interconnects, ICS, June 2014
HPCAC‐China (Oct ‘16) 55Network Based Computing Laboratory
Performance Benefits – RandomWriter & TeraGen in TACC-Stampede
[Charts: RandomWriter and TeraGen execution time for 80-120 GB data sizes, IPoIB (FDR) vs. the OSU RDMA-based design, on a 32-node cluster with a total of 128 maps.]
• RandomWriter: 3-4x improvement over IPoIB for 80-120 GB file size (execution time reduced by 3x)
• TeraGen: 4-5x improvement over IPoIB for 80-120 GB file size (execution time reduced by 4x)
HPCAC‐China (Oct ‘16) 56Network Based Computing Laboratory
Performance Benefits – Sort & TeraSort in TACC-Stampede
[Charts: Sort and TeraSort execution time for 80-120 GB data sizes, IPoIB (FDR) vs. OSU-IB (FDR). Sort: 32-node cluster with 128 maps and 64 reduces; TeraSort: 32-node cluster with 128 maps and 57 reduces.]
• Sort with a single HDD per node: 40-52% improvement over IPoIB for 80-120 GB data (reduced by 52%)
• TeraSort with a single HDD per node: 42-44% improvement over IPoIB for 80-120 GB data (reduced by 44%)
HPCAC‐China (Oct ‘16) 57Network Based Computing Laboratory
Acceleration Case Studies and Performance Evaluation
• RDMA-based designs and performance evaluation
– HDFS
– MapReduce
– Spark
HPCAC‐China (Oct ‘16) 58Network Based Computing Laboratory
Design Overview of Spark with RDMA
• Enables high-performance RDMA communication while supporting the traditional socket interface
• A JNI layer bridges the Scala-based Spark with the communication library written in native code
• Design features
– RDMA-based shuffle
– SEDA-based plugins
– Dynamic connection management and sharing
– Non-blocking data transfer
– Off-JVM-heap buffer management
– InfiniBand/RoCE support
X. Lu, M. W. Rahman, N. Islam, D. Shankar, and D. K. Panda, Accelerating Spark with RDMA for Big Data Processing: Early Experiences, Int'l Symposium on High Performance Interconnects (HotI '14), August 2014
X. Lu, D. Shankar, S. Gugnani, and D. K. Panda, High‐Performance Design of Apache Spark with RDMA and Its Benefits on Various Workloads, IEEE BigData ‘16, Dec. 2016.
HPCAC‐China (Oct ‘16) 59Network Based Computing Laboratory
Performance Evaluation on SDSC Comet – SortBy/GroupBy
• InfiniBand FDR, SSD, 64 worker nodes, 1536 cores (1536M 1536R)
• RDMA-based design for Spark 1.5.1
• RDMA vs. IPoIB with 1536 concurrent tasks, single SSD per node:
– SortBy: total time reduced by up to 80% over IPoIB (56 Gbps)
– GroupBy: total time reduced by up to 74% over IPoIB (56 Gbps)
[Charts: SortByTest and GroupByTest total time for 64-256 GB data sizes on 64 worker nodes / 1536 cores, IPoIB vs. RDMA.]
HPCAC‐China (Oct ‘16) 60Network Based Computing Laboratory
Performance Evaluation on SDSC Comet – HiBench PageRank
• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores (768/1536M 768/1536R)
• RDMA-based design for Spark 1.5.1
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node:
– 32 nodes / 768 cores: total time reduced by 37% over IPoIB (56 Gbps)
– 64 nodes / 1536 cores: total time reduced by 43% over IPoIB (56 Gbps)
[Charts: PageRank total time for the Huge, BigData, and Gigantic data sizes on 32 and 64 worker nodes, IPoIB vs. RDMA.]
HPCAC‐China (Oct ‘16) 61Network Based Computing Laboratory
Performance Evaluation on SDSC Comet: Astronomy Application
• Kira Toolkit [1]: distributed astronomy image processing toolkit implemented using Apache Spark
• Source extractor application, using a 65 GB dataset from the SDSS DR2 survey that comprises 11,150 image files
• Compares RDMA Spark performance with the standard Apache implementation using IPoIB
• Execution time for the Kira SE benchmark (65 GB dataset, 48 cores): RDMA Spark is 21% faster than Apache Spark (IPoIB)
[1] Z. Zhang, K. Barbary, F. A. Nothaft, E. R. Sparks, M. J. Franklin, D. A. Patterson, S. Perlmutter. Scientific Computing meets Big Data Technology: An Astronomy Use Case. CoRR, vol: abs/1507.03325, Aug 2015.
M. Tatineni, X. Lu, D. J. Choi, A. Majumdar, and D. K. Panda, Experiences and Benefits of Running RDMA Hadoop and Spark on SDSC Comet, XSEDE’16, July 2016
HPCAC‐China (Oct ‘16) 62Network Based Computing Laboratory
Performance Evaluation on SDSC Comet: Topic Modeling Application
• Application in Social Sciences: Topic modeling using big data middleware and tools*.
• Latent Dirichlet Allocation (LDA) for unsupervised analysis of large document collections.
• Computational complexity increases as the volume of data increases.
• RDMA Spark enabled simulation of largest test cases.
*Investigating Topic Models for Big Data Analysis in Social Science Domain, Nitin Sukhija, Nicole Brown, Paul Rodriguez, Mahidhar Tatineni, and Mark Van Moer, XSEDE16 Poster Paper
Default Spark with IPoIB does not scale!
RDMA‐Spark can scale very well!
HPCAC‐China (Oct ‘16) 63Network Based Computing Laboratory
Looking into the Future ...
• Exascale systems will be constrained by
– Power
– Memory per core
– Data movement cost
– Faults
• Programming models, runtimes, and middleware need to be designed for
– Scalability
– Performance
– Fault-resilience
– Energy-awareness
– Programmability
– Productivity
• This talk highlighted some of the issues and challenges
• Continuous innovation is needed on all these fronts
HPCAC‐China (Oct ‘16) 64Network Based Computing Laboratory
Funding Acknowledgments
Funding support by: [sponsor logos]
Equipment support by: [vendor logos]
HPCAC‐China (Oct ‘16) 65Network Based Computing Laboratory
Personnel Acknowledgments
Current Students
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.‐H. Chu (Ph.D.)
Past Students – A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist– S. Sur
Past Post‐Docs– D. Banerjee
– X. Besseron
– H.‐W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– S. Guganani (Ph.D.)
– J. Hashmi (Ph.D.)
– N. Islam (Ph.D.)
– M. Li (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists– K. Hamidouche
– X. Lu
– H. Subramoni
Past Programmers– D. Bureddy
– M. Arnold
– J. Perkins
Current Research Specialist– J. Smith
– M. Rahman (Ph.D.)
– D. Shankar (Ph.D.)
– A. Venkatesh (Ph.D.)
– J. Zhang (Ph.D.)
– S. Marcarelli
– J. Vienne
– H. Wang
HPCAC‐China (Oct ‘16) 66Network Based Computing Laboratory
• Looking for Bright and Enthusiastic Personnel to join as
– Post‐Doctoral Researchers
– PhD Students
– Hadoop/Spark/Big Data Programmer/Software Engineer
– MPI Programmer/Software Engineer
• If interested, please contact me at this conference and/or send an e‐mail to [email protected]‐state.edu
Multiple Positions Available in My Group
HPCAC‐China (Oct ‘16) 67Network Based Computing Laboratory
[email protected]‐state.edu
Thank You!
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The MVAPICH2 Project: http://mvapich.cse.ohio-state.edu/