KISTI SITE REPORT: Global Science Data Center

2010.11.02, Christophe BONNAUD, GSDC / KISTI
HEPiX Fall 2010 Workshop



Contents

  • Introduction to KISTI
  • Introduction to GSDC Project
  • Activities in 2009/2010
  • GSDC system architecture

Introduction to KISTI

[Map: KISTI is located in Daejeon, 160 km from Seoul.]

Located in the heart of Science Valley, "Daedeok" Innopolis:

  • 6 universities
  • 20 government research institutes
  • 10 government-invested institutes
  • 33 private R&D labs
  • 824 high-tech companies

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Korea's cutting-edge industries, including information technology, biotechnology, and nanotechnology.

KISTI is a government-supported research institute serving Korean scientists and engineers with scientific and industrial databases, research networks, and supercomputing infrastructure.

We are the pioneer of building the cyber-infrastructure of Korea:

  • S&T information resources & services
  • Technological & industrial information analysis
  • Supercomputing service
  • High-performance R&D network service
  • National cyber R&D infrastructure
  • National R&D promotion

Annual revenue: about USD 100M, mostly funded by the government.

History of KISTI Supercomputers

  • 1988: Cray-2S, 1st supercomputer in Korea (until 1993), 2 GFlops
  • 1993: Cray C90, 2nd supercomputer in Korea (until 2001), 16 GFlops
  • 1997: Cray T3E (until 2002)
  • 2000: HP GS320 (HPC160/320), CAVE & SGI Onyx3400
  • 2001: NEC SX-5, IBM p690
  • 2002: PC cluster (128 nodes), SC 45
  • 2003: IBM p690+ (3.7 TF), NEC SX-6, TeraCluster
  • 2007/2008: 35 TF → 280 TF

World Class Facilities at KISTI

  • 300 TFlops system, ranked 15th in the TOP500 (2010)

National Research Network

KREONET is the national science & research network of Korea, funded by MOST since 1988: a 20 Gbps backbone with 1–20 Gbps access networks.

GLORIAD (GLObal RIng Network for Advanced Applications Development), with 10/40 Gbps optical lambda networking:

  • Global ring topology for advanced science applications
  • GLORIAD consortia: Korea, USA, China, Russia, Canada, the Netherlands, and 5 Nordic countries (11 nations)
  • Essential to support advanced application developments: HEP, ITER, astronomy, earth systems, bio-medical, HDTV, etc.
  • National GLORIAD project (funded by MOST of Korea)

Introduction to GSDC project

Introduction

Data-intensive research supported by GSDC:

  • High energy physics: CDF, Belle, ALICE (2009 ~)
  • Next: bio-informatics, astrophysics, earth science

History

  • Planned by the government: a project to build a strategic plan and a model for GSDC
    – MEST (Ministry of Education, Science and Technology)
    – 2008.02 ~ 2008.05
    – "사이버 R&D 인프라 구축 활용방안 기획연구" (planning study on building and utilizing a cyber R&D infrastructure)
  • The most important goal of this project is its contribution to Korean researchers' work

This project is part of the government's master plan for science and technology development (과학기술기본계획, 2009~).

Human resources

Current status:
  • 14 members, but only 9 for IT
  • Only one system administrator
  • Network, power management, and air-cooling are managed by other teams

Future:
  • One more person, with strong experience in Tier-1 management, to be recruited from abroad

Activities in 2009/2010

  • ALICE Tier2 Center
  • Belle
  • CDF
  • DMRC & Neuroimaging

ALICE Tier2

Resources:
  • 10 services for ALICE (WMS+LB, PX, RGMA, BDII, VOMS, LFC, VOBOX, CREAM-CE, UI)
  • 120 cores, 218.88 kSI2k: HP Blade (2.5 GHz quad-core x2) x 15 sets

                    2008    2009    Provided    Comments
  CPU (kSI2K)       100     150     219         CREAM-CE
  Storage (TB)      30      50      30          56 TB physical
  Network (Gbps)    10      10      10

ALICE Tier2 (2)

  • Site availability to EGEE: 98.3% (Feb. 2009 ~)
  • CREAM-CE support

Tier 1 Testbed

Storage:
  • Disk: 100 TB, managed by Xrootd
  • Tape: 100 TB (tape emulation on disk), managed by Xrootd

Computing:
  • 12 IBM HS22 blade nodes: Intel Xeon X5650 x2 (2.66 GHz), 24 GB RAM

Timeline:
  • Set up last July; in production now
  • Resources will be increased next year
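Both the disk pool and the disk-based "tape" pool are managed by Xrootd. Purely as an illustration, a minimal standalone xrootd configuration for a disk-backed pool might look like the sketch below; the redirector hostname, port, and pool paths are hypothetical, and this is not the site's actual configuration:

```
# Sketch of an xrootd data-server config (hypothetical hosts/paths)
all.role server
all.manager xrd-head.sdfarm.kr:3121   # assumed redirector host
xrd.port 1094                         # standard xrootd client port
oss.localroot /xrootd                 # prefix under which files are stored
oss.space data /pool/disk1            # named space spanning two disk partitions
oss.space data /pool/disk2
```

A "tape emulation on disk" pool can be run the same way, simply pointing a second xrootd instance at a separate set of disk partitions.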

KIAF (KISTI Analysis Farm)

Computing:
  • 11 IBM HS22 blade servers: Intel Xeon X5650 x2 (2.66 GHz), 24 GB RAM

Storage:
  • Local disks managed by xrootd (~1 TB)

Timeline:
  • Set up last August
  • Mainly dedicated to Korean researchers
  • Resources will be increased next year

GSDC system architecture

  • Hardware
  • Grid middleware deployment

Hardware Resources (2008)

  • 6 WNs: Dell 1U servers
  • 12 WNs: Dell blade servers
  • 14 service machines
  • No storage

Hardware Resources

Cluster:

  Cluster    Spec per node                                     Nodes   Cores   kSI2k
  ce-alice   Dell, Intel Xeon E5405 x2 (2.0 GHz quad), 16 GB     6      48      48
  ce01       HP, Intel Xeon E5420 x2 (2.5 GHz quad), 16 GB      15     128     219
  ce02       IBM, Intel Xeon E5450 x2 (3.0 GHz quad), 16 GB     38     304     650
             Intel Xeon X5650 x2 (2.66 GHz hex), 24 GB          36     432     --
  Total                                                        114     976     --

Storage:

  Model                                     Physical capacity   Usable capacity
  NetApp FAS2050 (SAN only, RAID6, HA)      48 TB               30 TB
  NetApp FAS6080 (SAN & NAS, RAID6, HA)     334 TB              200 TB
  Hitachi (SAN only) + IBRIX                950 TB              600 TB
  Total                                     1.3 PB              830 TB

Service nodes for Grid & applications: ~20 nodes (including VMs)
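As a quick cross-check of the capacity numbers, the three rated (gLite) cluster rows can be aggregated in a few lines of Python; the figures are copied from the table above (the newest X5650 row has no kSI2k rating yet):

```python
# Aggregate the per-cluster rows from the hardware table (numbers as printed).
clusters = [
    {"name": "ce-alice", "nodes": 6,  "cores": 48,  "ksi2k": 48},
    {"name": "ce01",     "nodes": 15, "cores": 128, "ksi2k": 219},
    {"name": "ce02",     "nodes": 38, "cores": 304, "ksi2k": 650},
]

total_cores = sum(c["cores"] for c in clusters)
total_ksi2k = sum(c["ksi2k"] for c in clusters)
print(f"{total_cores} cores, {total_ksi2k} kSI2k")  # 480 cores, 917 kSI2k
```

The 480-core subtotal matches the "60 node (480 core)" label in the deployment diagram.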

Deployment diagram (as of '10.07.02):

Computing resources: 60 nodes (480 cores)
  • DELL 1U (PE1950) WN: 6 nodes, 48 cores; Xeon E5405 (2.0 GHz), 2 GB RAM/core, 1075 SI2k/core avg.
  • HP Blade WN: 16 nodes, 128 cores; Xeon E5420 (2.5 GHz), 2 GB RAM/core, 1710 SI2k/core avg.
  • IBM 1U (x3550) WN: 38 nodes, 304 cores; Xeon E5450 (3.0 GHz), 2 GB RAM/core, 2177 SI2k/core avg.; 10G network
  • IBM Blade WN: 36 nodes, 432 cores; Xeon X5650 (2.66 GHz), 2 GB RAM/core

Grid services: px, CREAM-CE, lcg-CE, UIs, WMS+LB, RGMA, BDII-top, BDII-site, MyProxy, VOMS, VOBOXs (ALICE), LFC, GUMS-OSG, CE-OSG, SAM, LDG server (LIGO), Belle DB
Operations: Nagios (dgas.sdfarm.kr), repository (repository.sdfarm.kr), SMURF

Storage pool: 1,349 TB (830 TB usable)
  • NetApp FAS6080, Hitachi USP-V + IBRIX, NetApp FAS2050
  • SE: DPM, with Xrootd head nodes for Xrootd (T1) and Xrootd (T2)

Software

Monitoring:
  • Nagios for real-time monitoring, SMURF for graphs, php-syslog
  • MonALISA, Gratia
  • No accounting yet

Installation / configuration:
  • OS installation: home-made solution + Puppet
  • Storage: IBRIX from HP

Virtualization:
  • VMware, VMware Center
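OS installation and configuration are driven by a home-made solution plus Puppet. Purely as an illustration of that approach (the hostname patterns and module names below are hypothetical, not the site's actual manifests), a Puppet site manifest could classify nodes like this:

```
# Hypothetical sketch: classify nodes by hostname in site.pp
node /^wn\d+\.sdfarm\.kr$/ {
  include worker_node      # assumed site module for gLite WN setup
}

node 'dgas.sdfarm.kr' {
  include monitoring::nagios   # assumed monitoring module
}
```

This keeps per-role configuration in version-controlled modules, so reinstalling a worker node only requires the base OS install followed by a Puppet run.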

Network

[Network diagram]

  • Y2008 install: IBM "x3550" servers x 14, Dell 1U "PE1950" servers x 16, HP Blade "BL460c" servers x 16
  • Y2009 install: IBM 1U "x3550" servers x 38
  • Y2010 install: IBM blade servers (X5650) x 36
  • Storage: NAS NetApp FAS6080 A-1 / A-2, NAS NetApp FAS2050, HP IBRIX9300 (x5) behind an HP StorageWorks X9300 SAN switch (600 TB usable)
  • Switches: SMC8848M, Juniper EX2500, Cisco 3560E
  • Links: 1 Gb public network, 10 Gb data network, 8 Gb FC SAN; 10-link and 4-link aggregates between switches, 2 links to the SAN switch; KREONET uplink to the backbone network; intensive-I/O servers attached to the data network

Resource Plan in 2011

Storage:
  • Disk: 500 TB ~ 800 TB
  • Tape (necessary for ALICE Tier1): at least 200 TB
    – HPSS is far too expensive; considering Tivoli + xrootd
    – In discussion with IBM; visiting other centers in Nov/Dec

Computing:
  • WNs: ~1,000 cores
  • Services: 200 cores

Contacts

  • General contact: nsdc@nsdc.kr
  • Speaker: cbonnaud@kisti.re.kr
  • Project leader: Haengjin JANG, hjjang@kisti.re.kr

http://gsdc.kr will be available soon.

Page 2: KISTI SITE REPORT Global Science  Data Center

Contents

Introduction to KISTI

Introduction to GSDC Project

Activities in 20092010

GSDC system architecture

2Hepix fall 2010

Introduction to KISTI

Hepix fall 2010 3

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

4

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

Hepix fall 2010

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

5

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

Hepix fall 2010 6

Hepix fall 2010

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

7

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

8

Introduction to GSDC project

Hepix fall 2010 9

10

Introduction

Hepix fall 2010

GSDC

Hepix fall 2010 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of GSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

Hepix fall 2010

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 3: KISTI SITE REPORT Global Science  Data Center

Introduction to KISTI

Hepix fall 2010 3

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

4

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

Hepix fall 2010

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

5

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

Hepix fall 2010 6

Hepix fall 2010

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

7

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

8

Introduction to GSDC project

Hepix fall 2010 9

10

Introduction

Hepix fall 2010

GSDC

Hepix fall 2010 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of GSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

Hepix fall 2010

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 4: KISTI SITE REPORT Global Science  Data Center

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

4

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

Hepix fall 2010

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

5

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

Hepix fall 2010 6

Hepix fall 2010

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

7

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

8

Introduction to GSDC project

Hepix fall 2010 9

10

Introduction

Hepix fall 2010

GSDC

Hepix fall 2010 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of GSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

Hepix fall 2010

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 10: KISTI SITE REPORT Global Science Data Center

Introduction

GSDC

Introduction

Data-intensive research: High Energy Physics (CDF, Belle, ALICE), supported from 2009; Bio-informatics, Astrophysics and Earth Science to follow next.

History

• Planned by the Government: a project to build a strategic plan and a model for GSDC.
  • MEST (Ministry of Education, Science and Technology), 2008.02 ~ 2008.05: "Planning study on building and utilizing cyber R&D infrastructure" (사이버 R&D 인프라 구축 활용방안 기획연구).
• The most important goal of this project is the contribution to Korean researchers' studies.

This project is part of the GOVERNMENT'S MASTER PLAN FOR SCIENCE AND TECHNOLOGY DEVELOPMENT (Science and Technology Basic Plan, 과학기술기본계획, 2009~).

Human resources

• Current status: 14 members, but only 9 for IT; only one system administrator. Network, power management and air-cooling are managed by other teams.
• Future: one more person with strong experience in Tier-1 management will be recruited, possibly from abroad.

Activities in 2009/2010

ALICE Tier2 Center, Belle, CDF, DMRC & Neuroimaging

ALICE Tier2

Resources:
• 10 services for ALICE (WMS+LB, PX, RGMA, BDII, VOMS, LFC, VOBOX, CREAM-CE, UI)
• 120 cores, 218.88 kSI2k: HP Blade (2.5GHz quad-core x2) x 15 sets

                 2008    2009    Provided    Comments
CPU (kSI2K)      100     150     219         CREAM-CE
Storage (TB)     30      50      30          56TB physical
Network (Gbps)   10      10      10
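The kSI2k figure is simply cores multiplied by the per-core SPECint2000 rating. The deck's core counts vary slightly between slides (120 vs 128 cores for the HP blades); taking 128 cores at the ~1710 SI2k/core average quoted in the later deployment diagram reproduces the 218.88 kSI2k figure:

```python
# kSI2k = cores * SI2k-per-core / 1000.
# 1710 SI2k/core is the HP blade average from the deployment slide;
# using 128 cores (rather than 120) is an assumption, since the slides differ.
cores = 128
si2k_per_core = 1710
ksi2k = cores * si2k_per_core / 1000
print(ksi2k)  # 218.88, i.e. the ~219 kSI2K listed as "Provided"
```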

ALICE Tier2 (2)

Site availability to EGEE: 98.3% (Feb. 2009 ~). CREAM-CE support.

Tier 1 Testbed

• Storage: disk 100TB managed by Xrootd; tape 100TB (tape emulation on disk), also managed by Xrootd.
• Computing: 12 IBM HS22 blade nodes, Intel Xeon X5650 x2 (2.66GHz), 24GB.
• Timeline: set up last July; in production now; resources will be increased next year.

KIAF (KISTI Analysis Farm)

• Computing: 11 IBM HS22 blade servers, Intel Xeon X5650 x2 (2.66GHz), 24GB.
• Storage: local disks managed by xrootd (~1TB).
• Timeline: set up last August; mainly dedicated to Korean researchers; resources will be increased next year.

GSDC system architecture

Hardware / Grid middleware deployment

Hardware Resources (2008)

• 6 WNs: Dell 1U
• 12 WNs: Dell Blade servers
• 14 service machines
• No storage

Hardware Resources

Cluster:
Cluster    Spec / node                                       Nodes   Cores   kSI2k
ce-alice   Dell, Intel Xeon E5405 x2 (2.0GHz quad), 16GB     6       48      48
ce01       HP, Intel Xeon E5420 x2 (2.5GHz quad), 16GB       15      128     219
ce02       IBM, Intel Xeon E5450 x2 (3.0GHz quad), 16GB      38      304     650
(2010)     Intel Xeon X5650 x2 (2.66GHz hex), 24GB           36      432     --
Total                                                        114     976     --

Storage:
Model                                     Capacity physical   Capacity usable
NetApp FAS2050 (SAN only, RAID6, HA)      48TB                30TB
NetApp FAS6080 (SAN & NAS, RAID6, HA)     334TB               200TB
Hitachi (SAN only) + IBRIX                950TB               600TB
Total                                     1.3PB               830TB

Service nodes for Grid & applications: ~20 nodes (including VMs)
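The storage totals follow directly from the three systems in the table; a quick check of the sums (all figures from the table above):

```python
# Physical and usable capacity per storage system, in TB, from the table above.
storage = {
    "NetApp FAS2050": (48, 30),
    "NetApp FAS6080": (334, 200),
    "Hitachi + IBRIX": (950, 600),
}
physical = sum(p for p, _ in storage.values())
usable = sum(u for _, u in storage.values())
print(physical, usable)  # 1332 830 -> ~1.3PB physical, 830TB usable
```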

Slide 22: Grid middleware deployment (diagram, as of '10.07.02)

Computing resources: 60 nodes (480 cores), plus the 2010 IBM blade addition:
• DELL 1U (PE1950) WN: 6 nodes, 48 cores; Xeon E5405 (2.0GHz), 2GB RAM/core, ~1075 SI2k/core avg.
• HP Blade WN: 16 nodes, 128 cores; Xeon E5420 (2.5GHz), 2GB RAM/core, ~1710 SI2k/core avg.
• IBM 1U (x3550) WN: 38 nodes, 304 cores; Xeon E5450 (3.0GHz), 2GB RAM/core, ~2177 SI2k/core avg; 10G uplink.
• IBM Blade WN: 36 nodes, 432 cores; Xeon X5650 (2.66GHz), 2GB RAM/core.

Grid service nodes: CREAM-CE, lcg-CE, UIs, WMS+LB, RGMA, BDII-top, BDII-site, MyProxy (px), VOMS, VOBOXes (ALICE, GSDC), GUMS-OSG, CE-OSG, LFC, Nagios (dgas.sdfarm.kr), Repository (repository.sdfarm.kr), SMURF, LDG server (LIGO), SAM, Belle DB.

Storage pool: 1,349TB (830TB usable): NetApp FAS6080, Hitachi USP-V with IBRIX, NetApp FAS2050; Xrootd head nodes for the Tier-1 and Tier-2 pools, SE/DPM.

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 11: KISTI SITE REPORT Global Science  Data Center

Hepix fall 2010 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of GSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

Hepix fall 2010

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 12: KISTI SITE REPORT Global Science  Data Center

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of GSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

Hepix fall 2010

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 13: KISTI SITE REPORT Global Science  Data Center

Human resources

Hepix fall 2010 13

14 Members but only 9 for IT Only one System administrator Network power management and air-cooling managed by

other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

Future

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 14: KISTI SITE REPORT Global Science  Data Center

14

Activities in 20092010

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

Hepix fall 2010

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 15: KISTI SITE REPORT Global Science  Data Center

ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

Hepix fall 2010 15

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

GSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)22

Softwares

Hepix fall 2010 23

Nagios for real time SMURF for graph phpsyslog No accounting

Monalisa Gratia

Monitoring

InstallationConfiguration OS installation home made solution PuppetStorage IBRIX from HP

Virtualization VMware VMware Center

Network

Hepix fall 2010 24U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Resource Plan in 2011

Hepix fall 2010 25

Disk 500TB ~ 800TB Tape (necessary for Alice Tier1)

At least 200TB HPSS far too expensive Tivoli + xrootd

Discussion with IBM Visit other centers in NovDec

Storage

WNs ~1000 cores Services 200 cores

Computing

Hepix fall 2010 26

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpgsdckr will be available in soon

  • KISTI SITE REPORT Global Science Data Center
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to GSDC project
  • Introduction
  • Introduction (2)
  • History
  • Human resources
  • Activities in 20092010
  • ALICE Tier2
  • ALICE Tier2 (2)
  • Tier 1 Testbed
  • KIAF (Kisti Analysis Farm)
  • GSDC system architecture
  • Hardware Resources (2008)
  • Hardware Resources (2)
  • Slide 22
  • Softwares
  • Network
  • Resource Plan in 2011
  • Slide 26
Page 16: KISTI SITE REPORT Global Science  Data Center

ALICE Tier2

Hepix fall 2010 16

Site Availability to EGEE 983 (Feb2009~) CREAM-CE support

Tier 1 Testbed

Hepix fall 2010 17

Disk 100TB Managed by Xrootd

Tape 100TB (Tape emulation on disk) Managed by Xrootd

Storage

12 nodes IBM blades HS22 Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Set in last July In production now Resources will be increase next year

Timeline

KIAF (Kisti Analysis Farm)

Hepix fall 2010 18

11 IBM Blades HS22 servers Intel Xeon X5650 x2 (266GHz) 24GB

Computing

Local disks managed by xrootd (~1TB) Storage

Set in last August Mainly dedicated to Korean researcher Resources will be increased next year

Timeline

GSDC system architecture

HardwareGrid Middleware Deployment

Hepix fall 2010 19

Hardware Resources (2008)

6 WNs Dell 1U12 WNs Dell Blade servers14 services machinesNo storage

20

21

Hardware ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

Hepix fall 2010

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

[Slide 22: GSDC hardware / grid middleware deployment diagram, as of '10.07.02]

Computing resources — 60 nodes (480 cores), plus the 36 IBM blade nodes added in 2010:
- Dell 1U (PE1950) WN: 6 nodes, 48 cores — Xeon E5405 (2.0GHz), 2GB RAM/core, ~1075 SI2k/core avg
- HP Blade WN: 16 nodes, 128 cores — Xeon E5420 (2.5GHz), 2GB RAM/core, ~1710 SI2k/core avg
- IBM 1U (x3550) WN, 10G: 38 nodes, 304 cores — Xeon E5450 (3.0GHz), 2GB RAM/core, ~2177 SI2k/core avg
- IBM Blade WN: 36 nodes, 432 cores — Xeon X5650 (2.66GHz), 2GB RAM/core

Storage pool — 1349TB (830TB usable): NetApp FAS6080 and Hitachi USP-V behind IBRIX; NetApp FAS2050 (Belle DB)

Grid services: CREAM-CE, lcg-CE, UIs, WMS+LB, R-GMA, BDII-top, BDII-site, MyProxy, VOMS, px, LFC, ALICE VOBOXes; OSG: GUMS-OSG, CE-OSG
Other services: Nagios (dgas.sdfarm.kr), repository (repository.sdfarm.kr), SMURF, LDG server (LIGO), SAM, Belle DB
Storage elements: SE (DPM); Xrootd heads for the t1 and t2 pools

Software

Monitoring:
- Nagios for real time; SMURF for graphs; php-syslog; MonALISA; Gratia
- No accounting

Installation / Configuration:
- OS installation: home-made solution plus Puppet
- Storage: IBRIX from HP

Virtualization:
- VMware, VMware Center
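A Nagios deployment like the one mentioned above reduces to object definitions such as the following. This is only an illustrative sketch: the host name, address, and NRPE check command are assumptions, not the site's actual configuration.

```
# Hypothetical Nagios object config sketch -- names are made up.
define host {
    use        generic-host
    host_name  wn-ibm-01
    address    192.0.2.10
}

define service {
    use                 generic-service
    host_name           wn-ibm-01
    service_description Load
    check_command       check_nrpe!check_load
}
```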

Network

[Slide 24: network diagram]
- Links: 1Gb public network, 10Gb data network, 8Gb FC SAN; uplink to the KREONET backbone
- Switches: Cisco 3560E (public), Juniper EX2500 and SMC8848M (data), HP Storageworks X9300 SAN switch
- Storage: NAS NetApp FAS6080 A-1 / A-2, NetApp FAS2050; HP IBRIX9300 (x5), 600TB, on the SAN
- Servers: Y2008 install — IBM "x3550" x14, Dell 1U "PE1950" x16, HP Blade "BL460c" x16; Y2009 — IBM 1U "x3550" x38; Y2010 — IBM blade (X5650) x36
- Intensive I/O servers attach over multiple trunked links (10 / 4 / 2 links in the diagram)

Resource Plan in 2011

Storage:
- Disk: 500TB ~ 800TB
- Tape (necessary for ALICE Tier1): at least 200TB; HPSS far too expensive, so Tivoli + xrootd; discussion with IBM; visits to other centers in Nov/Dec

Computing:
- WNs: ~1000 cores
- Services: 200 cores

Contacts:
- General contact: nsdc@nsdc.kr
- Speaker: cbonnaud@kisti.re.kr
- Project leader: Haengjin JANG, hjjang@kisti.re.kr

http://gsdc.kr will be available soon.
