
Alice tier 1 testbed in KISTI-NSDC


DESCRIPTION

Alice tier 1 testbed in KISTI-NSDC. 2010. 7. 8, Christophe BONNAUD and Beob Kyun KIM, NSDC / KISTI. Contents: Introduction to KISTI; Introduction to NSDC Project; Activities in 2009; System architecture; Management plan for Alice tier-1 testbed; Expected technical issues.

Alice tier 1 testbed in KISTI-NSDC
2010. 7. 8
Christophe BONNAUD and Beob Kyun KIM, NSDC / KISTI
WLCG workshop

Contents

Introduction to KISTI

Introduction to NSDC Project

Activities in 2009

System architecture

Management plan for Alice tier-1 testbed

Expected technical issues


Introduction to KISTI


[Map: KISTI is located in Daejeon, about 160 km from Seoul.]

Located in the heart of Science Valley, the "Daedeok" Innopolis:
- 6 universities
- 20 government research institutes
- 10 government-invested institutes
- 33 private R&D labs
- 824 high-tech companies

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Korea's cutting-edge industries, including information technology, biotechnology and nanotechnology.

A government-supported research institute serving Korea's scientists and engineers with scientific and industrial databases, research networks, and supercomputing infrastructure.


We are the pioneer in building the cyber-infrastructure of Korea (the national cyber R&D infrastructure):
- S&T information resources & services
- Technological & industrial information analysis
- Supercomputing service
- High-performance R&D network service
- National R&D promotion

Annual revenue is about USD 100M, mostly funded by the government.

History of KISTI Supercomputers

[Timeline figure, 1988-2008: Cray-2S, the first supercomputer in Korea (2 GFlops, 1988, in service until 1993); Cray C90, the second (16 GFlops, 1993, until 2001); Cray T3E (1997, until 2002); HP GS320 / HPC160/320, NEC SX-5, IBM p690, CAVE & SGI Onyx3400 (2000-2001); PC cluster (128 nodes) and SC 45 (2002); IBM p690+ (3.7 TF), NEC SX-6 and TeraCluster (2003); systems of 35 TF and 280 TF follow in 2007-2008.]


World Class Facilities at KISTI

[Figure: world-class facilities at KISTI; the supercomputer was ranked 15th in the TOP500 in 2010.]

National Research Network

KREONET is the national science & research network of Korea, funded by MOST since 1988: a 20 Gbps backbone with 1~20 Gbps access networks.

GLORIAD (GLObal RIng Network for Advanced Applications Development), with 10/40 Gbps optical lambda networking:
- Global ring topology for advanced science applications
- GLORIAD consortium: Korea, USA, China, Russia, Canada, the Netherlands and 5 Nordic countries (11 nations)
- Essential to support advanced application development: HEP, ITER, astronomy, earth systems, bio-medical, HDTV, etc.
- National GLORIAD project (funded by MOST of Korea)


Introduction to NSDC project


Introduction

[Diagram: data-intensive research areas at NSDC: bio-informatics, astrophysics, earth science, high-energy physics. Experiments supported from 2009: CDF, Belle, ALICE; more to follow next.]

History

- Planned by the government
  - A project to build a strategic plan and a model for NSDC
  - MEST (Ministry of Education, Science and Technology)
  - 2008.02 ~ 2008.05
  - "Planning study on building and utilizing a cyber R&D infrastructure"
- The most important thing for this project is
  - its contribution to Korean researchers' work


This project is a part of the government's master plan for science and technology development (the Science and Technology Basic Plan, 2009~).

Activities in 2009

- ALICE Tier-2 Center
- Belle
- CDF
- DMRC & Neuroimaging

KISTI ALICE Tier2

Resources
- 10 services for ALICE (WMS+LB, PX, R-GMA, BDII, VOMS, LFC, VOBOX, CREAM-CE, UI)
- 120 cores, 218.88 kSI2k: HP Blade (2.5 GHz quad-core x 2) x 15 sets

                 2008    2009    Provided    Comments
CPU (kSI2K)       100     150    219         CREAM-CE
Storage (TB)       30      50    30          56 TB physical
Network (Gbps)     10      10    10

KISTI ALICE Tier2

- Site availability to EGEE: 98.3% (Feb. 2009~)
- CREAM-CE support: among the first 3 sites in the world

NSDC system architecture

Hardware / Grid middleware deployment

Resources
- Cluster
- Storage
- Service nodes for Grid & applications: ~20 nodes (including VMs)

Cluster    Spec / node                                       Nodes   Cores   kSI2k
ce-alice   Dell: Intel Xeon E5405 x2 (2.0 GHz quad), 16 GB       6      48      48
ce01       HP: Intel Xeon E5420 x2 (2.5 GHz quad), 16 GB        15     128     219
ce02       IBM: Intel Xeon E5450 x2 (3.0 GHz quad), 16 GB       38     304     650
-          Intel Xeon X5650 x2 (2.66 GHz hex), 24 GB            36     432      --
Total                                                          114     976      --

Model                                    Capacity (physical)   Capacity (usable)
NetApp FAS2050 (SAN only, RAID6, HA)     48 TB                 30 TB
NetApp FAS6080 (SAN & NAS, RAID6, HA)    334 TB                200 TB
Hitachi (SAN only) + IBRIX               950 TB                600 TB
Total                                    1.3 PB                830 TB
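To make the tables easy to re-check, here is a minimal Python sketch that encodes the rows above and re-adds them. The row values are copied from the slide; note that the slide's own totals (114 nodes, 976 cores) differ from the sums of the listed rows, and the 2010 X5650 row has no kSI2k figure yet.

# Cross-check of the cluster and storage tables above (values copied from the slide).

clusters = [
    # (cluster, nodes, cores, kSI2k)
    ("ce-alice",            6,  48,  48),
    ("ce01",               15, 128, 219),
    ("ce02",               38, 304, 650),
    ("X5650 blades, 2010", 36, 432, None),   # kSI2k not quoted yet
]

storage = [
    # (model, physical_TB, usable_TB)
    ("NetApp FAS2050",     48,  30),
    ("NetApp FAS6080",    334, 200),
    ("Hitachi + IBRIX",   950, 600),
]

print("nodes:", sum(n for _, n, _, _ in clusters))              # 95  (slide quotes 114)
print("cores:", sum(c for _, _, c, _ in clusters))              # 912 (slide quotes 976)
print("kSI2k:", sum(k for *_, k in clusters if k is not None))  # 917
print("physical TB:", sum(p for _, p, _ in storage))            # 1332 (~1.3 PB)
print("usable TB:  ", sum(u for _, _, u in storage))            # 830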

[System architecture diagram, dated '10.07.02:

Computing resources: 60 worker nodes, 480 cores (plus the 2010 IBM blade addition):
- DELL 1U (PE1950) WN: 6 nodes, 48 cores; Xeon E5405 (2.0 GHz), 2 GB RAM/core, ~1075 SI2k/core avg.
- HP Blade WN: 16 nodes, 128 cores; Xeon E5420 (2.5 GHz), 2 GB RAM/core, ~1710 SI2k/core avg.
- IBM 1U (x3550) WN: 38 nodes, 304 cores; Xeon E5450 (3.0 GHz), 2 GB RAM/core, ~2177 SI2k/core avg.; 10G-attached
- IBM Blade WN: 36 nodes, 432 cores; Xeon X5650 (2.66 GHz), 2 GB RAM/core

Grid and application service nodes: px, CREAM-CE, lcg-CE, UIs, WMS+LB, R-GMA, BDII-top, BDII-site, MyProxy, VOMS, VOBOXes, GUMS-OSG, CE-OSG, LFC, SAM, LDG server (LIGO), Belle DB, Nagios (dgas.sdfarm.kr), Repository (repository.sdfarm.kr), SMURF.

Storage pool: NetApp FAS6080, Hitachi USP-V + IBRIX, NetApp FAS2050; 1349 TB in total (830 TB usable). SE: DPM, plus Xrootd redirectors ("Xrd head") for Xrootd (T1) and Xrootd (T2).]
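For illustration only: the gLite information services listed above publish their state through the BDII over LDAP, and a site's view can be pulled with a few lines of Python. This sketch assumes the ldap3 package; the hostname and the site name in the base DN are placeholders, not values given on the slide.

# Hypothetical query against the site BDII (hostname and site name are placeholders).
from ldap3 import Server, Connection, ALL

BDII_HOST = "bdii-site.sdfarm.kr"                      # assumption, not stated in the slides
BASE_DN = "mds-vo-name=KR-KISTI-NSDC,o=grid"           # example site name, an assumption

server = Server(BDII_HOST, port=2170, get_info=ALL)    # 2170 is the usual BDII port
conn = Connection(server, auto_bind=True)              # anonymous bind, as BDIIs normally allow

# List the computing elements (GlueCE entries) the site publishes.
conn.search(BASE_DN, "(objectClass=GlueCE)",
            attributes=["GlueCEUniqueID", "GlueCEStateStatus"])
for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEStateStatus)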

Resource Plan in 2010


Storage
- 600 TB of usable space is being configured (830 TB usable in total): ALICE, Belle, LIGO, STAR, GBRAIN, ...

Computing
- 36 blade servers HS22 (Xeon X5650 6-core x 2, 24 GB RAM, 146 GB HD)

Internal networking (between storage and computing)
- Basically, all computing resources have a dedicated 1 Gb connection
- To enhance access speed, a parallel file system is adopted
- Special nodes, like storage, will have a 10 Gb connection to storage
(A rough transfer-time estimate follows below.)
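To put the 1 Gb vs. 10 Gb figures in perspective, a back-of-the-envelope sketch (idealized line rate, no protocol overhead; 100 TB is just an example volume):

# Idealized transfer-time estimate for the internal links described above.
def transfer_time_hours(size_tb: float, link_gbps: float) -> float:
    size_bits = size_tb * 1e12 * 8               # decimal TB -> bits
    return size_bits / (link_gbps * 1e9) / 3600  # seconds -> hours

for link_gbps in (1, 10):
    print(f"100 TB over {link_gbps:2d} Gb/s: {transfer_time_hours(100, link_gbps):6.1f} h")
# roughly 222 h at 1 Gb/s versus 22 h at 10 Gb/s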

Resource Plan in 2010

[Network and storage layout diagram (Resource Plan in 2010):
- Uplink to the KREONET backbone; separate public network and data network (1 Gb and 10 Gb Ethernet, 8 Gb FC SAN).
- Switches: SMC8848M, Juniper EX2500, Cisco 3560E (10-link and 4-link trunks between them).
- Storage: NAS NetApp FAS6080 A-1 and A-2, NAS NetApp FAS2050, HP IBRIX9300 (x5) behind an HP StorageWorks X9300 SAN switch (2 links); intensive-I/O servers attached.
- Worker nodes by installation year: 2008: IBM x3550 x 14, Dell 1U PE1950 x 16, HP Blade BL460c x 16; 2009: IBM 1U x3550 x 38; 2010: IBM blade servers (X5650) x 36.]

Management plan for Alice tier-1 testbed

Hardware / Grid middleware deployment

Resources


Storage
- Disk: 100 TB, managed by Xrootd; one new redirector (see the sketch below)
- Tape: 100 TB (tape emulation on disk)

Computing
- Shared resources
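A minimal connectivity sketch, assuming the XRootD Python bindings are installed; the redirector hostname and namespace path are placeholders, since the slide does not name the new redirector:

# Hypothetical probe of the planned Xrootd redirector (names are placeholders).
from XRootD import client

fs = client.FileSystem("root://xrd-redirector.sdfarm.kr:1094")

status, _ = fs.ping()                        # is the redirector answering?
print("redirector reachable:", status.ok)

status, listing = fs.dirlist("/alice")       # namespace path is an assumption
if status.ok:
    for entry in listing:
        print(entry.name)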

Human resources


Current status
- 14 members, but only 9 for IT; 4 members assigned to the Tier-1 testbed
- Network, power management and air cooling are managed by other teams

Future
- One more person with strong experience in Tier-1 management will be recruited (possibly from abroad)
- We will invite as many experts in Tier-1 management as possible

Expected technical issues

Hardware / Grid middleware deployment

Resources


Storage
- Tape storage is needed to become a real Tier-1: which hardware? which software? (price seems critical); manpower/expertise

Farm
- How to deal with a single farm shared by many projects?
- Two VOBOXes with the same VO (ALICE Tier-1 and Tier-2): issues with queue names and environment variables (software, storage element, ...); see the sketch below
- How to control resource usage?
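As an illustration of the environment-variable point above (not the site's actual setup), a wrapper could select the software area, storage element and queue per VOBOX role, so the Tier-1 and Tier-2 instances of the same VO stay separated:

# Illustrative only: per-VOBOX settings keyed on a hypothetical role variable.
import os

ROLE = os.environ.get("NSDC_ALICE_ROLE", "tier2")   # hypothetical variable set on each VOBOX

SETTINGS = {
    "tier1": {"sw_dir": "/opt/exp_soft/alice-t1", "se": "xrd-t1.example.org", "queue": "alice_t1"},
    "tier2": {"sw_dir": "/opt/exp_soft/alice-t2", "se": "xrd-t2.example.org", "queue": "alice_t2"},
}

cfg = SETTINGS[ROLE]
os.environ["VO_ALICE_SW_DIR"] = cfg["sw_dir"]       # VO_<vo>_SW_DIR is the usual gLite WN variable
print("queue:", cfg["queue"], "SE:", cfg["se"])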

Monitoring


Existing software
- What do other sites use? What is available?

Current status
- Nagios for real-time monitoring, SMURF for historical data; no production monitoring yet

Home-made software
- Critical for time/manpower consumption
- Bug hunt; project on mobile-phone development
(A sketch of a simple Nagios-style check follows below.)
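Since home-made monitoring is flagged as expensive in time and manpower, one cheap pattern is a Nagios-style plugin that follows the standard exit-code convention (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The path and thresholds below are placeholders:

#!/usr/bin/env python
# Nagios-style check of free space on a storage mount point (placeholder path/thresholds).
import shutil
import sys

PATH, WARN, CRIT = "/storage/xrootd", 0.80, 0.90    # warn at 80% used, critical at 90%

try:
    usage = shutil.disk_usage(PATH)
except OSError as exc:
    print(f"UNKNOWN: cannot stat {PATH}: {exc}")
    sys.exit(3)

used_fraction = usage.used / usage.total
msg = f"{PATH} {used_fraction:.0%} used ({usage.free / 1e12:.1f} TB free)"

if used_fraction >= CRIT:
    print("CRITICAL:", msg)
    sys.exit(2)
if used_fraction >= WARN:
    print("WARNING:", msg)
    sys.exit(1)
print("OK:", msg)
sys.exit(0)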


General contact: nsdc@nsdc.kr; Speaker: cbonnaud@kisti.re.kr

Project leader: Haengjin JANG, hjjang@kisti.re.kr

http://nsdc.kr will be available soon.

KISTI Supercomputing Center: extending the horizon of science and technology

- Resource Center: keep securing and providing world-class supercomputing systems
- Technology Center: help Korean research communities to be equipped with proper knowledge of supercomputing
- Testbed Center: validate newly emerging concepts, ideas, tools and systems
- Value-adding Center: make the best use of what the center has to create new values

National headquarters of supercomputing resources, e-Science, Grid and high-performance research networks.

Page 2: Alice tier 1  testbed in KISTI-NSDC

Contents

Introduction to KISTI

Introduction to NSDC Project

Activities in 2009

System architecture

Management plan for Alice tier-1 testbed

Expected technical issues

2WLCG workshop

Introduction to KISTI

WLCG workshop

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

WLCG workshop

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

CDF Offline Meeting FNAL (090903)

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 3: Alice tier 1  testbed in KISTI-NSDC

Introduction to KISTI

WLCG workshop

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

WLCG workshop

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

CDF Offline Meeting FNAL (090903)

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 4: Alice tier 1  testbed in KISTI-NSDC

CDF Offline Meeting FNAL

SEOUL

Daejeon

160km

KISTI

bull 6 Universitiesbull 20 government re-

search institutesbull 10 government-invested institutesbull 33 private RampD labs bull 824 high-tech compa-

nies

Located in the heart of Science Valley ldquoDaedeokrdquo In-nopolis

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Koreas cutting-edge industries including information technology biotechnology and nanotechnology

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

WLCG workshop

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

CDF Offline Meeting FNAL (090903)

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 5: Alice tier 1  testbed in KISTI-NSDC

a Government supported Research Instituteto serve Korea Scientists and Engineers with ScientificIndustrial

Data Bases Research Networks amp Supercomputing Infrastructure

WLCG workshop

We are the pioneer of building Cyber-Infrastructure of Korea

SampT InformationResource amp Service

Technological amp IndustrialInformation Analysis Supercomputing service

High PerformanceRampD Network service National

Cyber RampD Infrastructure

NationalRampD Promotion

Annual revenue about US-D100M mostly funded by the government

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

CDF Offline Meeting FNAL (090903)

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 6: Alice tier 1  testbed in KISTI-NSDC

History of KISTI Supercomputers

1988 1993 20001997 2001

Cray-2S1st SC System

in Koreauntil 1993

Cray C902nd SC System

in Koreauntil 2001

Cray T3Euntil 2002

HP GS320 HPC160320

NEC SX-5

IBM p690

CAVE ampSGI Onyx3400

2002

PC Cluster128node

SC 45

2003

IBM p690+37TF

NEC SX-6 TeraCluster

2 GFlops 115 GFlops 240 GFlops 285 TFlops

436 TFlops435 GFlops111 GFlops16 GFlops

2GF 16GF 242GF131GF 786GF 52 TF 86TF2007 200835TF 280TF

CDF Offline Meeting FNAL (090903)

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 7: Alice tier 1  testbed in KISTI-NSDC

WLCG workshop

World Class Facilities at KISTI

300

2560034

Ranked 15th in TOP500 2010

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

[Internal network diagram: 2010 resource plan]
• Server installs: Y2008 - IBM x3550 x 14, Dell 1U PE1950 x 16, HP Blade BL460c x 16; Y2009 - IBM 1U x3550 x 38; Y2010 - IBM Blade (X5650) x 36
• Storage: NAS NetApp FAS6080 A-1 / A-2, NAS NetApp FAS2050, and HP IBRIX9300 (x 5) behind an HP Storageworks X9300 SAN switch, providing the 600 TB usable pool
• Networking: uplink to the KREONET backbone; separate public and data networks; 1 Gb and 10 Gb Ethernet plus 8 Gb FC SAN; switches SMC8848M, Juniper EX2500 and Cisco 3560E; trunks of 10, 4 and 2 links; intensive-I/O servers on the data network

Management plan for Alice tier-1 testbed

Hardware / Grid Middleware Deployment

Resources

Storage
• Disk: 100 TB, managed by Xrootd, with one new redirector
• Tape: 100 TB (tape emulation on disk)

Computing
• Shared resources

Human resources

Current status
• 14 members, but only 9 for IT; 4 members assigned to the Tier-1 testbed
• Network, power management and air cooling are managed by other teams

Future
• One more person with strong experience in Tier-1 management will be recruited, possibly from another country
• We will invite as many experts on Tier-1 management as possible

Expected technical issues

Hardware / Grid Middleware Deployment

Resources

Storage
• Tape storage is needed to become a real Tier-1:
  • Which hardware? Which software? (price seems critical)
  • Manpower / expertise

Farm
• How to deal with a single farm shared by many projects?
• Two VOBOXes with the same VO (ALICE Tier-1 and Tier-2) raise issues:
  • queue names, environment variables (software, storage element, ...)
• How to control resource usage?
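To make the VOBOX point concrete, a hypothetical Python sketch of the separation the two ALICE VOBOXes would need (queue names, paths and SE labels are invented for illustration):

    # Two VOBOXes serving the same VO must be steered to different queues,
    # software areas and storage elements (all values below are hypothetical).
    voboxes = {
        "vobox-alice-t1": {"queue": "alice_t1",
                           "VO_ALICE_SW_DIR": "/opt/exp_soft/alice_t1",
                           "storage_element": "xrootd (t1)"},
        "vobox-alice-t2": {"queue": "alice_t2",
                           "VO_ALICE_SW_DIR": "/opt/exp_soft/alice_t2",
                           "storage_element": "xrootd (t2)"},
    }
    for name, cfg in voboxes.items():
        print(name, cfg)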

Monitoring

Existing software
• What do other sites use? What is available?

Current status
• Nagios for real-time monitoring, SMURF for historical data; no production monitoring yet

Home-made software
• Critical in terms of time / manpower consumption
• Bug hunting; a project on mobile phone development

• General contact: nsdc@nsdc.kr
• Speaker: cbonnaud@kisti.re.kr
• Project leader: Haengjin JANG, hjjang@kisti.re.kr
• http://nsdc.kr will be available soon

KISTI Supercomputing Center: Extending the horizon of science and technology
• Resource Center: keep securing and providing world-class supercomputing systems
• Technology Center: help Korean research communities to be equipped with proper knowledge of supercomputing
• Testbed Center: validate newly emerging concepts, ideas, tools and systems
• Value-adding Center: make the best use of what the center has to create new values
• National headquarters of supercomputing resources, e-Science, Grid and high-performance research networks

Page 8: Alice tier 1  testbed in KISTI-NSDC

KREONET is the national sci-ence amp research network of Ko-rea funded by MOST since 198820Gbps backbone 1 ~ 20Gbps access networks

National Research Network

GLORIAD (GLObal RIng Network for Advanced Appli-cations Development) with 1040Gbps Optical lambda networkingndash Global Ring topology for advanced science applicationsndash GLORIAD Consortia Korea USA China Russia Canada

the Netherlands and 5 Nordic Countries (11 nations) Essential to support advanced application develop-

ments HEP ITER Astronomy Earth System Bio-Medical HDTV

etc National GLORIAD project (funded by MOST of KOREA)

CDF Offline Meeting FNAL (090903)

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 9: Alice tier 1  testbed in KISTI-NSDC

Introduction to NSDC project

WLCG workshop 9

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 10: Alice tier 1  testbed in KISTI-NSDC

10

Introduction

WLCG workshop

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 11: Alice tier 1  testbed in KISTI-NSDC

WLCG workshop 11

Introduction

Data Intensive Researches

Bio-informaticsAstrophysics

Earth ScienceHigh Energy Physics

CDF Belle ALICE

2009 ~

Next ~

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 12: Alice tier 1  testbed in KISTI-NSDC

12

History

bull Planned by Governmentndash A project to build a strategic plan and a model of NSDC

bull MEST (Ministry of Education Science and Technology)bull 2008 02 ~ 2008 05bull ldquo 사이버 RampD 인프라 구축 활용방안 기획연구rdquo

bull Most important things for this project isndash The contribution to Korean researchersrsquo study

WLCG workshop

This project is a part of GOVERNMENTrsquoS MASTER PLAN

FOR SCIENCE AND TECHNOLOGY DEVELOPMENT ( 과학기술기본계획 2009~)

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 13: Alice tier 1  testbed in KISTI-NSDC

13

Activities in 2009

ALICE Tier2 CenterBelleCDF

DMRC amp Neuroimaging

WLCG workshop

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 14: Alice tier 1  testbed in KISTI-NSDC

KISTI ALICE Tier2

Resources

ndash 10 services for ALICE(WMS+LB PX RGMA BDII VOMS LFC VOBOX CREAMCE UI)

ndash 120 core 21888 kSI2kbull HP Blade (25GHz Quad 2) x 15 set

WLCG workshop 14

2008 2009 Provided Comments

CPU (kSI2K) 100 150 219 CREAM-CE

Storage (TB) 30 TB 50 TB 30 56TB Physical

Network (Gbps) 10 10 10

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 15: Alice tier 1  testbed in KISTI-NSDC

KISTI ALICE Tier2

WLCG workshop 15

Site Availability to EGEE983 (Feb2009~)

CREAM-CE support- First 3 sites in the world

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

What other sites use What is available

Existing Software

Nagios for real time SMURF for historic No production monitoring

Current status

Critical for timemanpower consumption Bugs hunt project on Mobile Phone development

Home made software

WLCG workshop 2727

General Contact nsdcnsdckr Speaker cbonnaudkistirekr

Project Leader Haengjin JANG hjjangkistirekr

httpnsdckr will be available in soon

KISTI Supercomputing CenterExtending the horizon of science and technology

ResourceCenter

TestbedCenter

Keep securingprovid-ing world-class super-

computing systemsResourceCenter

Help Korea research commu-nities to be equipped with proper knowledge of super-computing

TestbedCenter

Validate newly emerg-ing concepts ideas tools and systems

Make the best use ofwhat the center has to create the new values

TechnologyCenter

Value-addingCenter

National headquarters of Supercomputing re-sources e-Science Grid and high performance research networks

WLCG workshop

  • Alice tier 1 testbed in KISTI-NSDC
  • Contents
  • Introduction to KISTI
  • Located in the heart of Science Valley ldquoDaedeokrdquo Innopolis
  • We are the pioneer of building Cyber-Infrastructure of Korea
  • History of KISTI Supercomputers
  • World Class Facilities at KISTI
  • National Research Network
  • Introduction to NSDC project
  • Introduction
  • Introduction (2)
  • History
  • Activities in 2009
  • KISTI ALICE Tier2
  • KISTI ALICE Tier2 (2)
  • NSDC system architecture
  • Resources
  • Slide 18
  • Resource Plan in 2010
  • Resource Plan in 2010 (2)
  • Management plan for Alice tier-1 testbed
  • Resources (2)
  • Human resources
  • Expected technical issues
  • Resources (3)
  • Monitoring
  • Slide 27
  • KISTI Supercomputing Center Extending the horizon of science a
Page 16: Alice tier 1  testbed in KISTI-NSDC

NSDC system architecture

HardwareGrid Middleware Deployment

WLCG workshop 16

17

ResourcesCluster

Storage

Service nodes for Grid amp Application ~20 nodes (including VM)

WLCG workshop

Cluster Spec node nodes cores kSI2kce-al-ice

Dell Intel Xeon E5405 x2 (20GHz Quad) 16GB

6 48 48

ce01 HP Intel Xeon E5420 x2 (25Ghz Quad) 16GB

15 128 219

ce02 IBM Intel Xeon E5450 x2 (30GHz Quad) 16GB

38 304 650

Intel Xeon X5650 x2 (266GHz Hex) 24GB 36 432 --Total 114 976 --Model Capacity Physi-

callyCapacity Usable

NetApp FAS2050 (SAN only RAID6 HA) 48TB 30TBNetApp FAS6080 (SAN amp NAS RAID6 HA) 334TB 200TBHitachi (SAN only) + IBRIX 950TB 600TBTotal 13PB 830TB

VOBOXs

VOBOXs

Storage Pool

rsquo100702

ALICE

NSD

C

Computing Resourcesbull 60node (480core)

6 node 48 core- Xeon E5405 (20GHz) 2GB RAMcore- 1075 si2kcore AVG

DELL 1U (pe1950)WN

16 node 128 core- Xeon E5420 (25GHz) 2GB RAMcore- 1710 si2kcore AVG

HP BladeWN

10G 38 node 304 core- Xeon E5450 (30GHz) 2GB RAMcore- 2177 si2kcore AVG

IBM 1U (x3550)WN

px

CREAM-CE

lcg-CE

UIs

WMS+LB

RGMA

BDII-top

BDII-site

MyProxy

VOMS

VOBOXs

GUMS-OSG

CE-OSG

LFC

dgassdfarmkrNagios

repositorysdfarmkrRepository

SMURF

NetApp FAS6080Hitachi USP-V

IBRIX

Belle

LDG Server LIGO

SAM

36 node 432 core- Xeon X5650 (266GHz) 2GB RAMcore

IBM BladeWN

NetApp FAS2050

Belle db

bull 1349TB (830 TB usable)

Xrd head

Xrd head

SEDPM

Xrootd (t1)

Xrootd (t2)

Resource Plan in 2010

WLCG workshop 19

600TB usable space is being configured (Total 830TB Usable) ALICE Belle LIGO STAR GBRAIN hellip

Storage

36 blade servers HS22 (Xeon X5650 6 core x2 RAM 24GB HD 146GB)

Computing

Between storage and computingbull Basically all computing resource have 1G dedicated con-

nectionbull To enhance access speed Parallel File System is adaptedbull Special nodes like storage will have 10G connection to

storage

Internal Networking

Resource Plan in 2010

WLCG workshop 20U600TB

NAS NetAppFAS6080 A-1

NAS NetAppFAS2050 NAS NetApp

FAS6080 A-2

IBM server ldquox3550rdquo x 14

Dell 1U Server ldquoPE1950rdquo x 16

HP Blade ldquoBL460crdquo x 16

Y 2008 Install

IBM 1U server ldquox3550rdquo x 38

Y 2009 Install

IBM Blade Server ldquox5650rdquo x 36

Y 2010 Install

KREONET To Backbone Network

Public Network

Data Network

1Gb10Gb8Gb FC SAN

Intensive IO Servers

SMC8848M

Juniper EX2500

Cisco 3560E

HP IBRIX9300 (x5)

10 links4 links

HP StorageworksX9300(SAN Switch)

2 links

Management plan for Alice tier-1 testbed

HardwareGrid Middleware Deployment

WLCG workshop 21

Resources

WLCG workshop 22

Disk 100TB Managed by Xrootd

One new redirector Tape 100TB (Tape emulation on disk)

Storage

Shared resourcesComputing

Human resources

WLCG workshop 23

14 Members but only 9 for IT 4 members affected to Tier-1 testbed

Network power management and aircooling managed by other teams

Actual status

1 more person with strong experience in Tier-1 management will be recruited from any other country

We will invite experts for Tier-1 management as many as pos-sible

Future

Expected technical issues

HardwareGrid Middleware Deployment

WLCG workshop 24

Resources

WLCG workshop 25

tape storage needed to become a real Tier-1 Which hardware Which software (price seems critical) Manpowerexpertise

Storage

How to deal with a single farm shared by many projects 2 VOBOXs with same VO (Alice Tier-1 and Tier-2) issues

queue name Env Variables (software storage ele-ment hellip)

How to control resources usages

Farm

Monitoring

WLCG workshop 26

Existing software
- What do other sites use? What is available?

Current status
- Nagios for real-time monitoring, SMURF for historical data; no production monitoring yet (a minimal plugin sketch follows below)

Home-made software
- Critical in terms of time/manpower consumption and bug hunting; a project on mobile phone development
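Because Nagios is already in place for real-time monitoring, one low-effort way to add storage checks is a plain Nagios plugin: a script that prints a single status line and exits with 0/1/2 for OK/WARNING/CRITICAL. The sketch below follows that plugin convention; the monitored path and the thresholds are assumptions chosen for illustration, not the production values.

```python
#!/usr/bin/env python
# Minimal Nagios-style plugin sketch: check free space on a storage mount point.
# Exit codes follow the Nagios plugin convention: 0=OK, 1=WARNING, 2=CRITICAL.
import os
import sys

PATH = "/storage/xrootd"    # hypothetical mount point of the disk pool
WARN_FREE_FRACTION = 0.20   # warn below 20% free
CRIT_FREE_FRACTION = 0.10   # critical below 10% free


def main() -> int:
    stat = os.statvfs(PATH)
    total = stat.f_blocks * stat.f_frsize
    free = stat.f_bavail * stat.f_frsize
    frac = free / total if total else 0.0
    message = f"{PATH}: {free / 1e12:.1f} TB free of {total / 1e12:.1f} TB ({frac:.0%})"
    if frac < CRIT_FREE_FRACTION:
        print("CRITICAL -", message)
        return 2
    if frac < WARN_FREE_FRACTION:
        print("WARNING -", message)
        return 1
    print("OK -", message)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```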

WLCG workshop 27

General contact: nsdc@nsdc.kr   Speaker: cbonnaud@kisti.re.kr

Project Leader: Haengjin JANG, hjjang@kisti.re.kr

http://nsdc.kr will be available soon

KISTI Supercomputing Center: Extending the horizon of science and technology

Resource Center: keep securing and providing world-class supercomputing systems

Technology Center: help Korean research communities to be equipped with proper knowledge of supercomputing

Testbed Center: validate newly emerging concepts, ideas, tools and systems

Value-adding Center: make the best use of what the center has to create new values

National headquarters of supercomputing resources, e-Science, Grid and high-performance research networks

WLCG workshop
