
Large-Scale Data Processing / Cloud Computing, Lecture 3 – Hadoop Environment


Page 1

Large-Scale Data Processing / Cloud Computing, Lecture 3 – Hadoop Environment

Peng Bo, School of Electronics Engineering and Computer Science, Peking University, 4/23/2011

http://net.pku.edu.cn/~course/cs402/

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

Jimmy Lin, University of Maryland. Course development: SEWM Group.

Page 2

Install Hadoop


Page 3

Installation Prerequisites
  Java SDK 6
  Linux OS (the only production platform)

Installation
  Download hadoop-x.y.z.tar.gz (we use version 0.20.2)
  tar xzvf hadoop-x.y.z.tar.gz

Setup Environment Variables
  export HADOOP_HOME=/home/hadoop
  export HADOOP_CONF_DIR=/home/hadoop/hadoop-config
  export PATH=$HADOOP_HOME/bin:$PATH
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk

Page 4

Configuration

Directory: $HADOOP_CONF_DIR
  core-site.xml
  hdfs-site.xml
  mapred-site.xml
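The three files carve the configuration up by subsystem (filesystem, HDFS, MapReduce). As a concrete illustration, a minimal pseudo-distributed setup for 0.20.x might look like the sketch below; the localhost hostnames and port numbers are conventional defaults, not values from the slides, so adjust them to your cluster:

```xml
<!-- core-site.xml: where the default filesystem lives -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: HDFS-specific settings -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- single node, so one replica -->
  </property>
</configuration>

<!-- mapred-site.xml: where the jobtracker listens -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```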

Page 5

Test

HDFS
  hadoop fs -ls /

MapReduce
  hadoop jar $HADOOP_HOME/hadoop-0.20.2-examples.jar pi 10 10000

Web Administration
  http://jobtracker:50030

Page 7

“Hello world”

Page 8

“Hello World”: Word Count

Map(String docid, String text):
  for each word w in text:
    Emit(w, 1)

Reduce(String term, Iterator<Int> values):
  int sum = 0
  for each v in values:
    sum += v
  Emit(term, sum)
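The pseudocode above can be mirrored in plain Java, outside Hadoop, to check the logic: a hash map stands in for the shuffle that groups each word's 1s by key. This is a self-contained sketch for illustration (class and method names are ours), not the Hadoop API version:

```java
import java.util.HashMap;
import java.util.Map;

// In-memory word count, mirroring the Map/Reduce pseudocode above.
public class WordCountLocal {
    // "Map" phase emits (word, 1); folding directly into a hash map
    // plays the role of the shuffle plus the "Reduce" summation.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> sums = new HashMap<String, Integer>();
        for (String w : text.split("\\s+")) {
            if (w.isEmpty()) continue;
            Integer prev = sums.get(w);
            sums.put(w, prev == null ? 1 : prev + 1);
        }
        return sums;
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```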

Page 9

Basic Hadoop API*

Mapper
  void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
  void configure(JobConf job)
  void close() throws IOException

Reducer/Combiner
  void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
  void configure(JobConf job)
  void close() throws IOException

Partitioner
  int getPartition(K2 key, V2 value, int numPartitions)

*Note: forthcoming API changes…
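For intuition about getPartition: Hadoop's default partitioner hashes the key and takes the result modulo the number of partitions. Here is a dependency-free sketch of that arithmetic (the class name is ours, not Hadoop's):

```java
// Dependency-free sketch of the default hash-partitioning arithmetic.
public class HashPartitionerSketch {
    public static int getPartition(Object key, int numPartitions) {
        // Mask off the sign bit so the result is non-negative
        // even for keys whose hashCode() is negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(getPartition("hello", 4));
    }
}
```

The same key always lands in the same partition, which is what guarantees that all values for a key reach the same reducer.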

Page 10

Data Types in Hadoop

Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.

WritableComparable: defines a sort order. All keys must be of this type (but not values).

IntWritable, LongWritable, Text, …: concrete classes for different data types.

SequenceFile: a binary encoding of a sequence of key/value pairs.
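The Writable contract boils down to a pair of methods, write(DataOutput) and readFields(DataInput). Below is a dependency-free sketch of an IntWritable-style type; the class name is invented, and only the two-method shape follows Hadoop:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Mimics what an IntWritable-style type does, without the Hadoop jars.
public class IntRecord {
    private int value;

    public IntRecord() {}                       // no-arg ctor for deserialization
    public IntRecord(int value) { this.value = value; }
    public int get() { return value; }

    public void write(DataOutput out) throws IOException {
        out.writeInt(value);                    // fixed 4-byte big-endian encoding
    }

    public void readFields(DataInput in) throws IOException {
        value = in.readInt();
    }

    public static void main(String[] args) throws IOException {
        // Round-trip: serialize to bytes, then read back.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new IntRecord(42).write(new DataOutputStream(bytes));
        IntRecord copy = new IntRecord();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(copy.get());
    }
}
```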

Page 11

WordCount::Mapper

Page 12

WordCount::Reducer

Page 13

Basic Cluster Components

One of each:
  Namenode (NN)
  Jobtracker (JT)

Set of each per slave machine:
  Tasktracker (TT)
  Datanode (DN)

Page 14

Putting everything together…

[Diagram: the job submission node runs the jobtracker; the namenode machine runs the namenode daemon; each slave node runs a tasktracker and a datanode daemon on top of the Linux file system.]

Page 15

Anatomy of a Job

MapReduce program in Hadoop = Hadoop job
  Jobs are divided into map and reduce tasks
  An instance of a running task is called a task attempt
  Multiple jobs can be composed into a workflow

Job submission process
  Client (i.e., driver program) creates a job, configures it, and submits it to the jobtracker
  JobClient computes input splits (on the client end)
  Job data (jar, configuration XML) are sent to the JobTracker
  JobTracker puts job data in a shared location, enqueues tasks
  TaskTrackers poll for tasks
  Off to the races…

Page 16

InputSplit

Source: redrawn from a slide by Cloudera, cc-licensed

[Diagram: the InputFormat divides each input file into InputSplits; a RecordReader reads each split and feeds records to a Mapper, which produces intermediates.]

Page 17

Source: redrawn from a slide by Cloudera, cc-licensed

[Diagram: each Mapper's intermediates pass through a Partitioner, which routes them to the appropriate Reducer's intermediates (combiners omitted here).]

Page 18

Source: redrawn from a slide by Cloudera, cc-licensed

[Diagram: each Reducer writes its results through a RecordWriter, provided by the OutputFormat, to its own output file.]

Page 19

Input and Output

InputFormat:
  TextInputFormat
  KeyValueTextInputFormat
  SequenceFileInputFormat
  …

OutputFormat:
  TextOutputFormat
  SequenceFileOutputFormat
  …

Page 20

Shuffle and Sort in Hadoop

Probably the most complex aspect of MapReduce!

Map side
  Map outputs are buffered in memory in a circular buffer
  When the buffer reaches a threshold, its contents are “spilled” to disk
  Spills are merged into a single, partitioned file (sorted within each partition): the combiner runs here

Reduce side
  First, map outputs are copied over to the reducer machine
  “Sort” is a multi-pass merge of map outputs (happens in memory and on disk): the combiner runs here
  The final merge pass feeds directly into the reducer
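Because each spill is already sorted, the reduce-side “sort” is not a full re-sort: it is a k-way merge of sorted runs. A hypothetical stand-alone sketch of that merge over integer runs, using a heap to pick the smallest head element at each step (simplified: real Hadoop merges key/value records from files, not in-memory lists):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// k-way merge of already-sorted runs, the core of reduce-side "sort".
public class SpillMerge {
    public static List<Integer> merge(List<List<Integer>> sortedRuns) {
        // Heap entries are {value, runIndex, offsetInRun}; the heap
        // always surfaces the smallest value among all run heads.
        PriorityQueue<int[]> heap = new PriorityQueue<int[]>(
                Math.max(1, sortedRuns.size()),
                new Comparator<int[]>() {
                    public int compare(int[] a, int[] b) { return a[0] - b[0]; }
                });
        for (int r = 0; r < sortedRuns.size(); r++)
            if (!sortedRuns.get(r).isEmpty())
                heap.add(new int[]{sortedRuns.get(r).get(0), r, 0});

        List<Integer> out = new ArrayList<Integer>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            List<Integer> run = sortedRuns.get(top[1]);
            if (top[2] + 1 < run.size())          // advance within that run
                heap.add(new int[]{run.get(top[2] + 1), top[1], top[2] + 1});
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
                Arrays.asList(1, 4, 7),
                Arrays.asList(2, 5),
                Arrays.asList(3, 6))));
    }
}
```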

Page 21

Shuffle and Sort

[Diagram: on the map side, output fills a circular buffer (in memory), spills to disk, and the spills are merged (the combiner may run) into intermediate files; each reducer copies intermediate data from this and other mappers and merges it (the combiner may run again) before reducing.]

Page 22

Hadoop Workflow

[Diagram: the development loop between you and the Hadoop cluster]
1. Load data into HDFS
2. Develop code locally
3. Submit MapReduce job
   3a. Go back to Step 2
4. Retrieve data from HDFS

Page 23

Debugging Hadoop

First, take a deep breath
Start from a unit test, then run locally

Strategies
  Learn to use the webapp
  Where does println go? Don’t use println; use logging
  Throw RuntimeExceptions

Page 24

Recap
  Hadoop data types
  Anatomy of a Hadoop job
  Hadoop jobs, end to end
  Software development workflow

Page 25

Q&A? Thank you!