
Page 1: Finishing Flows Quickly with Preemptive Scheduling

Finishing Flows Quickly with Preemptive Scheduling

Presenter: Gong Lu

Page 2: Finishing Flows Quickly with Preemptive Scheduling

Authors

Chi-Yao Hong
Ph.D., Computer Science, UIUC, 09-14
Co-advised by Matthew Caesar and Brighten Godfrey
Research interests: protocol design, network measurement, security

Page 3: Finishing Flows Quickly with Preemptive Scheduling

Authors (cont.)

Matthew Caesar
Assistant Professor @ UIUC
Ph.D., Computer Science, U.C. Berkeley

Philip Brighten Godfrey
Assistant Professor @ UIUC
Ph.D., Computer Science, U.C. Berkeley

Page 4: Finishing Flows Quickly with Preemptive Scheduling

Introduction

Datacenter applications want to:
- Minimize flow completion time
- Meet soft real-time deadlines

Existing works (TCP, RCP, ICTCP, DCTCP, …):
- Approximate fair sharing
- Far from optimal

Page 5: Finishing Flows Quickly with Preemptive Scheduling

Example

Page 6: Finishing Flows Quickly with Preemptive Scheduling

Centralized Algorithm

Inputs for each flow i:
- the maximal sending rate of flow i
- the expected flow transmission time of flow i
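
A minimal sketch of how such a centralized scheduler could allocate bandwidth, assuming a single bottleneck link and an Earliest-Deadline-First / Shortest-Job-First notion of criticality (all names below are illustrative, not the paper's notation): flows are sorted by criticality and each flow in turn is granted the remaining link capacity, so the most critical flows preempt the rest.

    # Hypothetical sketch: greedy preemptive allocation on one bottleneck link.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Flow:
        flow_id: str
        max_rate: float                    # maximal sending rate of the flow
        remaining_bytes: float
        deadline: Optional[float] = None   # None = no deadline, fall back to SJF

    def criticality(flow: Flow) -> tuple:
        # Earliest Deadline First when deadlines exist, Shortest Job First otherwise.
        deadline = flow.deadline if flow.deadline is not None else float("inf")
        return (deadline, flow.remaining_bytes)

    def schedule(flows: list[Flow], link_capacity: float) -> dict[str, float]:
        rates: dict[str, float] = {}
        available = link_capacity
        for flow in sorted(flows, key=criticality):
            rate = min(flow.max_rate, available)
            rates[flow.flow_id] = rate      # rate == 0 means the flow is paused
            available -= rate
        return rates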

Page 7: Finishing Flows Quickly with Preemptive Scheduling

Problem

The centralized algorithm is unrealistic:
- It assumes complete visibility of the network
- It assumes the ability to communicate with devices with zero delay
- It introduces a single point of failure and significant overhead for senders to interact with the centralized coordinator

Page 8: Finishing Flows Quickly with Preemptive Scheduling

The Solution

Fully distributed implementation: sender, receiver, and switch

Flow information is propagated via explicit feedback in packet headers; when the feedback reaches the receiver, it is returned to the sender in an ACK packet

Page 9: Finishing Flows Quickly with Preemptive Scheduling

PDQ Sender

Maintains some state variables:
- its current sending rate
- the switch (if any) that has paused the flow
- the flow deadline (optional)
- the expected flow transmission time
- the inter-probing time
- the measured round-trip time
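
A small record capturing these variables might look like the following sketch (field names are illustrative, not the paper's notation):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SenderState:
        rate: float                  # current sending rate
        paused_by: Optional[str]     # switch (if any) that paused the flow
        deadline: Optional[float]    # flow deadline (optional)
        expected_time: float         # expected flow transmission time
        probe_interval: float        # inter-probing time, in RTTs
        rtt: float                   # measured round-trip time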

Page 10: Finishing Flows Quickly with Preemptive Scheduling

PDQ Sender (cont.)

- Sends packets at its current sending rate; if the flow is paused (rate zero), it instead sends a probe packet every inter-probing interval of RTTs
- Attaches a scheduling header to each packet, with fields set to its currently maintained variables

When an ACK packet arrives:
- Updates the sending rate from the feedback
- Updates the expected transmission time from the remaining flow size
- Updates the measured RTT from the packet arrival time
- Remaining fields are copied from the ACK header
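
A hypothetical sketch of this behavior, building on the SenderState record above; `send_data` and `send_probe` stand in for the actual transmission path and are assumptions:

    def make_header(state: SenderState) -> dict:
        # The scheduling header carries the sender's current state variables.
        return {"rate": state.rate, "paused_by": state.paused_by,
                "deadline": state.deadline, "expected_time": state.expected_time,
                "rtt": state.rtt}

    def sender_tick(state: SenderState, send_data, send_probe) -> None:
        header = make_header(state)
        if state.rate > 0:
            send_data(header, rate=state.rate)   # send at the current rate
        else:
            send_probe(header)                   # paused: only probe, once every
                                                 # `probe_interval` RTTs

    def on_ack(state: SenderState, ack_header: dict,
               remaining_bytes: float, rtt_sample: float) -> None:
        state.rate = ack_header["rate"]              # rate from feedback
        state.paused_by = ack_header["paused_by"]
        if state.rate > 0:
            state.expected_time = remaining_bytes / state.rate   # from remaining size
        state.rtt = rtt_sample                       # from packet arrival time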

Page 11: Finishing Flows Quickly with Preemptive Scheduling

PDQ Receiver

Copies the scheduling header from each data packet to its corresponding ACK

Reduces the rate feedback if it exceeds the receiver's processing capacity, to avoid buffer overrun at the receiver
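
A hypothetical sketch of the receiver side: echo the scheduling header in the ACK, capping the rate feedback at the receiver's processing capacity (names are assumptions):

    def build_ack_header(data_header: dict, processing_capacity: float) -> dict:
        ack_header = dict(data_header)                    # copy header into the ACK
        ack_header["rate"] = min(ack_header["rate"],      # cap to avoid buffer overrun
                                 processing_capacity)
        return ack_header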

Page 12: Finishing Flows Quickly with Preemptive Scheduling

PDQ Switch

- Maintains state about flows on each link (the fields carried in the scheduling header), but only stores the most critical flows
- Uses RCP for less critical flows over the leftover bandwidth
  - RCP does not require per-flow state
  - A partial shift away from optimizing completion time and towards traditional fair sharing
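
A hypothetical sketch of this bookkeeping, assuming the switch keeps only the k most critical flows per link (the constant and field names are illustrative):

    MAX_TRACKED_FLOWS = 20   # assumed bound on per-link flow state

    def criticality_key(f: dict) -> tuple:
        # Earlier deadline, then smaller expected transmission time, is more critical.
        deadline = f["deadline"] if f["deadline"] is not None else float("inf")
        return (deadline, f["expected_time"])

    def update_tracked_flows(tracked: list[dict], header: dict) -> list[dict]:
        # Insert or refresh this flow's header, keep only the most critical entries.
        tracked = [f for f in tracked if f["flow_id"] != header["flow_id"]]
        tracked.append(header)
        tracked.sort(key=criticality_key)
        return tracked[:MAX_TRACKED_FLOWS]

    def rcp_fair_share(leftover_bandwidth: float, n_other_flows: int) -> float:
        # Less critical flows fair-share whatever the critical flows leave behind.
        return leftover_bandwidth / max(n_other_flows, 1)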

Page 13: Finishing Flows Quickly with Preemptive Scheduling

PDQ Switch (cont.)

Decides whether to accept or pause each flow:
- A flow is accepted if all switches along the path accept it
- A flow is paused if any switch pauses it

Flow acceptance:
- On the forward path, the switch computes the available bandwidth based on the flow's criticality and updates the rate and pauseby fields in the header
- On the reverse path, if a switch sees an empty pauseby field in the header, it records the global decision of acceptance in its own state
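
A hypothetical sketch of the forward-path decision, reusing `criticality_key` from the previous sketch: the flow is granted whatever bandwidth more critical flows leave behind, or paused by zeroing its rate and stamping this switch into the pauseby field.

    def forward_path(header: dict, tracked: list[dict],
                     link_capacity: float, switch_id: str) -> None:
        more_critical = [f for f in tracked
                         if criticality_key(f) < criticality_key(header)]
        available = link_capacity - sum(f["rate"] for f in more_critical)
        if available > 0:
            header["rate"] = min(header["rate"], available)   # accept
        else:
            header["rate"] = 0.0                              # pause
            header["paused_by"] = switch_id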

Page 14: Finishing Flows Quickly with Preemptive Scheduling

Several Optimizations

- Early start: provides seamless flow switching
- Early termination: terminates flows that are unable to meet their deadlines (see the sketch below)
- Dampening: avoids frequent flow switching
- Suppressed probing: avoids large bandwidth usage from paused senders
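
A hypothetical sketch of the early-termination check: a flow that cannot finish by its deadline even at its maximal sending rate is terminated, freeing bandwidth for flows that can still make their deadlines (names are assumptions).

    def should_terminate_early(remaining_bytes: float, max_rate: float,
                               deadline: float, now: float) -> bool:
        if max_rate <= 0:
            return True
        return now + remaining_bytes / max_rate > deadline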

Page 15: Finishing Flows Quickly with Preemptive Scheduling

Evaluation

Page 16: Finishing Flows Quickly with Preemptive Scheduling

Evaluation (cont.)

Page 17: Finishing Flows Quickly with Preemptive Scheduling

Evaluation (cont.)

Page 18: Finishing Flows Quickly with Preemptive Scheduling

Evaluation (cont.)

Page 19: Finishing Flows Quickly with Preemptive Scheduling

Evaluation (cont.)

Page 20: Finishing Flows Quickly with Preemptive Scheduling

Evaluation (cont.)

Page 21: Finishing Flows Quickly with Preemptive Scheduling

Conclusion

PDQ can complete flows quickly and meet flow deadlines

PDQ provides a distributed algorithm to approximate a range of scheduling disciplines

PDQ provides significant advantages over existing schemes in extensive packet-level and flow-level simulations

Page 22: Finishing Flows Quickly with Preemptive Scheduling

References