Budget-Based QoS Management Architecture
Advisor: Prof. Lien, Yao-Nan (連耀南). Students: 陳建同 (Chen, Chien-Tung), 李宗勳, 陳明志, 陳逸民
Thesis Topics
陳建同: Budget-Based End-to-End QoS Management for All-IP Networks
李宗勳: Path Planning in Budget-Based QoS Management for All-IP Core Networks
陳明志: Distributed Resource Management and Admission Control in Budget-Based QoS Management for All-IP Core Networks
陳逸民: A Resource Allocation Method That Compensates for Forecast Error in Budget-Based QoS Management for All-IP Core Networks
Session 1
Introduction
Introduction Network convergence All-IP Network Quality of Service UMTS Application Classes QoS Architecture for IP-Based Network
Integrated Service Differentiated Service
Network Convergence Packet switching & circuit switching. All-IP Networks.
Network Convergence &All-IP Network
All-IP Network A globally integrated network based on IP technology
Strengths
Low construction & management cost. A platform for cross-network applications.
Problems
Heterogeneous networks (impedance mismatch) Complicated quality of service
Heterogeneous Networks
Impedance Mismatch Horizontal impedance mismatch
e.g. 3G – IP Core Network – WLAN Vertical impedance mismatch
e.g. DiffServ – ATM
Quality of Service QoS aspects
Long delay time Jitter Packet loss
More complicated QoS management Diversified QoS Expectations
Diversified QoS Expectations User
Lowest price, best service Lowest price, accommodative service Acceptable price, best service Lowest price, tolerable service
System Provider Acceptable price, best service Highest price, acceptable service Lowest price, tolerable service
QoS Flexibility System providers can adjust QoS management depending on different policies.
UMTS Application Classes Conversational class Streaming class Interactive class Background class
QoS Sensitivity of UMTS classes
QoS Architecture for IP-Based Networks IETF working group Integrated Service Differentiated Service
Challenges Network convergence brings new QoS problems.
How to provide per-flow end-to-end QoS with limited resources and a maximal satisfaction level?
Session 2
Related Work
Related Work Related Technology
Trunk MPLS (Multi-Protocol Label Switching) IP Network QoS architecture
IntServ DiffServ
QoS Management Architecture TEQUILA Victor O.K. Li’s System AQUILA (Adaptive Resource Control for QoS Using an IP-Based Layered Architecture) Summary
Related Work
Related Technology
Trunk A trunk is defined as a unit consisting of a number of flows.
A link is a unit consisting of a number of trunks.
Intermediate routers recognize only trunks.
Trunk (2) Available bandwidth is fixed. Suffers a higher blocking rate compared to per-flow management.
MPLS (1) Labels explicitly identify a common group of packets, rather than matching variable parts of the packet header.
MPLS enables fast switching by replacing the route lookup on a variable-length IP destination address with an exact match on a fixed, predefined number of bits.
MPLS (2) MPLS provides the ability to forward packets over arbitrary non-shortest paths.
For non-IP-based networks such as ATM or frame relay, it provides an IP-based control plane (routing, path selection, reservation).
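The fast-switching claim above can be illustrated with a toy lookup: an MPLS label-switching router does an exact match on a fixed-size label, while a plain IP router must find the longest matching prefix for a variable-length destination address. A minimal sketch (label values, prefixes, and interface names are invented):

```python
import ipaddress

# MPLS forwarding: exact match on a fixed-size label -> one dict lookup.
# Label values and next hops are invented for illustration.
label_table = {16: ("swap", 17, "if0"), 17: ("pop", None, "if1")}

def mpls_forward(label):
    return label_table[label]  # single exact-match lookup

# IP forwarding: longest-prefix match over variable-length prefixes.
prefix_table = {
    ipaddress.ip_network("10.0.0.0/8"): "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",
    ipaddress.ip_network("10.1.2.0/24"): "if2",
}

def ip_forward(dst):
    addr = ipaddress.ip_address(dst)
    matches = [n for n in prefix_table if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)  # longest prefix wins
    return prefix_table[best]

print(mpls_forward(16))        # ('swap', 17, 'if0')
print(ip_forward("10.1.2.3"))  # 'if2' -- the /24 beats the /16 and /8
```

The dict lookup is constant-time regardless of table size, while the prefix match must compare the address against every candidate prefix.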
Integrated Service IntServ Architecture
Virtual circuit Long-lived unicast or multicast flows
RSVP (Resource Reservation Protocol) PATH. RESV.
IntServ Strength & Weakness Strength
Absolute service guarantees Traffic flows can be monitored Using existing routing protocols
Weakness Enormous processing overhead & state Scalability problem
Differentiated Service DiffServ
Providing service by classes DSCP (DiffServ Code Point) & PHB (Per-Hop Behavior)
DiffServ PHBs Expedited Forwarding (EF) - to minimize any delays that may occur when congestion exists
Assured Forwarding (AF) – to clarify whether a packet can be dropped
Best Effort (BE) – for traffic having no special forwarding requirement, for backward compatibility
DiffServ PHBs Application Examples
PHB Examples
BE E-mail, FTP
EF Voice over IP (VoIP), Video on Demand (VoD)
AF Web Browsing, Telnet
DiffServ Domain (1) Across different DiffServ domains, DSCPs are not compatible. Hosts may arbitrarily mark the DSCP. Non-DiffServ hosts can be part of a DiffServ domain.
DiffServ Domain (2)
DiffServ Management Function Classifying - based on IP address, port number, and network protocol Policing
Metering – flow rate, burst size (not only for keeping the quality, but also for billing)
Shaping – holding bursts and pacing the traffic
Dropping
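Policing, metering, and shaping of this kind are commonly built on a token bucket. The sketch below is a generic single-rate meter, not the specific mechanism of this work; the rate and burst figures are invented:

```python
class TokenBucket:
    """Polices a flow to a committed rate with a bounded burst size."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.burst = burst_bytes        # bucket depth = maximum burst
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0

    def conforms(self, pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes    # in-profile: consume tokens
            return True                 # forward (keep the marking)
        return False                    # out-of-profile: drop or remark

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 1 kB/s, 1500 B burst
print(tb.conforms(1500, now=0.0))  # True: the initial burst fits
print(tb.conforms(1500, now=0.1))  # False: only ~100 B were refilled
```

An out-of-profile verdict can feed either the dropping function or a remarking step; a shaper would instead queue the packet until enough tokens accumulate.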
Distinction between Edge & Core Edge Router
Ingress Router Classifying Policing
Egress Router Shaping
Core Router Forwarding
DiffServ Strength & Weakness Strength
Low overhead No scalability problem & easy to implement
Weakness
DiffServ does not provide per-flow end-to-end QoS.
Related Work
QoS Management Architecture
TEQUILA The Traffic Engineering for Quality of Service in the Internet at Large Scale (TEQUILA)
TEQUILA Architecture Service Level Specifications Management (SLS Management) Traffic Engineering Data plane
TEQUILA Architecture
TEQUILA System Components SLS
Subscription Invocation Forecast Inter-domain SLS
Traffic Engineering Network dimensioning Dynamic route management Dynamic resource management
TEQUILA System Dynamic algorithms Route Management
Load balance Resource Management
Link bandwidth Buffer
Victor O.K. Li’s Architecture Efficient Resource Management for End-to-End QoS Guarantees in DiffServ Networks
Admission Control RSVP-like Inter-domain Intra-domain
Victor O.K. Li’s System Centralized Routing & Resource Allocation
Pre-calculated paths Periodic reallocation
Distributed Admission Control
Summary Scalability problem (IntServ) Cannot provide per-flow end-to-end QoS (DiffServ) Too many real-time processes (TEQUILA) Low performance in handling bulky burst traffic (Victor O.K. Li’s System)
Research Objective Propose a flexible QoS management system and associated tools for All-IP Networks.
QoS management for large-scale networks Simple QoS management architecture Less real-time computation Capability of providing per-flow end-to-end QoS Flexibility for operators’ different expectations Adaptability for different network technologies.
Budget-Based QoS System Features BBQ is a pre-planning, distributed system.
A simplified architecture for providing per-flow end-to-end QoS with high processing efficiency and low management complexity.
The proposed architecture is easy to deploy. Flexibility for operators to adjust their QoS policy. Adaptability for different layer-2 or layer-3 technologies.
Session 3
Budget-Based End-to-End QoS Management for All-IP Networks
Student: Chen, Chien-Tung Advisor: Lien, Yao-Nan
Lab. of Mobile Communication, Dept. of Computer Science, National ChengChi Univ.
September, 2003
Outline Budget-Based QoS, BBQ Budget-Based End-to-End QoS
Management for All-IP Networks Performance Evaluation Conclusion and Future Work
Budget-Based QoS, BBQ BBQ Philosophy A Simplified All-IP Network Architecture Budget-Based Management Path Definitions Bearer Service Architecture Quality Entropy On-Demand Allocation vs. Pre-Planning Centralized Planning vs. Distributed Planning Management System Architecture for BBQ A Simplified End-to-End Path Setup Procedure BBQ System Assumptions
BBQ Philosophy Objective
Provide operator an adaptable QoS management infrastructure for All-IP networks.
Principles Simple and easy to deploy. Reduce real-time on-demand resource management. Scalability Adaptability Provide an architecture and management tools for operators to tune their networks to meet their own objectives.
A Simplified All-IP Network Architecture
A Simplified All-IP Network Architecture (cont’d)
Budget-Based Management According to the capability of each network component, we allocate users’ bandwidth requirements and quality requirements to each network component.
We allocate QoS management responsibilities to the network by budget.
Budget-Based Management (cont’d) BBQ is designed to reduce management complexity. Global resource optimization is a secondary concern.
Path Definitions A path is the sequence of nodes and links a packet travels.
In general, a packet in a packet-switching network doesn't follow a predefined path.
The process of finding a path for a packet is called routing.
It is much easier to assure QoS if packets travel along designated paths.
Path Definitions (cont’d) BBQ accomplishes QoS by deploying paths with QoS. Paths with QoS are designed using Budget-Based Management.
Path Definitions (cont’d)
Planned Path - Management Component - Abilities
Short Path - PPAs in Core Networks - paths crossing a Core Network with certain quality guarantees.
Long Path - LPPAs in Core Networks - paths crossing the backbone with certain quality guarantees.
End-to-End Path - Global ACAs in Stub Networks - a path composed of a long path and paths in Stub Networks, with certain quality guarantees.
Path Definitions (cont’d)
Bearer Service Architecture When a packet needs to be delivered to a destination, each sub-network the packet might travel through must offer its bearer service to the packet.
In general, three basic bearer services are needed, one each for the Entrance Stub Network, the Exit Stub Network, and the Backbone.
Bearer Service Architecture (cont’d)
Bearer Service Architecture (cont’d) End-to-End service is supported by underlying bearer services. Backbone bearer service is supported by Core Network bearer services. To simplify, we define long paths as providing backbone bearer services, and short paths as providing Core Network bearer services.
Quality Entropy The lower, the better. We use quality entropy (an inverted quality level) for simplification. A single value computed by an operator-defined function. In our research, we assume quality entropy is additive.
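Because quality entropy is assumed additive and computed by an operator-defined function, a path's entropy is just the sum over its segments, and it can be checked against a budget. A sketch (the mapping function, its weights, and all values are invented placeholders):

```python
# Hypothetical operator-defined mapping from raw QoS metrics to one
# entropy value; the real function is operator-specific.
def quality_entropy(delay_ms, jitter_ms, loss_pct):
    return 1.0 * delay_ms + 2.0 * jitter_ms + 50.0 * loss_pct

# Additivity: end-to-end entropy is the sum over the three bearer segments.
segments = {
    "entrance_stub": quality_entropy(10, 2, 0.1),   # 19.0
    "backbone":      quality_entropy(30, 5, 0.2),   # 50.0
    "exit_stub":     quality_entropy(10, 2, 0.1),   # 19.0
}
end_to_end = sum(segments.values())
print(end_to_end)           # 88.0
print(end_to_end <= 100.0)  # fits a 100-unit end-to-end entropy budget
```

This is also how a budget can be split: an end-to-end entropy budget is allocated across the entrance stub, backbone, and exit stub, and each segment only has to stay within its share.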
Quality Entropy (cont’d)
Quality Entropy (cont’d) Users’ quality definitions may not be the same as the networks’. Quality entropy serves as a mediator that can accommodate different quality definitions.
On-Demand Resource Allocation vs. Pre-Planning On-demand resource allocation:
Admission controllers control minimal resources. Request more resources when needed. Low scalability. Difficult to achieve optimal resource utilization (FCFS). Slower response time: not good for time-sensitive applications.
On-Demand Resource Allocation vs. Pre-Planning (cont’d) Pre-planning:
Allows long computation time for better algorithms. Can utilize historical data for forecasting. Assigns a pre-calculated path to a request immediately: fast! Good for stable networks with periodic traffic patterns.
Centralized vs. Distributed Centralized
Easy to implement. Lacks flexibility to deal with forecast error. Heavy load on the planner. Fairness problem.
Distributed Reduces the planner’s load. Fair resource allocation. More precise forecasts.
Management System Architecture for BBQ Management System Hierarchy Management System Software Architecture
Management System Hierarchy BBQ uses a hierarchical management system to reduce management complexity.
Components in different layers plan different resources respectively.
Lower-layer components provide planned resources for upper-layer components.
All layers are managed independently.
Management System Hierarchy(cont’d)
Layer - Function
End-to-end Network QoS Coordination - end-to-end QoS control; end-to-end resource/path planning
Core Network Resource Management - allocates resources of a Core Network to the QoS Managers of all resource mediators
Core Network QoS Control (one for each Ingress) - implements the QoS policy of the Core Network, e.g., admission control, load control, routing and path selection, packet scheduling, etc.
DiffServ - executes the QoS policy designated by the upper-layer manager
Management System Software Architecture
A Simplified End-to-End Path Setup Procedure
BBQ System Assumptions Quality entropy is a single-value metric. Quality entropy is additive. Class-by-class planning. Periodic traffic patterns.
Session 4
Budget-Based End-to-End QoS Management for All-IP Networks Resource Reservation/Allocation Quality Entropy Definition Long Path Planning End-to-End QoS Component, LPPA End-to-End QoS Component, Global ACA End-to-End QoS Procedure LPPA Optimization Model
Resource Reservation/Allocation Reservation Time Frame
Long Term Short Term Real Time Automatic Revoke Period
Resource Reservation/Allocation (cont’d) Reservation Certainty
Hard: cannot be revoked without negotiation within the revoke time period.
Soft: can be revoked without negotiation within the revoke time period.
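The two certainty levels reduce to a small predicate: within the revoke period, a soft reservation may be dropped unilaterally while a hard one requires negotiation. A sketch under those definitions (the field names, and the assumption that any reservation may lapse once the revoke period ends, are ours):

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    bandwidth_mbps: float
    hard: bool              # True = hard reservation, False = soft
    revoke_deadline: float  # end of the automatic revoke period (seconds)

def can_revoke_without_negotiation(res, now):
    """Soft reservations may be revoked unilaterally within the revoke
    period; hard ones require negotiation during that period."""
    if now > res.revoke_deadline:
        return True   # revoke period elapsed (assumed to lapse freely)
    return not res.hard

soft = Reservation(100.0, hard=False, revoke_deadline=3600.0)
hard = Reservation(100.0, hard=True, revoke_deadline=3600.0)
print(can_revoke_without_negotiation(soft, now=10.0))  # True
print(can_revoke_without_negotiation(hard, now=10.0))  # False
```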
Resource Reservation/Allocation (cont’d) Reservation Quantity
Batch order: pre-reservation, on-demand reservation
Per flow: on-demand reservation
Resource Reservation/Allocation (cont’d)
Resource Reservation Scheme (by reservation time frame: Long Term / Short Term / Real Time)
1. Simple Long Term Reservation - Long term: Hard (Batch Order)
2. Simple Short Term Reservation - Short term: Hard (Batch Order)
3. Real-time Batch Reservation - Real time: Hard (Batch Order)
4. Real-time Reservation (RSVP) - Real time: Hard (Per Flow)
5. Progressive Reservation - Long term: Soft (Batch Order); Short term: Hard (Batch Order)
6. Proposed Approach for End-to-End - Long term: Soft (Batch Order); Short term: Soft (Batch Order); Real time: Hard (Per Flow)
Long Path Planning Choose short paths among different Core Networks for a service request.
Per-flow on-demand planning is impossible, so we adopt a pre-planning approach to reduce management complexity.
Pre-planning is not on-demand allocation: no reservation is made at planning time. Real-time on-demand reservation is used instead, to avoid wasting resources.
Long Path Planning (cont’d) Due to the management system hierarchy, long path planning composes short paths into long paths; it does not determine short-path internals. More efficient than a virtual circuit.
Long Path Planning (cont’d)
Long Path Planning (cont’d) LPPA is a component of the End-to-End Network QoS Coordination Layer in every Core Network, responsible for long path planning.
Input
Traffic Aggregate Forecast: each border gateway monitors the traffic pattern. (Source, Destination, Bandwidth, Quality Entropy)
Available Short Paths: planned by Core Networks. (Source, Destination, Bandwidth, Quality Entropy)
Long Path Planning (cont’d) Output
Long Path Planning (cont’d)
Long Path Planning (cont’d)
Global-planning LPPA Centralized planning Better utilization Works without a reservation mechanism Hard to realize Could not match individual operators’ policies
Long Path Planning (cont’d)
Local-planning LPPA Distributed planning Easy to deploy Individual operators can pursue their own operating objectives
Lower network utilization
Resource reservation overhead
Uses our reservation scheme for resource reservation
End-to-End Admission Control Global ACA is a component of the End-to-End Network QoS Coordination Layer, residing in the Network Access Server.
Responsible for
End-to-end path setup for incoming requests End-to-end path selection Real-time hard reservation End-to-end QoS admission control
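The Global ACA's run-time job can be sketched as follows: among the pre-planned end-to-end paths, admit the request on the first path with enough residual bandwidth and a total quality entropy within the request's budget, making the hard per-flow reservation. This is an illustrative sketch, not the thesis's exact procedure; the data layout and figures are invented:

```python
# Each pre-planned end-to-end path: residual bandwidth and total entropy.
planned_paths = [
    {"id": "p1", "residual_mbps": 50.0, "entropy": 40.0},
    {"id": "p2", "residual_mbps": 5.0,  "entropy": 20.0},
]

def admit(req_mbps, entropy_budget, paths=planned_paths):
    """Real-time hard reservation: first feasible pre-planned path wins."""
    for p in paths:
        if p["residual_mbps"] >= req_mbps and p["entropy"] <= entropy_budget:
            p["residual_mbps"] -= req_mbps   # hard per-flow reservation
            return p["id"]
    return None                              # block the request

print(admit(10.0, 45.0))   # 'p1': satisfies both constraints
print(admit(10.0, 30.0))   # None: p1's entropy too high, p2 lacks bandwidth
```

Because all planning happened beforehand, admission is a constant-time scan over a small path set rather than an on-demand reservation round trip.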
End-to-End QoS Procedure The local-planning LPPA is adopted as our approach.
Three phases
Long-term path planning plus soft reservation: LPPA establishes cooperative relationships between operators (seldom).
Short-term soft reservation: LPPA revises the long path to adapt to large traffic fluctuations (seldom).
Real-time hard reservation: Global ACA reserves resources for each incoming request.
End-to-End QoS Procedure (cont’d) Long-term soft reservation
End-to-End QoS Procedure (cont’d) Short-term soft reservation
Same as long-term soft reservation. Triggered whenever traffic fluctuation occurs.
End-to-End QoS Procedure (cont’d) Real-time hard reservation
LPPA Optimization Model
LPPA Optimization Model (cont’d)
LPPA Optimization Model (cont’d) Nonlinear Goal Mixed Integer Programming. Solve this model with linear relaxation.
[Objective: minimize the total cost of the selected short paths, summed over all border-gateway (BG) node pairs; the formula is not recoverable from the slide export.]
Performance Evaluation Performance Evaluation Metrics Experiment Design Experiments and Results Summary
Long-term soft reservation
Resource Reservation Cost: the cost of the long-term soft reservation
Satisfaction Ratio: the level to which we can satisfy the traffic aggregate forecast
Performance Evaluation Metrics
Real-time hard reservation
Blocking Ratio: the ratio of blocked flows to total traffic flow requests
Performance Evaluation Metrics (cont’d)
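Both metrics above reduce to simple ratios; for example (figures invented):

```python
def satisfaction_ratio(satisfied_bw, forecast_bw):
    """Fraction of the traffic-aggregate forecast the plan can carry."""
    return satisfied_bw / forecast_bw

def blocking_ratio(blocked_flows, total_flow_requests):
    """Fraction of run-time flow requests that were blocked."""
    return blocked_flows / total_flow_requests

print(satisfaction_ratio(63.0, 70.0))  # 0.9
print(blocking_ratio(12, 400))         # 0.03
```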
Experiment Design Experiment Environment
Optimization Model ILOG on Windows 2000 platform
Simulation BBQ Computational Simulator on Unix
Experiment Design (cont’d) Test Instance Generation
Network Topology Generation: adopt a basic topology; change the network connectivity; randomize the quality entropy and cost of each short path
Traffic Request Generation: change the traffic aggregate forecast requirements; change the traffic flow sequence
Experiment Design (cont’d)
Experiment Design (cont’d) Experiments
Experiments 1 & 2: sensitivity to total traffic aggregate requirements; sensitivity to network connectivity. Performance is observed using two metrics: resource reservation cost and satisfaction ratio (penalty).
Experiment 3: sensitivity to forecast error. Performance is observed using one metric: blocking ratio.
Results of Exp 1: Sensitivity to Total Traffic Aggregate requirements
Parameters Setup: Total traffic aggregate requirements: 10, 30, …, 150 Gbps
Objective:
Observation of metrics: 1-A:Resource reservation cost 1-B:Satisfaction ratio, Penalty
Boundary 1-A(1): Satisfaction Ratio = 100% 1-A(2): Network Topology 1-B(1): Resource Reservation Cost = 3000k 1-B(2): Network Topology
Results of Experiment 1-A: Satisfaction Ratio = 100%
Network Topology
Resource reservation cost increases as resource needs increase.
Sensitivity to total traffic aggregate requirement on resource reservation cost:
(1) different network connectivity and (2) different satisfaction ratio.
Results of Experiment 1-B: Resource Reservation Cost = 3000k
Network Topology
Satisfaction ratio decreases as resource needs increase.
Sensitivity to total traffic aggregate requirement on satisfaction ratio:
(1) different network connectivity and (2) different cost limit.
Results of Experiment 1-B (cont’d): Resource Reservation Cost = 3000k
Network Topology
Penalty increases as resource needs increase.
Sensitivity to total traffic aggregate requirement on penalty:
(1) different network connectivity and (2) different cost limit.
Results of Exp 2: Sensitivity to Network Connectivity
Parameters Setup: Network connectivity: 30, 40, …, 70 %
Objective: Observation of metrics:
Resource reservation cost Satisfaction ratio, Penalty
Boundary 2-A(1): Satisfaction ratio = 100% 2-A(2): Total traffic aggregate requirement = 70Gbps 2-B(1): Total traffic aggregate requirement = 70Gbps 2-B(2): Resource Reservation Cost = 3000k
Results of Experiment 2-A: Satisfaction Ratio = 100%
Total traffic aggregate requirement = 70Gbps
Resource reservation cost decreases as network connectivity increases.
Sensitivity to network connectivity on resource reservation cost:
(1) different total traffic aggregate requirement and (2) different satisfaction ratio.
Results of Experiment 2-B: Total traffic aggregate requirement = 70 Gbps, Resource Reservation Cost = 3000k
Satisfaction ratio increases as network connectivity increases.
Sensitivity to network connectivity on satisfaction ratio:
(1) different cost limit and (2) different total traffic aggregate requirement.
Results of Experiment 2-B (cont’d): Total traffic aggregate requirement = 70 Gbps, Resource Reservation Cost = 3000k
Penalty decreases as network connectivity increases.
Sensitivity to network connectivity on penalty:
(1) different cost limit and (2) different total traffic aggregate requirement.
Results of Exp 3: Sensitivity to Forecast Error
Parameters Setup: Forecast Error: 0, 10, …, 100 %
Objective: Observation of metrics:
Blocking Ratio
Test Cases: 5 test cases with different S.D. of traffic aggregate bandwidth requirements.
Results of Exp 3: Sensitivity to Forecast Error (cont’d)
Results of Exp 3: Sensitivity to Forecast Error (cont’d)
Results of Experiment 3: S.D. = 48.7 and S.D. = 160.75
1. Blocking ratio increases as forecast error increases.
2. Compared to Trunk, the influence of forecast error on LPPA and OSPF is slight.
Sensitivity to forecast error on blocking ratio: (1) different S.D.
Summary Resource reservation cost increases as the total traffic aggregate requirement increases.
Resource reservation cost decreases as network connectivity increases.
Our approach is more tolerant of forecast error when the total traffic aggregate requirement remains the same.
Conclusion and Future Work Long path planning can achieve per-flow end-to-end QoS across multiple Core Networks in All-IP networks.
Our approach can use resources more efficiently.
Future work: a long-term soft reservation broker; a more delicate pricing mechanism; design of a heuristic algorithm.
Session 5
Outline Core Network Architecture Resource Allocation in Core Networks under BBQ Resource Allocation Procedure in Core Networks Path-Planning Problem Proposed Solution Performance Evaluation Conclusion and Future Work
Core Network Architecture Several core networks are included in one backbone. We assume each core network is owned by one operator with one management policy.
Core Network Architecture (cont’d) We assume our management system sits on top of a DiffServ network.
Bandwidth Broker - bandwidth management. Edge Router - traffic classification, admission control. Core Router - traffic forwarding.
DiffServ Architecture
Traffic Procedure
Resource Allocation Time in Core Networks Real-time on demand: resource is allocated in real time. Flexible. High overhead. Greedy (FCFS).
Pre-planned: resource is allocated before requests arrive. Global consideration.
Resource Allocation Approach - Centralized All resources in a core network are controlled by the Bandwidth Broker. The Bandwidth Broker performs path planning. Edge routers receive the planned paths.
Centralized Approach: Pros and Cons Pros:
Easy to implement. Cons:
Lack of flexibility. Heavy load on the Bandwidth Broker. Fairness problem.
Resource Allocation Approach - Distributed Edge routers submit resource requests. The Bandwidth Broker responds to edge router requests. Path planning is done by the edge routers. Pros:
Distributes load across components. Easy to implement resource re-allocation.
Cons: coordination is needed.
Resource Allocation Approach - Hybrid Use the centralized approach initially. Use the distributed approach for large traffic fluctuations.
Resource Waste Situation
Resource Allocation Approach in BBQ Pre-planned and distributed. Forecast error is compensated for by more delicate methods.
Software Components in Core Network(I)
Software Components Architecture in Core Network(II)
Core Network Coordinator Bandwidth Broker - resource manager. Long Path Planning Agent - negotiates long paths with LPPAs in other core networks. Short Path Planning Agent - plans short paths within the core network.
Software Components Architecture in Core Network(III)
Ingress Router Bandwidth Order Agent (BOA) - orders bandwidth from the BB. Local Short Path Planning Agent (LSPPA) - finds a set of planned paths for this ingress. Admission Control Agent (ACA) - performs the admission control function.
Historical Request Data Terms
Length of Time Period - operator-defined
Concerned Time Period (CTP) - the time period concerned by the current resource planning process
Reference Time Period (RTP)
Resource Allocation Procedure
Resource Allocation Procedure Step 1 Step 1:
Execute path planning and transform path-based demand into link-based demand for the BOA.
Executed by the SPPA in the CNC. Input: the historical traffic pattern of the core network in the last RTP, and the characteristics (bandwidth, quality entropy) of each link in the core network.
Output: a set of planned paths.
Path Demand to Link Demand
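The path-to-link transformation in Step 1 is an accumulation: each planned path's forecast bandwidth is charged to every link it traverses. A minimal sketch (the paths and demand figures are invented):

```python
from collections import defaultdict

# Planned paths as link sequences, with their forecast bandwidth demand.
path_demands = [
    (["A-B", "B-C"], 30.0),   # 30 Mbps forecast over links A-B and B-C
    (["A-B", "B-D"], 20.0),
]

def to_link_demand(paths):
    """Transform path-based demand into the link-based demand BOA orders."""
    link_demand = defaultdict(float)
    for links, mbps in paths:
        for link in links:
            link_demand[link] += mbps   # charge the path to every link
    return dict(link_demand)

print(to_link_demand(path_demands))
# {'A-B': 50.0, 'B-C': 30.0, 'B-D': 20.0}
```

The resulting per-link totals are exactly what the BOA sends to the BB in Step 2.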
Resource Allocation Procedure Step 2
Step 2: The BOA orders bandwidth from the BB. Input: link-based bandwidth demand. Output: an optimal resource demand.
Resource Allocation Procedure Step 3
Step 3: The BB commits the bandwidth resources requested by each ingress. Input: link-based bandwidth demand from the BOA of each ingress. Output: an optimal committed bandwidth.
Resource Allocation Procedure Step 4
Step 4: The BOA forwards the committed bandwidth to the LSPPA.
Resource Allocation Procedure Step 5
Step 5: The LSPPA executes path planning with the resources acquired by the BOA. Input: the historical traffic pattern of this ingress in the last RTP, and the resources acquired by the BOA of this ingress. Output: a set of planned paths, ready for use by the ACA at run time.
Runtime Traffic Processing
Session 6
Path Planning in Budget-Based QoS Management for All-IP Core Networks
Path Planning in Core Networks Packet forwarding must follow the assigned path to ensure quality.
The objective of path planning is to maximize the profit defined by the network operator.
Path Planning Agents in BBQ Long Path Planning Agent - located in the CNC; coordinates long paths with LPPAs in other core networks.
Short Path Planning Agent - located in the CNC; plans short paths for the entire core network.
Local Short Path Planning Agent - located in each Ingress Router; plans short paths starting from that ingress.
Related Work on Path Planning Traditional routing algorithms: RIP, OSPF, etc. These consider only one weight.
Quality-related routing algorithms: the Multi-Constrained Path (MCP) problem, which is NP-complete.
Assumptions of Path Planning The profit is a function of bandwidth and quality entropy.
We assume the quality entropy is additive.
For a link i, the link’s quality is guaranteed as long as its bandwidth usage does not exceed its capacity.
Problem of Path Planning Given a topology with the capacity and quality of each link, find a planned path set with maximal profit. Each path in the planned path set must satisfy certain constraints.
Path Planning Symbol Table(I)
G(V,E) - a directed graph G containing |V| nodes and |E| directed links; V denotes the set of all nodes, and E the set of all links.
vi - a node; vi ∈ V
ek - a directed link ek = (vx, vy); vx is the start node and vy the end node of link ek; also denoted exy
R - the incoming traffic set.
ri - a request i, consisting of (si, di, bi, qi); ri ∈ R
Path Planning Symbol Table(II)
si - source node of request i
di - destination node of request i
bi - bandwidth requirement of request i
qi - quality entropy of request i
mi - profit earned from an admitted request i
Pi - all possible paths satisfying the constraints of request i
Path Planning Symbol Table(III)
pij - a path; pij ∈ Pi
πi - the path set satisfying the constraints of request i and selected by our algorithm
π - the final path set selected by our algorithm
Be(pij) - Be(pij) = be(pij) if the links of pij contain link e, and 0 otherwise
L(pij) - the satisfied ratio of request i with pij
q(e) - quality entropy of link e
Equations of Path Planning Profit of request i: mi = f(bi, qi), a function of the request’s bandwidth and quality entropy.
Objective Model
maximize Σi=1..n Σpij∈πi mi · L(pij)
Constraints
Quality Constraint: Σe∈pij q(e) ≤ qi, for all pij ∈ πi
Bandwidth Constraint: Σi,j be(pij) ≤ B(e), for all e ∈ E, where B(e) is the capacity of link e
Proposed Solution A path must satisfy two constraints: bandwidth and quality entropy. This is the Multi-Constrained Path problem, which is NP-complete. We propose a heuristic greedy algorithm for path planning: the Greedy Path Planning Algorithm (G.P.P.A.).
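A minimal greedy planner in the spirit of G.P.P.A. (not the thesis's exact algorithm): serve requests in decreasing profit order, and commit each to the first path that satisfies both the quality-entropy and the bandwidth constraint. All topology and request figures below are invented:

```python
# Directed links: (u, v) -> [capacity_mbps, quality_entropy].
links = {
    ("s", "a"): [100.0, 10.0], ("a", "d"): [100.0, 10.0],
    ("s", "b"): [50.0, 5.0],   ("b", "d"): [50.0, 5.0],
}

def paths(src, dst, seen=()):
    """All simple directed paths from src to dst (fine for tiny graphs)."""
    if src == dst:
        yield []
    for (u, v) in links:
        if u == src and v not in seen:
            for rest in paths(v, dst, seen + (src,)):
                yield [(u, v)] + rest

def greedy_plan(requests):
    """Serve requests in decreasing profit; commit the first feasible path."""
    plan = {}
    for rid, s, d, bw, q_budget, profit in sorted(
            requests, key=lambda r: -r[5]):
        for p in paths(s, d):
            entropy = sum(links[e][1] for e in p)        # quality constraint
            if entropy <= q_budget and all(links[e][0] >= bw for e in p):
                for e in p:
                    links[e][0] -= bw                    # bandwidth constraint
                plan[rid] = p
                break
    return plan

# (id, src, dst, bandwidth, entropy budget, profit); values invented.
reqs = [("r1", "s", "d", 60.0, 25.0, 9.0), ("r2", "s", "d", 40.0, 12.0, 5.0)]
plan = greedy_plan(reqs)
print(plan)
```

Here r1 takes the high-capacity route via a, and r2, whose entropy budget rules that route out, takes the low-entropy route via b. Enumerating all simple paths is exponential in general; a practical planner would prune, which is exactly why a heuristic is needed for the NP-complete MCP problem.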
Path Planning Algorithm Workflow
Performance Evaluation Evaluate the performance of the path planning algorithm, compared to the traditional OSPF algorithm.
Performance Metrics
Profit Link Utilization S.D. P.U. Ratio - total profit / average link utilization
Sensitivity to: number of nodes, connectivity, forecast error
Experiment Environment and Tools: FreeBSD, GNU C++ 3.2, BBQ Computation Simulator
Simulates the path planning process and the run-time admission control, with a simplified admission control function.
Test Instance Generation Topology
The number of nodes is 10-50. The capacity of each link is 1000 Mbps. Connectivity is 20%-100%.
Definition of Connectivity: in a topology with n nodes, a node with n-1 links connecting to all other nodes corresponds to 100% connectivity.
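Under this definition, a topology's connectivity is its average node degree divided by n-1. A sketch, assuming undirected links:

```python
def connectivity(n_nodes, edges):
    """Connectivity per the definition above: average number of links per
    node divided by n-1 (a node linked to all others = 100%)."""
    degree_sum = 2 * len(edges)      # each undirected link adds 2 degrees
    avg_degree = degree_sum / n_nodes
    return avg_degree / (n_nodes - 1)

# A 4-node ring: every node has 2 of a possible 3 links.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(connectivity(4, ring))         # ~0.667, i.e. 67% connectivity
```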
Test Instance Generation (cont’d) Requests
The bandwidth demand of a request is 1-100 Mbps. The quality entropy of a request is 20-50. The number of requests is about 100-200 per node.
The workflow of experiments
Experiments Experiment #1: performance evaluation by computation. Performance metrics: profit and P.U. ratio.
Experiments #2, #3, and #4: performance evaluation by simulation. Performance metrics: profit, link utilization S.D., and P.U. ratio. Exp #2: number-of-nodes test. Exp #3: connectivity test. Exp #4: forecast-error test.
Result of Experiment #1 :
Parameters Setup: 200 requests per node; 10-30 nodes; connectivity 20%-100%
Objective: Observation of metrics:
Profit P.U. Ratio
Exp #1 : Preplan Test – Profit
[Chart: profit trend of pre-planning; x-axis: number of nodes (10-30); y-axis: profit; one curve per connectivity level (20%, 40%, 60%, 80%, 100%).]
Sensitivity to number of nodes: profit increases as the number of nodes increases.
Exp #1: Preplan Test – P.U. Ratio
[Chart: P.U. ratio (profit density) trend of pre-planning; x-axis: number of nodes (10-30); y-axis: profit density; one curve per connectivity level (20%-100%).]
Sensitivity to number of nodes: the P.U. ratio increases as the number of nodes increases.
Result of Experiment #2 :
Parameters Setup: 200 requests per node; 10-50 nodes; connectivity 20%, 40%, 60%, 80%
Objective: Observation of metrics:
Profit Link Utilization S.D. P.U. Ratio
Exp #2 : Sensitivity to Number of Nodes – Profit
[Charts: profit vs. number of nodes (10-30) at 40% and 60% connectivity; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to number of nodes on profit: profit increases as the number of nodes increases.
Exp #2 : Sensitivity to Number of Nodes – Profit (Cont’d)
[Charts: profit vs. number of nodes (10-30) at 80% and 100% connectivity; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to number of nodes on profit: profit increases as the number of nodes increases.
Exp #2 : Sensitivity to Number of Nodes - Link Utilization S.D.
[Charts: link utilization S.D. vs. number of nodes (10-30) at 40% and 60% connectivity; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to number of nodes on link utilization S.D.: link utilization S.D. decreases as the number of nodes increases.
Exp #2 : Sensitivity to Number of Nodes – Link Utilization S.D. (Cont’d)
[Charts: link utilization S.D. vs. number of nodes (10-30) at 80% and 100% connectivity; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to number of nodes on link utilization S.D.: link utilization S.D. decreases as the number of nodes increases.
Exp #2 : Sensitivity to Number of Nodes - P.U. Ratio
[Chart: P.U. ratio vs. number of nodes (10-50); Heuristic vs. OSPF.]
Sensitivity to number of nodes on P.U. ratio: the P.U. ratio increases as the number of nodes increases.
Result of Experiment #3 :
Parameters Setup: 200 requests per node; connectivity 20%-100%; 10-30 nodes
Objective: Observation of metrics:
Profit Link Utilization S.D. P.U. Ratio
Exp #3 : Sensitivity to Connectivity - Profit
[Charts: profit vs. connectivity (20%-100%) with 15 and 20 nodes; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to connectivity on profit: profit increases as connectivity increases.
Exp #3 : Sensitivity to Connectivity – Profit (Cont’d)
[Charts: profit vs. connectivity (20%-100%) with 25 and 30 nodes; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to connectivity on profit: profit increases as connectivity increases.
Exp #3 : Sensitivity to Connectivity – Link Utilization S.D.
[Charts: link utilization S.D. vs. connectivity (20%-100%) with 15 and 20 nodes; GPPA vs. OSPF; 200 requests/node.]
Sensitivity to connectivity on link utilization S.D.: link utilization S.D. decreases as connectivity increases.
Exp #3 : Sensitivity to Connectivity – Link Utilization S.D. (Cont'd)
[Charts: 25 nodes and 30 nodes, link utilization S.D. vs. connectivity (20%-100%), GPPA vs. OSPF; 200 Requests/Node]
Sensitivity to connectivity on link utilization S.D.: link utilization S.D. decreases as connectivity increases.
Exp #3 : Sensitivity to Connectivity – P.U. Ratio
[Chart: effect of connectivity, P.U. ratio (profit density) vs. connectivity (20%-80%), Heuristic vs. OSPF]
Sensitivity to connectivity on the P.U. ratio: the P.U. ratio increases as connectivity increases.
Result of Experiment #4
Parameter setup: 4000 requests; connectivity 20%; 20 nodes; forecast error 10%, 20%, 30%, …, 100%
Objective: observe the metrics:
Profit, Link Utilization S.D., P.U. Ratio
Exp #4 : Sensitivity to forecast error - Profit
[Chart: effect of forecast error, profit vs. demand error (1%-90%), Heuristic vs. OSPF]
Sensitivity to forecast error on profit: profit becomes unstable as the forecast error increases.
Exp #4 : Sensitivity to forecast error - P.U. Ratio
[Chart: effect of forecast error, P.U. ratio (profit density) vs. demand error (1%-90%), Heuristic vs. OSPF]
Sensitivity to forecast error on the P.U. ratio: the P.U. ratio becomes unstable as the forecast error increases.
Summary
Experiment #1: Profit and the P.U. ratio increase as the number of nodes or the connectivity increases.
Experiment #2: As the number of nodes increases, profit and the P.U. ratio increase under both algorithms, but G.P.P.A. increases at a higher rate than OSPF.
Summary
Experiment #3: As the connectivity increases, profit and the P.U. ratio increase under both algorithms, but the heuristic increases at a higher rate than OSPF.
Experiment #4: As the forecast error ratio increases, OSPF makes more profit than G.P.P.A., but G.P.P.A.'s P.U. ratio remains better than OSPF's.
Conclusion and Future Work
Path planning can maximize profit under the same network resources.
G.P.P.A. can decrease the probability of link resource exhaustion.
Future work: implement our path planning algorithm in NCTUNS or NS2; set link characteristics.
Session 7
Distributed Resource Management and Admission Control in Budget-Based QoS Management for All-IP Core Networks
(預算法全 IP 核心網路服務品質管理之分散式資源管理與允入控制)
Advisor: 連耀南
Student: 陳明志
Mobile Communication Laboratory, National Chengchi University
Outline
Resource Pre-Planning
Demand Forecast
Resource Pre-Planning Problem
CETP Resource Management Problem
Proposed Solution
Performance Evaluation
Conclusion and Future Work
Resource Pre-Planning
BOA orders resource from BB in advance
PPA plans paths using the available link resource
How much resource should BOA order in advance? => forecast the demand
Traffic Assumptions
Periodical traffic pattern
Traffic demand can be forecast from historical data
Demand Forecast
Historical request data of the Ingress Router
Path-based demands are transformed into link-based demands
Terms: Length of Time Period; Concerned Time Period (CTP); Reference Time Period (RTP)
Length of Time Period
The operator may change the length of the time period according to the variance in traffic demand
Use a shorter time period for traffic demand with larger variance
Concerned Time Period (CTP)
The time period concerned by the current resource planning process
Refer to all RTPs and make the decision
Reference Time Period (RTP)
Historical data of traffic demand
The same time slot on the same weekday of different weeks (e.g., every Monday 8-9 AM)
Historical Traffic Pattern
List of all RTP patterns of a certain link
[Chart: bandwidth (Mbps) over the hours of the day, one curve per day (day1 - day4)]
Historical Data of the CTP
Bandwidth requests of the RTPs for a certain CTP
Request Distribution
Request distribution of the RTPs for a certain CTP
Problem of Resource Pre-Planning
Forecast errors may occur at run time
Inaccurate planning wastes bandwidth or forces on-demand bandwidth orders
Optimal Request Bandwidth
Given the traffic pattern and demand distribution of a certain link, find the best bandwidth request value for the link
Optimal Request Bandwidth (cont'd)
Optimization Model for Request Bandwidth
The expected bandwidth cost of the CTP is the sum of the pre-order cost and the expected on-demand cost
Minimize the expected cost of the CTP
Parameter List
C1: pre-order request unit cost
C2: on-demand request unit cost
i: bandwidth demand at the CTP
Pi: probability of demanded bandwidth i
E{C(θ)}: expected bandwidth cost
θ: optimal pre-order bandwidth
Mathematical Model
Continuous form:
E{C(θ)} = C1·θ + C2·∫_θ^n (i − θ)·P(i) di
Discrete form:
E{C(θ)} = C1·θ + C2·Σ_{i=θ}^{n} (i − θ)·Pi
Cost figure
Optimal Condition for Continuous Form
E{C(θ)} = C1·θ + C2·∫_θ^n (i − θ)·P(i) di
        = C1·θ + C2·[ ∫_θ^n i·P(i) di − θ·∫_θ^n P(i) di ]
dE{C(θ)}/dθ = C1 − C2·∫_θ^n P(i) di
d²E{C(θ)}/dθ² = C2·P(θ) > 0, so the stationary point is a minimum
Optimal Condition for Continuous Form (cont'd)
The optimal request bandwidth θ can be found from the equation:
∫_θ^n P(i) di = C1 / C2
Optimal Condition for Discrete Form
Set the bandwidth scale
Find the optimal value with an existing search algorithm
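The discrete search can be sketched as follows, using the expected-cost model E{C(θ)} = C1·θ + C2·Σ_{i>θ}(i − θ)·Pi; the demand distribution, prices, and function names here are illustrative assumptions, not values from the thesis:

```python
def expected_cost(theta, c1, c2, dist):
    # E{C(theta)} = C1*theta + C2 * sum over demands i > theta of (i - theta)*Pi
    on_demand = sum((i - theta) * p for i, p in dist.items() if i > theta)
    return c1 * theta + c2 * on_demand

def optimal_preorder(c1, c2, dist):
    # Bandwidth scale = the demand levels themselves (plus 0);
    # exhaustive search is enough at this granularity.
    candidates = [0] + sorted(dist)
    return min(candidates, key=lambda t: expected_cost(t, c1, c2, dist))

# Illustrative link-demand distribution (Mbps -> probability):
dist = {10: 0.2, 20: 0.5, 30: 0.3}
theta_opt = optimal_preorder(c1=1.0, c2=3.0, dist=dist)  # pre-orders 20 Mbps
```

With C1/C2 = 1/3, the search lands where the tail probability drops to C1/C2, which matches the continuous optimal condition above.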
Charge Rate and Pre-Order Bandwidth
The operator decides the charge rate (C2/C1)
C2 ≤ C1: no pre-order bandwidth (on-demand per-flow requests)
C2 > C1: the higher the charge rate, the more pre-order bandwidth
Charge Rate and Pre-Order Bandwidth (cont’)
Solutions for Resource Shortage
Order bandwidth on demand at run time
Order upgraded resource to fill the planned quantity
Pre-Order Examination Procedure
Admission Control Procedure
FCFS
On insufficient resource: downgrade the request, or reject it
Current Execution Time Period (CETP)
Execution time of the planned CTP
Resource consumption may be stable or unstable
Needs resource management
CETP Resource Management
Set up the resource level according to the remaining time or the traffic distribution
Examine the resource level at each check point (the time between two check points ≥ the on-demand order latency)
Solutions for Resource Shortage at CETP
Resource reallocation
Downgrade traffic request
On-demand batch order
Resource Reallocation
When the bandwidth of some path is insufficient, PPA checks the links along the path to locate the links with insufficient bandwidth
Reallocate link bandwidth from other paths that have surplus bandwidth
Downgrade Traffic Request
When the available resource of the demanded quality is not enough, downgrade the traffic request to a substitute quality
A reasonable downgrade rule must be defined
e.g., in DiffServ, degrade the AF1 service class to the AF2 service class
On-Demand Batch Request
BB reserves some bandwidth of the links (Central Pool, or the ACA send-back function)
ACA checks the in-hand resource level at each check point
On-Demand Batch Request (cont’)
CETP Resource Management Optimization Model
Given the request distribution, the unit rejecting penalty, and the unit admitting revenue, find the best resource level y*
Maximize the expected profit
Parameter List
Pt(D): PDF of traffic demand D at time t
x: current in-hand resource
y: resource level
y*t: optimal resource level at time t
p: unit rejecting penalty
r: unit admitting revenue
C2: on-demand unit cost
E{B(x)}: expected profit of the Current Execution Time Period
CETP Resource Management Mathematical Model
Continuous form:
E{B(x)} = −C2·(y − x) + ∫_0^y r·D·Pt(D) dD + ∫_y^n [ r·y − p·(D − y) ]·Pt(D) dD
Discrete form:
E{B(x)} = −C2·(y − x) + Σ_{D=0}^{y} r·D·Pt(D) + Σ_{D=y}^{n} [ r·y − p·(D − y) ]·Pt(D)
Optimal Condition (cont'd)
The optimal resource level y* at time t can be found from the equation:
(p + r − C2) / (p + r) = ∫_0^{y*} Pt(D) dD
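This optimal condition is a critical-fractile rule, so for a discrete demand distribution y* can be read directly off the cumulative distribution. A sketch under an assumed discrete distribution (all numbers and names are illustrative):

```python
def optimal_resource_level(p, r, c2, dist):
    # Smallest resource level y whose CDF reaches the critical fractile
    # (p + r - C2) / (p + r) from the optimal condition.
    fractile = (p + r - c2) / (p + r)
    cdf = 0.0
    for level in sorted(dist):
        cdf += dist[level]
        if cdf >= fractile:
            return level
    return max(dist)

# Illustrative demand distribution at one check point (units -> probability):
dist = {10: 0.1, 20: 0.3, 30: 0.4, 40: 0.2}
y_star = optimal_resource_level(p=2.0, r=3.0, c2=1.0, dist=dist)  # 30
```

A higher rejecting penalty p or admitting revenue r pushes the fractile toward 1 and therefore raises the resource level kept in hand.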
Performance Evaluation
Performance Evaluation Metrics
Experiment Design
Experiments and Results
Summary
Performance Evaluation Metrics
Resource pre-planning:
Management cost (pre-order cost and on-demand cost)
Admission profit
Performance Evaluation Metrics (cont'd)
CETP resource management:
Ratio of fully-satisfied traffic requests
Ratio of partially-satisfied traffic requests
Ratio of rejected traffic requests
Experiment Design
Experiment environment
Simulation: Network Simulator ns-2 on FreeBSD
Experiment Design (cont'd)
Test instance generation
Traffic request generation: traffic types CBR and exponential; vary the traffic request generating distribution
Network topology: single-path topology, to focus on the result of resource management
Real Time Simulation Process
Topology and Run Time Snap Shot
Experiments
Exp 1: pre-planning experiments
Exp 2: CETP resource management experiments
Exp 1: Pre-Order Experiment Design
Parameter setup: traffic load (4 distribution sets); charging rate (C2/C1); traffic type
Objective: observation metrics: management cost and profit
Experiment sets:
Exp 1-1: sensitivity to traffic load
Exp 1-2: sensitivity to charge rate
Exp 1-3: sensitivity to traffic type
Exp 1: Pre-Order Experiment Design (cont'd)
4 distribution sets
Result of Optimal Request Value of CBR Exp (Exp 1-1)
Resource planning policies: mean, theta, on-demand
Profit of Optimal Request Value of CBR Exp (cont'd)
Profit analysis
Summary of Optimal Request Value of CBR Exp (cont'd)
In the set 1 test, the profit of the "mean" policy is the highest
As the forecast error rises (set 2 - set 4), the profit of the "θ" policy is higher than that of the other two policies
Theta Value under Different Charge Rates (C2/C1) (Exp 1-2)
Request distribution: mean = 50, range = 100
Charge rate: from 2 to 9
Result of Optimal Request Value under Different Charge Rates
Charge rate (C2/C1): 2, 3, 4
Profit of Optimal Request Value with Different Charge (C2/C1) Rates
Summary of Different Charge (C2/C1) Rates (Exp 1-2)
Low fault tolerance at a low charge rate
At a high charge rate, BOA orders more bandwidth in advance => over-estimation
Result of Optimal Request Value of Exponential Exp (Exp 1-3)
Resource planning policies: mean, theta, on-demand
Result of Optimal Request Value Performance of Exponential Exp (cont'd)
Profit analysis
Summary of Optimal Request Value of Exponential Exp (cont'd)
In the set 1 test, the profit of the "mean" policy is still the highest
As the forecast error rises (set 2 - set 4), the profit of the "θ" policy is higher than that of the other two policies
Exp 2: CETP Resource Management Experiment Design
Parameter setup: traffic load (5 distribution sets); traffic type
Objective: observation metrics:
Ratio of fully-satisfied traffic requests
Ratio of partially-satisfied traffic requests
Ratio of rejected traffic requests
Experiment sets:
Exp 2-1: sensitivity to traffic load
Exp 2-2: sensitivity to traffic type
Exp 2: CETP Resource Management Experiment Design (cont'd)
5 distribution sets
Ratio of Fully-Satisfied Requests (CBR)
a) No management  b) CETP resource management
CETP resource management improves the fully-satisfied ratio under forecast error
Ratio of Partially-Satisfied Requests (CBR)
a) No management  b) CETP resource management
CETP resource management reduces the partially-satisfied ratio under forecast error
Ratio of Rejected Requests (CBR)
a) No management  b) CETP resource management
CETP resource management reduces the rejection ratio under forecast error
Exp 2-1: Summary
CETP resource management reduces the rejection ratio and the partially-satisfied ratio
The rejection ratio still rises under higher forecast error
Ratio of Fully-Satisfied Requests (Exponential)
a) No management  b) CETP resource management
CETP resource management improves the fully-satisfied ratio under forecast error
Ratio of Partially-Satisfied Requests (Exponential)
a) No management  b) CETP resource management
CETP resource management reduces the partially-satisfied ratio under forecast error
Ratio of Rejected Requests (Exponential)
a) No management  b) CETP resource management
The improvement between a) and b) is not obvious
Exp 2-2: Summary
CETP resource management mainly reduces the partially-satisfied ratio
Exp 2: Summary
CETP resource management reduces the rejection ratio and the partially-satisfied ratio
The traffic type affects the management result
Conclusion and Future Work
Resource management:
Resource management makes more profit under a certain forecast error
The traffic type affects resource management utilization
The parameters affect the management policies
Future work:
Multi-class resource planning
Modify the resource planning model to fit different traffic types
Session 8
Forecasting Error Tolerable Resource Allocation
Allocation Approaches
Approaches overview:
Resource Reallocation approach
Central Pool approach
Overbook approach
Hybrid approaches:
Central Pool & Resource Reallocation
Overbook & Resource Reallocation
Forecasting Error Tolerable Resource Allocation
Central Pool Approach
Central Pool Approach
A portion of the resource is reserved in the Central Pool
Reserved resource is allocated on demand in real time
Too much reservation may result in too much processing overhead
Too little reservation may result in poor performance
Central Pool (1)
Central Pool (2)
Optimization Objective
Find the committed bandwidth level (resource reservation amount, τ) that maximizes the expected system profit
Symbol Table
θ: required total bandwidth
τ: committed total bandwidth
C1: per-unit committed bandwidth
C2: per-unit on-demand allocated bandwidth without pre-order
C1′: per-unit on-demand allocated bandwidth with pre-order
P: expected profit
pi: potential probability of incoming traffic i
N: maximum possible incoming traffic
Demanded Bandwidth Distribution, Booking Level & Pricing
C1′ < C1 < C2
Central Pool Optimization Model (1)
Pre-allocated bandwidth profit: C1·τ
Expected real-time on-demand allocated bandwidth profit:
Σ_{i=τ}^{θ} pi·C1′·(i − τ) + Σ_{i=θ}^{N} pi·[ C1′·(θ − τ) + C2·(i − θ) ]
Central Pool Optimization Model (2)
Total expected profit:
P = C1·τ + Σ_{i=τ}^{θ} pi·C1′·(i − τ) + Σ_{i=θ}^{N} pi·[ C1′·(θ − τ) + C2·(i − θ) ]
The complexity of the equation above is O(τ²), which can be solved by a numeric approach.
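The numeric approach mentioned above amounts to evaluating the total expected profit for every candidate committed level τ and keeping the best one. A sketch with hypothetical prices (C1′ < C1 < C2) and a made-up demand distribution; the three cases follow our reading of the model (demand covered by committed bandwidth, by the Central Pool reservation, and by on-demand orders):

```python
def expected_profit(tau, theta, c1, c1p, c2, dist):
    # P = C1*tau                                        (committed bandwidth)
    #   + sum_{tau < i <= theta} pi * C1' * (i - tau)   (central pool)
    #   + sum_{i > theta} pi * (C1'*(theta - tau) + C2*(i - theta))
    profit = c1 * tau
    for i, pi in dist.items():
        if tau < i <= theta:
            profit += pi * c1p * (i - tau)
        elif i > theta:
            profit += pi * (c1p * (theta - tau) + c2 * (i - theta))
    return profit

def best_reservation(theta, c1, c1p, c2, dist):
    # Numeric search over all committed levels 0..theta.
    return max(range(theta + 1),
               key=lambda tau: expected_profit(tau, theta, c1, c1p, c2, dist))

# Hypothetical demand probabilities (bandwidth units -> probability):
dist = {1: 0.2, 2: 0.3, 3: 0.3, 4: 0.2}
p_at_2 = expected_profit(2, 3, 2.0, 1.0, 3.0, dist)  # ~5.1
```

The search is a direct enumeration, matching the stated O(τ²) complexity for a distribution whose support grows with τ.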
Forecasting Error Tolerable Resource Allocation
Overbook Approach
Overbook Approach
The Overbook approach commits more resource than is available
The Overbook approach may admit too much traffic
Too much admitted traffic results in performance degradation
Overbook (1)
Overbook (2)
Overbook (3)
Traffic Distribution (1)
Demanded bandwidth distribution of a link of an Ingress Router
Traffic Distribution (2)
Admitted bandwidth distribution of a link of an Ingress Router, given the committed bandwidth
Traffic Distribution (3)
Demanded bandwidth distribution of a link
Traffic Demand Distribution (4)
Admitted bandwidth distribution of a link, given the committed bandwidth
Optimization Objective
Find the committed bandwidth level (overbook amount, τ) that maximizes the expected system profit
The demanded resource distribution is assumed to be continuous in the following optimization model deduction
The optimization model using a discrete distribution is also introduced
Symbol Table
θ: required total bandwidth
τ: committed total bandwidth
m1: per-unit overbooked bandwidth
m2: per-unit packet loss penalty
R: net profit
p(b): potential probability of incoming traffic b
N: maximum possible incoming traffic
h(b): packet loss function of committed bandwidth b
C: link capacity
Total Probability of Admitted Bandwidth over the Booking Level
The whole area under the distribution curve between the booking level (τ) and N
Probability of Admitted Bandwidth at the Booking Level
Sum of the probability of incoming traffic b and the total probability of admitted bandwidth over the booking level
Profit of Overbooked Bandwidth
Additional profit due to overbooking
Possible Penalty of Overbooked Bandwidth
Optimization Model
R = m1·(τ − C) − m2·[ ∫_C^τ p(b)·h(b) db + h(τ)·∫_τ^N p(b) db ]
For L(b) = p(b)·h(b):
R = m1·(τ − C) − m2·∫_C^τ L(b) db − m2·h(τ)·∫_τ^N p(b) db
The optimal booking level can be located after the first differentiation:
R′ = m1 − m2·L(τ) − m2·h′(τ)·∫_τ^N p(b) db + m2·h(τ)·p(τ)
Since h(τ)·p(τ) = L(τ), the two L(τ) terms cancel; setting R′ = 0:
m1 = m2·h′(τ)·∫_τ^N p(b) db
Optimal Booking Level
m1 / m2 = h′(τ)·∫_τ^N p(b) db
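One way to read the optimal booking level: raise τ while the marginal overbook profit m1 still exceeds the marginal expected loss penalty m2·h′(τ)·∫_τ^N p(b) db. For a discrete demand distribution this can be solved by grid search over the net profit R directly; the loss function h and every number below are illustrative assumptions (the slides do not fix a particular h):

```python
def net_profit(tau, m1, m2, capacity, dist, h):
    # Discrete counterpart of
    # R = m1*(tau - C) - m2*[ sum_{C < b <= tau} p(b)*h(b) + h(tau)*P(b > tau) ]
    overload = sum(p * h(b) for b, p in dist.items() if capacity < b <= tau)
    tail = sum(p for b, p in dist.items() if b > tau)
    return m1 * (tau - capacity) - m2 * (overload + h(tau) * tail)

def best_booking_level(m1, m2, capacity, dist, h):
    # Grid search over candidate booking levels at and above capacity.
    candidates = [capacity] + [b for b in sorted(dist) if b > capacity]
    return max(candidates,
               key=lambda t: net_profit(t, m1, m2, capacity, dist, h))

# Illustrative convex loss: congestion penalty accelerates past capacity.
h = lambda b: max(0, b - 50) ** 2 / 20
dist = {40: 0.2, 50: 0.2, 60: 0.3, 70: 0.2, 80: 0.1}
tau_star = best_booking_level(m1=3.2, m2=10.0, capacity=50, dist=dist, h=h)
# books 60 units, above the capacity of 50
```

A convex h is what makes an interior optimum exist here: with a linear loss the derivative condition has no stable crossing point, so the loss must accelerate with overload for overbooking to stop paying off.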
Performance Evaluation
Performance Metrics
Traffic acceptance rate
System profit
Experiments
Exp 1: sensitivity to distribution deviation (observing the traffic acceptance rate)
Exp 2: allocation approach test (observing the system profit)
Parameters
5 Ingress Routers
Bandwidth per flow: 448 kbps
Path capacity: 50 flows
Distribution: 0~20 flows; mean: 10 flows
Required bandwidth θ: 10 flows
Exp 1. Sensitivity to distribution deviation Central Pool (1)
[Chart: traffic acceptance rate (%) vs. resource reservation percentage (0%-50%), curves Dev 1 - Dev 5]
The improvement in traffic acceptance rate is proportional to the resource reservation percentage.
Exp 1. Sensitivity to distribution deviation Central Pool (2)
[Chart: traffic acceptance rate difference (%) vs. resource reservation percentage (10%-50%), curves Dev 1 - Dev 5]
The improvement in the traffic acceptance rate difference is directly proportional to the resource reservation percentage.
Exp 1. Sensitivity to distribution deviation Central Pool (3)
[Chart: traffic acceptance rate difference (%) vs. resource reservation percentage (10%-50%), curves Dev 1 - Dev 4]
The improvement in the traffic acceptance rate difference is inversely proportional to the resource reservation percentage.
Exp 1. Sensitivity to distribution deviation Overbook (1)
[Chart: traffic acceptance rate (%) vs. overbook percentage (0%-50%), curves Dev 1 - Dev 5]
The improvement in traffic acceptance rate is proportional to the overbook percentage.
Exp 1. Sensitivity to distribution deviation Overbook (2)
[Chart: traffic acceptance rate difference (%) vs. overbook percentage (10%-50%), curves Dev 1 - Dev 5]
The improvement in the traffic acceptance rate difference is directly proportional to the overbook percentage.
Exp 1. Sensitivity to distribution deviation Overbook (3)
[Chart: traffic acceptance rate difference (%) vs. overbook percentage (10%-50%), curves Dev 1 - Dev 4]
The improvement in the traffic acceptance rate difference is inversely proportional to the overbook percentage.
Allocation approach test (1)
[Chart: profit vs. reservation/overbook percentage (10%-50%), curves Basic, CP, OB]
The improvement in system profit of the Central Pool approach is more stable than that of the Overbook approach.
Allocation approach test (2)
[Chart: additional profit vs. reservation/overbook percentage (10%-50%), curves CP, OB]
The additional system profit of the Central Pool approach declines steadily, while that of the Overbook approach rises slowly and falls quickly.
Summary
The Central Pool approach is recommended when the traffic deviation is small.
The Overbook approach is recommended when the traffic deviation is large.
The Central Pool approach works better at a low Central Pool reservation.
The Overbook approach works better at a high overbook percentage.
Conclusion & Future Work
Resource allocation approach:
A forecast-error-tolerable resource allocation approach is needed in a large-scale end-to-end QoS architecture.
Future work:
Hybrid approach: Central Pool & Overbook
Thanks!
NCCU Mobile Communication Laboratory