PIT and Cache Dynamics in CCN
Objects & Conclusion
• Cache-PIT dynamics in CCN
• Both caching and PIT aggregation save bandwidth
• As network load increases, the (bandwidth-saving) gains from cache hits diminish and are replaced by gains from PIT aggregation, without dramatically impacting the overall level of savings achieved
FIB
• CCN-Publication approach:
– Objects are published before any Interests are issued (FIBs pre-populated along the shortest path)
– Interests follow the shortest path to the object
• Exploration approach:
– FIBs are only populated when an Interest packet is satisfied
– Interests are flooded on all interfaces if there is no FIB entry for the object
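The Exploration approach can be sketched as follows; the `Node` class, its fields, and the interface handling are illustrative assumptions, not the paper's simulator.

```python
# Sketch of Interest forwarding under the Exploration approach.
# All names (Node, fib, interfaces) are illustrative assumptions.

class Node:
    def __init__(self, name):
        self.name = name
        self.fib = {}          # object name -> outgoing interface
        self.interfaces = []   # interfaces to neighbouring nodes

    def forward_interest(self, obj, in_iface):
        """Follow the FIB if an entry exists, otherwise flood."""
        if obj in self.fib:
            return [self.fib[obj]]     # single FIB-defined route
        # No FIB entry: flood on all interfaces except the incoming one
        return [i for i in self.interfaces if i != in_iface]

    def on_data(self, obj, from_iface):
        """FIB entries are only populated when an Interest is satisfied."""
        self.fib[obj] = from_iface
```

Once a Data packet arrives and `on_data` records its incoming interface, subsequent Interests for the same object follow that single FIB route instead of flooding.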
Other Settings
• Cache replacement policy is LRU
• Interest and Data packets are served in first-come-first-served order
• Processing delay for each packet is a constant 0.1 time units
• Transmission links have infinite bandwidth
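A minimal content store with the stated LRU replacement policy can be sketched like this (the `ContentStore` name and capacity handling are assumptions for illustration):

```python
from collections import OrderedDict

# Minimal LRU content store, matching the replacement policy above.
# Class and method names are illustrative, not from the paper.
class ContentStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # object name -> Data packet

    def get(self, name):
        if name not in self.store:
            return None                  # cache miss
        self.store.move_to_end(name)     # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```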
Topologies
Synthetic: Core & Edge topologies
Real-world: PoP-level ISP maps (no core-edge separation)
Topologies
• centralized Degree Centrality (cDC)
• centralized Stress Centrality (cSC)
• centralized Betweenness Centrality (cBC)
• Close to 0: nodes have similar values of the corresponding vertex centrality
• Close to 1: significant imbalance, with a few nodes having much higher corresponding vertex centrality
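A common way to obtain such a 0-to-1 imbalance score is Freeman-style centralization: sum each node's shortfall from the maximum centrality and normalize by the value a star graph would attain. The paper does not give its exact normalization, so the formula below is an assumption, shown here for degree centrality:

```python
# Freeman-style degree centralization: 0 when all nodes have the same
# degree, 1 for a star graph. The normalization is an assumption; the
# paper does not state its exact formula.

def degree_centralization(adj):
    """adj: dict mapping node -> set of neighbours."""
    n = len(adj)
    degrees = [len(nbrs) for nbrs in adj.values()]
    c_max = max(degrees)
    numerator = sum(c_max - d for d in degrees)
    # (n-1)*(n-2) is the numerator attained by a star graph
    return numerator / ((n - 1) * (n - 2))

# Star: one hub connected to every other node -> maximal imbalance
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
# Ring: every node has the same degree -> no imbalance
ring = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
```

The same scheme applies to stress and betweenness centrality by swapping in the corresponding per-node centrality values.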
Traffic workload
Interest Packet Generation
• PIT aggregation occurs when an Interest packet requesting an object is suppressed at an intermediate node because there is already a pending PIT entry generated by a previous Interest packet for that same object.
• Generate Interest packets at a high rate so that more Interest packets overlap within a single RTT
• Poisson arrivals:
– mean of the exponential (EM) = 0.025 (congested)
– EM = 1.0 (non-congested)
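A Poisson arrival process means exponentially distributed gaps between Interests; with EM = 0.025 the gaps are tiny, so many Interests for the same object overlap within one RTT. A minimal sketch of the generator (function name and seed handling are illustrative):

```python
import random

# Interest arrival times with exponentially distributed inter-arrival
# gaps (a Poisson arrival process). EM is the mean gap: 0.025 for the
# congested scenario, 1.0 for the non-congested one.

def interest_arrival_times(em, n, seed=0):
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / em)   # exponential with mean `em`
        times.append(t)
    return times

congested = interest_arrival_times(0.025, 1000)   # ~40 Interests/time unit
relaxed = interest_arrival_times(1.0, 1000)       # ~1 Interest/time unit
```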
Results
• Path Stretch = {total number of hops that a packet travels in the network} ÷ {number of hops along the shortest-path route between the node originating an Interest packet and the node which owns the Data object being requested}
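The definition above can be computed directly; BFS gives the shortest-path denominator on an unweighted topology. The 4-node line topology here is purely illustrative:

```python
from collections import deque

# Path Stretch = (hops a packet actually travels) divided by
# (shortest-path hops between the requester and the object's owner).

def shortest_hops(adj, src, dst):
    """Hop count of the shortest path on an unweighted topology (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def path_stretch(hops_travelled, adj, requester, owner):
    return hops_travelled / shortest_hops(adj, requester, owner)

# Illustrative line topology A - B - C - D; requester A, owner D.
line = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```

A cache hit at B satisfies A's Interest after 1 hop instead of 3, giving a Path Stretch of 1/3, i.e. bandwidth saved relative to fetching from the owner.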
PIT Aggregation: Topological Considerations(Set 1, no caching)
PIT Aggregation: Topological Considerations
• 39-node has the highest cBC value (0.41), reflecting an imbalance in which a small subset of central nodes carries a high proportion of the shortest paths.
• 39-node also exhibits core-edge separation; when the network is congested, these core nodes act as effective points for PIT aggregation.
PIT Aggregation: Topological Considerations
• Lower Path Stretch, used as a direct measure of bandwidth expenditure and savings, can be misleading.
• 70-node has a diameter of only 3 and many nodes with significant degree (cDC = 0.7886)
• Requests satisfied by "owning" nodes only one hop away from the requesting nodes nevertheless contribute a Path Stretch of 1
PIT aggregation & non-congested scenario
• The bars in the figure corresponding to EM=0.025 confirm that 39-node saves the most hops, followed by 51-node, 70-node, and 100-node, in that order, for all three workload sets.
• Note, incidentally, that the bars corresponding to EM=1.0 (non-congested scenario) show virtually no savings from PIT aggregation.
Cache-PIT Interplay
Cache & PIT
• As cache size grows, the number of hops saved by PIT aggregation decreases, while the number of hops saved by cache hits increases.
• Increasing cache size does not dramatically improve overall savings: hops saved by PITs are replaced by hops saved by caches. Better-provisioned caches increase the chances of locating the desired object through a cache hit, which simultaneously obviates the possibility of having a pending PIT request at that node for the object.
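The substitution effect described above follows from the processing order at a node: the content store is checked before the PIT, so a cache hit satisfies the Interest on the spot and no pending PIT entry is ever created for later Interests to aggregate against. A sketch of that pipeline (class and function names are illustrative, not the paper's simulator):

```python
# Sketch of the Interest pipeline at a CCN node, showing why a cache
# hit obviates PIT aggregation. All names are illustrative assumptions.

class CcnNode:
    def __init__(self):
        self.cache = {}   # content store: object name -> Data
        self.pit = {}     # PIT: object name -> set of requesting interfaces

def on_interest(node, obj, in_iface):
    # 1. Content store: a hit satisfies the Interest immediately,
    #    so no PIT entry is created for this object at this node.
    if obj in node.cache:
        return ("data", node.cache[obj])     # hops saved by caching
    # 2. PIT: a pending entry means an earlier Interest for the same
    #    object is already in flight, so this one is suppressed.
    if obj in node.pit:
        node.pit[obj].add(in_iface)
        return ("aggregated", None)          # hops saved by the PIT
    # 3. Neither: record a PIT entry and forward the Interest upstream.
    node.pit[obj] = {in_iface}
    return ("forward", None)
```

With a larger cache, step 1 fires more often and step 2 correspondingly less, which is exactly the trade-off observed in the results.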
Exploration approach vs CCN-Publication approach
• The Exploration approach accounts for the bandwidth costs incurred in populating the FIBs
• Comparison with CCN-Publication therefore necessitates that we also account for the cost of prepopulating FIBs in the latter
• Only one named object is advertised per Publication packet
Expended Effort
• Measures the Path Stretch for an Interest packet, inclusive of its flooded "clones"
• If the packet follows a FIB-defined route right from the source node and does not need to be retransmitted, its Expended Effort is 1 (as in an IP network). Values greater than 1 reflect extra bandwidth consumed, expressed as multiples of the bandwidth expenditure of the end-to-end shortest path.
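Under the definition above, Expended Effort can be computed by summing the hops travelled by the Interest and every flooded clone, then dividing by the shortest-path hop count (the function name and argument shapes are illustrative assumptions):

```python
# Expended Effort as defined above: total hops travelled by an Interest
# and all of its flooded clones, divided by the shortest-path hop count.

def expended_effort(clone_hop_counts, shortest_path_hops):
    return sum(clone_hop_counts) / shortest_path_hops
```

A single copy travelling the 4-hop shortest path scores 1.0, while flooding that spawns clones travelling 2, 3 and 4 hops against the same 4-hop shortest path scores (2+3+4)/4 = 2.25, i.e. 2.25 times the shortest-path bandwidth.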