
48. DFN-Betriebstagung (Berlin, 27.02.2008)

Ethernet for MAN/WAN (Next Generation Ethernet)

Göran Friedl, [email protected], Network Consultant, Nortel GmbH


(Ethernet) Next Gen Wish List

Ethernet as used today is lowest cost but has several challenges:
• Service scalability
• Customer segregation
• Traffic engineering
• Spanning Tree challenges:
  • Stranded bandwidth
  • Poor convergence
• MAC explosions
• Security

Wish list:
• Cost per Mbps / CAPEX, OPEX
• Scalability
• Service Flexibility
• Network Resiliency
• Customer Segregation
• Security
• Ease of OAM
• CoS/QoS
• Traffic Engineering
• Resource Reservation


Introducing Provider Backbone Bridges (PBB)

• IEEE 802.1ah (aka MAC-in-MAC) is a standard
• Driven by Nortel and Cisco, supported by other vendors
• Enables millions of service instances
• Improves security and eases operations

[Figure: 802.1ad Provider Bridge Networks attach via 802.1ad interfaces to an 802.1ah Provider Backbone Bridge Network; 802.1ah interfaces connect the backbone bridges.]

Frame format (MAC-in-MAC encapsulation):
• Customer frame: DA | SA | S-VID | C-VID | Payload
• Backbone encapsulation added in front: B-DA | B-SA | B-VID | I-SID
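To make the encapsulation concrete, here is a minimal Python sketch of the field layout; it is an illustration, not Nortel's implementation. The field widths (48-bit MACs, 12-bit VLAN IDs, 24-bit I-SID) follow the standard, and the class names are chosen for this example.

from dataclasses import dataclass

@dataclass
class CustomerFrame:
    """802.1ad (QinQ) customer frame as seen at the PBB edge."""
    c_da: int      # customer destination MAC (48 bits)
    c_sa: int      # customer source MAC (48 bits)
    s_vid: int     # service VLAN ID (12 bits)
    c_vid: int     # customer VLAN ID (12 bits)
    payload: bytes

@dataclass
class BackboneFrame:
    """802.1ah (MAC-in-MAC) frame: backbone header placed in front of the whole customer frame."""
    b_da: int      # backbone destination MAC (48 bits)
    b_sa: int      # backbone source MAC (48 bits)
    b_vid: int     # backbone VLAN ID (12 bits)
    i_sid: int     # service instance ID (24 bits -> up to ~16 million services)
    inner: CustomerFrame

def encapsulate(frame: CustomerFrame, b_da: int, b_sa: int, b_vid: int, i_sid: int) -> BackboneFrame:
    """Wrap a customer frame in a backbone header; the inner frame is left untouched."""
    assert 0 <= i_sid < 2**24 and 0 <= b_vid < 2**12
    return BackboneFrame(b_da=b_da, b_sa=b_sa, b_vid=b_vid, i_sid=i_sid, inner=frame)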


Provider Backbone Bridging Operation

[Figure: two PBB edge switches connected across core switches SW2 and SW3 over backbone VLAN 205. The edge switch with backbone MAC Z hosts customer MAC A (port 1/1); the edge switch with backbone MAC W hosts customer MACs B (port 2/1) and C (port 2/2).

Core bridge tables (B-VID 205) hold only backbone MACs: Z -> 1/1, W -> 2/1.

Virtual bridge for SID 100 at edge Z (C-VID 50): A -> 1/1, B -> B-MAC W, C -> B-MAC W.
Virtual bridge for SID 100 at edge W (C-VID 50): A -> B-MAC Z, B -> 2/1, C -> 2/2.

Example: a customer frame A -> C on C-VID 50 arriving at the UNI is encapsulated at edge Z with B-DA W, B-SA Z, B-VID 205 and I-SID 100, forwarded through the core on backbone MACs only, and decapsulated at edge W.]

Customer MAC learning happens only at edge nodes; provider network bridge tables remain small and stable.
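The essence of this edge behaviour can be sketched in a few lines of Python. This is illustrative only: class and table names are invented for the example, and the per-I-SID group address used for flooding is a simplification rather than the exact 802.1ah default backbone destination address.

class VirtualBridge:
    """Per-I-SID customer MAC table at a PBB edge switch (illustrative sketch)."""

    def __init__(self, i_sid):
        self.i_sid = i_sid
        # customer MAC -> ("local", port) or ("remote", peer backbone MAC)
        self.table = {}

    def learn_local(self, c_sa, port):
        self.table[c_sa] = ("local", port)

    def learn_remote(self, c_sa, b_sa):
        # Called when a decapsulated frame arrives from the backbone:
        # the inner source MAC is bound to the outer backbone source MAC.
        self.table[c_sa] = ("remote", b_sa)

    def forward(self, c_da):
        """Decide where a customer frame goes: a local port or a backbone destination."""
        entry = self.table.get(c_da)
        if entry is None or c_da == "ff:ff:ff:ff:ff:ff":
            # Unknown/broadcast: flood to all members of this service,
            # abstracted here as a per-I-SID backbone group address.
            return ("remote", f"group({self.i_sid})")
        return entry

# Matching the figure above: edge Z learns A locally and C against B-MAC W.
vb = VirtualBridge(i_sid=100)
vb.learn_local("MAC A", "1/1")
vb.learn_remote("MAC C", "B-MAC W")
assert vb.forward("MAC C") == ("remote", "B-MAC W")   # encapsulate toward edge W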


PBB: Solving Current Ethernet Challenges

Ethernet Challenges

• Service Scalability

• Customer Segregation

• Traffic engineering

• Spanning Tree challenges: • Stranded bandwidth • Poor convergence

• MAC explosions

• Security

Overlapping customer VLANs supported

Up to 16 million service instances using a 24-bit service ID

Customer MACs learned only at the edge

Core switches only learn UNI (Backbone MAC) addresses

Customer BPDUs transparently switched


Introducing Provider Backbone Transport

• P2P traffic-engineered tunnels based on existing Ethernet forwarding principles

• Simple Layer 2 networking technology
• Tunnels can be engineered for diversity, resiliency or load spreading
• 50 ms recovery with fast 802.1ag CFM OAM

[Figure: Ethernet metro with traffic-engineered PBT trunks carrying E-LINE and E-LAN services between edges.]


Ethernet Technology Comparison Table

Conventional Broadcast Ethernet
• Forwarding: MAC forwarding tables are used to forward packets on a per-hop basis. When the destination is unknown, packets are broadcast and the learning algorithm populates the forwarding tables.
• Resilience: Resilience in mesh networks is handled by the Spanning Tree Protocol – many clients are affected as STP re-converges. Simple aggregation networks can be protected by multi-link trunking or RPR between switches.

PBT-enabled Ethernet
• Forwarding: Forwarding tables are populated by a management tool. Packets are never broadcast.
• Resilience: PBT paths are protected by fast switching to an alternative pre-provisioned path. Only services explicitly impacted by the failure are affected.
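As a hedged illustration of what "management-populated forwarding, never broadcast, fast switch to a pre-provisioned backup" means in practice, the Python sketch below models a PBT-style core switch and tunnel head end. The class and table names are assumptions for this example, not vendor code; the identification of a tunnel by the (B-VID, B-DA) pair reflects how PBB-TE paths are generally described.

class PBTSwitch:
    def __init__(self, name):
        self.name = name
        # Static entries installed by the management/provisioning system:
        # (b_vid, b_da) -> outgoing port. There is no learning and no flooding.
        self.fib = {}

    def provision(self, b_vid, b_da, port):
        self.fib[(b_vid, b_da)] = port

    def forward(self, b_vid, b_da):
        port = self.fib.get((b_vid, b_da))
        if port is None:
            return None   # unknown tunnel: drop, never broadcast
        return port

class PBTTunnelEndpoint:
    """Head end that steers a service onto a working or pre-provisioned backup path."""

    def __init__(self, b_da, working_b_vid, backup_b_vid):
        self.b_da = b_da
        self.working = working_b_vid
        self.backup = backup_b_vid
        self.active = working_b_vid

    def on_cfm_timeout(self):
        # 802.1ag CCMs on the working path stopped arriving: switch to the
        # backup B-VID (same destination, different engineered path).
        self.active = self.backup

    def encapsulation(self):
        return (self.active, self.b_da)

# Both paths are provisioned up front; only the head end changes behaviour on failure.
core = PBTSwitch("SW2")
core.provision(b_vid=205, b_da="B-MAC W", port="2/1")   # working path entry
core.provision(b_vid=206, b_da="B-MAC W", port="3/1")   # backup path entry
ep = PBTTunnelEndpoint(b_da="B-MAC W", working_b_vid=205, backup_b_vid=206)
ep.on_cfm_timeout()
assert ep.encapsulation() == (206, "B-MAC W")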


PBT Value

PBT provides…
• Connection-oriented features:
  • Traffic engineering
  • Resiliency
  • QoS
  • Comprehensive OAM
  • New standards in IEEE and ITU
• Seamless interworking to an MPLS WAN
• Solutions to QinQ shortcomings (MAC explosions, service scalability etc.)

…at Ethernet costs
• Reuse existing (deployed) Ethernet technology
• Eliminate flooding and spanning tree
• Fraction of the cost of MPLS-enabled switches (CAPEX)
• No learning curve for your Metro operators (OPEX)


PBT: Solving Current Ethernet Challenges

Ethernet Challenges:

• Service Scalability

• Customer Segregation

• Traffic engineering

• Spanning Tree challenges: • Stranded bandwidth • Poor convergence

• MAC explosions

• Security

2^60 service scalability (60-bit B-MAC + B-VID tunnel identifier)

Full segregation in P2P model

End-to-end TE with QoS & 50 ms recovery

Disable STP; no blocked links; fast 802.1ag convergence

MAC Explosions Eliminated

Customer BPDUs transparently switched


Transport and Service Architecture: PBT becomes the common infrastructure

[Figure: MPLS services (RFC 2547 VPN, PWs etc.) and Ethernet services (EVPL, ELAN, ELINE, Multicast) both ride over a common PBT infrastructure.]

> Keep existing Ethernet, MPLS…FR/ATM…ANY & ALL services
> Capitalize on Ethernet as transport for significant savings
> Existing network-friendly solution!


Carrier Class Ethernet OAM: Fully Standards-Based

• Comprehensive OAM & OAM hierarchy (IEEE 802.1ag)
  • Fault detection & notification
  • Continuity check, loopback connectivity check, traceroute
• Service monitoring and performance (ITU Y.1730/31)
  • Frame delay, delay variation, frame loss
  • AIS
• Resiliency and protection switching (ITU G.8031)
• Link layer discovery (IEEE 802.1ab)

No need to involve a parallel MPLS control plane (avoiding possible misalignment/misconfiguration between control and data plane).


Ethernet OAM: IEEE 802.1ag & ITU-T Y.1730/31

1. Continuity Check (CC)
   a) Multicast/unidirectional heartbeat
   b) Usage: fault detection
2. Loopback – Connectivity Check
   a) Unicast bidirectional request/response
   b) Usage: fault verification
   c) Not implemented in the control plane
3. Traceroute (i.e., link trace)
   a) Trace nodes in the path to a specified target node
   b) Usage: fault isolation
   c) Traceroute is not available for MPLS PWs over MPLS tunnels
4. Alarm Indication Signal (AIS): under discussion in Y.17ethoam
   a) Propagate data path fault notifications
   b) Usage: alarm suppression
5. Discovery (not specifically supported by .1ag; however Y.17ethoam and 802.1ab support it)
   a) Service (e.g. discover all PEs supporting a common service instance)
   b) Network (e.g. discover all devices (PE and P) common to a domain)
6. Performance Monitoring (not specifically supported by .1ag; however Y.17ethoam supports it)
   a) Frame delay
   b) Frame delay variation
   c) Frame loss

Items highlighted (orange) in the original slide are not available in MPLS OAM.

[Figure: OAM hierarchy across a path Edge Switch – Transit Switch – Edge Switch: Link OAM on each UNI and NNI link, Trunk OAM between the edge switches, and Service OAM (per SID) end to end between the customer demarcation points.]
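As a hedged sketch of how the continuity check in item 1 detects faults, the snippet below models a CCM receiver in Python. The class name and structure are illustrative; the 3.5-interval loss threshold is the commonly used 802.1ag/Y.1731 convention, and the 3.3 ms CCM interval is one of the standard rates, assumed here for fast detection.

import time

class CCMReceiver:
    """Tracks continuity check messages (CCMs) from one remote MEP (illustrative)."""

    LOSS_MULTIPLIER = 3.5          # convention: declare a fault after 3.5 missed intervals

    def __init__(self, interval_s=0.0033):   # e.g. 3.3 ms CCMs for fast detection
        self.interval_s = interval_s
        self.last_seen = None

    def on_ccm(self, now=None):
        """Call whenever a CCM arrives from the remote MEP."""
        self.last_seen = now if now is not None else time.monotonic()

    def fault(self, now=None):
        """True once no CCM has been seen for 3.5 intervals -> trigger protection switch."""
        if self.last_seen is None:
            return False                      # not yet up; real MEPs track this state separately
        now = now if now is not None else time.monotonic()
        return (now - self.last_seen) > self.LOSS_MULTIPLIER * self.interval_s

# With 3.3 ms CCMs a fault is declared after roughly 12 ms, leaving headroom
# inside the 50 ms protection-switch budget mentioned earlier in the deck.
rx = CCMReceiver(interval_s=0.0033)
rx.on_ccm(now=0.0)
assert rx.fault(now=0.005) is False
assert rx.fault(now=0.020) is True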


Provider Link State Bridging (PLSB; IEEE 802.1aq): Carrier-grade ELAN without Spanning Tree

• Configured MAC forwarding operation (just like PBT), this time driven by a shortest-path link state routing protocol (IS-IS)
• PLSB and existing Ethernet control protocols can operate side by side in the same network infrastructure
• Optimal shortest-path p2mp trees for distribution of broadcast/multicast
• Loop suppression without port blocking (packet-by-packet) with RPFC
• Fast network convergence

[Figure: today with (R)STP, traffic from A to D follows the spanning tree and some links are blocked; with PLSB, traffic follows shortest paths, and a multicast tree from D reaches A, B and C.]
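Since the bullets above mention RPFC-based loop suppression, here is a minimal sketch of the idea, assuming a precomputed shortest-path port per source B-MAC (the table contents and function name are hypothetical): a frame is accepted only if it arrives on the port the node itself would use to reach the frame's backbone source; anything else is discarded as a potential loop.

# Reverse Path Forwarding Check (RPFC) sketch: per-source ingress check
# instead of blocking ports. Table contents are assumed for illustration.

# For each backbone source MAC, the port this node would use to reach it
# on the shortest path (as computed from the IS-IS link state database).
shortest_path_port = {
    "B-MAC A": "1/1",
    "B-MAC D": "2/1",
}

def accept(b_sa: str, ingress_port: str) -> bool:
    """Accept a backbone frame only if it arrived on the shortest-path port
    toward its source; otherwise it may be looping and is dropped."""
    expected = shortest_path_port.get(b_sa)
    return expected is not None and expected == ingress_port

assert accept("B-MAC A", "1/1") is True    # arrived the expected way
assert accept("B-MAC A", "2/1") is False   # off the shortest path: drop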


PLSB Approach

• If Ethernet is going to be there… use it!
• Take advantage of Ethernet's more capable data plane
  • Virtual partitions (VLANs), scalable multicast, comprehensive OAM
• PLSB uses a link state protocol – IS-IS
  • IS-IS floods topology and service info (B-MAC and I-SID information)
  • Integrates service discovery into the control plane
  • PLSB nodes use link state information to construct unicast and per-service (per-I-SID) multicast connectivity

Combines a well-known networking protocol with a well-known data plane to build an efficient service infrastructure.


PLSB Fundamentals: Virtual broadcast domains constructed via IS-IS

• For each B-VLAN assigned to this mode of operation:
  • Flooding is disabled & all ports are unblocked
  • The control plane configures shortest-path unicast and multicast connectivity between PBBs
  • Loop suppression is applied to B-MAC packets
• PLSB delivers a better B-MAC layer for MAC-in-MAC
  • MAC-in-MAC already isolates C-MACs from B-MACs in the core and keeps C-MAC state at the edge
  • C-MAC flooding, multicast and broadcast map to scoped B-MAC multicast
  • C-MAC to virtual port (= B-MAC) bindings are learned at the edge as in conventional bridged operation


PLSB Implementation

• PLSB discovers the network automatically, setting up a shortest-path distribution tree without blocking any links
• Each node creates its shortest path tree to all other nodes in the network

[Figure: network topology and the shortest path tree from ES1.]
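A hedged sketch of what "each node creates its shortest path tree" amounts to: a plain Dijkstra run over the IS-IS topology, shown here in Python. The node names ES1, ES7 and ES11 come from the figure captions on these slides; the link costs and the core switch names CS1 and CS2 are made up for illustration.

import heapq

def shortest_path_tree(topology, root):
    """Dijkstra over a dict {node: {neighbor: cost}}; returns {node: parent}.
    Each PLSB node runs this with itself as root over the IS-IS topology."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in topology.get(node, {}).items():
            nd = d + cost
            if neighbor not in dist or nd < dist[neighbor]:
                dist[neighbor] = nd
                parent[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return parent

# Illustrative topology only (not from the slides): equal-cost links.
topology = {
    "ES1": {"CS1": 1, "CS2": 1},
    "CS1": {"ES1": 1, "CS2": 1, "ES7": 1},
    "CS2": {"ES1": 1, "CS1": 1, "ES11": 1},
    "ES7": {"CS1": 1},
    "ES11": {"CS2": 1},
}
spt_from_es1 = shortest_path_tree(topology, "ES1")
# e.g. ES7 is reached via CS1, and ES11 via CS2, on ES1's tree.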


Defining services using PLSB

• As PBB service endpoints (I-SIDs) are added, all PLSB nodes become aware of all service locations.
• Each PLSB node knows whether it is on the shortest path for each I-SID and installs the appropriate FIB state to ensure connectivity, thereby creating a per-service multicast tree.

[Figure: shortest path tree from ES1, and the multicast tree for the nodes supporting I-SID 100 (ES1, ES7, ES11).]
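Continuing the sketch above (same assumed topology and shortest_path_tree helper), a node can decide whether it sits on the shortest path between any two members of an I-SID and, if so, keep forwarding state for that service. This is only an illustration of the idea, not the actual 802.1aq FIB computation.

def path(parent, node):
    """Walk a shortest-path-tree parent map back to the root."""
    hops = []
    while node is not None:
        hops.append(node)
        node = parent[node]
    return hops

def on_shortest_path(topology, me, members):
    """True if 'me' lies on the shortest path between any pair of I-SID members,
    i.e. this node must install FIB state for the service."""
    for src in members:
        tree = shortest_path_tree(topology, src)
        for dst in members:
            if dst != src and me in path(tree, dst):
                return True
    return False

members_isid_100 = ["ES1", "ES7", "ES11"]
# The assumed core switch CS1 is on the ES1<->ES7 shortest path, so it keeps state:
assert on_shortest_path(topology, "CS1", members_isid_100) is True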


What Ethernet does well… packet replication for mcast/bcast

• Replication is performed by, and distributed across, many nodes
• Only ONE copy of a packet per physical link

[Figure: a router sourcing multicast/broadcast traffic into an Ethernet network that replicates it hop by hop.]


What Ethernet does well… now add PLSB!

• Traffic follows the shortest path
• Distributed Ethernet replication model: no duplication of packets per physical link
• Per-VPN, customized broadcast domain, dynamically established (no core provisioning)
• Fast resiliency

[Figure: routers attached to an Ethernet metro; multicast traffic follows shortest paths with distributed replication.]

The Next Stage of Ethernet Evolution

• PBB has evolved the Ethernet data plane for pt-to-pt
• PLSB evolves the Ethernet control plane
• PLSB…
  • Eliminates blocked ports
  • Better traffic distribution
  • Greater than 3-fold improvement in failure recovery (on the order of 250 ms)
  • No catastrophic "root" failures
  • Shortest path forwarding
  • Broadcast containment within the service community

[Figure, before: traffic follows the spanning tree path between routers; spanning tree forces blocked links, orphaning bandwidth.]
[Figure, after: traffic follows the shortest path; no blocked links, no orphaned bandwidth, and tandem nodes can easily be service nodes too.]


PLSB Simplified Operation

Typical VPLS implementation (protocol stack):
• BGP-AD (VPN protocol): required for auto-discovery; separate RR topologies (to help scale); eases the burden of statically managing VSI PWEs
• E-LDP (VPN protocol): signals PWEs; N^2 manual session creation
• LDP or RSVP-TE (tunnel LSP protocol): base LDP builds the LSP tunnels; redundant to the IGP (same paths)
• IGP (IS-IS or OSPF): base IGP provides topology; required for network topology knowledge
• SONET, SDH, Ethernet, etc.: physical links; link layer headers stripped off, label lookup per node

PLSB implementation:
• PLSB (IS-IS) (tunnel + VPN protocol): one IGP for topology & discovery; one protocol now provides auto-discovery, fast fault detection, network healing and shortest path
• Ethernet: physical links; link layer headers reused as the label lookup through every node

Minimizing the control plane = minimized complexity = reduced cost


State of the Standards

Standard     Topic                              State
802.1ah      PBB (hierarchy in the metro)       Complete
802.1Qay     PBT (aka PBB-TE)                   Expected completion late 2008
802.1ag      Connectivity management            Complete
802.1AB      Auto-discovery                     Complete
802.1p/ad    Class-based queuing                Complete
802.1aq      PLSB                               Working group since Jan 2008
Y.1731       Connectivity & performance mgmt    Complete


Nortel's Metro Ethernet Networks: Native Ethernet, existing network-friendly solutions

• Lowest cost (CAPEX & OPEX) packet-optimized metro infrastructure
  • High-density Ethernet leveraging industry silicon
  • Superior grooming and fill
  • Bandwidth-efficient solutions for access & core
• Delivers the broadest suite of SLAs and services in the industry
  • TE capability for trunking
  • Advanced OAM tools
  • Strong QoS
• Scales to massive service delivery
  • Allows for PW or Ethernet service adaptation
  • Scalable to millions of customers
• Strong, simple, comprehensive OAM
  • Robust toolset for monitoring and debugging
  • Ethernet OAM integrated with SONET-like service

Predictable, scalable & operational Metro Ethernet networks

Wish list: Cost per Mbps / CAPEX, OPEX; Scalability; Service Flexibility; Network Resiliency; Customer Segregation; Security; Ease of OAM; CoS/QoS; Traffic Engineering; Resource Reservation


Acronyms

DA destination address
CAPEX capital expenses
IGP Interior Gateway Protocol
LDP Label Distribution Protocol
MPLS Multiprotocol Label Switching
OPEX operational expenses
CFM Connectivity Fault Management
I-SID I-component Service ID
MAC (Media Access Control) address: the unique hardware address of a network node
MiM MAC-in-MAC (802.1ah), also known as Provider Backbone Bridging
OAM operations, administration and maintenance
OSS operations support system
PBB Provider Backbone Bridging
PBT Provider Backbone Transport; now PBB Traffic Engineering (PBB-TE)
PLSB Provider Link State Bridging
PW pseudowire
QoS quality of service
RPR Resilient Packet Ring
SLA Service Level Agreement
SA source address
SID service ID
STP/RSTP Spanning Tree Protocol / Rapid Spanning Tree Protocol
VLAN virtual local area network
VID VLAN ID
VPN virtual private network