
Myths of Bandwidth Optimization

Increased use of IT in business, disaster recovery efforts, cloud computing, data center consolidation projects, and international presence all contribute to corporations' back-end Internet traffic growing at an accelerated rate. But bandwidth growth generally doesn't match the rate of data transfer growth; and when it does, bandwidth is still not a guarantee of throughput improvement.

    by Don MacVittie

    Technical Marketing Manager

    White Paper


Contents

Introduction
Common Application Performance Myths
Myth #1: Application Performance Depends Only on Bandwidth
Myth #2: TCP Requires Aggressive Back-Off to Ensure Fairness
Myth #3: Packet Compression Improves Application Performance
Myth #4: Quality of Service Technology Accelerates Applications
F5 Brings It All Together
Conclusion


Introduction

Moore's Law states that data density doubles approximately every 24 months, and Metcalfe's Law says that the value of a network grows in proportion to the square of the number of users. Because these postulates have held true in practice, global enterprises have found it advantageous to embed information technology into every aspect of their operations. However, this has led to increased bandwidth consumption; according to Plunkett Research, Ltd., the worldwide data communications services industry now generates revenue in excess of $3.1 trillion annually.

Despite a growing worldwide thirst for bandwidth, supply has outpaced demand by a wide margin. During the rapid expansion of the Internet in the 1990s, the data communications industry created an infrastructure that could deliver cheap bandwidth in high volumes. In fact, bandwidth has become so plentiful that even the effects of Metcalfe's Law are insufficient to consume available capacity for many years to come. The result of this imbalance has been the commoditization of bandwidth, rapidly declining bandwidth prices, and a vendor environment that actively promotes the myth that high bandwidth can address almost any performance problem.

But as enterprise application deployments have expanded to the wide area network (WAN) and increasingly to the cloud, an environment where bandwidth is sometimes as plentiful as on the LAN, IT managers have noted a dramatic decrease in application performance. Many of them wonder: why would two networks with identical bandwidth capacities, the LAN and the WAN, deliver such different performance results?

The answer is that application performance is affected by many factors, associated with both network and application logic, that must be addressed to achieve satisfactory application performance. At the network level, application performance is limited by high latency (the effect of physical distance and the physical communications medium), jitter, packet loss, and congestion. At the application level, performance is further limited by the natural behavior of application protocols (especially when faced with latency, jitter, packet loss, and congestion at the network level); application protocols that engage in excessive handshaking across network links; and serialization of the applications themselves.

Faster Application and Network Access

55% of surveyed customers selected an F5 solution over the competition because of faster application and network access for users.

Source: TechValidate survey of BIG-IP users, TVID: A0A-B7B-546


Common Application Performance Myths

Myth #1: Application Performance Depends Only on Bandwidth

Application performance and throughput are influenced by many factors. Latency and packet loss have a profound effect on application performance. Little's Law, the seminal description of queuing theory and an equation that models the effects of physical distance (latency) and packet loss, illustrates the impact of these two factors on application performance.

In an application to networking, this law states:

    throughput = N (number of outstanding requests) / T (response time)

In terms of IP-based protocols, this translates to:

    TCP throughput = congestion window size / round trip time

Therefore, as the round trip time (RTT) of each request increases, the congestion window must increase or TCP throughput will decrease. Unfortunately, TCP does not effectively manage large windows. As a result, even small amounts of latency and packet loss can quickly cause network performance for a given application to drop to a fraction of what would be expected. Even if bandwidth capacity were increased to 100 Mbps, the application would never consume more than 1 percent of the total capacity. Under these conditions, managers who add network capacity waste money on a resource that cannot be consumed. The ROI for adding network capacity that will sit idle is non-existent; more than just capacity must be considered when determining performance.

http://web.mit.edu/sgraves/www/papers/Little%27s%20Law-Published.pdf
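The window-versus-RTT relationship above can be sketched in a few lines of Python. The 64 KB window (the classic maximum for TCP without window scaling) and the sample RTT values are illustrative assumptions, not figures from this paper:

```python
# Little's Law applied to TCP: with a fixed congestion window,
# the throughput ceiling falls as round trip time grows.

WINDOW_BYTES = 64 * 1024  # classic maximum TCP window without window scaling

def tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: window / RTT, converted to Mbps."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

for rtt_ms in (1, 10, 50, 100):  # LAN-like through cross-country WAN latency
    mbps = tcp_throughput_mbps(WINDOW_BYTES, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {mbps:8.2f} Mbps")
```

At 1 ms of RTT, the 64 KB window supports hundreds of megabits per second; at 100 ms it supports only about 5 Mbps, which is why adding raw link capacity cannot help a window-limited transfer.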

The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm1 provides a short and useful formula for the upper bound on the transfer rate:

    Rate = (MSS / RTT) * (1 / sqrt(p))

Where:

Rate is the TCP transfer rate, or throughput
MSS is the maximum segment size (fixed for each Internet path, typically 1460 bytes)
RTT is the round trip time (as measured by TCP)
p is the packet loss rate
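This bound is easy to evaluate numerically. The sketch below plugs in a 1460-byte MSS with sample RTT and loss values; the chosen path conditions are illustrative assumptions:

```python
from math import sqrt

def mathis_rate_mbps(mss_bytes: float, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. upper bound: Rate = (MSS / RTT) * (1 / sqrt(p)), in Mbps."""
    return (mss_bytes * 8 / rtt_seconds) * (1 / sqrt(loss_rate)) / 1_000_000

# A typical 1460-byte MSS over an 80 ms round trip:
print(f"0.1% loss: {mathis_rate_mbps(1460, 0.080, 0.001):.2f} Mbps")
print(f"5% loss:   {mathis_rate_mbps(1460, 0.080, 0.05):.2f} Mbps")
```

Raising loss from 0.1 percent to 5 percent cuts the bound by a factor of sqrt(50), roughly 7x, no matter how much bandwidth the link offers.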

    Figure 1 illustrates this point:

Figure 1: How TCP performance is affected by physical distance (throughput in Mbps versus distance in miles).

In WANs, sources of high round trip times (i.e., latency) include physical distance, inefficient network routing patterns, and network congestion: elements that are all present in abundance on the WAN.

Today, many TCP protocol stacks are highly inefficient when it comes to managing retransmissions. In fact, some stacks may have to retransmit the whole congestion window if a single packet is lost. They also tend to back off exponentially (i.e., reduce congestion windows and increase retransmission timers) in the face of network congestion, a condition that TCP detects as packet loss. And while loss is often insignificant in frame relay networks (less than 0.01 percent on average),

1 Mahdavi, Jamshid; Mathis, Matthew; Ott, Teunis; and Semke, Jeffrey. "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm." Computer Communication Review, a publication of ACM SIGCOMM, volume 27, number 3, July 1997. ISSN 0146-4833. Accessed November 14, 2011.

http://cseweb.ucsd.edu/classes/wi01/cse222/papers/mathis-tcpmodel-ccr97.pdf

it is very significant in IP VPN networks that go into and out of certain markets like China, where loss rates commonly exceed 5 percent and are often much higher. In the latter scenario, high loss rates can have a catastrophic effect on performance. When packet loss and latency effects are combined, the performance drop-off is even more severe (see Figure 2).

Figure 2: Sample TCP performance when packet loss is present (throughput in Mbps for HTTP and FTP as packet loss rises from 0 to 8 percent).

Myth #2: TCP Requires Aggressive Back-Off to Ensure Fairness

Many network engineers believe that aggressive back-off in the face of congestion is necessary to keep network access fair. This is sometimes, but not always, true. Where congestion control is the responsibility of each host on a network (an environment in which hosts have no knowledge of the other hosts' bandwidth needs), aggressive back-off is necessary to ensure fairness. However, if congestion is managed within the network infrastructure by a system that sees all traffic on a given WAN connection, then much greater and more efficient throughput is possible, and aggressive back-off is not required.

Standard protocol behavior specifies that when hosts consume bandwidth, they must do so independent of:

  • The requirements of the application.
  • The amount of available bandwidth.
  • The amount of competition that exists for that bandwidth.


The result is that applications are often starved for bandwidth resources at the same time that the network is largely unused. This situation is obviously highly inefficient.

A much better solution to the TCP fairness problem is to allow individual hosts to consume as much bandwidth as they need, so long as all other hosts receive adequate service when they need it. This can be accomplished by implementing a single congestion window, shared by all hosts, that is managed within the network itself. The result is a system in which hosts get the bandwidth they need in periods of light competition, as well as when competition is more intense.

This single window method delivers consistently higher utilization and greater overall throughput than aggressive back-off. Hosts each see a clean, fast network that never loses packets (and therefore doesn't diminish TCP performance; see Myth #1), and cumulative traffic demands are matched to the overall buffering capability of the network. As a result, IT managers experience optimally utilized networks under the broadest range of network latency and loss conditions.
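The fairness model described above can be reduced to a toy allocator: grant every host its full demand when the link is lightly loaded, and scale grants down only when combined demand exceeds capacity. The link capacity, host names, and proportional-scaling policy below are illustrative assumptions, not how any particular product implements a shared window:

```python
# Toy sketch of a shared ("single") congestion window: a device that sees
# every flow on a WAN link grants full demand under light load, and scales
# all grants by the same factor only when the link is oversubscribed.

def allocate(link_capacity_mbps: float, demands_mbps: dict) -> dict:
    """Return a per-host bandwidth grant under one shared window."""
    total = sum(demands_mbps.values())
    if total <= link_capacity_mbps:
        # Light competition: every host gets exactly what it asked for,
        # with no aggressive back-off and no capacity left idle by policy.
        return dict(demands_mbps)
    # Heavy competition: proportional scaling fills the link and
    # ensures no host is starved.
    scale = link_capacity_mbps / total
    return {host: d * scale for host, d in demands_mbps.items()}

print(allocate(100, {"a": 30, "b": 20}))           # light load: demands met
print(allocate(100, {"a": 90, "b": 60, "c": 50}))  # heavy load: shared fairly
```

Contrast this with per-host back-off, where each host cuts its own window on loss regardless of whether any other host actually wants the freed capacity.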

Single window solutions can be constructed so that they are completely transparent to client systems. Components of such solutions may include TCP technologies such as selective acknowledgement, local congestion window management, improved retransmission algorithms, and packet dispersion. These capabilities are then combined with other technologies that match the throughput requirements of applications to the availability of network resources, and that track the bandwidth requirements of all hosts utilizing the network. By aggregating the throughput of multiple, parallel WAN links, single window technology can achieve even greater throughput and reliability.

Myth #3: Packet Compression Improves Application Performance

While common packet compression techniques can reduce the amount of traffic on the WAN, they tend to add latency to application transactions and therefore can impede application performance. These techniques require that packets be queued up, compressed, transmitted, decompressed on the receiver, and then retransmitted, all of which can consume substantial resources and add substantial latency, actually slowing down the very applications that need acceleration.

Next-generation application performance solutions combine protocol streamlining with transparent data reduction techniques. Compared to packet-based solutions, next-generation solutions dramatically reduce the amount of data that needs to be transmitted, eliminate latency that is introduced by protocol behavior in the face of physical distance, and can drive WAN performance at gigabit speeds. Transparent data


reduction techniques often include multiple dictionaries, where the level 1 dictionary is small and highly effective at reducing smaller patterns in data, and the level 2 dictionary is a multi-gigabyte space that can be used to reduce much larger patterns.
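The two-tier dictionary idea can be sketched as follows. The chunk sizes, thresholds, and table layout are assumptions for illustration, not the internals of any product:

```python
# Two-tier transparent data reduction sketch: a small level 1 table for
# short, frequent patterns and a large level 2 table for big repeated
# blocks. Sizes and thresholds here are illustrative assumptions.

L1_MAX = 64   # patterns up to 64 bytes go to the small level 1 table
BLOCK = 4096  # level 2 matches whole blocks of this size

def reduce_block(block: bytes, l1: dict, l2: dict):
    """Return a compact reference if the block was seen before, else store it."""
    table = l1 if len(block) <= L1_MAX else l2
    if block in table:
        return ("ref", 1 if table is l1 else 2, table[block])
    table[block] = len(table)  # remember the pattern for future transfers
    return ("raw", block)

l1, l2 = {}, {}
logo = b"L" * BLOCK            # a large repeated blob (e.g., an embedded image)
sig = b"--sig--"               # a short repeated pattern
first = [reduce_block(b, l1, l2) for b in (logo, sig)]
again = [reduce_block(b, l1, l2) for b in (logo, sig)]
print(first[0][0], first[1][0])  # raw raw  (first sighting is sent whole)
print(again[0][0], again[1][0])  # ref ref  (repeats shrink to references)
```

The first time a pattern crosses the link it is sent whole and recorded; every later occurrence is replaced by a reference that is a few bytes long, whichever tier it lives in.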

Myth #4: Quality of Service Technology Accelerates Applications

Quality of Service (QoS), if used properly, is a beneficial set of technologies for managing application performance. However, the only thing QoS can do is divide existing bandwidth into multiple virtual channels. QoS does nothing to move more data or streamline protocol behavior. QoS simply decides, in an intelligent way, which packets to drop. And while it is better to drop packets in a controlled way than to leave it to chance, dropping packets does not accelerate applications.

Many QoS implementations rely on port numbers to track applications. Because applications often negotiate port assignments dynamically, these mechanisms must be configured to reserve a large port range to ensure coverage of the ports the application actually uses.

For QoS to be most effective, it should be dynamic. First-generation QoS implementations divide large links into multiple smaller ones, statically reserving bandwidth whether it is needed or not. Channelizing a network this way can ensure bandwidth availability for critical applications like voice, but it actually wastes bandwidth: capacity remains reserved for the specified application even when that application is not in use.

Dynamic QoS solutions, on the other hand, ensure that bandwidth is reserved only when applications can use it. One common use of this technology is to extend enterprise backup windows by enabling continuous data backup when bandwidth becomes available.

    F5 Brings It All Together

F5's WAN optimization solutions deliver dramatic replication performance and greatly reduced WAN costs. F5 provides these benefits by monitoring the limiting effects of network conditions, adjusting protocol behavior, and managing all levels of the protocol stack, from the network layer through to the application layer.

Specifically, F5 BIG-IP WAN Optimization Manager (WOM) integrates advanced transport acceleration technologies such as adaptive TCP acceleration, Symmetric Adaptive Compression, Symmetric Data Deduplication, and session-aware rate shaping with best-of-class application acceleration technologies including SSL


encryption offloading and termination. BIG-IP Local Traffic Manager (LTM), and by extension BIG-IP WOM, is supported by a statistics generation and monitoring engine that enables organizations to manage application network behavior in real time. Any communication to a remote BIG-IP WOM device is protected from prying eyes with IPsec tunnels, keeping corporate data safe even on the Internet.

Figure 3: Symmetric BIG-IP WOM devices (each paired with BIG-IP Local Traffic Manager) optimize and secure client-to-server traffic over the Internet or WAN through an iSession tunnel.

F5 delivers LAN-like replication performance over the WAN. BIG-IP WOM optimizes replication solutions such as Oracle RMAN, Oracle Streams, Oracle Data Guard, and Oracle GoldenGate for database replication; Microsoft Exchange DAG for mailbox replication; and VMware vMotion over long distances for VM migration, file transfer, and other applications, all resulting in predictable, fast performance for all WAN users.

Figure 4: Accelerate all application traffic on the WAN. A BIG-IP WAN Optimization Manager in each data center optimizes server and storage traffic between the sites over the Internet or WAN.

F5 WAN optimization solutions are deployed either on F5 hardware, which features fault tolerance, massive scalability, and unparalleled performance, or on virtual machines.


For branch office deployments, BIG-IP Edge Gateway features functionality from BIG-IP WOM, BIG-IP WebAccelerator, and BIG-IP Access Policy Manager (APM).

Figure 5: BIG-IP WOM improved the throughput of Microsoft Exchange replication more than eight times over Exchange compression and encryption. Under the tested conditions (155 Mbps link, 100 ms RTT, 0.1 percent loss), Exchange compression/encryption enabled without BIG-IP WOM achieved 18.15 Mbps, while BIG-IP WOM deduplication plus compression (without Exchange encryption/compression) achieved 151.36 Mbps.

Figure 6: NetApp SnapMirror performance with BIG-IP WOM optimizations shows a 16x performance boost over NetApp deduplication and compression: transfer duration fell from 7,396 seconds (NetApp deduplication and compression, without BIG-IP WOM) to 448 seconds (BIG-IP WOM deduplication and adaptive compression).

TCP performance with Symmetric Data Deduplication in BIG-IP WOM

When there is a lot of highly repetitive data flowing over the WAN, and that data is large enough to be caught by the BIG-IP WOM deduplication engine (a configuration setting controls the definition of "large enough"), performance is massively improved. This is useful, for example, when users are saving many documents that contain the corporate logo or another commonly used set of bits.

The key part of deduplication is that while compression works on a single stream, deduplication can work across all of the streams being transferred over the WAN. This is important because duplication is often minimal within any one stream, but when all of the streams crossing the WAN are considered, duplication rates are much higher. The amount of data reduction, and the performance implications of that reduction, are heavily dependent on a given environment (the amount of duplication) and the configuration of the BIG-IP WOM device. The larger the number of bytes that must match, the fewer duplications administrators will see;



but the fewer entries must be checked for a duplication match and, more important, the longer a given set of bits will be saved for comparison. The smaller the number of bytes that must match, the faster the cache will fill up and older entries will be removed, but in theory the more matches administrators will find.
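The single-stream versus cross-stream distinction can be made concrete with a short sketch. Two "streams" carry mostly the same bytes (think of documents embedding the same corporate logo); the chunk size, the stream contents, and the set-based chunk store are illustrative assumptions, not BIG-IP WOM internals:

```python
import hashlib

# Cross-stream deduplication sketch: repeated chunks are replaced by
# references, and the chunk store is either per-stream (like a
# single-stream compressor's history) or shared across all streams.

CHUNK = 1024
# 4 KB of deterministic pseudo-random "logo" bytes shared by both streams.
logo = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(128))
stream_a = b"report-A" + logo * 4
stream_b = b"report-B" + logo * 4

def bytes_sent(streams, shared_store: bool) -> int:
    """Bytes on the wire when previously seen chunks cost ~0 bytes."""
    store, sent = set(), 0
    for s in streams:
        if not shared_store:
            store = set()  # per-stream: no memory of the other streams
        for i in range(0, len(s), CHUNK):
            chunk = s[i:i + CHUNK]
            if chunk not in store:
                store.add(chunk)
                sent += len(chunk)  # only the first copy is transmitted
    return sent

streams = [stream_a, stream_b]
print("per-stream reduction:", bytes_sent(streams, shared_store=False))
print("cross-stream dedup:  ", bytes_sent(streams, shared_store=True))
```

Because the second stream repeats content already seen in the first, the shared store sends far fewer bytes; a per-stream store must transmit the common blob again for every stream that carries it.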

Figure 7: Oracle GoldenGate compression versus BIG-IP WOM deduplication (transfer time in seconds) shows that with some datasets, deduplication can massively improve throughput.

Conclusion

Application performance on the WAN is affected by a large number of factors in addition to bandwidth. The notion that bandwidth solves all, or even most, application performance problems is, simply put, a myth. At the network level, application performance is limited by high latency, jitter, packet loss, and congestion. At the application level, performance is likewise limited by factors such as the natural behavior of application protocols that were not designed for WAN conditions; application protocols that engage in excessive handshaking; and the serialization of the applications themselves.

BIG-IP WOM recognizes the critical interdependence between application-level and transport-level behavior. It delivers predictable replication performance and increased throughput ranging from 3x to over 60x on networks as diverse as premium-quality, class-of-service managed networks and commodity, best-effort IP VPNs. The architectural advantages of F5 products result in replication and backup solutions that deliver best-of-class performance, massive scalability, and a return on investment that can be measured in months.


F5 Networks, Inc. Corporate Headquarters: info@f5.com
F5 Networks, Inc., 401 Elliott Avenue West, Seattle, WA 98119 | 888-882-4447 | www.f5.com
F5 Networks Asia-Pacific: apacinfo@f5.com
F5 Networks Ltd. Europe/Middle-East/Africa: emeainfo@f5.com
F5 Networks Japan K.K.: f5j-info@f5.com

©2012 F5 Networks, Inc. All rights reserved. F5, F5 Networks, the F5 logo, and IT agility. Your way. are trademarks of F5 Networks, Inc. in the U.S. and in certain other countries. Other F5 trademarks are identified at f5.com.
