
Network Working Group                                      G. Montenegro
Request for Comments: 2757                        Sun Microsystems, Inc.
Category: Informational                                       S. Dawkins
                                                         Nortel Networks
                                                                 M. Kojo
                                                  University of Helsinki
                                                               V. Magret
                                                                 Alcatel
                                                               N. Vaidya
                                                    Texas A&M University
                                                            January 2000

                           Long Thin Networks

Status of this Memo

This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2000). All Rights Reserved.

Abstract

In view of the unpredictable and problematic nature of long thin
networks (for example, wireless WANs), arriving at an optimized
transport is a daunting task. We have reviewed the existing proposals
along with future research items. Based on this overview, we also
recommend mechanisms for implementation in long thin networks.

Our goal is to identify a TCP that works for all users, including
users of long thin networks. We started from the working
recommendations of the IETF TCP Over Satellite Links (tcpsat) working
group with this end in mind.

We recognize that not every tcpsat recommendation will be required
for long thin networks as well, and work toward a set of TCP
recommendations that are 'benign' in environments that do not require
them.

Table of Contents

1 Introduction
  1.1 Network Architecture
  1.2 Assumptions about the Radio Link
2 Should it be IP or Not?
  2.1 Underlying Network Error Characteristics
  2.2 Non-IP Alternatives
      2.2.1 WAP
      2.2.2 Deploying Non-IP Alternatives
  2.3 IP-based Considerations
      2.3.1 Choosing the MTU [Stevens94, RFC1144]
      2.3.2 Path MTU Discovery [RFC1191]
      2.3.3 Non-TCP Proposals
3 The Case for TCP
4 Candidate Optimizations
  4.1 TCP: Current Mechanisms
      4.1.1 Slow Start and Congestion Avoidance
      4.1.2 Fast Retransmit and Fast Recovery
  4.2 Connection Setup with T/TCP [RFC1379, RFC1644]
  4.3 Slow Start Proposals
      4.3.1 Larger Initial Window
      4.3.2 Growing the Window during Slow Start
            4.3.2.1 ACK Counting
            4.3.2.2 ACK-every-segment
      4.3.3 Terminating Slow Start
      4.3.4 Generating ACKs during Slow Start
  4.4 ACK Spacing
  4.5 Delayed Duplicate Acknowledgements
  4.6 Selective Acknowledgements [RFC2018]
  4.7 Detecting Corruption Loss
      4.7.1 Without Explicit Notification
      4.7.2 With Explicit Notifications
  4.8 Active Queue Management
  4.9 Scheduling Algorithms
  4.10 Split TCP and Performance-Enhancing Proxies (PEPs)
      4.10.1 Split TCP Approaches
      4.10.2 Application Level Proxies
      4.10.3 Snoop and its Derivatives
      4.10.4 PEPs to handle Periods of Disconnection
  4.11 Header Compression Alternatives
  4.12 Payload Compression
  4.13 TCP Control Block Interdependence [Touch97]
5 Summary of Recommended Optimizations
6 Conclusion
7 Acknowledgements
8 Security Considerations
9 References
Authors' Addresses
Full Copyright Statement

1 Introduction

Optimized wireless networking is one of the major hurdles that Mobile
Computing must solve if it is to enable ubiquitous access to
networking resources. However, current data networking protocols have
been optimized primarily for wired networks. Wireless environments
have very different characteristics in terms of latency, jitter, and
error rate as compared to wired networks. Accordingly, traditional
protocols are ill-suited to this medium.

Mobile wireless networks can be grouped in W-LANs (for example,
802.11 compliant networks) and W-WANs (for example, CDPD [CDPD],
Ricochet, CDMA [CDMA], PHS, DoCoMo, GSM [GSM] to name a few). W-WANs
present the most serious challenge, given that the length of the
wireless link (expressed as the delay*bandwidth product) is typically
4 to 5 times as long as that of its W-LAN counterparts. For example,
for an 802.11 network, assuming the delay (round-trip time) is about
3 ms and the bandwidth is 1.5 Mbps, the delay*bandwidth product is
4500 bits. For a W-WAN such as Ricochet, a typical round-trip time
may be around 500 ms (the best is about 230 ms), and the sustained
bandwidth is about 24 Kbps. This yields a delay*bandwidth product
roughly equal to 1.5 KB. In the near future, 3rd generation wireless
services will offer 384 Kbps and more. Assuming a 200 ms round-trip
time, the delay*bandwidth product in this case is 76.8 Kbits (9.6
KB). This value is larger than the default 8 KB buffer space used by
many TCP implementations. This means that, whereas for W-LANs the
default buffer space is enough, future W-WANs will operate
inefficiently (that is, they will not be able to fill the pipe)
unless they override the default value. A 3rd generation wireless
service offering 2 Mbps with 200 ms latency requires a 50 KB buffer.
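The delay*bandwidth figures above follow from a single multiplication.
As a quick cross-check (our own sketch, using the link parameters
given in the text):

```python
# Delay*bandwidth products for the example links discussed above.
# The TCP window/buffer must cover this product to "fill the pipe".

def pipe_bits(rtt_s, bandwidth_bps):
    """Delay*bandwidth product in bits for a given RTT and bandwidth."""
    return rtt_s * bandwidth_bps

# 802.11 W-LAN: ~3 ms RTT, 1.5 Mbps
wlan = pipe_bits(0.003, 1.5e6)    # 4500 bits

# Ricochet W-WAN: ~500 ms RTT, 24 Kbps sustained
wwan = pipe_bits(0.5, 24e3)       # 12000 bits = 1.5 KB

# 3rd generation service: 384 Kbps, 200 ms RTT
g3 = pipe_bits(0.2, 384e3)        # 76800 bits = 9.6 KB

# 3rd generation service: 2 Mbps, 200 ms RTT
g3_fast = pipe_bits(0.2, 2e6)     # 400000 bits = 50 KB

for name, bits in [("W-LAN", wlan), ("Ricochet", wwan),
                   ("3G/384K", g3), ("3G/2M", g3_fast)]:
    print(f"{name}: {bits:.0f} bits = {bits / 8 / 1000:.1f} KB")
```

(Here KB follows the text's arithmetic of 1000 bytes; the 3G/2M pipe
of 50 KB is what motivates overriding the default 8 KB buffer.)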

Most importantly, latency across a link adversely affects throughput.
For example, [MSMO97] derives an upper bound on TCP throughput.
Indeed, the resultant expression is inversely related to the
round-trip time.
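The [MSMO97] bound is commonly written as BW <= (MSS/RTT) * C/sqrt(p),
where p is the loss rate; the constant C and the parameter values in
this sketch are illustrative assumptions on our part, not figures from
this document:

```python
from math import sqrt

def msmo_bound(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Upper bound on TCP throughput (bytes/s) per [MSMO97]:
    BW <= (MSS / RTT) * C / sqrt(p).
    C ~ 1.22 is a commonly quoted constant (our assumption)."""
    return (mss_bytes / rtt_s) * c / sqrt(loss_rate)

# Same MSS and loss rate at two RTTs: halving the RTT doubles the
# bound, showing the inverse dependence on round-trip time.
b_500ms = msmo_bound(536, 0.50, 0.01)
b_250ms = msmo_bound(536, 0.25, 0.01)
print(b_500ms, b_250ms)
```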

The long latencies also push the limits (and commonly transgress
them) of what is acceptable to users of interactive applications.

As a quick glance at our list of references will reveal, there is a
wealth of proposals that attempt to solve the wireless networking
problem. In this document, we survey the different solutions
available or under investigation, and issue the corresponding
recommendations.

There is a large body of work on the subject of improving TCP
performance over satellite links. The documents under development by
the tcpsat working group of the IETF [AGS98, ADGGHOSSTT98] are very
relevant. In both cases, it is essential to start by improving the
characteristics of the medium by using forward error correction (FEC)
at the link layer to reduce the BER (bit error rate) from values as
high as 10^-3 to 10^-6 or better. This makes the BER manageable. Once
in this realm, retransmission schemes like ARQ (automatic repeat
request) may be used to bring it down even further. Notice that
sometimes it may be desirable to forego ARQ because of the additional
delay it implies. In particular, time-sensitive traffic (video,
audio) must be delivered within a certain time limit beyond which the
data is obsolete. Exhaustive retransmissions in this case merely
waste time delivering data that will be discarded once it arrives at
its destination. This indicates the desirability of augmenting the
protocol stack implementation on devices such that the upper protocol
layers can inform the link and MAC layers when to avoid such costly
retransmission schemes.

Networks that include satellite links are examples of "long fat
networks" (LFNs or "elephants"). They are "long" networks because
their round-trip time is quite high (for example, 0.5 sec and higher
for geosynchronous satellites). Not all satellite links fall within
the LFN regime. In particular, round-trip times in a low-earth
orbiting (LEO) satellite network may be as little as a few
milliseconds (and never extend beyond 160 to 200 ms). W-WANs share
the "L" with LFNs. However, satellite networks are also "fat" in the
sense that they may have high bandwidth. Satellite networks may often
have a delay*bandwidth product above 64 KBytes, in which case they
pose additional problems to TCP [TCPHP]. W-WANs do not generally
exhibit this behavior. Accordingly, this document only deals with
links that are "long thin pipes", and the networks that contain them:
"long thin networks". We call these "LTNs".

This document does not give an overview of the API used to access the
underlying transport. We believe this is an orthogonal issue, even
though some of the proposals below have been put forth assuming a
given interface. It is possible, for example, to support the
traditional socket semantics without fully relying on TCP/IP
transport [MOWGLI].

Our focus is on the on-the-wire protocols. We try to include the most
relevant ones and briefly (given that we provide the references
needed for further study) mention their most salient points.

1.1 Network Architecture

One significant difference between LFNs and LTNs is that we assume
the W-WAN link is the last hop to the end user. This allows us to
assume that a single intermediate node sees all packets transferred
between the wireless mobile device and the rest of the Internet.
This is only one of the topologies considered by the TCP Satellite
community.

Given our focus on mobile wireless applications, we only consider a
very specific architecture that includes:

- a wireless mobile device, connected via

- a wireless link (which may, in fact, comprise several hops at the
  link layer), to

- an intermediate node (sometimes referred to as a base station),
  connected via

- a wireline link, which in turn interfaces with

- the landline Internet and millions of legacy servers and web
  sites.

Specifically, we are not as concerned with paths that include two
wireless segments separated by a wired one. This may occur, for
example, if one mobile device connects across its immediate wireless
segment via an intermediate node to the Internet, and then via a
second wireless segment to another mobile device. Quite often, mobile
devices connect to a legacy server on the wired Internet.

Typically, the endpoints of the wireless segment are the intermediate
node and the mobile device. However, the latter may be a wireless
router to a mobile network. This is also important and has
applications in, for example, disaster recovery.

Our target architecture has implications for the deployability of
candidate solutions. In particular, an important requirement is that
we cannot alter the networking stack on the legacy servers. It would
be preferable to only change the networking stack at the intermediate
node, although changing it at the mobile devices is certainly an
option and perhaps a necessity.

We envision mobile devices that can use the wireless medium very
efficiently, but overcome some of its traditional constraints. That
is, full mobility implies that the devices have the flexibility and
agility to use whichever happens to be the best network connection
available at any given point in time or space. Accordingly, devices
could switch from a wired office LAN and hand over their ongoing
connections to continue on, say, a wireless WAN. This type of agility
also requires Mobile IP [RFC2002].

1.2 Assumptions about the Radio Link

The system architecture described above assumes at most one wireless
link (perhaps comprising more than one wireless hop). However, this
is not enough to characterize a wireless link. Additional
considerations are:

- What are the error characteristics of the wireless medium? The
  link may present a higher BER than a wireline network due to burst
  errors and disconnections. The techniques below usually do not
  address all the types of errors. Accordingly, a complete solution
  should combine the best of all the proposals. Nevertheless, in
  this document we are more concerned with (and give preference to
  solving) the most typical case: (1) higher BER due to random
  errors (which implies longer and more variable delays due to
  link-layer error corrections and retransmissions) rather than (2)
  an interruption in service due to a handoff or a disconnection.
  The latter are also important, and we do include relevant
  proposals in this survey.

- Is the wireless service datagram oriented, or is it a virtual
  circuit? Currently, switched virtual circuits are more common, but
  packet networks are starting to appear, for example, Metricom's
  Starmode [CB96], CDPD [CDPD] and General Packet Radio Service
  (GPRS) [GPRS, BW97] in GSM.

- What kind of reliability does the link provide? Wireless services
  typically retransmit a packet (frame) until it has been
  acknowledged by the target. They may allow the user to turn off
  this behavior. For example, GSM allows RLP [RLP] (Radio Link
  Protocol) to be turned off. Metricom has a similar "lightweight"
  mode. In GSM RLP, a frame is retransmitted until the maximum
  number of retransmissions (a protocol parameter) is reached. What
  happens when this limit is reached is determined by the telecom
  operator: the physical link connection is either disconnected, or
  a link reset is enforced in which the sequence numbers are
  resynchronized and the transmit and receive buffers are flushed,
  resulting in lost data. Some wireless services, like CDMA IS95-RLP
  [CDMA, Karn93], limit the latency on the wireless link by
  retransmitting a frame only a couple of times. This decreases the
  residual frame error rate significantly, but does not provide a
  fully reliable link service.

- Does the mobile device transmit and receive at the same time?
  Doing so increases the cost of the electronics on the mobile
  device. Typically, this is not the case. We assume in this
  document that mobile devices do not transmit and receive
  simultaneously.

- Does the mobile device directly address more than one peer on the
  wireless link? Packets to each different peer may traverse
  spatially distinct wireless paths. Accordingly, the path to each
  peer may exhibit very different characteristics. Quite commonly,
  the mobile device addresses only one peer (the intermediate node)
  at any given point in time. When this is not the case, techniques
  such as Channel-State Dependent Packet Scheduling come into play
  (see the section "Scheduling Algorithms" below).

2 Should it be IP or Not?

The first decision is whether to use IP as the underlying network
protocol or not. In particular, some data protocols evolved from
wireless telephony are not always -- though at times they may be --
layered on top of IP [MOWGLI, WAP]. These proposals are based on the
concept of proxies that provide adaptation services between the
wireless and wireline segments.

This is a reasonable model for mobile devices that always communicate
through the proxy. However, we expect many wireless mobile devices to
utilize wireline networks whenever they are available. This model
closely follows current laptop usage patterns: devices typically
utilize LANs, and only resort to dial-up access when "out of the
office."

For these devices, an architecture that assumes IP is the best
approach, because IP will be required for communications that do not
traverse the intermediate node (for example, upon reconnection to a
W-LAN or a 10BaseT network at the office).

2.1 Underlying Network Error Characteristics

Using IP as the underlying network protocol requires a certain (low)
level of link robustness that is expected of wireless links.

IP, and the protocols that are carried in IP packets, are protected
end-to-end by checksums that are relatively weak [Stevens94,
Paxson97] (and, in some cases, optional). For much of the Internet,
these checksums are sufficient; in wireless environments, however,
the error characteristics of the raw wireless link are much less
robust than those of the rest of the end-to-end path. Hence, for
paths that include wireless links, exclusively relying on end-to-end
mechanisms to detect and correct transmission errors is undesirable.
These should be complemented by local link-level mechanisms.
Otherwise, damaged IP packets are propagated through the network only
to be discarded at the destination host. For example, intermediate
routers are required to check the IP header checksum, but not the UDP
or TCP checksums. Accordingly, when the payload of an IP packet is
corrupted, this is not detected until the packet arrives at its
ultimate destination.
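One way to see why these end-to-end checksums are "relatively weak":
the Internet checksum is a one's-complement sum of 16-bit words, so
the sum is unchanged by any reordering of those words, among other
undetected error patterns. A minimal sketch (our illustration, in the
style of RFC 1071, not part of this document):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

good = b"\x12\x34\x56\x78"
# Swapping the two 16-bit words corrupts the payload, but addition is
# commutative, so the checksum cannot detect the damage:
swapped = b"\x56\x78\x12\x34"
assert good != swapped
assert internet_checksum(good) == internet_checksum(swapped)
```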

A better approach is to use link-layer mechanisms such as FEC,
retransmissions, and so on in order to improve the characteristics of
the wireless link and present a much more reliable service to IP.
This approach has been taken by CDPD, Ricochet and CDMA.

This approach is roughly analogous to the successful deployment of
the Point-to-Point Protocol (PPP), with robust framing and 16-bit
checksumming, on wireline networks as a replacement for the Serial
Line Internet Protocol (SLIP), with only a single framing byte and no
checksumming.

[AGS98] recommends the use of FEC in satellite environments.

Notice that the link layer could adapt its frame size to the
prevalent BER. It would perform its own fragmentation and reassembly
so that IP could still enjoy a large enough MTU size [LS98].

A common concern about using IP as a transport is the header overhead
it implies. Typically, the underlying link layer appears as PPP
[RFC1661] to the IP layer above. This allows for header compression
schemes [IPHC, IPHC-RTP, IPHC-PPP] which greatly alleviate the
problem.

2.2 Non-IP Alternatives

A number of non-IP alternatives aimed at wireless environments have
been proposed. One representative proposal is discussed here.

2.2.1 WAP

The Wireless Application Protocol (WAP) specifies an application
framework and network protocols for wireless devices such as mobile
telephones, pagers, and PDAs [WAP]. The architecture requires a proxy
between the mobile device and the server. The WAP protocol stack is
layered over a datagram transport service. Such a service is provided
by most wireless networks; for example, IS-136, GSM SMS/USSD, and UDP
in IP networks like CDPD and GSM GPRS. The core of the WAP protocols
is a binary HTTP/1.1 protocol with additional features such as header
caching between requests and shared state between client and server.

2.2.2 Deploying Non-IP Alternatives

IP is such a fundamental element of the Internet that non-IP
alternatives face substantial obstacles to deployment, because they
do not exploit the IP infrastructure. Any non-IP alternative that is
used to provide gatewayed access to the Internet must map between IP
addresses and non-IP addresses, must terminate IP-level security at a
gateway, and cannot use IP-oriented discovery protocols (Dynamic Host
Configuration Protocol, Domain Name Service, Lightweight Directory
Access Protocol, Service Location Protocol, etc.) without translation
at a gateway.

A further complexity occurs when a device supports both wireless and
wireline operation. If the device uses IP for wireless operation,
uninterrupted operation when the device is connected to a wireline
network is possible (using Mobile IP). If a non-IP alternative is
used, this switchover is more difficult to accomplish.

Non-IP alternatives face the burden of proving that IP is so
ill-suited to a wireless environment that it is not a viable
technology.

2.3 IP-based Considerations

Given its worldwide deployment, IP is an obvious choice for the
underlying network technology. Optimizations implemented at this
level benefit traditional Internet application protocols as well as
new ones layered on top of IP or UDP.

2.3.1 Choosing the MTU [Stevens94, RFC1144]

In slow networks, the time required to transmit the largest possible
packet may be considerable. Interactive response time should not
exceed the well-known human factors limit of 100 to 200 ms. This
should be considered the maximum time budget to (1) send a packet and
(2) obtain a response. In most networking stack implementations, (1)
is highly dependent on the maximum transmission unit (MTU). In the
worst case, a small packet from an interactive application may have
to wait for a large packet from a bulk transfer application before
being sent. Hence, a good rule of thumb is to choose an MTU such that
its transmission time is less than (or not much larger than) 200 ms.
Of course, compression and type-of-service queuing (whereby
interactive data packets are given a higher priority) may alleviate
this problem. In particular, the latter may reduce the average wait
time to about half the MTU's transmission time.
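The rule of thumb above reduces to simple arithmetic: the largest MTU
that fits the budget is bandwidth * budget / 8 bytes. A sketch, with
link speeds chosen by us for illustration:

```python
def mtu_for_budget(bandwidth_bps, budget_s=0.200):
    """Largest MTU (bytes) whose transmission time fits the budget."""
    return int(bandwidth_bps * budget_s / 8)

def tx_time_ms(mtu_bytes, bandwidth_bps):
    """Serialization time of one MTU-sized packet, in milliseconds."""
    return mtu_bytes * 8 / bandwidth_bps * 1000

# On a 9.6 Kbps link, a 1500-byte MTU takes 1250 ms to serialize,
# far beyond the 200 ms budget; the budget allows only 240 bytes.
print(tx_time_ms(1500, 9600))    # 1250.0
print(mtu_for_budget(9600))      # 240
# At 24 Kbps (Ricochet-class), the budget allows a 600-byte MTU.
print(mtu_for_budget(24000))     # 600
```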

2.3.2 Path MTU Discovery [RFC1191]

Path MTU discovery benefits any protocol built on top of IP. It
allows a sender to determine what the maximum end-to-end transmission
unit is to a given destination. Without Path MTU discovery, the
default IPv4 MTU size is 576 bytes. The benefits of using a larger
MTU are:

- a smaller ratio of header overhead to data, and

- allowing TCP to grow its congestion window faster, since the
  window increases in units of segments.

Of course, for a given BER, a larger MTU has a correspondingly larger
probability of error within any given segment. The BER may be reduced
using lower-level techniques like FEC and link-layer retransmissions.
The issue is that delays may now become a problem, due to the
additional retransmissions and the fact that packet transmission time
increases with a larger MTU.

Recommendation: Path MTU discovery is recommended. [AGS98] already
recommends its use in satellite environments.
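The header-overhead benefit is easy to quantify. Assuming 40 bytes of
IPv4 plus TCP headers without options (our assumption for
illustration):

```python
TCP_IP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def overhead_ratio(mtu_bytes):
    """Fraction of each full-sized packet consumed by TCP/IP headers."""
    return TCP_IP_HEADERS / mtu_bytes

# Default MTU without Path MTU discovery vs a typical Ethernet path MTU:
print(f"576-byte MTU:  {overhead_ratio(576):.1%} overhead")
print(f"1500-byte MTU: {overhead_ratio(1500):.1%} overhead")
```

Roughly 6.9% of every full-sized 576-byte packet is header, versus
about 2.7% at 1500 bytes.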

2.3.3 Non-TCP Proposals

Other proposals assume an underlying IP datagram service, and
implement an optimized transport either directly on top of IP
[NETBLT] or on top of UDP [MNCP]. Not relying on TCP is a bold move,
given the wealth of experience and research related to it. It could
be argued that the Internet has not collapsed because its main
protocol, TCP, is very careful in how it uses the network: it
generally treats the network as a black box, assumes all packet
losses are due to congestion, and prudently backs off. This avoids
further congestion.

However, in the wireless medium, packet losses may also be due to
corruption caused by high BER, fading, and so on. Here, the right
approach is to try harder, instead of backing off. Alternative
transport protocols are:

- NETBLT [NETBLT, RFC1986, RFC1030]

- MNCP [MNCP]

- ESRO [RFC2188]

- RDP [RFC908, RFC1151]

- VMTP [VMTP]

3 The Case for TCP

This is one of the most hotly debated issues in the wireless arena.
Here are some arguments against TCP:

- It is generally recognized that TCP does not perform well in the
  presence of significant levels of non-congestion loss. TCP
  detractors argue that the wireless medium is one such case, and
  that it is hard enough to fix TCP. They argue that it is easier to
  start from scratch.

- TCP has too much header overhead.

- By the time the mechanisms are in place to fix it, TCP is very
  heavy, and ill-suited for use by lightweight, portable devices.

And here are some in support of TCP:

- It is preferable to continue using the same protocol that the rest
  of the Internet uses, for compatibility reasons. Any extensions
  specific to the wireless link may be negotiated.

- Legacy mechanisms may be reused (for example, the three-way
  handshake).

- Link-layer FEC and ARQ can reduce the BER such that any losses TCP
  does see are, in fact, caused by congestion (or a sustained
  interruption of link connectivity). Modern W-WAN technologies do
  this (CDPD, US-TDMA, CDMA, GSM), thus improving TCP throughput.

- Handoffs among different technologies are made possible by Mobile
  IP [RFC2002], but only if the same protocols, namely TCP/IP, are
  used throughout.

- Given TCP's wealth of research and experience, alternative
  protocols are relatively immature, and the full implications of
  their widespread deployment are not clearly understood.

Overall, we feel that the performance of TCP over long thin networks
can be improved significantly. Mechanisms to do so are discussed in
the next sections.

4 Candidate Optimizations

There is a large volume of work on the subject of optimizing TCP for
operation over wireless media. Even though satellite networks
generally fall in the LFN regime, our current LTN focus has much to
benefit from it. For example, the work of the TCP-over-Satellite
working group of the IETF has been extremely helpful in preparing
this section [AGS98, ADGGHOSSTT98].

4.1 TCP: Current Mechanisms

A TCP sender adapts its use of bandwidth based on feedback from the
receiver. The high latency characteristic of LTNs implies that TCP's
adaptation is correspondingly slower than on networks with shorter
delays. Similarly, delayed ACKs exacerbate the perceived latency on
the link. Given that TCP grows its congestion window in units of
segments, small MTUs may slow adaptation even further.

4.1.1 Slow Start and Congestion Avoidance

Slow Start and Congestion Avoidance [RFC2581] are essential to the
Internet's stability. However, there are several reasons why the
wireless medium adversely affects them:

- Whenever TCP's retransmission timer expires, the sender assumes
  that the network is congested and invokes slow start. This is why
  it is important to minimize the losses caused by corruption,
  leaving only those caused by congestion (as expected by TCP).

- The sender increases its window based on the number of ACKs
  received. Their rate of arrival, of course, is dependent on the
  RTT (round-trip time) between sender and receiver, which implies
  long ramp-up times on high-latency links like LTNs. The dependency
  lasts until the pipe is filled.

- During slow start, the sender increases its window in units of
  segments. This is why it is important to use an appropriately
  large MTU, which, in turn, requires link layers with low loss.
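The RTT-bound ramp-up can be made concrete with a simplified
simulation: during slow start the window doubles each round trip
(one additional segment per ACK), and after ssthresh congestion
avoidance adds roughly one segment per round trip. This sketch
assumes no losses and one ACK per segment, and is our illustration
rather than a full TCP model:

```python
def rtts_to_fill_pipe(pipe_segments, initial_window=1, ssthresh=None):
    """Count round trips until cwnd (in segments) first covers the
    pipe.  Slow start doubles cwnd each RTT; once cwnd reaches
    ssthresh, congestion avoidance adds one segment per RTT."""
    cwnd = initial_window
    rtts = 0
    while cwnd < pipe_segments:
        if ssthresh is not None and cwnd >= ssthresh:
            cwnd += 1        # congestion avoidance: linear growth
        else:
            cwnd *= 2        # slow start: exponential growth
        rtts += 1
    return rtts

# The 9.6 KB pipe of the 384 Kbps / 200 ms example is ~19 segments of
# 512 bytes; pure slow start from one segment needs 5 round trips,
# i.e. a full second at a 200 ms RTT before the pipe is filled.
print(rtts_to_fill_pipe(19))     # 5
```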

4.1.2 Fast Retransmit and Fast Recovery

When a TCP sender receives several duplicate ACKs, fast retransmit
[RFC2581] allows it to infer that a segment was lost. The sender
retransmits what it considers to be this lost segment without waiting
for the full timeout, thus saving time.

After a fast retransmit, a sender invokes the fast recovery [RFC2581]
algorithm. Fast recovery allows the sender to transmit at half its
previous rate (regulating the growth of its window based on
congestion avoidance), rather than having to begin a slow start. This
also saves time.
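The duplicate-ACK trigger can be sketched as follows. This is a toy
model of the [RFC2581] decisions (three duplicate ACKs trigger a
retransmission and a halved window), not a complete TCP
implementation:

```python
DUP_ACK_THRESHOLD = 3  # duplicate-ACK threshold per RFC 2581

class Sender:
    """Toy model of fast retransmit / fast recovery decisions."""

    def __init__(self, cwnd=8):
        self.cwnd = cwnd          # congestion window, in segments
        self.last_ack = 0
        self.dup_acks = 0
        self.retransmitted = []

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                # Fast retransmit: resend the presumed-lost segment...
                self.retransmitted.append(ack_no)
                # ...and fast recovery: halve cwnd instead of
                # collapsing to one segment (slow start).
                self.cwnd = max(self.cwnd // 2, 2)
        else:
            self.last_ack = ack_no
            self.dup_acks = 0

s = Sender(cwnd=8)
for ack in [1, 2, 2, 2, 2]:       # three duplicates for segment 2
    s.on_ack(ack)
print(s.retransmitted, s.cwnd)    # [2] 4
```

Note how a window of fewer than four segments can never generate the
three duplicate ACKs this trigger requires, which is the problem the
next paragraphs describe.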

In general, TCP can increase its window beyond the delay-bandwidth
product. However, on LTN links the congestion window may remain
rather small, less than four segments, for long periods of time, due
to any of the following reasons:

1. The typical "file size" to be transferred over a connection is
   relatively small (Web requests, Web document objects, email
   messages, files, etc.). In particular, users of LTNs are not very
   willing to carry out large transfers, as the response time is so
   long.

2. If the link has a high BER, the congestion window tends to stay
   small.

3. When an LTN is combined with a highly congested wireline Internet
   path, congestion losses on the Internet have the same effect as
   2.

4. Commonly, ISPs/operators configure only a small number of buffers
   (even as few as 3 packets' worth) per user in their dial-up
   routers.

5. Often, small socket buffers are recommended with LTNs in order to
   prevent the RTO from inflating and to diminish the number of
   packets competing with other traffic.

A small window effectively prevents the sender from taking advantage
of fast retransmit. Moreover, efficient recovery from multiple losses
within a single window requires adoption of new proposals (NewReno
[RFC2582]). In addition, on slow paths with no packet reordering,
waiting for three duplicate ACKs to arrive postpones retransmission
unnecessarily.

Recommendation: Implement Fast Retransmit and Fast Recovery at this
time. This is a widely-implemented optimization and is currently at
Proposed Standard level. [AGS98] recommends implementation of Fast
Retransmit/Fast Recovery in satellite environments. NewReno [RFC2582]
apparently does help a sender better handle partial ACKs and multiple
losses in a single window, but at this point it is not recommended,
due to its experimental nature. Instead, SACK [RFC2018] is the
preferred mechanism.

4.2 Connection Setup with T/TCP [RFC1379, RFC1644]

TCP engages in a "three-way handshake" whenever a new connection is
set up. Data transfer is only possible after this phase has completed
successfully. T/TCP allows data to be exchanged in parallel with the
connection setup, saving valuable time for short transactions on
long-latency networks.

Recommendation: T/TCP is not recommended, for these reasons:

- It is an Experimental RFC.

- It is not widely deployed, and it has to be deployed at both ends
  of a connection.

- Security concerns have been raised that T/TCP is more vulnerable
  to address-spoofing attacks than TCP itself.

- At least some of the benefits of T/TCP (eliminating the three-way
  handshake on subsequent query-response transactions, for instance)
  are also available with persistent connections in HTTP/1.1, which
  is more widely deployed.

[ADGGHOSSTT98] does not have a recommendation on T/TCP in satellite
environments.

4.3 Slow Start Proposals

Because slow start dominates the network response seen by interactive
users at the beginning of a TCP connection, a number of proposals
have been made to modify or eliminate slow start in long-latency
environments.

Stability of the Internet is paramount, so these proposals must
demonstrate that they will not adversely affect Internet congestion
levels in significant ways.

4.3.1 Larger Initial Window

Traditional slow start, with an initial window of one segment, is a

time-consuming bandwidth adaptation procedure over LTNs. Studies on

an initial window larger than one segment [RFC2414, AHO98] resulted

in the TCP standard supporting a maximum value of 2 [RFC2581]. Higher

values are still experimental in nature.

In simulations with an increased initial window of three packets

[RFC2415], this proposal does not contribute significantly to packet

drop rates, and it has the added benefit of improving initial

response times when the peer device delays acknowledgements during

slow start (see next proposal).

[RFC2416] addresses situations where the initial window exceeds the

number of buffers available to TCP and indicates that this situation

is no different from the case where the congestion window grows

beyond the number of buffers available.

[RFC2581] now allows an initial congestion window of two segments. A

larger initial window, perhaps as many as four segments, might be

allowed in the future in environments where this significantly

improves performance (LFNs and LTNs).

Recommendation: Implement this on devices now. The research on this

optimization indicates that 3 segments is a safe initial setting, and

is centering on choosing between 2, 3, and 4. For now, use 2

(following RFC2581), which at least allows clients running query-

response applications to get an initial ACK from unmodified servers

without waiting for a typical delayed ACK timeout of 200

milliseconds, and saves two round-trips. An initial window of 3

[RFC2415] looks promising and may be adopted in the future pending

further research and experience.
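The window choices discussed above can be made concrete with a small
sketch (Python used purely for illustration; the 4380-byte cap is the
experimental upper bound from [RFC2414], not part of the standard):

```python
def initial_window(mss, rfc2414=False):
    """Initial congestion window in bytes.

    rfc2414=False -> conservative RFC 2581 value of two segments.
    rfc2414=True  -> experimental RFC 2414 upper bound:
                     min(4*MSS, max(2*MSS, 4380 bytes)).
    """
    if rfc2414:
        return min(4 * mss, max(2 * mss, 4380))
    return 2 * mss
```

For a typical Ethernet-derived MSS of 1460 bytes, the conservative
setting yields 2920 bytes, while the experimental formula caps the
initial window at 4380 bytes (three segments).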

4.3.2 Growing the Window during Slow Start

The sender increases its window based on the flow of ACKs coming back

from the receiver. Particularly during slow start, this flow is very

important. A couple of the proposals that have been studied are (1)

ACK counting and (2) ACK-every-segment.

4.3.2.1 ACK Counting

The main idea behind ACK counting is:

- Make each ACK count to its fullest by growing the window based

on the data being acknowledged (byte counting) instead of the

number of ACKs (ACK counting). This has been shown to cause

bursts which lead to congestion. [Allman98] shows that Limited

Byte Counting (LBC), in which the window growth is limited to 2

segments, does not lead to as much burstiness, and offers some

performance gains.

Recommendation: Unlimited byte counting is not recommended. Van

Jacobson cautions against byte counting [TCPSATMIN] because it leads

to burstiness, and recommends ACK spacing [ACKSPACING] instead.

ACK spacing requires ACKs to consistently pass through a single ACK-

spacing router. This requirement works well for W-WAN environments

if the ACK-spacing router is also the intermediate node.

Limited byte counting warrants further investigation before we can

recommend this proposal, but it shows promise.
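The difference between ACK counting, unlimited byte counting, and
Limited Byte Counting can be sketched as follows (an illustration of
the window-growth rules, not an implementation from [Allman98]):

```python
def cwnd_growth(cwnd, bytes_acked, smss, mode="ack"):
    """Congestion-window increase on one incoming ACK during slow start.

    mode="ack"  -> standard ACK counting: grow by one SMSS per ACK.
    mode="byte" -> unlimited byte counting: grow by all bytes ACKed
                   (bursty; not recommended).
    mode="lbc"  -> Limited Byte Counting [Allman98]: growth capped
                   at two segments per ACK.
    """
    if mode == "ack":
        return cwnd + smss
    if mode == "byte":
        return cwnd + bytes_acked
    if mode == "lbc":
        return cwnd + min(bytes_acked, 2 * smss)
    raise ValueError(mode)
```

With SMSS = 1460 and a stretch ACK covering 4380 bytes, ACK counting
grows the window by one segment, LBC by two, and unlimited byte
counting by the full amount acknowledged, however large the burst.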

4.3.2.2 ACK-every-segment

The main idea behind ACK-every-segment is:

- Keep a constant stream of ACKs coming back by turning off

delayed ACKs [RFC1122] during slow start. ACK-every-segment

must be limited to slow start, in order to avoid penalizing

asymmetric-bandwidth configurations. For instance, a low

bandwidth link carrying acknowledgements back to the sender,

hinders the growth of the congestion window, even if the link

toward the client has a greater bandwidth [BPK99].

Even though simulations confirm its promise (it allows receivers to

receive the second segment from unmodified senders without waiting

for a typical delayed ACK timeout of 200 milliseconds), for this

technique to be practical the receiver must acknowledge every segment

only when the sender is in slow start. Continuing to do so when the

sender is in congestion avoidance may have adverse effects on the

mobile device's battery consumption and on traffic in the network.

This violates a SHOULD in [RFC2581]: delayed acknowledgements SHOULD

be used by a TCP receiver.

"Disabling Delayed ACKs During Slow Start" is technically

unimplementable, as the receiver has no way of knowing when the

sender crosses ssthresh (the "slow start threshold") and begins using

the congestion avoidance algorithm. If receivers follow

recommendations for increased initial windows, disabling delayed ACKs

during an increased initial window would open the TCP window more

rapidly without doubling ACK traffic in general. However, this

scheme might double ACK traffic if most connections remain in slow-

start.

Recommendation: ACK only the first segment on a new connection with

no delay.
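This recommendation can be sketched as receiver-side logic (a
hypothetical simplification: segment boundaries and the 200 ms
delayed-ACK timer are abstracted away):

```python
class Receiver:
    """Delayed-ACK receiver that ACKs the first data segment of a new
    connection immediately (the recommendation above), then falls back
    to the standard delayed-ACK rule of one ACK per two segments,
    with the usual timer as a backstop (not modeled here).
    """
    def __init__(self):
        self.segments_seen = 0
        self.unacked = 0

    def on_segment(self):
        self.segments_seen += 1
        self.unacked += 1
        if self.segments_seen == 1:   # first segment: ACK with no delay
            return self._ack()
        if self.unacked >= 2:         # standard delayed-ACK rule
            return self._ack()
        return None                   # hold; timer would fire later

    def _ack(self):
        self.unacked = 0
        return "ACK"
```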

4.3.3 Terminating Slow Start

New mechanisms [ADGGHOSSTT98] are being proposed to improve TCP's

adaptive properties such that the available bandwidth is better

utilized while reducing the possibility of congesting the network.

This results in the closing of the congestion window to 1 segment

(which precludes fast retransmit), and the subsequent slow start

phase.

Theoretically, an optimum value for slow-start threshold (ssthresh)

allows connection bandwidth utilization to ramp up as aggressively as

possible without "overshoot" (using so much bandwidth that packets

are lost and congestion avoidance procedures are invoked).

Recommendation: Estimating the slow start threshold is not

recommended. Although this would be helpful if we knew how to do it,

rough consensus on the tcp-impl and tcp-sat mailing lists is that in

non-trivial operational networks there is no reliable method to probe

during TCP startup and estimate the bandwidth available.

4.3.4 Generating ACKs during Slow Start

Mitigations that inject additional ACKs (whether "ACK-first-segment"

or "ACK-every-segment-during-slow-start") beyond what today's

conformant TCPs inject are only applicable during the slow-start

phases of a connection. After an initial exchange, the connection

usually completes slow-start, so TCPs only inject additional ACKs

when (1) the connection is closed, and a new connection is opened, or

(2) the TCPs handle idle connection restart correctly by performing

slow start.

Item (1) is typical when using HTTP/1.0, in which each request-

response transaction requires a new connection. Persistent

connections in HTTP/1.1 help in maintaining a connection in

congestion avoidance instead of constantly reverting to slow-start.

Because of this, these optimizations which are only enabled during

slow-start do not get as much of a chance to act. Item (2), of

course, is independent of HTTP version.

4.4 ACK Spacing

During slow start, the sender responds to the incoming ACK stream by

transmitting N+1 segments for each ACK, where N is the number of new

segments acknowledged by the incoming ACK. This results in data

being sent at twice the speed at which it can be processed by the

network. Accordingly, queues will form, and due to insufficient

buffering at the bottleneck router, packets may get dropped before

the link's capacity is full.

Spacing out the ACKs effectively controls the rate at which the

sender will transmit into the network, and may result in little or no

queueing at the bottleneck router [ACKSPACING]. Furthermore, ACK
spacing reduces the size of the bursts.

Recommendation: No recommendation at this time. Continue monitoring

research in this area.
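A back-of-the-envelope sketch of the spacing interval: if the spacing
router releases one ACK per interval in which the bottleneck can
absorb the segments that ACK will trigger, slow-start bursts are
smoothed to the bottleneck rate. This assumes the router knows (or can
estimate) the bottleneck bandwidth and the per-ACK send count; it is
not the algorithm of [ACKSPACING].

```python
def ack_spacing_interval(segment_bytes, bottleneck_bps, segs_per_ack=2):
    """Minimum gap, in seconds, between forwarded ACKs so that the
    segments each ACK triggers (roughly two per ACK during slow
    start) fit within the bottleneck link's serialization capacity.
    """
    seg_time = segment_bytes * 8 / bottleneck_bps  # seconds per segment
    return segs_per_ack * seg_time
```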

4.5 Delayed Duplicate Acknowledgements

As was mentioned above, link-layer retransmissions may decrease the

BER enough that congestion accounts for most of packet losses; still,

nothing can be done about interruptions due to handoffs, moving

beyond wireless coverage, etc. In this scenario, it is imperative to

prevent interaction between link-layer retransmission and TCP

retransmission as these layers duplicate each other's efforts. In

such an environment it may make sense to delay TCP's efforts so as to

give the link-layer a chance to recover. With this in mind, the

Delayed Dupacks [MV97, Vaidya99] scheme selectively delays duplicate

acknowledgements at the receiver. It is preferable to allow a local

mechanism to resolve a local problem, instead of invoking TCP's end-

to-end mechanism and incurring the associated costs, both in terms of

wasted bandwidth and in terms of its effect on TCP's window behavior.

The Delayed Dupacks scheme can be used despite IP encryption since

the intermediate node does not need to examine the TCP headers.

Currently, it is not well understood how long the receiver should

delay the duplicate acknowledgments. In particular, the impact of

wireless medium access control (MAC) protocol on the choice of delay

parameter needs to be studied. The MAC protocol may affect the

ability to choose the appropriate delay (either statically or

dynamically). In general, significant variabilities in link-level

retransmission times can have an adverse impact on the performance of

the Delayed Dupacks scheme. Furthermore, as discussed later in

section 4.10.3, Delayed Dupacks and some other schemes (such as Snoop

[SNOOP]) are only beneficial in certain types of network links.

Recommendation: Delaying duplicate acknowledgements may be useful in

specific network topologies, but a general recommendation requires

further research and experience.
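The scheme described above can be sketched at the receiver as follows
(a hypothetical simplification; as noted, how to choose the delay
parameter is an open question):

```python
class DelayedDupacks:
    """Receiver-side sketch of the Delayed Dupacks scheme [MV97,
    Vaidya99].  The first two duplicate ACKs go out immediately; the
    third and later ones are scheduled `delay` seconds in the future,
    giving link-layer retransmission a chance to fill the hole.  If
    the hole fills in time, the held dupacks are cancelled and the
    sender's fast retransmit is avoided.
    """
    def __init__(self, delay):
        self.delay = delay
        self.dupacks = 0

    def on_out_of_order_segment(self, now):
        self.dupacks += 1
        if self.dupacks < 3:                       # send right away
            return ("send_dupack", now)
        return ("send_dupack", now + self.delay)   # hold back

    def on_hole_filled(self):
        self.dupacks = 0    # cancel any dupacks still being held
        return "send_cumulative_ack"
```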

4.6 Selective Acknowledgements [RFC2018]

SACK may not be useful in many LTNs, according to Section 1.1 of

[TCPHP]. In particular, SACK is more useful in the LFN regime,

especially if large windows are being used, because there is a

considerable probability of multiple segment losses per window. In

the LTN regime, TCP windows are much smaller, and burst errors must

be much longer in duration in order to damage multiple segments.

Accordingly, the complexity of SACK may not be justifiable, unless

there is a high probability of burst errors and congestion on the

wireless link. A desire for compatibility with TCP recommendations

for non-LTN environments may dictate LTN support for SACK anyway.

[AGS98] recommends use of SACK with Large TCP Windows in satellite

environments, and notes that this implies support for PAWS

(Protection Against Wrapped Sequence space) and RTTM (Round Trip Time

Measurement) as well.

Berkeley's SNOOP protocol research [SNOOP] indicates that SACK does

improve throughput for SNOOP when multiple segments are lost per

window [BPSK96]. SACK allows SNOOP to recover from multi-segment

losses in one round-trip. In this case, the mobile device needs to

implement some form of selective acknowledgements. If SACK is not

used, TCP may enter congestion avoidance as the time needed to

retransmit the lost segments may be greater than the retransmission

timer.

Recommendation: Implement SACK now for compatibility with other TCPs

and improved performance with SNOOP.
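To illustrate why SACK lets a sender (or a SACK-aware agent such as
SNOOP) repair multiple losses in one round-trip: the holes in the
sequence space fall out directly from the SACK blocks [RFC2018]. A
simplified sketch, assuming segment-aligned sequence numbers:

```python
def segments_to_retransmit(snd_una, snd_nxt, sack_blocks, smss):
    """Sequence numbers of segments between SND.UNA and SND.NXT that
    are not covered by any received SACK block (left edge inclusive,
    right edge exclusive) and are therefore candidates for
    retransmission in a single round-trip.
    """
    missing = []
    seq = snd_una
    while seq < snd_nxt:
        if not any(lo <= seq < hi for lo, hi in sack_blocks):
            missing.append(seq)
        seq += smss
    return missing
```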

4.7 Detecting Corruption Loss

4.7.1 Without Explicit Notification

In the absence of explicit notification from the network, some

researchers have suggested statistical methods for congestion

avoidance [Jain89, WC91, VEGAS]. A natural extension of these

heuristics would enable a sender to distinguish between losses caused

by congestion and other causes. The research results on the
reliability of sender-based heuristics are unfavorable [BV97, BV98].

[BV98a] reports better results in constrained environments using

packet inter-arrival times measured at the receiver, but highly-

variable delay - of the type encountered in wireless environments

during intercell handoff - confounds these heuristics.

Recommendation: No recommendation at this time - continue to monitor

research results.

4.7.2 With Explicit Notifications

With explicit notification from the network it is possible to

determine when a loss is due to congestion. Several proposals along

these lines include:

- Explicit Loss Notification (ELN) [BPSK96]

- Explicit Bad State Notification (EBSN) [BBKVP96]

- Explicit Loss Notification to the Receiver (ELNR), and Explicit

Delayed Dupack Activation Notification (EDDAN) (notifications

to mobile receiver) [MV97]

- Explicit Congestion Notification (ECN) [ECN]

Of these proposals, Explicit Congestion Notification (ECN) seems

closest to deployment on the Internet, and will provide some benefit

for TCP connections on long thin networks (as well as for all other

TCP connections).

Recommendation: No recommendation at this time. Schemes like ELNR and

EDDAN [MV97], in which the only systems that need to be modified are

the intermediate node and the mobile device, are slated for adoption

pending further research. However, this solution has some

limitations. Since the intermediate node must have access to the TCP

headers, the IP payload must not be encrypted.

ECN uses the TOS byte in the IP header to carry congestion

information (ECN-capable and Congestion-encountered). This byte is

not encrypted in IPSEC, so ECN can be used on TCP connections that

are encrypted using IPSEC.
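The marking behavior can be sketched as follows, using the codepoints
from the [ECN] proposal current at the time of writing (RFC 2481: bit
6 of the TOS byte is ECT, bit 7 is CE):

```python
ECT = 0x02  # ECN-Capable Transport (bit 6 of the TOS byte, RFC 2481)
CE = 0x01   # Congestion Experienced (bit 7 of the TOS byte, RFC 2481)

def router_mark_or_drop(tos, congested):
    """What an ECN-aware router does to a packet once its average
    queue crosses the marking threshold: mark ECN-capable traffic,
    drop the rest.  A sketch, not a router implementation.
    """
    if not congested:
        return tos, "forward"
    if tos & ECT:               # ECN-capable: mark instead of drop
        return tos | CE, "forward"
    return tos, "drop"          # legacy traffic: drop as usual
```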

Recommendation: Implement ECN. Even so, mechanisms for explicit
corruption notification are still relevant and should be tracked.

Note: ECN provides useful information that helps keep a bad situation
from deteriorating further, but it has some limitations for wireless
applications.

Absence of packets marked with ECN should not be interpreted by ECN-

capable TCP connections as a green light for aggressive

retransmissions. On the contrary, during periods of extreme network

congestion routers may drop packets marked with explicit notification

because their buffers are exhausted - exactly the wrong time for a

host to begin retransmitting aggressively.

4.8 Active Queue Management

As has been pointed out above, TCP responds to congestion by closing

down the window and invoking slow start. Long-delay networks take a

particularly long time to recover from this condition. Accordingly,

it is imperative to avoid congestion in LTNs. To remedy this, active

queue management techniques have been proposed as enhancements to

routers throughout the Internet [RED]. The primary motivation for

deployment of these mechanisms is to prevent "congestion collapse" (a

severe degradation in service) by controlling the average queue size

at the routers. As the average queue length grows, Random Early

Detection [RED] increases the possibility of dropping packets.
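The dependence of the drop probability on the average queue length can
be sketched as the classic linear RED profile (real implementations
also scale the probability by the count of packets accepted since the
last drop, omitted here):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """RED early-drop probability as a function of the EWMA average
    queue length [RED]: zero below min_th, rising linearly to max_p
    at max_th, forced drop beyond max_th.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```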

The benefits are:

- Reduce packet drops in routers. By dropping a few packets

before severe congestion sets in, RED avoids dropping bursts of

packets. In other words, the objective is to drop m packets

early to prevent n drops later on, where m is less than n.

- Provide lower delays. This follows from the smaller queue

sizes, and is particularly important for interactive

applications, for which the inherent delays of wireless links

already push the user experience to the limits of the non-

acceptable.

- Avoid lock-outs. Lack of resources in a router (and the

resultant packet drops) may, in effect, obliterate throughput

on certain connections. Because of active queue management, it

is more probable for an incoming packet to find available

buffer space at the router.

Active Queue Management has two components: (1) routers detect

congestion before exhausting their resources, and (2) they provide

some form of congestion indication. Dropping packets via RED is only

one example of the latter. Another way to indicate congestion is to

use ECN [ECN] as discussed above under "Detecting Corruption Loss:

With Explicit Notifications."

Recommendation: RED is currently being deployed in the Internet, and

LTNs should follow suit. ECN deployment should complement RED's.

4.9 Scheduling Algorithms

Active queue management helps control the length of the queues.

Additionally, a general solution requires replacing FIFO with other

scheduling algorithms that improve:

1. Fairness (by policing how different packet streams utilize the

available bandwidth), and

2. Throughput (by improving the transmitter's radio channel

utilization).

For example, fairness is necessary for interactive applications (like

telnet or web browsing) to coexist with bulk transfer sessions.

Proposals here include:

- Fair Queueing (FQ) [Demers90]

- Class-based Queueing (CBQ) [Floyd95]

Even if they are only implemented over the wireless link portion of

the communication path, these proposals are attractive in wireless

LTN environments, because new connections for interactive

applications can have difficulty starting when a bulk TCP transfer

has already stabilized using all available bandwidth.

In our base architecture described above, the mobile device typically

communicates directly with only one wireless peer at a given time:

the intermediate node. In some W-WANs, it is possible to directly

address other mobiles within the same cell. Direct communication

with each such wireless peer may traverse a spatially distinct path,

each of which may exhibit statistically independent radio link

characteristics. Channel State Dependent Packet Scheduling (CSDP)

[BBKT96] tracks the state of the various radio links (as defined by

the target devices), and gives preferential treatment to packets

destined for radio links in a "good" state. This avoids attempting to

transmit to (and expecting acknowledgements from) a peer on a "bad"

radio link, thus improving throughput.

A further refinement of this idea suggests that both fairness and

throughput can be improved by combining a wireless-enhanced CBQ with

CSDP [FSS98].

Recommendation: No recommendation at this time, pending further

study.

4.10 Split TCP and Performance-Enhancing Proxies (PEPs)

Given the dramatic differences between the wired and the wireless

links, a very common approach is to provide some impedance matching

where the two different technologies meet: at the intermediate node.

The idea is to replace an end-to-end TCP connection with two clearly

distinct connections: one across the wireless link, the other across

its wireline counterpart. Each of the two resulting TCP sessions

operates under very different networking characteristics, and may

adopt the policies best suited to its particular medium. For

example, in a specific LTN topology it may be desirable to modify TCP

Fast Retransmit to resend after the first duplicate ack and Fast

Recovery to not shrink the congestion window if the LTN link has an

extremely long RTT, is known to not reorder packets, and is not

subject to congestion. Moreover, on a long-delay link or on a link

with a relatively high bandwidth-delay product it may be desirable to

"slow-start" with a relatively large initial window, even larger than

four segments. While these kinds of TCP modifications can be

negotiated to be employed over the LTN link, they would not be

deployed end-to-end over the global Internet. In LTN topologies where

the underlying link characteristics are known, various similar
types of performance enhancements can be employed without endangering

operations over the global Internet.

In some proposals, in addition to a PEP mechanism at the intermediate

node, custom protocols are used on the wireless link (for example,

[WAP], [YB94] or [MOWGLI]).

Even if the gains from using non-TCP protocols are moderate or

better, the wealth of research on optimizing TCP for wireless, and

compatibility with the Internet are compelling reasons to adopt TCP

on the wireless link (enhanced as suggested in section 5 below).

4.10.1 Split TCP Approaches

Split-TCP proposals include schemes like I-TCP [ITCP] and MTCP [YB94]

which achieve performance improvements by abandoning end-to-end

semantics.

The Mowgli architecture [MOWGLI] proposes a split approach with

support for various enhancements at all the protocol layers, not only

at the transport layer. Mowgli provides an option to replace the

TCP/IP core protocols on the LTN link with a custom protocol that is

tuned for LTN links [KRLKA97]. In addition, the protocol provides

various features that are useful with LTNs. For example, it provides

priority-based multiplexing of concurrent connections together with

shared flow control, thus offering link capacity to interactive

applications in a timely manner even if there are bandwidth-intensive

background transfers. Also with this option, Mowgli preserves the

socket semantics on the mobile device so that legacy applications can

be run unmodified.

Split TCP approaches have several benefits as well as

drawbacks. Benefits related to split TCP approaches include the

following:

- Splitting the end-to-end TCP connection into two parts is a

straightforward way to shield the problems of the wireless link

from the wireline Internet path, and vice versa. Thus, a split TCP

approach enables applying local solutions to the local problems on

the wireless link. For example, it automatically solves the

problem of distinguishing congestion related packet losses on the

wireline Internet and packet losses due to transmission error on

the wireless link as these occur on separate TCP connections.

Even if both segments experience congestion, it may be of a

different nature and may be treated as such. Moreover, temporary

disconnections of the wireless link can be effectively shielded

from the wireline Internet.

- When one of the TCP connections crosses only a single-hop wireless

link or a very limited number of hops, some or all link

characteristics for the wireless TCP path are known. For example,

with a particular link we may know that the link provides reliable

delivery of packets, packets are not delivered out of order, or

the link is not subject to congestion. With this information about
the TCP path, defining the TCP mitigations to be employed becomes a
significantly easier task. In addition,

several mitigations that cannot be employed safely over the global

Internet can be successfully employed over the wireless link.

- Splitting one TCP connection into two separate ones allows much

earlier deployment of various recent proposals to improve TCP

performance over wireless links; only the TCP implementations of

the mobile device and intermediate node need to be modified, thus

allowing the vast number of Internet hosts to continue running the

legacy TCP implementations unmodified. Any mitigations that would

require modification of TCP in these wireline hosts may take far

too long to become widely deployed.

- Allows exploitation of various application level enhancements

which may give significant performance gains (see section 4.10.2).

Drawbacks related to split TCP approaches include the following:

- One of the main criticisms against split TCP approaches is
that they break TCP end-to-end semantics. This has various

drawbacks some of which are more severe than others. The most

detrimental drawback is probably that splitting the TCP connection

disables end-to-end usage of IP layer security mechanisms,

precluding the application of IPSec to achieve end-to-end

security. Still, IPSec could be employed separately in each of the

two parts, thus requiring the intermediate node to become a party

to the security association between the mobile device and the

remote host. This, however, is an undesirable or unacceptable

alternative in most cases. Other security mechanisms above the

transport layer, like TLS [RFC2246] or SOCKS [RFC1928], should be

employed for end-to-end security.

- Another drawback of breaking end-to-end semantics is that crashes

of the intermediate node become unrecoverable resulting in

termination of the TCP connections. Whether this should be

considered a severe problem depends on the expected frequency of

such crashes.

- It is often claimed that if TCP end-to-end semantics are broken,
applications relying on TCP to provide reliable data delivery become
more vulnerable. This, however, is

an overstatement as a well-designed application should never fully

rely on TCP in achieving end-to-end reliability at the application

level. First, current APIs to TCP, such as the Berkeley socket

interface, do not allow applications to know when a TCP
acknowledgement for previously sent user data arrives at the TCP

sender. Second, even if the application is informed of the TCP

acknowledgements, the sending application cannot know whether the

receiving application has received the data: it only knows that

the data reached the TCP receive buffer at the receiving end.

Finally, in order to achieve end-to-end reliability at the

application level an application level acknowledgement is required

to confirm that the receiver has taken the appropriate actions on

the data it received.

- When a mobile device moves, it is subject to handovers by the

serving base station. If the base station acts as the intermediate

node for the split TCP connection, the state of both TCP endpoints

on the previous intermediate node must be transferred to the new

intermediate node to ensure continued operation over the split TCP

connection. This requires extra work and causes overhead. However,

in most of the W-WAN wireless networks, unlike in W-LANs, the W-

WAN base station does not provide the mobile device with the

connection point to the wireline Internet (such base stations may

not even have an IP stack). Instead, the W-WAN network takes care

of the mobility and retains the connection point to the wireline

Internet unchanged while the mobile device moves. Thus, TCP state

handover is not required in most W-WANs.

- Packets traversing all the protocol layers up to the transport
layer and back down to the link layer incur extra overhead at the
intermediate node. In LTNs with low bandwidth, this extra overhead
does not cause serious additional performance problems, unlike in
W-LANs, which typically have much higher bandwidth.

- Split TCP proposals are not applicable to networks with asymmetric

routing. Deploying a split TCP approach requires that traffic to

and from the mobile device be routed through the intermediate

node. With some networks, this cannot be accomplished, or it

requires that the intermediate node be located several hops away
from the wireless network edge, which is impractical in many cases
and may result in non-optimal routing.

- Split TCP, as the name implies, does not address problems related

to UDP.

It should be noted that using split TCP does not necessarily exclude

simultaneous usage of IP for end-to-end connectivity. Correct usage

of split TCP should be managed per application or per connection and

should be under the end-user control so that the user can decide

whether a particular TCP connection or application makes use of split

TCP or whether it operates end-to-end directly over IP.

Recommendation: Split TCP proposals that alter TCP semantics are not

recommended. Deploying custom protocols on the wireless link, as
MOWGLI proposes, is not recommended, because this note gives

preference to (1) improving TCP instead of designing a custom

protocol and (2) allowing end-to-end sessions at all times.

4.10.2 Application Level Proxies

Nowadays, application level proxies are widely used in the Internet.

Such proxies include Web proxy caches, relay MTAs (Mail Transfer

Agents), and secure transport proxies (e.g., SOCKS). In effect,

employing an application level proxy results in a "split TCP

connection" with the proxy as the intermediary. Hence, some of the

problems present with wireless links, such as combining of a

congested wide-area Internet path with a wireless LTN link, are

automatically alleviated to some extent.

Application protocols often involve many (unnecessary) round trips,
verbose headers, and inefficient encodings. Even unnecessary

data may get delivered over the wireless link in regular application

protocol operation. In many cases a significant amount of this

overhead can be reduced by simply running an application level proxy

on the intermediate node. With LTN links, significant additional

improvement can be achieved by introducing application level proxies

with application-specific enhancements. Such a proxy may employ an

enhanced version of the application protocol over the wireless link.

In an LTN environment enhancements at the application layer may

provide much more notable performance improvements than any transport

level enhancements.

The Mowgli system provides full support for adding application level

agent-proxy pairs between the client and the server, the agent on the

mobile device and the proxy on the intermediate node. Such a pair may

be either explicit or fully transparent to the applications, but it

is, at all times, under the end-user control. Good examples of

enhancements achieved with application-specific proxies include

Mowgli WWW [LAKLR95], [LHKR96] and WebExpress [HL96], [CTCSM97].

Recommendation: Usage of application level proxies is conditionally

recommended: an application must be proxy enabled and the decision of

employing a proxy for an application must be under the user control

at all times.

4.10.3 Snoop and its Derivatives

Berkeley's SNOOP protocol [SNOOP] is a hybrid scheme mixing link-

layer reliability mechanisms with the split connection approach. It

is an improvement over split TCP approaches in that end-to-end

semantics are retained. SNOOP does two things:

1. Locally (on the wireless link) retransmit lost packets, instead

of allowing TCP to do so end-to-end.

2. Suppress the duplicate acks on their way from the receiver back

to the sender, thus avoiding fast retransmit and congestion

avoidance at the latter.

Thus, the Snoop protocol is designed to avoid unnecessary fast

retransmits by the TCP sender, when the wireless link layer

retransmits a packet locally. Consider a system that does not use the

Snoop agent. Consider a TCP sender S that sends packets to receiver R

via an intermediate node IN. Assume that the sender sends packets A,
B, C, D, E (in that order), which are forwarded by IN to the wireless

receiver R. Assume that the intermediate node then retransmits B

subsequently, because the first transmission of packet B is lost due

to errors on the wireless link. In this case, receiver R receives

packets A, C, D, E and B (in that order). Receipt of packets C, D and

E triggers duplicate acknowledgements. When the TCP sender receives

three duplicate acknowledgements, it triggers fast retransmit (which

results in a retransmission, as well as reduction of congestion

window). The fast retransmit occurs despite the link level

retransmit on the wireless link, degrading throughput.

SNOOP [SNOOP] deals with this problem by dropping TCP dupacks

appropriately (at the intermediate node). The Delayed Dupacks scheme
(see section 4.5) attempts to approximate Snoop without requiring

modifications at the intermediate node. Such schemes are needed only

if the possibility of a fast retransmit due to wireless errors is

non-negligible. In particular, if the wireless link uses a stop-and-

go protocol (or otherwise delivers packets in-order), then these

schemes are not very beneficial. Also, if the bandwidth-delay

product of the wireless link is smaller than four segments, the

probability that the intermediate node will have an opportunity to

send three new packets before a lost packet is retransmitted is

small. Since at least three dupacks are needed to trigger a fast

retransmit, with a wireless bandwidth-delay product less than four

packets, schemes such as Snoop and Delayed Dupacks would not be

necessary (unless the link layer is not designed properly).

Conversely, when the wireless bandwidth-delay product is large

enough, Snoop can provide significant performance improvement

(compared with standard TCP). For further discussion on these topics,

please refer to [Vaidya99].
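The two Snoop behaviors listed above (local retransmission and dupack
suppression) can be sketched as follows. This is a hypothetical,
highly simplified illustration: sequence numbers count segments rather
than bytes, and the real agent also maintains retransmission timers
and a link-loss estimator.

```python
class SnoopAgent:
    """Sketch of the dupack handling in a Snoop-style agent [SNOOP]
    running at the intermediate node between a wired sender and a
    wireless receiver.
    """
    def __init__(self):
        self.cache = {}     # seqno -> cached segment copy
        self.retx = {}      # seqno -> already retransmitted locally?
        self.last_ack = -1

    def from_sender(self, seqno, segment):
        self.cache[seqno] = segment        # keep a local copy
        return ("wireless", segment)       # forward to the receiver

    def from_receiver(self, ackno):
        if ackno > self.last_ack:          # new cumulative ACK:
            self.last_ack = ackno          # clean the cache and pass
            self.cache = {s: p for s, p in self.cache.items() if s >= ackno}
            return ("wired_ack", ackno)    # the ACK on to the sender
        # Duplicate ACK: retransmit the missing segment locally (once)
        # and suppress the dupack so the sender never sees three of them.
        if ackno in self.cache and not self.retx.get(ackno):
            self.retx[ackno] = True
            return ("wireless_retx", self.cache[ackno])
        return ("suppressed", None)
```

Replaying the A-E scenario above: the cumulative ACK triggered by A is
passed through, the first dupack triggers a local retransmission of B,
and the remaining dupacks are absorbed at the intermediate node.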

The Delayed Dupacks scheme tends to provide performance benefit in

 
 
 