In the first week of May, a message was posted on the NANOG list by someone
who had a dispute with one of his ISPs.
When it became obvious this dispute wasn't going to
be resolved, the ISP wasn't content with no longer providing any service,
but they also contacted the other ISP this network connected to, and asked
them to stop routing the /22 out of their range the (ex-)customer was using.
The second ISP complied and the customer network was cut off from the internet.
(This all happened on a Sunday afternoon, so there is likely more to
the story than what was posted on the NANOG list.)
The surprising thing was that many people on the list didn't think this
was an unreasonable thing to do. It is generally accepted that a network
using an ISP's address space should stop using these addresses when it no
longer connects to that ISP, but in the cases I have been involved with there
was always a reasonable time to renumber. Obviously,
relying on such a grace period is a very dangerous thing to do. You have
been warned.
Permalink - posted 2002-06-30
During the second week of April there was some discussion on reordering
of packets on parallel links at Internet Exchanges. Equipment vendors try
very hard to make sure this doesn't happen, but this comes with the risk
that balancing traffic over parallel links doesn't work as well as it should.
It is generally accepted that reordering leads to inefficiency or even
slowdowns in TCP implementations, but it seems unlikely that much
reordering will happen unless hosts are connected at the speed of the
parallel links (i.e., Gigabit Ethernet) or there is significant congestion.
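Vendors typically avoid reordering by hashing each packet's flow
identifiers onto a link, so that all packets of one TCP session cross the
same physical link and can never overtake each other. A minimal sketch of
the idea in Python (the 5-tuple fields and the hash function are
illustrative; real hardware uses its own, much cheaper hashes):

import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, protocol, num_links):
    # Hash the 5-tuple so that every packet of a session maps to the
    # same link; packets within one TCP session then stay in order.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_links

# All packets of this session go over the same one of four links:
print(pick_link("10.0.0.1", "192.0.2.7", 34567, 80, 6, 4))

The flip side is exactly the balancing problem mentioned above: a single
large flow can never use more than one link, and an unlucky traffic mix
loads the links unevenly.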
Permalink - posted 2002-06-29
April Fools' Day is coming up again! Don't let it catch you by surprise.
Over the years, a number of RFCs have been published on April first, such as:
0894
Standard for the transmission of IP datagrams over Ethernet
networks. C. Hornig. Apr-01-1984. (Format: TXT=5697 bytes) (Also
STD0041) (Status: STANDARD)
But not all RFCs stubbornly ignore their publishing day. The first
"fools day compatible" RFC dates back to 1978:
0748
Telnet randomly-lose option. M.R. Crispin. Apr-01-1978. (Format:
TXT=2741 bytes) (Status: UNKNOWN)
Probably the most famous of all is RFC 1149, which was updated in RFC 2549:
1149
Standard for the transmission of IP datagrams on avian carriers.
D. Waitzman. Apr-01-1990. (Format: TXT=3329 bytes) (Updated by
RFC2549) (Status: EXPERIMENTAL)
2549
IP over Avian Carriers with Quality of Service. D. Waitzman.
Apr-01-1999. (Format: TXT=9519 bytes) (Updates RFC1149) (Status:
INFORMATIONAL)
It took some time, but in 2001 the Bergen Linux User Group implemented
RFC 1149 and carried out some tests.
Can't get enough? Here are more April first RFCs: RFC 1097, RFC 1217,
RFC 1313, RFC 1437, RFC 1438, RFC 1605, RFC 1606, RFC 1607, RFC 1776,
RFC 1924, RFC 1925, RFC 1926, RFC 1927, RFC 2100, RFC 2321, RFC 2323,
RFC 2324, RFC 2550, RFC 2551 and RFC 2795.
Permalink - posted 2002-04-01
The second half of February saw two main topics on the NANOG list:
DS3 performance and satellite latency. The long round trip times for
satellite connections wreak havoc on TCP performance. In order to be able
to utilize the available bandwidth, TCP needs to keep sending data without
waiting for an acknowledgment for at least a full round trip time. Or in
other words: TCP performance is limited to the window size divided by
the round trip time. The TCP window (the amount of data TCP will send before
stopping and waiting for an acknowledgment) is limited by two factors: the
send buffer on the sending system and the 16 bit window size field in the
TCP header. So on a 600 ms RTT satellite link the maximum TCP performance
is limited to 107 kilobytes per second (850 kbps) by the size of the
header field, and if a sender uses a 16 kilobyte buffer (a fairly common size)
this drops to as little as 27 kilobytes per second (215 kbps). Because of the
TCP slow start mechanism, it also takes several seconds to reach this speed.
Fortunately, RFC 1323, TCP Extensions for High Performance,
introduces a "window scale" option to increase the TCP window to a maximum of
1 GB, if both ends of the connection allocate enough buffer space.
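These limits are easy to verify (a small Python sketch using the figures
from this paragraph; the 1460 byte segment size is an assumption on my
part):

import math

def max_tcp_throughput(window_bytes, rtt_seconds):
    # TCP can send at most one full window per round trip time.
    return window_bytes / rtt_seconds

RTT = 0.6   # 600 ms satellite round trip time
MSS = 1460  # assumed maximum segment size

for window in (65535, 16 * 1024):  # 16-bit header limit, 16 KB send buffer
    rate = max_tcp_throughput(window, RTT)
    # Slow start roughly doubles the window every round trip,
    # starting from a single segment:
    ramp_up = math.log2(window / MSS) * RTT
    print(f"{window} byte window: {rate / 1024:.0f} KB/s, reached after "
          f"~{ramp_up:.1f} s of slow start")

This prints 107 KB/s (reached after some 3.3 seconds of slow start) for
the largest window a plain TCP header can express, and 27 KB/s for the
16 kilobyte send buffer.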
The other subject that received a lot of attention, the maximum usable
bandwidth of a DS3/T3 line, is also related to TCP performance. When the
line gets close to being fully utilized, short data bursts (which are very
common in IP) will fill up the send queue. When the queue is full, additional
incoming packets are discarded. This is called a "tail drop". If the TCP
session which loses a packet doesn't support "fast retransmit", or if several
packets from the same session are dropped, this TCP session will go into
"slow start" and slow down a lot. This often happens to several TCP
sessions at the same time, so those now all perform slow start at the same
time. So they all reach the point where the line can't handle the traffic
load at the same time, and another small burst will trigger another round
of tail drops.
A possible solution is to use Random Early Detection (RED) queuing
rather than First In, First Out (FIFO). RED will start dropping more and more
packets as the queue fills up, to trigger TCP congestion avoidance and
slow down the TCP sessions more gently. But this only works if there aren't
(m)any tail drops, which is unlikely if there is only limited buffer space.
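The drop decision itself is simple enough to sketch (this follows the
classic RED algorithm in outline; the thresholds and maximum drop
probability are illustrative, and real implementations work on an
exponentially weighted moving average of the queue size):

def red_drop_probability(avg_queue_len, min_th=20, max_th=100, max_p=0.1):
    # Below min_th, never drop. Between min_th and max_th, drop with a
    # probability rising linearly up to max_p. At max_th and above,
    # drop everything, i.e. fall back to tail drop behavior.
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)

Because the early drops hit sessions at random, senders back off at
different times instead of all at once.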
Unfortunately, Cisco uses a default queue size of 40 packets. Queuing theory
tells us this queue will be filled entirely (on average) at 97% line
utilization. So at 97%, even a one packet burst will result in a tail drop.
The solution is to increase the queue size, in addition to enabling
RED. On a Cisco:
interface ATM0
random-detect
hold-queue 500 out
This gives RED the opportunity to start dropping individual packets long
before the queue fills up entirely and tail drops occur. The price is a
somewhat longer queuing delay. At 99% utilization, there will be an
average of 98 packets in the queue, but at 45 Mbps this will only introduce
a delay of 9 ms.
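Those figures follow from the standard M/M/1 queuing model, which appears
to be what's used here (a quick sketch; the 500 byte average packet size
is an assumption needed to turn packets into milliseconds):

def avg_queue_length(utilization):
    # Average number of packets waiting in an M/M/1 queue, excluding
    # the packet currently being transmitted: rho^2 / (1 - rho).
    return utilization ** 2 / (1 - utilization)

AVG_PACKET_BYTES = 500        # assumed average IP packet size
LINE_RATE_BPS = 45_000_000    # DS3

for rho in (0.97, 0.99):
    queued = avg_queue_length(rho)
    delay_ms = queued * AVG_PACKET_BYTES * 8 / LINE_RATE_BPS * 1000
    print(f"{rho:.0%} utilization: {queued:.0f} packets queued on average, "
          f"{delay_ms:.0f} ms extra delay")

At 99% this reproduces the 98 packets and 9 ms quoted above; somewhere
between 97% and 98% the average backlog reaches the 40-packet default,
depending on whether the packet being transmitted is counted.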
Permalink - posted 2002-03-31
Between August 20 and 26 an interesting subject came up on the NANOG list:
when using Gigabit Ethernet for exchange points, there can be a nice performance gain
if a larger MTU than the standard 1500 byte one is used. However, this
will only work if all attached layer 3 devices agree on the MTU. If a
single MTU can't be agreed upon, this can be accomplished by creating
several VLANs and setting the MTU per subinterface.
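Part of the gain is plain header overhead (a back-of-the-envelope sketch;
the 40 bytes of IPv4 plus TCP headers, without options, are an assumption):

HEADER_BYTES = 40  # 20 bytes IPv4 + 20 bytes TCP, no options (assumed)

for mtu in (1500, 9000):
    payload_fraction = (mtu - HEADER_BYTES) / mtu
    print(f"MTU {mtu}: {payload_fraction:.1%} of each packet is payload")

The jump from 97.3% to 99.6% looks modest, but the bigger win is that
routers and hosts handle six times fewer packets for the same amount of
data.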
Permalink - posted 2001-09-30