PapersFlow Research Brief

Physical Sciences · Computer Science

Network Traffic and Congestion Control
Research Guide

What is Network Traffic and Congestion Control?

Network Traffic and Congestion Control is the set of mechanisms and algorithms used in computer networks to manage data packet flows, prevent queue overflows, and ensure efficient resource utilization under varying load conditions.

This field encompasses 73,378 works focused on TCP performance, active queue management, bandwidth estimation, internet topology, multicast routing, delay analysis, wireless networks, and QoS routing. Key contributions include random early detection (RED) gateways that detect incipient congestion via average queue size (Floyd and Jacobson 1993). Congestion avoidance protocols adjust transmission rates to maintain network stability (Jacobson 1995).

Topic Hierarchy

Physical Sciences → Computer Science → Computer Networks and Communications → Network Traffic and Congestion Control
Papers: 73.4K · 5-Year Growth: N/A · Total Citations: 861.1K

Why It Matters

Network Traffic and Congestion Control directly impacts the performance of the global Internet, enabling reliable data transfer for applications from web browsing to real-time video. Floyd and Jacobson (1993) introduced RED gateways, which drop packets probabilistically based on average queue size to signal congestion early; this prevents router buffer overflows in backbone networks and has been adopted in routers worldwide. Jacobson (1995) developed TCP's additive increase/multiplicative decrease (AIMD) algorithm, which has sustained Internet growth by stabilizing throughput amid exponential traffic increases. Kelly et al. (1998) formalized proportional fairness using shadow prices, optimizing rate allocation in large-scale networks and influencing data center fabrics. These mechanisms underpin services like RTP for real-time audio/video (Schulzrinne et al. 2003), supporting platforms with billions of daily streams.

Reading Guide

Where to Start

Start with "Random early detection gateways for congestion avoidance" by Floyd and Jacobson (1993): it introduces foundational active queue management concepts with clear queue dynamics and remains essential for understanding modern router behavior.

Key Papers Explained

"Congestion avoidance and control" by Jacobson (1995) established TCP's AIMD core, which Floyd and Jacobson (1993) extended via RED to prevent synchronization. Kelly et al. (1998) generalized AIMD to network-wide proportional fairness using shadow prices, building on Jacobson (1995). Parekh and Gallager (1993) complemented these with GPS for weighted fairness in integrated services, influencing Kelly et al. (1998). Paxson and Floyd (1995) critiqued Poisson assumptions underlying early models like Jacobson (1995).

Paper Timeline

  • 1993 · Random early detection gateways ... · 6.3K cites
  • 1993 · A generalized processor sharing ... · 3.7K cites
  • 1995 · Congestion avoidance and control · 5.3K cites
  • 1998 · Rate control for communication n... · 5.0K cites
  • 1999 · On power-law relationships of th... · 4.2K cites
  • 2003 · RTP: A Transport Protocol for Re... · 5.7K cites
  • 2008 · OpenFlow · 8.3K cites

Papers ordered chronologically; OpenFlow (2008) is the most-cited.

Advanced Directions

Research continues on TCP enhancements for high-bandwidth-delay networks, active queue management beyond RED like CoDel or FQ-CoDel, and datacenter congestion control such as DCQCN, though no recent preprints are available in the data.

Papers at a Glance

# · Paper · Year · Venue · Citations
1. OpenFlow · 2008 · ACM SIGCOMM Computer C... · 8.3K
2. Random early detection gateways for congestion avoidance · 1993 · IEEE/ACM Transactions ... · 6.3K
3. RTP: A Transport Protocol for Real-Time Applications · 2003 · 5.7K
4. Congestion avoidance and control · 1995 · ACM SIGCOMM Computer C... · 5.3K
5. Rate control for communication networks: shadow prices, propor... · 1998 · Journal of the Operati... · 5.0K
6. On power-law relationships of the Internet topology · 1999 · 4.2K
7. A generalized processor sharing approach to flow control in in... · 1993 · IEEE/ACM Transactions ... · 3.7K
8. Wide area traffic: the failure of Poisson modeling · 1995 · IEEE/ACM Transactions ... · 3.7K
9. Integrated Services in the Internet Architecture: an Overview · 1994 · 3.2K
10. Centrality and network flow · 2005 · Social Networks · 3.0K

Frequently Asked Questions

What is Random Early Detection (RED) in congestion control?

RED gateways detect incipient congestion by monitoring average queue size and probabilistically drop arriving packets to notify connections. This approach avoids global synchronization of TCP retransmissions and stabilizes queues before full buffers form. Floyd and Jacobson (1993) demonstrated RED's effectiveness in packet-switched networks.
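The drop decision above can be sketched in a few lines. The thresholds, maximum probability, and queue weight below are illustrative placeholders, not the paper's recommended settings:

```python
import random

def update_avg(avg, queue_len, w_q=0.002):
    """EWMA of the instantaneous queue length, updated on each arrival."""
    return (1 - w_q) * avg + w_q * queue_len

def red_drop(avg_queue, min_th=5.0, max_th=15.0, max_p=0.02):
    """RED drop decision: no drops below min_th, forced drops at or
    above max_th, and in between a drop probability that rises
    linearly toward max_p as the average queue grows."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

Because the probability is driven by the smoothed average rather than the instantaneous queue, transient bursts pass through while sustained congestion draws early, desynchronized drops.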

How does TCP congestion avoidance work?

TCP uses additive increase/multiplicative decrease (AIMD) to probe for available bandwidth while backing off during congestion. Senders increment congestion windows linearly in congestion avoidance phase and halve them on packet loss signals. Jacobson (1995) showed this achieves fair bandwidth sharing and network stability.
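A minimal sketch of the AIMD update, using the classic parameters (add one segment per loss-free RTT, halve on loss):

```python
def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One congestion-window update per RTT: add alpha segments when
    the RTT completes without loss, multiply by beta on a loss signal."""
    return max(1.0, cwnd * beta) if loss else cwnd + alpha

# Classic AIMD sawtooth: linear growth, then a halving on loss
cwnd = 10.0
trace = []
for rtt in range(6):
    cwnd = aimd_step(cwnd, loss=(rtt == 3))   # one simulated loss event
    trace.append(cwnd)
# trace -> [11.0, 12.0, 13.0, 6.5, 7.5, 8.5]
```

The multiplicative decrease is what makes competing flows converge toward equal shares: a flow holding more bandwidth gives back more on each loss event.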

What is proportional fairness in rate control?

Proportional fairness maximizes aggregate utility via shadow prices, balancing throughput and equity in communication networks. Algorithms converge to stable equilibria around system optima. Kelly et al. (1998) proved stability for additive increase/multiplicative decrease generalizations in large-scale networks.
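The convergence can be illustrated with a toy primal-style iteration in the spirit of Kelly et al. (1998) on a single shared link; the penalty function, gain, and weights below are illustrative choices, not the paper's:

```python
def kelly_primal(weights, capacity, steps=5000, kappa=0.01):
    """Toy primal rate-control iteration on one shared link: each flow r
    adjusts its rate toward w_r / price, where the price p is a convex
    penalty that grows steeply as total load approaches capacity. The
    stationary point is the proportionally fair split
    x_r = w_r * capacity / sum(weights)."""
    x = [1.0] * len(weights)
    for _ in range(steps):
        load = sum(x)
        p = (load / capacity) ** 4          # congestion price (shadow price)
        x = [xi + kappa * (w - xi * p) for xi, w in zip(x, weights)]
    return x

rates = kelly_primal([1.0, 2.0, 1.0], capacity=4.0)
# converges toward the proportionally fair split [1.0, 2.0, 1.0]
```

At the fixed point each flow satisfies w_r = x_r * p, which for this penalty lands exactly on x_r = w_r * C / sum(w): the weighted proportionally fair allocation.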

Why does wide-area traffic fail Poisson modeling?

Packet interarrivals exhibit heavy-tailed distributions rather than exponential, due to TCP feedback and bursty sessions. Fifteen wide-area traces confirmed non-Poisson arrivals for TCP sessions and packets. Paxson and Floyd (1995) quantified this mismatch, invalidating traditional queueing models.
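The mismatch is easy to reproduce: exponential interarrivals (the Poisson model) have a coefficient of variation near 1, while heavy-tailed interarrivals are far burstier. A small simulation, with the Pareto shape parameter chosen purely for illustration:

```python
import random
import statistics

def interarrival_cv(gaps):
    """Coefficient of variation (stddev / mean) of interarrival times.
    Poisson arrivals give CV close to 1; heavy-tailed arrivals give
    much larger, sample-size-dependent values."""
    return statistics.pstdev(gaps) / statistics.fmean(gaps)

random.seed(42)
n = 50_000
exp_gaps = [random.expovariate(1.0) for _ in range(n)]       # Poisson model
pareto_gaps = [random.paretovariate(1.2) for _ in range(n)]  # heavy-tailed
```

With a shape parameter below 2 the Pareto distribution has infinite variance, so the sample CV keeps growing with trace length instead of settling near 1, which is the signature that breaks classical queueing analysis.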

What role does Generalized Processor Sharing play in flow control?

GPS allocates service in infinitesimal shares proportional to weights, providing fairness and delay bounds in integrated services networks. It enables rate-based flow control for virtual circuits. Parekh and Gallager (1993) analyzed GPS for single-node cases, bounding delays for leaky-bucket constrained sessions.
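The instantaneous GPS allocation follows directly from the definition: capacity is split among currently backlogged sessions in proportion to their weights. The weights and capacity below are illustrative:

```python
def gps_rates(weights, backlogged, capacity):
    """Instantaneous GPS service rates: each backlogged session i
    receives capacity * phi_i / sum(phi_j over backlogged j);
    idle sessions receive nothing, and their share is redistributed."""
    total = sum(w for w, b in zip(weights, backlogged) if b)
    return [capacity * w / total if b else 0.0
            for w, b in zip(weights, backlogged)]

# Weights 1:2:1; session 2 is idle, so its share is redistributed
# to the backlogged sessions in weight proportion
rates = gps_rates([1.0, 2.0, 1.0], [True, False, True], capacity=10.0)
# rates -> [5.0, 0.0, 5.0]
```

This per-instant redistribution is what yields both the fairness and, for leaky-bucket constrained sessions, the worst-case delay bounds Parekh and Gallager derive.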

What are the power-laws in Internet topology?

Internet degree, AS-path length, and hop-count distributions follow power-laws, holding across 45% size growth from 1997-1998. These patterns reveal self-similarity despite apparent randomness. Faloutsos et al. (1999) measured exponents like outdegree α ≈ -2.2 from three snapshots.
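The exponent-fitting idea, a least-squares line through the log-log degree-frequency plot, can be sketched on synthetic data; the distribution and constants below are illustrative, not the paper's AS-level measurements:

```python
import math

def powerlaw_slope(freq_by_degree):
    """Least-squares slope of log(frequency) vs log(degree); for a
    power-law f(d) ~ d**a this recovers the exponent a."""
    pts = [(math.log(d), math.log(f)) for d, f in freq_by_degree.items()]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic degree distribution with f(d) = 1000 * d**-2.2
freqs = {d: 1000.0 * d ** -2.2 for d in range(1, 50)}
slope = powerlaw_slope(freqs)
# slope recovers the exponent: approximately -2.2
```

On real measurement data the fit is noisier, especially in the tail, which is why later work refined exponent estimation beyond simple log-log regression.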

Open Research Questions

  • How can congestion control adapt to power-law topologies for stable end-to-end performance?
  • What queue management policies minimize delay in wireless networks with variable capacity?
  • How do multicast routing and QoS interact under heavy-tailed traffic loads?
  • Can bandwidth estimation improve fairness in integrated services without per-flow state?
  • What delay bounds hold for GPS in multi-node networks with dynamic topologies?

Research Network Traffic and Congestion Control with AI

PapersFlow provides specialized AI tools for Computer Science researchers working on this topic.


Start Researching Network Traffic and Congestion Control with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
