Algorithmic problems on the Internet
The Internet is a great source of problems. I list below some that
I have worked on in the recent past or continue to work on at present.
Projects:
- Competitive Analysis of the TCP Congestion Control Algorithm, with Professors Jeff Edmonds and Patrick Dymond.
(pdf file of the submitted journal version) [Appeared in SPAA 03]
ABSTRACT :
While the well-known Transmission Control Protocol (TCP) is a de
facto standard for reliable communication on the Internet, and performs
well in practice, the question ``how good is the TCP/IP congestion control
algorithm?'' is not completely resolved. In this paper, we provide some
answers to this question using the competitive analysis framework. First,
we prove that for networks with a single bottleneck (or point of congestion),
TCP is competitive to the optimal global algorithm in minimizing the user-perceived
latency or flow time of the sessions. Specifically, we show that
with O(1) times as much bandwidth and O(1) extra time per job, TCP is O(1)-competitive
against an optimal global algorithm. We motivate the need for allowing
TCP to have extra resources by observing that existing lower bounds for
non-clairvoyant scheduling algorithms imply that no online, distributed,
non-clairvoyant algorithm can be competitive with an optimal offline algorithm
if both algorithms are given the same resources. Second, we show that TCP
is fair by proving that it converges quickly to allocations where every session
gets its fair share of network bandwidth.
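As a rough illustration of the fairness claim above (not the paper's analysis), the following sketch simulates two AIMD senders sharing a single bottleneck of capacity C: on congestion both halve their sending rates, otherwise both increase additively, and the rates converge toward the fair share C/2. All parameter names and values are illustrative assumptions.

# Illustrative sketch, not the paper's model: two AIMD senders on one bottleneck.
def aimd_fair_share(capacity=100.0, rounds=200, alpha=1.0, beta=0.5):
    x, y = 5.0, 80.0          # deliberately unequal starting rates
    history = []
    for _ in range(rounds):
        if x + y > capacity:   # congestion signal at the single bottleneck
            x *= beta          # multiplicative decrease for both senders
            y *= beta
        else:                  # additive increase while there is spare capacity
            x += alpha
            y += alpha
        history.append((x, y))
    return history

if __name__ == "__main__":
    final_x, final_y = aimd_fair_share()[-1]
    print(f"final rates: {final_x:.1f}, {final_y:.1f} (fair share is 50.0)")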
- A New Active Queue Management Algorithm
with Professors Jeff Edmonds, Patrick Dymond, and Mr. Kashif Ali. [submitted]
ABSTRACT :
In this paper, we first propose a new active queue management algorithm, RQM, based on the rates
of traffic passing through a router rather than on the buffer occupancy inside the router.
Our algorithm does not require per-session
states to be maintained at the routers. We prove that our algorithm is
competitive to the optimal under some simplifying assumptions. Using simulations,
we show that our algorithm is superior to the well-studied RED and REM
algorithms in many scenarios.
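To make the rate-based idea concrete, here is a hypothetical sketch of a marking rule that depends only on the measured aggregate arrival rate relative to link capacity, in contrast to RED's dependence on queue occupancy. The function name, the parameter gamma, and the exact formula are illustrative assumptions, not the RQM algorithm from the paper.

# Hypothetical rate-based drop/mark probability: grows with the overload factor.
def rate_based_drop_probability(arrival_rate, link_capacity, gamma=2.0):
    """Return a drop/mark probability that grows as the aggregate arrival
    rate exceeds the link capacity; zero when the link is underloaded."""
    overload = arrival_rate / link_capacity - 1.0
    if overload <= 0.0:
        return 0.0
    return min(1.0, gamma * overload)

# Example: a router whose aggregate arrival rate is 20% above capacity.
print(rate_based_drop_probability(arrival_rate=120.0, link_capacity=100.0))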
- Lower Bounds and Top-down Algorithms for
Loss-based Multicast Tree Construction
[submitted]
ABSTRACT :
While robust measurements of network dynamics are essential for
the design and management of internetworks, administrative, economic
and privacy issues make it impractical to monitor every link in the network.
Therefore it becomes necessary to infer network and performance characteristics
from end-to-end measurements. One characteristic that is frequently needed
to build reliable multicast protocols is the topology of the tree used
for multicast communications. Several algorithms have been proposed in
the literature for making use of correlations between packets lost at each
of the receivers to infer a multicast tree. In this work, we consider two
aspects of the multicast tree inference problem that have not received much
attention in the literature. The first problem we look at is the sample
complexity or the number of samples required to infer multicast trees
with given error bounds. Our results quantify the tradeoff between the number
of samples and the probability of computing the tree correctly, and provide
lower bounds on the number of samples needed to compute the tree with a
given error probability. We prove that these bounds are tight by showing
that some existing algorithms achieve sample complexities that are within
constant factors of our lower bounds. Second, we design efficient algorithms
for inferring the logical multicast tree and the loss rates on each link
of the tree. Our algorithms differ from existing ones in that they construct
multicast trees in a top-down manner.
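A minimal sketch of the standard shared-path estimator that loss-based tree inference builds on: if receivers i and j sit below a common ancestor and losses on distinct links are independent, then P(i gets the probe) * P(j gets the probe) / P(both get it) estimates the success probability of the path they share. A top-down algorithm such as the ones in the abstract would recursively partition receivers using these estimates; that partitioning step is omitted here, and the function name and toy data are illustrative.

# Estimate the success probability of the path shared by two receivers
# from their per-probe 0/1 receipt indicators.
def shared_path_success(obs_i, obs_j):
    n = len(obs_i)
    p_i = sum(obs_i) / n
    p_j = sum(obs_j) / n
    p_both = sum(a and b for a, b in zip(obs_i, obs_j)) / n
    if p_both == 0:
        return 0.0
    return p_i * p_j / p_both

# Toy example over six probes; lower values indicate a longer (lossier)
# shared path, i.e. a deeper common ancestor.
print(shared_path_success([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 0]))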
- The effectiveness of vaccinations on the spread of email-based computer viruses, with Mr. Hui Wang. [Accepted for publication in CCECE 2005]
ABSTRACT :
In the last decade, computer viruses have caused tremendous losses to organizations.
New viruses continue to cause havoc, in spite of having better antivirus
software. It is thus imperative that we understand what factors significantly
influence the spread of viruses. In this paper, we model the networks of
users as graphs. For simplicity, we assume that every user works only on
their own computer. We assume that nodes in the graph are initially susceptible
to infection. Once infected, a node may spread the virus (using, perhaps,
the inbox or address book) until the virus is removed and the node
is immunized. We assume that an immunized node never gets the same virus
again. We study the effect of vaccinations in containing the spread
of email-borne computer viruses on some traditional structured interconnection
graphs, some simple small-world graphs, as well as ``Internet-like''
small-world graphs generated by the BRITE package. We test the effectiveness
of vaccinations in containing the spread of viruses by looking at the total
number of nodes infected for different values of the delay D incurred
in detecting the virus, developing a vaccine, and immunizing each infected node.
Our simple user model assumes that a user is likely to open an email attachment
(and activate the virus) with probability p and delete it with probability
1 - p. Intuitively, one would expect that the fraction of nodes infected
would show a 0-1 behavior as p is varied or as D is varied. Using simulations,
we demonstrate that while this is true for the traditional graphs we used,
it does not hold for small-world graphs. Since user networks are known to
be small-world graphs, this implies that vaccinations are far less effective
on real networks than they would be if the user networks were like traditional
interconnection graphs.
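A simplified sketch of the kind of simulation described above, assuming synchronous rounds and one infection attempt per neighbour per round (these details, and the graph used, are illustrative assumptions): each infected node tries to infect its neighbours, who activate the virus with probability p, and D rounds after infection a node is immunized and stops spreading.

import random

# Return the total number of nodes ever infected on the given adjacency graph.
def simulate_spread(adjacency, seed, p=0.5, delay_D=3, rounds=50):
    infected_at = {seed: 0}                 # node -> round at which it was infected
    immune = set()
    for t in range(1, rounds + 1):
        newly = {}
        for u, t_u in infected_at.items():
            if u in immune:
                continue
            if t - t_u >= delay_D:          # detection/vaccination after D rounds
                immune.add(u)
                continue
            for v in adjacency[u]:          # e.g. the user's address book
                if v not in infected_at and v not in immune and random.random() < p:
                    newly[v] = t            # neighbour opened the attachment
        infected_at.update(newly)
    return len(infected_at)

# Toy "address book" graph: a ring of 6 users with one chord.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
ring[0].append(3); ring[3].append(0)
print(simulate_spread(ring, seed=0, p=0.8, delay_D=2))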
Please send me email if you want copies of these papers.