Opportunistic Routing in Wireless Networks: A Stochastic/Adaptive Control Approach
Opportunistic routing for multi-hop wireless networks has attracted recent research interest as a way to overcome deficiencies of traditional routing. Specifically, routing decisions are made opportunistically: the next relay is chosen based on the actual transmission outcomes as well as an expectation of future forwarding opportunities. First, we briefly cast opportunistic routing as a Markov decision process (MDP) and introduce a stochastic variant of distributed Bellman-Ford which provides a unifying framework for nearly all versions of opportunistic routing, such as SDF, GeRaF, and ExOR.
To formulate and identify the optimal routing strategy, MDP formulations rely on the availability of a probabilistic (Markov) model. However, a perfect probabilistic model of channel qualities and network topology is restrictive in practical network settings. In the second part of the talk, we present adaptive algorithms that address the estimation aspect of the problem when only an imperfect probabilistic model of channel qualities and network topology is available. Specifically, we build on our earlier work, in which the robustness of the proposed algorithms to modeling errors is investigated. We then use a reinforcement learning framework to propose an adaptive opportunistic routing algorithm that minimizes the expected average cost per packet, independently of the initial knowledge about channel quality and statistics across the network.
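A learning update in this spirit might look as follows. This is a hedged sketch, not the talk's actual algorithm: the function name `update`, the step size `alpha`, and the unit transmission cost are assumptions, and the update shown is a generic stochastic-approximation (Q-learning-style) step driven only by observed reception outcomes, with no prior channel model.

```python
def update(Q, i, received, tx_cost=1.0, alpha=0.1):
    """One learning step after node i broadcasts a packet.

    Q        : dict mapping each node to its current estimate of the
               expected cost-to-go to the destination.
    received : set of neighbors that acknowledged reception.

    The sample target is the transmission cost plus the best
    cost-to-go among actual receivers; if nobody received the packet,
    the packet stays at i (retransmission), so i's own estimate is
    used. The estimate moves a fraction `alpha` toward the target."""
    best = min((Q[j] for j in received), default=Q[i])
    target = tx_cost + best
    Q[i] += alpha * (target - Q[i])
    return Q[i]
```

Because each step uses only the realized outcome of one transmission, the estimates improve online regardless of how the table was initialized, which is the sense in which such schemes are independent of initial channel knowledge.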
Lastly, and time permitting, we touch upon the issues of congestion and throughput optimality under various traffic conditions. We propose a combination of the preceding MDP framework and backpressure routing to arrive at policies with significantly improved delay/throughput performance.
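The backpressure ingredient of such a combination can be sketched in a few lines. This is an illustrative standard backpressure rule, not the talk's specific policy: the queue values, the link-quality weighting, and the function name are assumptions.

```python
def backpressure_next_hop(queues, i, link_prob):
    """Classic backpressure relay choice from node i: pick the
    neighbor j maximizing the queue differential (Q_i - Q_j) weighted
    by the link delivery probability p_ij; return None (hold the
    packet) if no differential is positive, which is what throttles
    transmissions toward congested nodes."""
    best_j, best_w = None, 0.0
    for j, pij in link_prob[i].items():
        w = (queues[i] - queues[j]) * pij
        if w > best_w:
            best_j, best_w = j, w
    return best_j
```

A cost-aware variant could break ties, or bias the weights, using the MDP cost-to-go values from the first part of the talk.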