Communications and Signal Processing Seminar
Network Algorithms and Delay Performance in Data Centers
Data center networks interconnect massive server farms used to process big data for a variety of applications. In such networks, resource allocation algorithms distribute computing and network resources among competing data-processing tasks. The main objective of these algorithms is to ensure very small latencies for delay-critical applications. They operate at different time scales: the slow time scale of jobs, the intermediate time scale of flows (communication messages between parallel jobs), and the fast time scale of packets. In the first part of the talk, we will present an overview of the architecture of data center networks and the various resource allocation problems that arise in them. In the second part of the talk, we will discuss a long-standing open problem at the intersection of algorithms, probability, and control theory that has resurfaced in the context of resource allocation in data center networks. We will present our recent solution to one version of this open problem, in which we developed a new mathematical technique to understand the performance of algorithms for high-dimensional resource allocation problems.
R. Srikant is the Fredric G. and Elizabeth H. Nearing Endowed Professor in the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. His research interests are in the areas of communication networks, cloud computing, applied probability, and machine learning. He received the 2015 IEEE INFOCOM Achievement Award and several Best Paper awards, including the 2015 INFOCOM Best Paper Award and the 2017 Applied Probability Society Best Publication Award. He was the Editor-in-Chief of the IEEE/ACM Transactions on Networking from 2013 to 2017.