IIT Dharwad, India, Department of Computer Science and Engineering
I am an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Dharwad, India. Prior to this, I held post-doctoral research positions at the National University of Singapore (2017-19), IIT Bombay (2015-17), and TU Braunschweig (2012-15). I received my Ph.D. from IIT Bombay and my Bachelor's/Master's degrees from IIT Delhi. My research interests are in software defined networking, data center network architectures, network function virtualization, and high-speed networks.
IIT Dharwad, India, Department of Computer Science and Engineering
National University of Singapore, Department of Electrical & Computer Engineering
IIT Bombay, India, Department of Computer Science and Engineering
TU Braunschweig, Germany, Institut für Datentechnik und Kommunikationsnetze
UIUC, USA, National Center for Supercomputing Applications
Ph.D. in Computer Science
Indian Institute of Technology, Bombay, India
Integrated M.Tech. in Mathematics and Computing
Indian Institute of Technology, Delhi, India
My research largely involves design and analysis of network architectures and algorithms. In particular, I’ve worked on high-speed transport architectures, data-center architectures, software defined networking (SDN), network function virtualization (NFV), and network migration.
In recent years, Software Defined Networking (SDN) has emerged as a pivotal element not only in data-centers and wide-area networks, but also in next-generation networking architectures such as vehicular ad hoc networks (VANETs) and 5G. SDN is characterized by decoupled data and control planes, and a logically centralized control plane. The centralized control plane in SDN offers several opportunities as well as challenges. A key design choice for the SDN control plane is the placement of the controller(s), which impacts a wide range of network issues, from latency and resiliency to energy efficiency and load balancing. In this paper, we present a comprehensive survey on the controller placement problem (CPP) in SDN. We introduce the CPP in SDN and highlight its significance. We present the classical CPP formulation along with its supporting system model. We also discuss a wide range of CPP modeling choices and associated metrics. We classify the CPP literature based on objectives and methodologies. Apart from the primary use-cases of the CPP in data-center networks and wide-area networks, we also examine recent applications of the CPP in several new domains such as mobile/cellular networks, 5G, named data networks, wireless mesh networks, and VANETs. We conclude our survey with a discussion on open issues and the future scope of this topic.
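The classical CPP formulation mentioned above is commonly cast as a facility-location (k-center-style) problem: choose k controller locations so that the worst-case switch-to-controller latency is minimized. A minimal brute-force sketch over a small hypothetical latency matrix (all numbers are illustrative, not taken from the paper):

```python
from itertools import combinations

# Hypothetical 5-switch topology: symmetric shortest-path latencies (ms).
LAT = [
    [0, 2, 5, 7, 9],
    [2, 0, 3, 5, 7],
    [5, 3, 0, 2, 4],
    [7, 5, 2, 0, 2],
    [9, 7, 4, 2, 0],
]

def place_controllers(lat, k):
    """Brute-force k-center: try every k-subset of nodes as controller
    sites; each switch attaches to its nearest controller, and we keep
    the subset minimizing the worst-case attachment latency."""
    n = len(lat)
    best_sites, best_worst = None, float("inf")
    for sites in combinations(range(n), k):
        worst = max(min(lat[s][c] for c in sites) for s in range(n))
        if worst < best_worst:
            best_sites, best_worst = sites, worst
    return best_sites, best_worst

sites, worst = place_controllers(LAT, 2)
```

Real CPP variants layer capacity, load-balancing, and resiliency objectives on top of this, and brute force is viable only for tiny topologies (the problem is NP-hard in general), which is why the literature surveyed relies on heuristics and metaheuristics.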
Software defined networks (SDNs) could be a game changer for next-generation provider networks. OpenFlow (OF), the dominant SDN protocol, is rigid in its southbound interface: any new protocol field that the hardware must support must await complete OF standardization. In contrast, OF alternatives such as Protocol Oblivious Forwarding (POF) and Forwarding and Control Element Separation (ForCES) have simpler schemes for inserting new protocol identifiers. Even with these, there is an inherent limitation on network hardware: the tables must support specific formats and configuration at each node as per protocol semantics. We ask the question: can we design an open system (white-box), one that is carrier-class yet able to meet the requirements of any protocol forwarding/action with a minimal set of data-plane functions? We propose bitstream, a low-latency, source-routing based scheme that can support the addition of new protocols, remain compatible with existing protocols, and provide a minimal semantic set for acting on a packet. A prototype is built to show bitstream working. The controller architecture is detailed from a provider perspective, showing how it can be integrated into a provider network using YANG models. The hardware architecture is also presented, showing the functioning of a bitstream-capable 400 Gb/s white-box. The issue of protocol processing optimization is considered and its impact on service latency is shown. The results from the test-bed validate the carrier-class features of the bitstream model.
Software-defined networking (SDN) has considerably shaken the telecommunications industry, with almost every major vendor upgrading their product portfolio and all providers building use cases to inculcate the SDN concept. Significant collaborative activity is underway toward a common set of SDN standards. However, with a huge amount of existing network gear, one question remains: How should providers adopt SDN given the existing infrastructure? To this end, we have developed a well-standardized technology with minor tweaks and created a hardware paradigm whose forwarding plane conforms to carrier-class standards, but whose control plane caters to the SDN philosophy. This paper discusses our experience building such a control plane and its subsequent deployment. We describe the design and implementation of a network management system (NMS) for carrier-class networks using carrier Ethernet manifestations. The management system subscribes to the SDN philosophy, thereby facilitating user-control-based provisioning and service definitions. A centralized controller communicates with carrier Ethernet switch routers (CESRs) that provision services based on multiple identifiers such as IPv6, IPv4, MAC, CTAG/STAG, and port-based identifiers. In this paper, we describe the design of the controller in the NMS and the control state-machine in the CESR, as well as their interactions. The paper details the concepts underlying the SDN system as well as its module-level and service-level implementation aspects. Our key contribution is that the CESR we built along with the SDN NMS is put to the test in a tier 1 provider network, thereby facilitating real network performance measurement. A citywide network was built and we present its results in this paper.
With Internet traffic doubling almost every other year, data-center (DC) scalability is destined to play a critical role in enabling future communications. While many DC architectures have been proposed to tackle this issue by leveraging optics in DCs, most fail to scale efficiently. We demonstrate Flexible Interconnection of Scalable Systems Integrated using Optical Networks (FISSION), a scalable, fault-tolerant DC architecture based on a switchless optical-bus backplane and carrier-class switches. The FISSION DC can scale to a large number of servers (even up to 1 million) using off-the-shelf optics and commodity electronics. Its backplane comprises multiple, concentric bus-based fiber rings that create a switchless core. Sectors, which comprise top-of-the-rack switches along with server interconnection pods, are connected to this switchless backplane in a unique opto-electronic architectural framework to handle contention. The FISSION protocol uses segment-routing paradigms within the DC as well as an SDN-based standardized carrier-class protocol. In this paper, we highlight a FISSION test-bed through three use cases and show its feasibility in a realistic setting. These use cases include communication within the FISSION DC, in situ addition/deletion of servers at scale, and equal-cost multipath (ECMP) provisioning.
Network function virtualization (NFV) and software-defined networking have the potential to change provider revenue streams and offer new services. We measure the impact of NFV on large provider networks by accurately modeling a contemporary service provider. In our model, we consider actual equipment that is currently deployed and examine the impact of NFV on CapEx, OpEx, and service delivery. Apart from accurately modeling a contemporary provider network, we also build robustness into the model to account for uncertainty in network traffic. We answer the key questions: which functions in a network can be virtualized, and which functions need to continue as traditional hardware? We also address the question of which new services can be considered, and in which circumstances. Our model considers various combinations of network architectures used in contemporary networks, and is supported by extensive analysis and simulation that verify our results from cost, performance, and scalability (of services and of the model itself) perspectives.
Service providers and vendors are moving toward a network virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications to the technology supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. A simulation study validates our NV-induced model.
Carrier Ethernet systems are actively replacing SONET/SDH networks for the transport of provider data. After much standardization in both the IEEE and the IETF, Carrier Ethernet has evolved into a rich carrier-class technology. However, the standards do not focus on the specific implementation aspects of providing the guaranteed service-oriented features that are intrinsic to a successful replacement of SONET/SDH by Carrier Ethernet solutions. In this paper, we consider the engineering aspects of a software defined Carrier Ethernet control plane for provisioning, managing, and maintaining services. Specifically, we propose methods for provisioning guaranteed unicast services considering the underlying network state. The problem is difficult due to the complex nature of admission control coalesced with subjective user requirements that are not easy to rationalize across a network. We also investigate the problem of sub-50 ms protection of multicast connections and propose a unique, tractable algorithm within the control plane for enabling carrier-class multicast services. Finally, we focus on overall control-traffic reduction schemes. An intuitive merging algorithm is proposed that minimizes control traffic in Carrier Ethernet domains. A simulation study extensively evaluates the proposed techniques.
Internet traffic is doubling almost every other year, which implies that data-center (DC) scalability will play a critical role in enabling future communications. In this paper, we propose FISSION (Flexible Interconnection of Scalable Systems Integrated using Optical Networks), a scalable, fault-tolerant DC architecture based on a switchless optical-bus backplane and carrier-class switches, and its supporting protocol. The FISSION DC enables unprecedented scalability using affordable optics and standardized electrical switches. It is architecturally bifurcated into sectors that internally have a non-blocking carrier-class switching interconnection structure. Sectors are connected in the switchless backplane using optical buses. Each sector can receive traffic on all wavelengths (achieved through the optical-bus property, without any switch reconfiguration) and across all fibers, but a sector transmits only on a group of wavelengths and only in one of the fiber rings in the backplane. The switches function based on an SDN methodology that facilitates mapping of complex protocols and addresses to DC-specific addressing that is scalable and easier to use. We present an analysis to optimize the FISSION architecture. A simulation model is proposed that (1) compares the FISSION approach to other contemporary designs; (2) provides scalability analysis and protocol performance measurement; and (3) provides optical-layer modeling to validate the working of the FISSION framework at high line-rates. Our architecture, which provides 100% bisection bandwidth, is validated by simulation results exhibiting negligible packet loss and low end-to-end latency.
Utilizing the dormant path diversity in the Internet through multipath routing to reach end users, thereby fulfilling their QoS requirements, is rather logical. Besides offering better resource utilization, better reliability, and often much better quality of experience (QoE), multipath routing and provisioning has been shown to help network and data center operators achieve traffic engineering in the form of load balancing. In this survey, we first highlight the benefits and the basic components of Internet multipath routing. We take a top-down approach and review various multipath protocols, from the application layer down to the link and physical layers, operating at different parts of the Internet. We also describe the mathematical foundations of multipath operation, and highlight the issues and challenges pertaining to reliable data delivery, buffering, and security in deploying multipath provisioning in the Internet. We compare the benefits and drawbacks of these protocols operating at different Internet layers and discuss open issues and challenges.
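One building block that recurs across the multipath protocols surveyed is flow-level splitting: hash a flow's 5-tuple so that every packet of a flow follows one path (avoiding reordering) while distinct flows spread across the available paths. A minimal sketch (path names and tuple format are illustrative):

```python
import hashlib

def pick_path(flow_5tuple, paths):
    """Map a flow (src_ip, dst_ip, src_port, dst_port, proto) to one of
    the available paths via a stable hash, so that a single flow is never
    reordered while the flow population load-balances across paths."""
    key = "|".join(map(str, flow_5tuple)).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return paths[h % len(paths)]

paths = ["path_A", "path_B", "path_C"]
flow = ("10.0.0.1", "10.0.1.9", 443, 52113, "tcp")
```

Per-flow hashing is the simplest point in the design space; the survey's discussion of buffering and reliable delivery concerns the harder case where traffic of one flow is striped across paths.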
Conventionally, network migration models study competition between emerging and incumbent technologies by considering the resulting increase in revenue and the associated cost of migration. We propose to advance existing network migration models by considering additional critical factors, including (1) synergistic relationships across multiple technologies, (2) reduction in operational expenditures as a reason to migrate, and (3) the implications of local network effects on migration decisions. To this end, we propose a novel agent-based migration model considering these factors. Based on the model, we analyze the case study of network migration to two emerging networking paradigms, i.e., the IETF Path Computation Element (PCE) and Software-Defined Networking (SDN). We validate our model using extensive simulations. Our results demonstrate the synergistic effects of migration to multiple complementary technologies, and show that a technology migration may be eased by the joint migration to multiple technologies. In particular, we find that migration to SDN can be eased by joint migration to PCE, and that the benefits derived from SDN are best exploited in combination with PCE rather than by itself.
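The local-network-effect factor can be illustrated with a simple threshold model, a drastic simplification of the paper's agent-based cost/revenue model (the topology, seed set, and threshold below are invented): an operator migrates once a sufficient fraction of its neighbors already have.

```python
def simulate_adoption(neighbors, seeds, threshold, max_rounds=20):
    """Threshold-model sketch of local network effects: a node adopts
    the new technology once at least `threshold` of its neighbors have
    adopted. Returns the final set of adopters."""
    adopted = set(seeds)
    for _ in range(max_rounds):
        new = {n for n, nbrs in neighbors.items()
               if n not in adopted and nbrs
               and sum(v in adopted for v in nbrs) / len(nbrs) >= threshold}
        if not new:
            break  # cascade has stopped
        adopted |= new
    return adopted

# Line topology a-b-c-d; seeding 'a' lets adoption cascade along the line.
topo = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
```

With threshold 0.5 the seed cascades to all four nodes, while a slightly higher threshold stalls the cascade immediately, mirroring how local effects can make or break a migration.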
The multi-layer network design problem and the mobile backhaul problem are both interesting from the perspective of choosing the correct technology and protocol as well as choosing the appropriate node architecture to meet a wide variety of overlay traffic demands. Network operators encounter two variants of the multi-layer/backhaul problem: 1) For a given set of uncertain traffic demands, which set of technologies would minimize the network cost? 2) Would these technology choices be invariant to the changing traffic demands? This problem of technology choice can essentially be abstracted to a switching and grooming problem with the added complexity of unknown traffic demands, which at best may be approximated to some known statistical parameters. To solve this complex switching and grooming problem, our goal is to make use of the theory of robust optimization with the assumption of known boundary conditions on traffic. We present a comprehensive optimization model that considers technology choices in terms of protocols, physical layer parameters, link boundary conditions, and transmission layer constraints. Validated by simulations, our model shows the stand-off conditions between various technologies and how a network operator must take proactive steps to be able to meet requirements of the next-generation networks and services. Our main result showcases network design using two technology alternatives [1) Multi-Protocol Label Switching (MPLS) + Optical Transport Network (OTN) + Reconfigurable Optical Add-Drop Multiplexer (ROADM) and 2) Carrier Ethernet (CE) + OTN + ROADM] and the effect of robustness on these choices. A heuristic is used for comparative purposes as well as to exhaustively model the dynamic case of brown-field networks.
A light-trail is a generalization of a lightpath allowing multiple nodes to be able to communicate along the path, leading to all-optical spatial traffic grooming. A light-trail exhibits properties of dynamic bandwidth provisioning, optical multicasting and sub-wavelength grooming, and architecturally is analogous to a shared wavelength optical bus. Arbitration within the bus is conducted by an out-of-band control channel. The bus feature results in a node that has a large pass-through loss, restricting the size of a light-trail to metro environments. Due to this limitation, it is difficult to extend the light-trail concept to regional or core networks. In this paper we exhaustively investigate the concept of multi-hop light-trails (MLTs) - a method to provide multi-hop communication in light-trails, thus enhancing their reach. Node architecture and protocol requirements for creating MLTs are discussed. We then discuss design issues for MLTs in regional area networks through a problem formulation that is solved using convex optimization. The problem formulation takes into consideration issues such as routing MLTs as well as assigning connections (defined as sub-wavelength traffic requests) to MLTs. Two polynomial-time heuristic algorithms for creation of MLTs are presented. One of the algorithms is a static implementation, while the other is a dynamic implementation - with unknown traffic. A detailed delay analysis is also presented that enables computation of end-to-end delay over MLTs using different flow assignment algorithms. A simulation study validates the MLT concept.
The growth of broadband services in India has been a fraction of the otherwise very impressive cellular growth of over 500 million connections in the past decade. We extend our earlier work, in which we defined and analyzed the causes of the stagnation of broadband penetration in India, and propose a comprehensive solution. Specifically, we examine the problem from a business perspective, and then propose a novel techno-economic model that is shown to lead to enhanced broadband penetration. This model is based on a concept that we term global content balancing, which showcases the relationship between content and the proliferation of broadband. We show the working of this model and postulate reasons for its acceptability and success. A simulation study evaluates our solution and compares it to usage-based approaches, showing an improvement in return on investment.
Although multi-domain survivability is a major concern for operators, few studies have considered the design of post-fault restoration schemes. This paper proposes two such strategies, based upon hierarchical routing and signaling crankback, to handle single and multi-link failure events in multi-domain IP/MPLS networks (also extendible to optical DWDM networks). Specifically, the hierarchical routing strategy relies upon abstracted domain information to compute inter-domain loose routes, whereas the crankback scheme applies signaling re-tries to restore paths in a domain-by-domain manner. The performance of these proposed solutions is then analyzed and compared via simulation.
Dynamic Bandwidth Allocation (DBA) is an important problem for upstream transmission in Fiber-to-the-Home (FTTH) systems. We propose a generalized scheduling mechanism for bandwidth allocation that aims to resolve the tension between efficiency (utilization) and dynamism. Our scheme is shown to work for TDM PONs, hybrid TDM/WDM PONs, pure WDM PONs, and Next Generation PONs (NGPONs). While conventional bandwidth scheduling schemes pose efficiency as well as fairness issues, our proposed algorithm overcomes these. Three extensions form part of our scheduling technique: 1) a K-out-of-N scheme to increase efficiency, with the choice of K being a performance-driven parameter; 2) strategic scaling to promote dynamism and reduce bandwidth starvation; and 3) a valuation-based strategy that is uniquely tailored to reflect different service requirements. A thorough stochastic analysis based on a Markov model is presented to compute network-wide parameters such as delay, optimality, and throughput. A detailed simulation model measures the performance of our scheme in terms of latency, dynamism, efficiency, and blocking, comparing the analytical results with other techniques for dynamic bandwidth allocation in PONs.
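The K-out-of-N idea can be sketched as follows: each cycle, the OLT collects N ONU requests but grants only the K most valuable, trading per-cycle dynamism for higher utilization. The ONU names, byte counts, and the pure largest-first valuation below are illustrative; the paper's valuation strategy is richer.

```python
def grant_slots(requests, k):
    """K-out-of-N grant sketch: from N ONU bandwidth requests, grant
    the k largest this cycle (ties broken by ONU id); the remaining
    ONUs wait for a later cycle."""
    ranked = sorted(requests.items(), key=lambda kv: (-kv[1], kv[0]))
    return [onu for onu, _ in ranked[:k]]

demands = {"onu1": 800, "onu2": 1500, "onu3": 300, "onu4": 1500}
```

A real scheduler would also rotate or age deferred ONUs to avoid the starvation that the paper's strategic-scaling extension addresses.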
Light-trails (LTs) have been proposed as a solution for optical networking to support emerging services such as video-on-demand, pseudo-wires, data-centers, etc. Provisioning these services requires features such as dynamic bandwidth provisioning, optical multicasting, sub-wavelength grooming, and a low-cost hardware platform, all of which are available through the LT concept. Architectural, performance, resilience, and implementation studies of LTs have led to consideration of this technology in metropolitan networks. In the area of architecture and performance, significant literature is available on static network optimization. An area that has not yet been considered, and which is of service-provider importance (from an implementation perspective), is the stochastic behavior and dynamic growth of the LT virtual topology. In this paper, we propose a two-stage scheduling algorithm that efficiently allocates bandwidth to nodes within an LT and also grows the virtual topology of LTs based on basic utility theory. The algorithm facilitates growth of the LT topology while taking into account all the necessary and sufficient parameters. The algorithm is formally stated, analyzed using Markov models, and verified through simulations, resulting in a 45% improvement over existing linear program (LP) or heuristic models. The outcome of the growth algorithm is an autonomic optical network that suffices for service-provider needs while lowering operational and capital costs. This paper presents the first work in the area of dual topology planning: at the level of connections as well as at the level of the network itself.
In this paper, we report on the dynamic services provisioned optical transport (DynaSPOT) test-bed, a next-generation metro ring architecture that facilitates provisioning of emerging services such as Triple Play, Video-on-Demand (VoD), pseudowire edge-to-edge emulation (PWE3), IPTV, and data-center storage traffic. The test-bed is based on the recently proposed strongly connected light-trail (SLiT) technology, which enables the triple features of dynamic provisioning, spatial sub-wavelength grooming, and optical multicasting that are quintessential for provisioning the aforementioned emerging services. SLiT technology entails the use of a bidirectional optical wavelength bus that is time-shared by nodes through an out-of-band control channel. To do so, the nodes in a SLiT exhibit architectural properties that facilitate bus function. On the network side, these properties include the ability to support the dual signal flow of drop-and-continue as well as passive add; on the client side, they include the ability to store data in order to support time-shared access. The latter (client-side) improvisation is achieved through a new type of transponder card, called the trailponder, that provides (electronic) storage of data and fast burst-mode transmission onto the SLiT. Further, in order to efficiently provision services over the SLiT, an efficient algorithm is needed to meet service requirements. To this end, we propose a dynamic bandwidth allocation algorithm that allocates data time-slots to nodes based on a valuation method. The valuation method is principally based on an auctioning scheme whereby nodes send their valuations (bids) and a controller node responds to the bids by sending a grant message. The auctioning occurs in the control layer, out-of-band and ahead in time. The novelty of the algorithm is its ability to take into consideration the dual service requirements of bandwidth request as well as delay sensitivity.
At the hardware level, implementation is complex, as our trailponders are layer-2 devices with limited service-differentiation capability. Here, we propose a unique dual-VLAN-tag and GFP-based approach for providing service differentiation at layer 2. Another innovation in our test-bed is the ability to support multi-speed traffic. While some nodes function at 1 Gb/s and others at 2.5 Gb/s (using corresponding receivers), a select few nodes can support both 1 and 2.5 Gb/s operation. This novel multi-speed support, coalesced with the aforementioned multi-service support, is a much-needed boost for services in metro networks. We showcase the test-bed and associated results, as well as descriptions of the hardware subsystems.
Given the rate at which communication technologies and protocols evolve, network operators are often cautious of fully migrating to a new technology like Software Defined Networking (SDN) in one go, and prefer to do so in phases. Consequently, the number of SDN switches, and in turn the amount of SDN control traffic in the network, increases gradually. This control traffic is processed at the SDN controller(s), which are hence required incrementally in such hybrid SDN/legacy networks. The placement of these controllers significantly affects several aspects such as control latency, resiliency, and load balancing. All existing controller placement strategies place controller(s) in a pure SDN network at a single point in time, which is contrary to the predominant SDN migration scenario described above. Even when suitably adapted for hybrid networks, the existing controller placement strategies lead to inefficient placements. In this paper, we introduce and formulate the controller placement problem for hybrid SDN/legacy networks over a period of time, with the aim of maximizing switch-controller control channel resilience. We consider an SDN migration trajectory and deduce a resilient controller placement schedule using an optimization approach. Based on 138 real network topologies, we comprehensively evaluate our approach against a well-known resilient controller placement strategy (suitably adapted here for hybrid SDN/legacy networks for a fair comparison), and demonstrate that our approach is more effective, with up to 77% higher resiliency while requiring up to 33% fewer controllers.
Network Function Virtualization (NFV) has emerged as a hot topic for both industry and academia. NFV offers a radically new way to design and operate networks by abstracting physical network functions (PNFs) into virtual NFs (VNFs). This disruptive innovation opens up a wide area of research and introduces new challenges and opportunities, particularly in provisioning VNF forwarding graphs (or network service chains) and the resulting VNF placement issue. While forwarding graphs are often provisioned in the packet domain for fine-grained control over the respective traffic, we argue that doing so leads to lower efficiency; instead, provisioning forwarding graphs using optical transport proves far more efficient in intra-datacenter (DC) scenarios. While optical service chaining for NFV has already been proposed, we emphasize the use of optical bus architectures for the same. We present an architecture conducive to intra-DC NFV orchestration that can easily be extended to inter-DC scenarios. We deploy switchless optical bus architectures in both the frontplane and the backplane of the DC. Our design relies on readily available optical components and scales easily. We validate our model using extensive simulations. Our results suggest that the use of optical transport to provision VNF forwarding graphs can yield significant performance gains over packet-based electrical switch provisioning, in terms of packet drops and latency.
Network Function Virtualization (NFV) has the potential to transform the way providers do business. In particular, NFV can be an ideal solution for the current provider situation, in which revenue is decreasing (due to competition), bandwidth requirements are increasing, and the cost of provisioning grows with those bandwidth requirements. In such a situation, NFV can be a real game-changer, providing alternate avenues toward saving CapEx and OpEx while also facilitating a new portfolio of services for the end user. We model a realistic service provider and measure the impact of NFV on current network deployment. We then compute the price-points at which it would start to make sense for a provider to invest in NFV. Our simulation and optimization study has built-in robustness that facilitates stability of the results across traffic variations as well as provider types.
We examine three strategies for VNF placement in a provider network: static service chains, seamless VNF duplication, and dynamic VNF splitting. A constrained optimization applied to a large provider evaluates these strategies and showcases the cost-latency trade-off.
We propose a dual optical architecture, with optics in both the front plane (connecting servers within a rack) and in the backplane (connecting TOR switches). The architecture is shown to scale to a million servers.
We propose a scalable data-center architecture and associated protocol that uses optics in both the backplane and the frontplane, segregated by an electrical SDN switch. The advantage of our architecture is seemingly infinite scalability in conjunction with the ability to transport large chunks of data (with full bisection bandwidth) between servers across the data center. We present the architecture, system design, scalability issues, and power profiles. Further, we present a protocol that facilitates software defined networking and communication within the data center. A simulation model validates the architecture for different manifestations of the data-center using various services and variations of the architecture. The proposed architecture results in low latency and excellent throughput, while reducing the total cost of wiring within the data-center.
An infinitely scalable data-center using a coherent-optics-driven backplane that supports a series of optical buses is proposed. This architecture, called FISSION (Flexible Interconnection of Scalable Systems Integrated using Optical Networks), facilitates the creation of sectors of Ethernet switches connected in a non-blocking fashion and wired to the optical bus-based backplane. We have shown this architecture to be extremely scalable by incrementally adding any number of fiber-based backplanes without physical or protocol limitations. In this paper, we investigate how to load-balance across such a large data-center. We propose a load-balancing algorithm based on converting service requirements and intra-data-center statistics into natural language aids that fall within the premise of software defined networking within a data-center. The load-balancing algorithm is thoroughly evaluated from the perspectives of scalability, service support, and blocking.
As the Software Defined Networking (SDN) paradigm gains momentum, network operators wonder how to go about replacing their legacy IP routers with SDN-compliant ones. While forklift network upgrades are impractical in operational networks, the promise of SDN is too compelling to ignore. Thus, a viable solution is to migrate gradually over time, leading to hybrid OSPF/SDN networks. Although recent studies proposed the device architectures, protocols and algorithms required in such hybrid networks, the SDN migration trajectory of a single routing domain - as a whole - has hardly been studied. The significance of the sequence in which IP routers are replaced with SDN routers, and the optimal sequence of router replacements, need to be investigated. In this paper, we address these questions based on SDN's traffic engineering (TE) gains and an operator's SDN investment constraints. We propose optimization techniques and heuristics to plan an effective migration schedule in a single routing domain and demonstrate the significance of such a schedule using relevant network management metrics. Our results suggest that: (a) the sequence in which IP routers migrate to SDN largely determines the resulting TE gains, and (b) in determining the best migration sequence, novel greedy algorithms perform almost as well as optimization techniques.
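The greedy idea behind such a migration schedule can be illustrated with a minimal sketch: at each step, migrate the router whose upgrade yields the largest marginal TE gain, subject to a per-period upgrade budget. The `te_gain` function and the `centrality` values below are hypothetical stand-ins, not the paper's actual TE model.

```python
# Toy sketch of greedy SDN migration scheduling (illustrative only).
# te_gain is a hypothetical TE-gain model with diminishing returns.

centrality = {"r1": 4.0, "r2": 1.0, "r3": 9.0, "r4": 2.0}

def te_gain(migrated):
    """Hypothetical TE gain from the set of already-migrated routers."""
    return sum(centrality[r] for r in migrated) ** 0.5

def greedy_schedule(routers, budget_per_period, periods):
    remaining, migrated, schedule = set(routers), set(), []
    for _ in range(periods):
        chosen = []
        for _ in range(budget_per_period):
            if not remaining:
                break
            # pick the router with the highest marginal TE gain
            best = max(remaining,
                       key=lambda r: te_gain(migrated | {r}) - te_gain(migrated))
            remaining.discard(best)
            migrated.add(best)
            chosen.append(best)
        schedule.append(chosen)
    return schedule

print(greedy_schedule(centrality, budget_per_period=1, periods=4))
# → [['r3'], ['r1'], ['r4'], ['r2']]
```

With diminishing returns, the high-centrality router is migrated first, matching the intuition that the replacement order largely determines the TE gains.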
Software Defined Networking (SDN) is an emerging network control paradigm focused on logical centralization and programmability. At the same time, distributed routing protocols, most notably OSPF and IS-IS, are still prevalent in IP networks, as they provide shortest-path routing, fast topological convergence after network failures, and, perhaps most importantly, the confidence built on decades of reliable operation. Therefore, hybrid SDN/OSPF operation remains a desirable proposition. In this paper, we propose a new method of hybrid SDN/OSPF operation. Our method differs from other hybrid approaches in that it uses SDN nodes to partition an OSPF domain into sub-domains, thereby achieving traffic engineering capabilities comparable to full SDN operation. We place SDN-enabled routers as sub-domain border nodes, while the operation of the OSPF protocol continues unaffected. In this way, the SDN controller can tune routing protocol updates for traffic engineering purposes before they are flooded into sub-domains. While local routing inside sub-domains remains stable at all times, inter-sub-domain routes can be optimized by determining the routes in each traversed sub-domain. As the majority of traffic in non-trivial topologies has to traverse multiple sub-domains, our simulation results confirm that a few SDN nodes enable traffic engineering to a degree that renders full SDN deployment unnecessary.
Over the years, the consumer base and per-consumer demand for network bandwidth have been monotonically increasing. Network operators struggle to accommodate such demands using legacy networking philosophies, and are lately resorting to converged IP+Transport networking paradigms. Such converged networking paradigms are, however, challenging to realize, given the separate ecosystems of these two networks. This stems from differences in the technologies, management, organizational, operational and business practices of the Internet and transport networks. Such differences often result in duplication of services, thereby costing the operator. Attempts to unify these two networks have mostly been complex and disruptive. An alternative approach to the same issue is adaptation, i.e. abstracting every operation and transforming it into a format that is accessible to network operations across layers and vendors. Along these lines, the ONE adapter, an ontology-based communication adapter, is designed to facilitate coordination between different management ecosystems. In this paper, we present a performance evaluation of ONE in four use-cases, namely IP link provisioning, service provisioning, MPLS path provisioning, and IP offloading. Our measured results demonstrate that the ONE adapter outperforms legacy networking approaches by orders of magnitude, resulting in substantial CapEx and OpEx savings for network operators.
Conventionally, network migration models study competition between emerging and incumbent technologies by considering the revenue increase and migration cost. We propose to extend the existing network migration models with new critical factors, including (i) synergistic relationships across multiple technologies, (ii) reduction in operational expenditures (OpEx), and (iii) the effect of social factors on human decisions. To this end, we propose a novel agent-based migration model incorporating these factors. Based on the model, we analyze the case study of optimal path computation with joint migration to two emerging networking paradigms, i.e., the IETF Path Computation Element (PCE) and Software-Defined Networking (SDN). Our results demonstrate the synergistic effects of migration to multiple complementary technologies, and show that a technology migration may be eased by jointly migrating to multiple technologies.
Control-plane scalability for Carrier Ethernet networks is a concern that directly impacts transport networks, and it is addressed in this paper. Specifically, a connectivity fault management optimization is proposed and validated through an analytical and simulation study.
Although multi-domain survivability is a major concern, few studies have considered post-fault restoration schemes. This paper proposes two such strategies (based upon hierarchical routing and signaling crankback) to handle single-link failures in multi-domain IP/MPLS networks (also extensible to optical DWDM networks). The performance of the proposed solutions is then compared via simulation.
Cloud computing and IT-service provisioning are critical for the growth of enterprises, enabling them to provision computationally intensive applications. A high-speed network infrastructure is necessary for the proliferation of cloud computing to meet disparate IT application demands. Light-trails - a generalization of lightpaths with the ability to provision sub-wavelength demands, meet dynamic bandwidth needs and cater to optical multicasting on a low-cost platform - are investigated as a candidate technology for cloud computing. We assume a time-slotted light-trail system and propose an algorithm based on utility concepts, which schedules connections over light-trails in a timely manner to meet the tasks of an IT-service. Memory resource management is further incorporated as a constraint in the algorithm, allowing IT applications to pragmatically reside over the light-trail infrastructure. An exhaustive simulation study showcases the benefits of light-trails for cloud computing; the results cover a wide range of services with serial, parallel and mixed sets of constituent tasks.
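A minimal sketch of the utility-based scheduling idea: tasks competing for a time-slotted light-trail are admitted greedily by utility per slot, subject to slot capacity and a memory budget. The task list, capacities, and utility values below are hypothetical illustrations, not the paper's actual model.

```python
# Toy utility-based admission of IT-service tasks onto one
# time-slotted light-trail (illustrative assumptions throughout).

tasks = [  # (name, slots_needed, memory_needed, utility)
    ("t1", 3, 2, 9.0),
    ("t2", 2, 4, 8.0),
    ("t3", 4, 1, 6.0),
    ("t4", 1, 3, 5.0),
]

def schedule(tasks, slot_capacity, memory_capacity):
    accepted, used_slots, used_mem = [], 0, 0
    # consider tasks in decreasing order of utility per slot
    for name, slots, mem, util in sorted(tasks, key=lambda t: -t[3] / t[1]):
        if used_slots + slots <= slot_capacity and used_mem + mem <= memory_capacity:
            accepted.append(name)
            used_slots += slots
            used_mem += mem
    return accepted

print(schedule(tasks, slot_capacity=6, memory_capacity=6))
# → ['t4', 't1']
```

Note how the memory constraint changes the outcome: t2 has high utility per slot but is rejected because it would exhaust the memory budget.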
Data center interconnectivity is essential for emerging applications such as cloud computing and financial trading. Current data center architectures are built using Ethernet switches or IP routers, both with significant cost and performance deficiencies. We propose, for the first time, extending MPLS-TP into the data-center. To this end, we propose a new look-up protocol for MPLS-TP that enables fast creation and deletion of LSPs, making autonomic communication within the data center possible. The MPLS-TP based data-center architected in this paper leads to performance improvements over both IP and Ethernet, which we demonstrate through a comprehensive simulation model. Operations within the data-center using MPLS-TP are also extended to inter-data-center operations using LSP setup across a core network. OAM and performance issues are investigated.
Light-trail, a spatial sub-wavelength optical grooming solution for WDM networks, facilitates dynamic bandwidth provisioning and optical multicasting - essential for cloud computing services. We evaluate through simulations a novel protocol for cloud computing over light-trails.
The Omnipresent Ethernet architecture has been proposed for end-to-end communication using pure Ethernet. We extend it to an all-optical variant that supports a novel technique called optical bit-switching, which is verified through simulations.
We report achieving optical packet transport using mature off-the-shelf components as an alternative technology to PON, called light-mesh. A novel service provisioning algorithm and implementation details are discussed, with delay/utilization profiles presented.
We report on achieving multi-rate, dynamic, sub-wavelength provisioning of VoIP, video, storage and data services over Strongly connected Light-trails (SLiTs). A novel service differentiation subsystem and associated control are discussed and test-bed results are presented.
In almost a decade, Software Defined Networking (SDN) has transitioned from university laboratories to real-life networks. As networks evolve from legacy networks to hybrid OSPF/SDN networks to pure SDN networks, the associated challenge of appropriate controller placement – i.e. the optimal count and location of controllers in an SDN network – needs to be suitably addressed. Several studies investigated the controller placement problem at a single point in time, primarily with a latency-minimization approach, in addition to various secondary objectives. However, considering long-term network growth, it is also important to evaluate the best timing to introduce a controller into an SDN network. In this paper, we introduce and formulate the incremental (multi-period) controller placement problem, which adds controllers to an SDN network in a phased manner over a finite planning horizon. We consider a practical scenario with increasing network traffic, as well as falling capital and operational expenditures of SDN controllers on account of vendor competition and technology maturity. We propose a multi-objective optimization model to derive the optimal multi-period controller roll-out plan. Using extensive simulations, we evaluate our solution in terms of various performance metrics and find that our multi-period placement schedule attains significant cost savings over a single-period placement schedule, at the cost of a small and acceptable increase in switch-controller latency.
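The cost intuition behind multi-period placement can be shown with a back-of-the-envelope sketch: when controller CapEx declines each period, buying all controllers up front for peak demand costs more than adding them as demand grows. All numbers below (base cost, decline rate, demand profile) are hypothetical illustrations, not results from the paper.

```python
# Illustrative comparison of single-period vs. multi-period
# controller roll-out under falling per-controller CapEx.

def capex(period, base=100.0, decline=0.15):
    """Hypothetical per-controller cost, falling 15% per period."""
    return base * (1 - decline) ** period

# controllers needed by the end of each planning period (hypothetical)
demand = [2, 3, 5, 8]

# single-period: provision for peak demand at period 0
single = demand[-1] * capex(0)

# multi-period: add controllers only as demand grows
multi, deployed = 0.0, 0
for t, need in enumerate(demand):
    multi += (need - deployed) * capex(t)
    deployed = need

print(f"single-period CapEx: {single:.1f}")  # 800.0
print(f"multi-period CapEx:  {multi:.1f}")   # ~613.7
```

The phased plan is cheaper in CapEx; the trade-off, as the paper notes, is a temporarily smaller controller pool and hence a small increase in switch-controller latency.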
We propose a pCDC-ROADM architecture using an M×N wavelength selective switch that delivers comparable performance to a full CDC-ROADM. Simulation results for the NSFNET topology and optical performance validate our claims.
Conducted lab sessions for the first course in programming at IIT Dharwad, India.
Delivered guest lectures on OpenFlow in an advanced course on SDN and NFV at IIT Bombay, India.
Delivered guest lectures on discrete event simulation and Gurobi optimization solver in an advanced course on SDN and NFV at IIT Bombay, India.
Delivered lectures on MPLS, GMPLS, Carrier Ethernet, hybrid data-center architectures, etc. in an advanced class on broadband communications at TU Braunschweig, Germany.
Assisted in conducting an introductory course on optical and access networks at IIT Bombay, India.
Assisted in conducting an introductory course on probability and linear algebra at IIT Bombay, India.
Delivered a crash course on MATLAB, including both lecture and lab sessions, to ~100 students at IIT Bombay, India.
Member, Technical Program Committee
Session Chair and Web Chair
Local Arrangements Chair
Co-Founder, Kontemplat Club, Indian Institute of Technology, Bombay