Research
My big-picture research vision is to transform the Internet from a global network for information exchange into an Earth-scale network of sensors. This vision, once realized on a macro-scale, would enable real-time monitoring of the planet’s health, help in disaster prediction and response, aid in urban planning and development, and beyond. On a micro-scale, such a vision would enable seamless autonomous navigation both outdoors and in GPS-denied indoor environments, building-scale health and wellness analytics, factory-scale automation monitoring, and unlock novel digital experiences for end-users, to name a few applications.
As a key step towards realizing this vision, I am exploring the potential of performing joint radar imaging and communication with emerging 6G wireless networks, which are expected to utilize millimeter-wave (mmWave) and terahertz (THz) spectrum traditionally reserved for radar imaging and sensing (e.g., in airport scanners and automotive radars). The large bandwidths available in the mmWave and THz bands are useful both for fine-grained ranging and for higher data rate communications. Moreover, the unique depth penetration and scattering properties of mmWave and THz signals enable imaging through occlusions to visible light, such as fog and smoke.
In my research, I am developing two core primitives that enable joint radar imaging and communication functionality: (i) imaging using cellular data signals, which are not optimized for radar imaging, and (ii) harnessing multi-bounce scattering from ambient objects in the environment to enable downlink radar imaging and communication, imaging around corners and behind the system, and estimation of the full-velocity vectors of moving objects in the environment. My research spans theoretical analysis, algorithm design, and experimental demonstrations, and I enjoy tackling research problems from all three perspectives.
Leveraging Cellular Data Signals for Radar Imaging
![asilomar2021_img](https://nishant.rice.edu/files/2021/07/Figures_Joint_Img_Comm.jpg)
![consensus_admm](https://nishant.rice.edu/files/2023/02/consensus_admm.png)
Cellular data signals are optimized for communicating information, making them sub-optimal for radar imaging. Moreover, cellular data signals are unknown to the receiving node and must be estimated (decoded). Furthermore, emerging cellular base stations are full-duplex, i.e., they can simultaneously support an uplink data flow from cellular user T2 to the base station receiver R1 and a downlink data flow from base station transmitter T1 to a cellular user R2. However, the limited resources available to the cellular network in space (number of antennas at a base station), time (duration over which the wireless channel is quasi-static), power (limited by regulatory agencies), and frequency (limited signaling bandwidth) necessitate devising optimal resource allocation schemes that support radar imaging without degrading communication flows.
In [1], I take first steps towards addressing the aforementioned challenges. Focusing on an uplink setting with a full-duplex base station illuminating the environment and receiving uplink data signals from a cellular user, I present a scheme that not only performs data decoding and radar imaging simultaneously, but also utilizes the decoded data as an additional illumination opportunity to improve imaging performance. Additionally, I analyze fundamental trade-offs between the radar imaging resolution and the communication data rate, and show that the proposed scheme is optimal from a resource allocation perspective (in high signal-to-noise ratio regimes) and outperforms orthogonal resource allocation (such as time- or frequency-division multiplexing). Moreover, in certain scenarios, I demonstrate the possibility of imaging opportunistically, i.e., with no reduction in the communication data rate.
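To make the resource trade-off concrete, here is a minimal back-of-the-envelope sketch of an orthogonal (time-division) baseline; the symbols and scalings are illustrative assumptions, not the exact formulation or results in [1].

```latex
% Illustrative time-division baseline (not the exact model in [1]):
% out of N channel uses in a coherence block, a fraction \alpha is
% dedicated to imaging illumination and the rest to uplink data.
\begin{align*}
  \text{imaging measurements} &\;\approx\; \alpha N, \\
  \text{uplink throughput}    &\;\approx\; (1-\alpha)\, N \log_2(1 + \mathrm{SNR}).
\end{align*}
% Under time sharing, any resolution gained by increasing \alpha is paid
% for linearly in data rate. The joint scheme in [1] instead reuses the
% decoded uplink symbols as additional (known) illumination, so imaging
% quality can improve without that linear rate penalty.
```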
In [2], I take first steps towards extending the scheme proposed in [1] to enable joint imaging and decoding with a distributed network of base stations that all receive uplink transmissions from a cellular user. Such a scenario is of interest in light of cell-free network architectures under consideration in 6G. I show that the proposed distributed algorithm is provably convergent, with performance close to the centralized scheme from [1] under certain assumptions.
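To give a flavor of the distributed machinery underlying [2], the sketch below runs consensus ADMM on a toy linear least-squares image-formation problem in which each base station k holds its own local measurements (A_k, y_k) and all nodes must agree on a single reflectivity image x. The measurement model, step size, and lack of regularization are placeholder assumptions for illustration only; the actual formulation in [2] differs.

```python
import numpy as np

def consensus_admm(A_list, y_list, rho=1.0, num_iters=100):
    """Consensus ADMM for min_x sum_k ||A_k x - y_k||^2.

    Each pair (A_k, y_k) plays the role of one base station's local
    measurements of a shared reflectivity image x (toy linear model).
    """
    n = A_list[0].shape[1]
    K = len(A_list)
    z = np.zeros(n)                      # global (consensus) image estimate
    x = [np.zeros(n) for _ in range(K)]  # local estimates
    u = [np.zeros(n) for _ in range(K)]  # scaled dual variables

    # Pre-factor each node's local normal equations: (2 A_k^T A_k + rho I)
    lhs = [2 * A.T @ A + rho * np.eye(n) for A in A_list]

    for _ in range(num_iters):
        # Local updates (performed in parallel at each base station)
        for k in range(K):
            rhs = 2 * A_list[k].T @ y_list[k] + rho * (z - u[k])
            x[k] = np.linalg.solve(lhs[k], rhs)
        # Consensus (averaging) step, e.g., at a fusion node or via gossip
        z = np.mean([x[k] + u[k] for k in range(K)], axis=0)
        # Dual updates
        for k in range(K):
            u[k] += x[k] - z
    return z

# Toy usage: 3 "base stations", each with noisy views of the same image
rng = np.random.default_rng(0)
x_true = rng.standard_normal(20)
A_list = [rng.standard_normal((30, 20)) for _ in range(3)]
y_list = [A @ x_true + 0.01 * rng.standard_normal(30) for A in A_list]
x_hat = consensus_admm(A_list, y_list)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```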
Key Publications
[1] Nishant Mehrotra and Ashutosh Sabharwal, “On the Degrees of Freedom Region for Simultaneous Imaging & Uplink Communication,” IEEE Journal on Selected Areas in Communications, Special Issue on Integrated Sensing and Communication, 2022.
[2] Nishant Mehrotra, Ashutosh Sabharwal and César Uribe, “Consensus ADMM-Based Distributed Simultaneous Imaging & Communication,” IFAC NecSys, 2022.
Harnessing Multi-Bounce Scattering from Ambient Objects for Improved Radar Imaging
![multipath_overview](https://nishant.rice.edu/files/2024/05/Screenshot-2024-05-13-153315.png)
Consider a base station beamforming downlink data to a cellular user. Traditional radar imaging assumes that signals transmitted by the base station reflect once from an object in the environment before being received back at the base station, i.e., signals follow the scattering path: base station -> object -> base station. Under such an assumption, a base station serving a user at a fixed location can only image objects located within the base station's transmit beam, and imaging objects outside the transmit beam requires beam scanning, which is not only time-consuming but also sacrifices airtime that could have been devoted to communication.
Our key insight is that some fraction of the transmitted signals is also scattered to secondary objects, resulting in multi-bounce scattering paths: base station -> object 1 -> object 2 -> … -> object n -> base station. Conventionally, such multi-bounce paths are rejected since they result in false detections (“ghosts”) at incorrect range and angle bins. Instead, we propose to exploit such multi-bounce scattering paths, thus enabling three novel functionalities: (i) radar imaging at base stations serving downlink users without beam scanning, (ii) imaging around corners and behind the system, and (iii) estimating the full-velocity vectors (both radial and tangential velocities) of moving objects in the environment.
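The "ghost" effect follows from simple geometry. The toy sketch below (the 2D coordinates and single double-bounce path are illustrative assumptions) compares the round-trip length of a single-bounce return with that of a double-bounce return; when the latter is processed under the single-bounce assumption, the extra path length is misinterpreted as a target at an incorrect, longer range.

```python
import numpy as np

# Toy 2D geometry (units: meters). All positions are illustrative.
base_station = np.array([0.0, 0.0])
object_1     = np.array([5.0, 0.0])   # directly illuminated object
object_2     = np.array([5.0, 4.0])   # secondary object

def dist(a, b):
    return np.linalg.norm(a - b)

# Single-bounce path: base station -> object 1 -> base station
single_bounce_length = 2 * dist(base_station, object_1)

# Double-bounce path: base station -> object 1 -> object 2 -> base station
double_bounce_length = (dist(base_station, object_1)
                        + dist(object_1, object_2)
                        + dist(object_2, base_station))

# Under the single-bounce assumption, range = path length / 2, so the
# double-bounce return appears as a "ghost" at the wrong range (and along
# the arrival direction of its last hop, not object 1's direction).
print("true range of object 1:", single_bounce_length / 2)      # 5.0 m
print("apparent ghost range  :", double_bounce_length / 2)      # ~7.7 m
print("true range of object 2:", dist(base_station, object_2))  # ~6.4 m
```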
![Figures_Multipath](https://nishant.rice.edu/files/2022/01/Figures_Multipath.png)
In [3], I derive fundamental performance limits for radar imaging using multi-bounce scattering. I quantify imaging performance via the imaging degrees of freedom (DoF), which equals the minimum number of basis functions required to represent images captured through an electromagnetic system and is directly related to the achievable imaging resolution. Such analyses are also of great interest in the context of emerging holographic MIMO (continuous aperture) systems being considered in 6G. There is a rich body of theoretical results on the imaging DoF for various systems (both optical and radio frequency-based) under the single-bounce scattering assumption. However, in the presence of multi-bounce scattering, prior theoretical analyses have claimed no increase in the DoF, in stark contrast to experimental results demonstrating the contrary. In [3], I unify these contradictory viewpoints and demonstrate that for a finite-aperture system, under certain multi-bounce scattering scenarios, there is a finite DoF gain (and hence an improvement in imaging resolution) from exploiting multi-bounce, due to the formation of virtual apertures.
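As a rough rule of thumb (the rigorous multi-bounce analysis in [3] differs in its assumptions), the classical space-bandwidth argument for a one-dimensional aperture already conveys why a virtual aperture would help:

```latex
% Classical rule of thumb for a 1D aperture of length L at wavelength \lambda
% observing the full angular domain (illustrative only; see [3] for the
% rigorous multi-bounce analysis):
\[
  N_{\mathrm{DoF}} \;\sim\; \frac{2L}{\lambda},
  \qquad
  \delta_{\mathrm{cross\text{-}range}} \;\sim\; \frac{\lambda R}{2L}
  \quad \text{at range } R .
\]
% If multi-bounce scattering off an ambient object effectively extends the
% physical aperture to a larger virtual aperture L_v > L, both quantities
% improve proportionally, which is the intuition behind the finite DoF gain
% shown in [3].
```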
![Teaser Figure](https://nishant.rice.edu/files/2024/05/Teaser-Figure.jpg)
In [4] and [5], I take first steps towards experimentally demonstrating the multi-bounce gains predicted in [3]. In [4], I present a multi-bounce radar imaging framework that enables two novel functionalities: (i) radar imaging at base stations serving downlink users without beam scanning, by exploiting double-bounce paths of the form: base station -> object 1 -> object 2 -> base station, and (ii) imaging around corners and behind the system, by exploiting triple-bounce paths of the forms: base station -> object 1 -> object 2 -> object 1 -> base station and base station -> object 1 -> object 2 -> object 3 -> base station.
While prior research has explored using multi-bounce for radar imaging, the scenarios considered are usually limited to around-corner sensing, with reflections from the occluded scene sampled only along triple-bounce paths. The most common approaches for around-corner sensing require prior knowledge of the directly illuminated objects in the environment, obtained using additional hardware, such as by placing and illuminating dedicated reflectors in the environment or by mapping the radar's illuminated surroundings via lidar. The proposed framework avoids these requirements by modeling and exploiting single-, double- and triple-bounce diffuse scattering from completely unknown environments with only a single millimeter-wave system. Our implementation with a commercial 77 GHz automotive MIMO radar demonstrates a 2×-10× reduction in the median localization error for humans standing outside the system's field of view across 5 different indoor and outdoor scenarios, exploiting multi-bounce from a wide variety of everyday objects and surfaces, such as human bodies, indoor furniture, and extended room and building features.
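As a purely illustrative toy (not the actual pipeline in [4]), the sketch below shows how triple-bounce path lengths can localize a hidden target once the positions of two directly illuminated relay objects are known, e.g., estimated from their single-bounce returns: each path of the form base station -> relay -> target -> relay -> base station pins the target to a circle around that relay, and two such circles intersect at the target (up to a mirror ambiguity).

```python
import numpy as np

def dist(a, b):
    return np.linalg.norm(a - b)

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles with centers c1, c2 and radii r1, r2."""
    d = dist(c1, c2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return mid + h * perp, mid - h * perp

# Toy 2D scene (meters); all positions are illustrative.
bs      = np.array([0.0, 0.0])
relay_1 = np.array([4.0, 1.0])   # directly illuminated object 1
relay_2 = np.array([4.0, -1.0])  # directly illuminated object 2
target  = np.array([7.0, 3.0])   # hidden target, outside the bs field of view

# Simulated measurements: total lengths of the two triple-bounce paths
# bs -> relay_k -> target -> relay_k -> bs
L1 = 2 * (dist(bs, relay_1) + dist(relay_1, target))
L2 = 2 * (dist(bs, relay_2) + dist(relay_2, target))

# With the relay positions (and hence bs-relay distances) already estimated,
# each measured path length yields a relay-to-target range
r1 = L1 / 2 - dist(bs, relay_1)
r2 = L2 / 2 - dist(bs, relay_2)

# Intersect the two circles around the relays to localize the hidden target
p_a, p_b = circle_intersections(relay_1, r1, relay_2, r2)
print("candidate locations:", p_a, p_b)  # one candidate matches `target`
```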
![tangveldb](https://nishant.rice.edu/files/2024/05/Screenshot-2024-05-13-153703.png)
In [5], I demonstrate that exploiting double-bounce scattering paths due to static objects in the environment enables estimating the full-velocity vectors (both radial and tangential velocities) of moving objects in real time, i.e., within a single radar or communication data frame. Such functionality is typically not possible in radar imaging systems, which can only estimate the radial velocities of moving objects in real time, or recover full-velocity vectors with a delay of multiple frames by tracking objects across frames. Real-time full-velocity estimates as provided by our approach would be key to preventing accidents at traffic intersections where vehicles or pedestrians move tangentially to radars mounted on vehicles, and would further aid in designing environment-aware beamformers in communication systems. A key feature of the proposed approach is its standalone operation with a single multi-antenna system (a radar or a communication base station), removing the need for the multi-module radar/camera/base station networks proposed in prior work for this problem.
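To see why a second, multi-bounce Doppler measurement resolves the tangential component, here is a toy 2D computation (the positions, the single static reflector, and working directly with path-length rates rather than Doppler frequencies are illustrative assumptions, not the setup in [5]): the direct single-bounce return measures the projection of the target's velocity onto one direction, the double-bounce return via a static reflector measures a projection onto a different direction, and the two projections form a 2x2 linear system whose solution is the full velocity vector.

```python
import numpy as np

# Toy 2D geometry (meters); only the target moves. All values illustrative.
bs        = np.array([0.0, 0.0])
reflector = np.array([6.0, 0.0])   # static ambient object
target    = np.array([3.0, 4.0])
v_true    = np.array([1.0, -2.0])  # target velocity (m/s)

def unit(a, b):
    return (b - a) / np.linalg.norm(b - a)

# Path-length rate of the single-bounce path bs -> target -> bs:
#   d/dt [ 2 * |target - bs| ] = 2 * u_bt . v
u_bt = unit(bs, target)
rate_single = 2 * u_bt @ v_true        # what the direct-path Doppler measures

# Path-length rate of the double-bounce path bs -> reflector -> target -> bs:
#   d/dt [ |bs-reflector| + |reflector-target| + |target-bs| ]
#   = (u_rt + u_bt) . v              (the bs-reflector leg is static)
u_rt = unit(reflector, target)
rate_double = (u_rt + u_bt) @ v_true   # what the multi-bounce Doppler measures

# Two projections of v along two distinct directions -> 2x2 linear system
M = np.stack([2 * u_bt, u_rt + u_bt])
v_est = np.linalg.solve(M, np.array([rate_single, rate_double]))
print("estimated velocity:", v_est)    # recovers v_true
```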
Key Publications
[3] Nishant Mehrotra and Ashutosh Sabharwal, “When Does Multipath Improve Imaging Resolution?,” IEEE Journal on Selected Areas in Information Theory, Special Issue on Information Theoretic Foundations of Future Communication Systems, 2022.
[4] Nishant Mehrotra, Divyanshu Pandey, Akarsh Prabhakara, Yawen Liu, Swarun Kumar and Ashutosh Sabharwal, “Hydra: Exploiting Multi-Bounce Scattering for Beyond-Field-of-View mmWave Radar,” ACM MobiCom, 2024.
[5] Nishant Mehrotra, Divyanshu Pandey, Upamanyu Madhow, Yasamin Mostofi and Ashutosh Sabharwal, “Instantaneous Velocity Vector Estimation using a Single MIMO Radar via Multi-Bounce Scattering,” IEEE/NIST CISA, 2024.