Detecting Driver Phone Use Leveraging Car Speakers

A student team led by Profs. Marco Gruteser (ECE) and Richard Martin (CS), both members of WINLAB, and Prof. Yingying Chen of Stevens Institute of Technology received the best paper award at the 2011 ACM International Conference on Mobile Computing and Networking (MobiCom).

The paper "Detecting Driver Phone Use Leveraging Car Speakers", authored by Jie Yang, Simon Sidhom, Gayathri Chandrasekharan, Tam Vu, Nicolae Cecan, Hongbo Liu, Yingying Chen, Marco Gruteser, and Richard Martin, addresses the problem of sensing when a smartphone is used by a driver, with particular emphasis on distinguishing between a driver and a passenger.

This is a key milestone for enabling numerous driver safety and phone interface enhancements. The project developed a detection system that leverages the existing car stereo infrastructure, in particular, the car speakers and handsfree Bluetooth system.

It uses an acoustic ranging approach wherein the phone sends a series of customized high-frequency beeps via the car stereo. The beeps are spaced in time across the left, right, and, if available, front and rear speakers. After sampling the beeps, the phone times their arrival via a sequential change-point detection scheme and then uses a differential ranging approach to estimate its distance from the car's center. From these differences a passenger or driver classification can be made. Experiments with two different phones and two different cars showed that the customized beeps were imperceptible to most users, robust to background noise, and achieved a classification accuracy of 90-95 percent depending on the degree of calibration.
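The core idea of differential ranging can be illustrated with a minimal sketch. This is not the authors' code: the speaker emission schedule, the sign convention (left side nearer means driver, as in US vehicles), and the threshold are hypothetical parameters chosen for illustration.

```python
# Illustrative sketch of differential acoustic ranging (not the paper's
# implementation). Two beeps are emitted from the left and right speakers
# with a known scheduling gap; the phone compares their arrival times.
SPEED_OF_SOUND = 343.0  # m/s, at room temperature

def relative_distance(t_left, t_right, emit_gap):
    """How much closer the phone is to the left speaker than to the
    right one, in meters.
    t_left, t_right: measured beep arrival times (s);
    emit_gap: known scheduling offset between the two beeps (s)."""
    dt = (t_right - t_left) - emit_gap  # remove the known emission spacing
    return dt * SPEED_OF_SOUND         # > 0: phone nearer the left speaker

def classify(t_left, t_right, emit_gap, threshold=0.0):
    """Label the phone 'driver' if it sits on the left (US driver's side)
    of the car's center, 'passenger' otherwise."""
    d = relative_distance(t_left, t_right, emit_gap)
    return "driver" if d > threshold else "passenger"
```

A phone just left of center hears the left beep relatively early, so the measured gap between beeps exceeds the emission gap and the differential distance comes out positive.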

The project also received considerable attention from the popular press. It was featured in news stories on National Public Radio and was the basis of a joke in Jay Leno's Tonight Show monologue. It was also covered extensively in the MIT Technology Review, the online blog section of the Wall Street Journal, an Inside Science TV segment, CNET news, and numerous other online news services.

A video demo clip of this work can be found here.


Prof. Jha received NSF grant "Building a standards-based Cyberinfrastructure for Hydrometeorologic Modeling"

Dr. Shantenu Jha received NSF funding for the project "Collaborative Research: Standards-Based Cyberinfrastructure for Hydrometeorologic Modeling: US-European Research Partnership". This is a two-year project with a budget of $154,429. The work is in collaboration with Dave Gochis and Richard Hooper, eminent climate modeling scientists at the National Center for Atmospheric Research (NCAR) and the Consortium of Universities for the Advancement of Hydrologic Science, respectively.

The project abstract follows.

Skillful prediction of high-impact rainfall and streamflow events at lead times effective for proper hazard mitigation remains a significant challenge in nearly every region of the world. Additionally, rapid landscape change and an evolving hydroclimate system further complicate prediction problems, as they reduce or even eliminate the statistical stationarity assumptions upon which a great body of historical and current hydrological prediction is based. While new generations of sophisticated modeling systems within the disciplines of hydrology and meteorology have emerged in the past decade, their use for integrated, cross-discipline prediction system development by researchers and operational agencies remains limited. This is due to many factors, including excessively narrow, stove-piped model development efforts, limited data discovery opportunities, labor-intensive pre- and post-processing efforts, and severe limitations in community-wide access to sufficient computational capacity.

In essence, significant cyberinfrastructure (CI) challenges must be addressed in order to bring state-of-the-art disciplinary science into inter-disciplinary prediction practice.

This project, Standards-based CyberInfrastructure for HydroMeteorology (SCIHM), seeks to link two disciplines--hydrology and meteorology--each of which has a sophisticated CI already developed within its respective discipline. This linkage will be accomplished with hydrometeorology use cases in Europe and America that will be executed in both the European and American grid computing environments using federated data and computing standards. With research and development partners from several American and European institutions, the project is designed to take advantage of standards-based CI for hydrometeorological applications. In doing so, we will foster a unified standards-based hydrometeorological infrastructure where researchers and students from Europe and the US can rapidly simulate complex physical processes and predict extreme weather events and their hydrological, environmental, and societal impacts, taking advantage of scalable, on-demand, high-performance cloud-based computational resources and shared data space. Computational and storage layers will be seamlessly integrated with standards-based domain data services, analysis tools, and models, enabling researchers and practitioners to quickly tune predictive models to their areas of interest, discover and access distributed sources of information, and engage in a collaborative analysis and interpretation of prediction results.

Ubiquitous Rainfall Sensing Adaptive System for Urban Sustainability

David Hill, Assistant Professor of Civil and Environmental Engineering and a member of the ECE Graduate Faculty, and ECE Assistant Professor Dario Pompili collaborate on sensing and modeling extreme weather events. Such events have profound effects on the sustainability of urban centers. At the same time, human activities are increasing the variability of the climate and the frequency of these events, driving the need for more dynamic decision-making tools.

Recent research has explored "smart" urban infrastructure to mitigate extreme weather risks; however, these methods all rely on real-time observations of the environment at appropriate time and space scales. Unfortunately, this observational resolution is not feasible with traditional sensing technologies.

The overall objective of this research is to address urban sustainability through the development of modeling methods suitable for forecasting environmental phenomena in a changing world, and through the development of technology that can enable autonomous infrastructure to adapt to rapidly evolving environmental conditions. This pilot study will support this objective by building the long-term research collaborations with Rutgers faculty necessary to meet this multidisciplinary challenge and by developing a real-time system to explore ubiquitous sensing of the environment. This research will focus on rainfall estimation and measure success by the ability to provide accurate rainfall estimates at resolutions higher than the minimum threshold suggested by the literature.

Specifically, this research will answer the question of whether it is necessary to use data from dense networks of dedicated rainfall sensors to achieve accurate real-time rainfall measurements at spatio-temporal resolutions sufficient to enable predictive control of smart infrastructure, or whether networks of heterogeneous ubiquitous sensors can provide sufficient information to achieve the same level of observational resolution and accuracy. This scope focuses the research on creating a real-time system for adaptive sensing of rainfall using ubiquitous sensors, which will permit the PI to establish the research collaborations and external funding needed to pursue the ambitious overall objective of enabling predictive control of urban infrastructure during extreme weather events.

Designing & Prototyping Standards-based Application Access to Clouds

ECE Assistant Professor Shantenu Jha is designing and prototyping a standards-based interface to Cloud computing that is syntactically and semantically consistent with existing Grid computing interfaces, whilst extending Grid-based job and resource models.

This is a high-risk, short-term strategic project to design and prototype a global standards-based access layer to Clouds that bridges the divide between grids and clouds.

The project has elements of theory, software design and implementation, as well as engagement with technical standards-development organizations; thus it has elements that are not typical of research projects. However, successful design, implementation, and integration will have disproportionate impact, initially on Cloud standards (and thus potentially on commercialization efforts) and ultimately on scientific applications (of the type that are common on our campus) and their uptake of distributed computing infrastructures, such as the NSF-funded FutureGrid and XSEDE.

Prof. Pompili wins ONR Young Investigator Award

ECE Assistant Professor Dario Pompili has won a Young Investigator Program grant from the Office of Naval Research (ONR), one of only 26 awarded nationwide in 2012, for his proposal titled "Investigating Fundamental Problems for Real-time In-situ Data Processing in Heterogeneous Mobile Computing Grids".

The YIP program invests in academic scientists and engineers who show exceptional promise for creative study. Pompili earned his Ph.D. in electrical and computer engineering at the Georgia Institute of Technology in 2007; since then, he has been an assistant professor of electrical and computer engineering at Rutgers University. In 2011, he received the NSF CAREER award for his work on underwater multimedia acoustic communication, as well as the Rutgers/ECE Outstanding Young Researcher award.

The objective of his three-year YIP project is to enable real-time in-situ vital sign data processing so as to extract non-measurable physiological parameters, interpret them in context, and acquire actionable knowledge about a soldier's health.

To realize this objective, which requires computing capabilities beyond those of an individual sensor mote or portable device, the collective computational capabilities of hand-held computers, rugged PDAs, and tactical computers carried by soldiers and/or armored vehicles in the vicinity, as well as remote computing clusters, need to be exploited.


This research project focuses on the fundamental research challenges of organizing these resources into an elastic resource pool (a hybrid computing grid). The most significant challenge is presented by the inherent uncertainty in the environment, which can be attributed to unpredictable node mobility, varying rates of battery drain, and high susceptibility to hardware failures. The significant contributions of this research are i) a role-based architectural framework for reliable grid coordination under uncertainty, i.e., for handling resource/service discovery, service request arrivals, and workload distribution and management, and ii) a novel uncertainty- and energy-aware resource allocation engine, which will distribute the workload tasks optimally among the networked computing devices so as to ensure Quality of Service (QoS) in terms of application response time and energy consumption.
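The flavor of an uncertainty- and energy-aware allocation can be sketched with a toy greedy assignment. This is an assumed illustrative model, not the project's engine: the cost weights, the device attributes, and the way failure risk is folded into the cost are hypothetical.

```python
# Toy sketch of uncertainty- and energy-aware task assignment (assumed
# model, not the project's resource allocation engine). Each task goes to
# the device with the lowest expected cost, blending response time,
# energy draw, and failure risk.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    speed: float       # work units processed per second
    power: float       # joules per second while computing
    p_fail: float      # probability the device drops out mid-task
    load: float = 0.0  # seconds of work already queued

def expected_cost(dev, work, w_energy=0.1, w_risk=5.0):
    """Hypothetical scalar cost: queueing + service time, plus weighted
    energy use and weighted failure probability."""
    t = dev.load + work / dev.speed     # response time for this task
    e = (work / dev.speed) * dev.power  # energy spent on this task
    return t + w_energy * e + w_risk * dev.p_fail

def assign(tasks, devices):
    """Greedily map each task (in work units) to the cheapest device."""
    plan = []
    for work in tasks:
        best = min(devices, key=lambda d: expected_cost(d, work))
        best.load += work / best.speed  # account for the new queue entry
        plan.append((work, best.name))
    return plan
```

In this sketch a fast, reliable vehicle-mounted computer wins a task even though it draws more power, because its shorter service time and lower failure risk dominate the cost.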



Dr. Pompili is site co-director of the Cloud and Autonomic Computing Center (CAC) and part of the Rutgers Discovery Informatics Institute (RDI2). He is an Assistant Professor of Electrical and Computer Engineering. You can find more information about Dr. Pompili at


Prof. Greg Burdea @ American Museum of Natural History

Professor Greg Burdea has been featured in a new exhibit, "Brain: The Inside Story," at the American Museum of Natural History, in New York City. Open now through August 15th, the exhibit seeks to provide visitors "a new perspective and keen insight into their own brains." Professor Burdea's research contributes quite well to such an aim, and it’s no surprise that the Museum would incorporate his work. Not a surprise except to Prof. Burdea, a member of the Rutgers ECE faculty, who had no idea about his involvement with the exhibit. "The first time I knew I was featured was when a colleague told me she had visited the museum with her children and saw my photo on a poster. It was a total surprise."

The Museum incorporated Prof. Burdea's work on the "plasticity" of the brain, to demonstrate how the brain can be rehabilitated and heal after an injury--a stroke, for example--waking up dormant neurons, re-training existing neurons, or re-connecting neural pathways in the brain. The key, says Prof. Burdea, "is that the patient exercises repeatedly, over long periods, and be engaged in the therapy, not being bored, not feeling pushed into the therapy, for the brain plasticity to occur."

Professor Burdea has made significant contributions to the study of the brain's plasticity. One such contribution was the work he published in 2003 on Virtual Rehabilitation--a term he coined to represent the advances he and his team made in the lab with patients who received brain therapy while playing virtual reality games. The patients would gladly play and participate, while being mostly unaware that they were, in fact, engaged in a process of brain therapy and rehabilitation.

Currently, Prof. Burdea directs the Tele-Rehabilitation Institute, which focuses on remote rehabilitation and has in recent years received attention from around the world. In addition, his team's research pioneers an integrative virtual rehabilitation approach, treating both motor and cognitive/emotive disabilities in a single treatment.
More information on the Tele-Rehabilitation Institute can be found online @

Article by Sean Patrick Cooper

Non-invasive Continuous Ocular Glucose Sensor

ECE Professors Jeff Walling and Jaesok Jeon are collaborating on the development of a low-power ocular sensor that continually monitors blood glucose levels using a chemical sensor embedded in a contact lens. Continuous monitoring of blood glucose levels will improve the monitoring of diabetic patients and can also aid epidemiological studies of diet and other healthcare-related issues. This technology allows for non-invasive monitoring of blood sugar, potentially ending the need for painful needle sticks among diabetic patients. To enable continuous monitoring, a transparent MEMS-relay-based analog-to-digital converter will digitize the signal so that it can be stored in a low-power memory cell embedded on a small CMOS chip at the edge of the contact lens. All readings will be transmitted via a backscatter transmitter to a receiver embedded in a standard contact lens case, which will also serve to wirelessly recharge the contact lens's stored energy for operation during the next day.

Professor Madabhushi, Co-Investigator for $3.3 million NIH Grant Awarded for Prostate Cancer Research

NEW BRUNSWICK, N.J. – The National Institutes of Health has awarded a $3.3 million grant to a research team that includes Rutgers University to increase the reliability of prostate cancer imaging.

The team, led by Riverside Research Institute and involving clinicians from Boston’s Beth Israel Deaconess Medical Center and engineers at GE Global Research, will research ways to help urologists zero in on suspicious tissue in the prostate gland while they perform needle biopsies or localized treatments for prostate cancer.

The National Institutes of Health (NIH) awarded the grant under its industrial-academic partnership program to fund work that can quickly move from the research lab to patient care. The researchers are developing technology to pinpoint the locations of suspected cancerous tissue using both magnetic resonance images acquired just before the biopsy or treatment and ultrasound images acquired at the time of the procedure.

Currently, urologists typically use conventional ultrasound images to guide them to various regions of the prostate gland, from which they extract samples of tissue. While conventional ultrasound can image the gland well, it cannot reveal the presence or location of suspicious tissue inside the gland. If the biopsy samples don’t yield cancerous tissue, there’s still a chance that cancer is present.

“As a result, urologists aren’t always confident about ruling out cancer after a negative biopsy guided by conventional ultrasound,” said Anant Madabhushi, associate professor of Biomedical Engineering and member of the Graduate faculty of Electrical and Computer Engineering at Rutgers and co-investigator on the NIH grant.

For a more in-depth story, go HERE

Slow light on a Silicon Chip - What’s the Limit?

The information bandwidth of lightwaves is much higher than that of today's electronic information technology, so processing information on lightwaves has a significant advantage. Temporarily slowing down light on a silicon chip allows the information processing to be completed in a small chip before the light rushes off the chip. However, significant loss of light intensity occurs as light slows down, which fundamentally limits the capability to process information optically on a small chip. In the December 15 issue of the journal Physical Review B, Assistant Professor Wei Jiang of the Rutgers Electrical and Computer Engineering department elucidates the fundamental mechanism and limit behind such light intensity loss. This could help develop next-generation optical information processing technology on a silicon chip.

Slowing down light has been an intriguing topic in optics for decades. In the past, the slow light effect was obtained in bulky, low-temperature apparatuses and/or expensive materials. In the last decade, periodic structures, so-called photonic crystal waveguides, emerged to offer slow light on a compact, inexpensive silicon chip.

Photonic crystal waveguides can be extended to a relatively long distance to achieve both a relatively long delay time and a wide bandwidth, while most other slow light approaches are limited to either a narrow bandwidth or a short delay. However, the optical loss due to random scattering from small bumps and dents on a silicon chip fundamentally limits the capability of photonic crystal waveguides.


Prior experimental work on optical loss in slow-light photonic crystal waveguides showed large variation. Prior numerical simulations of optical loss were limited to one or a few instances of structures with specific parameters and were unable to account for the variation. Prof. Jiang has developed an analytic theory that reveals the general characteristics of optical loss in a photonic crystal waveguide over a wide range of parameters. The theory indicates that spatial phase and polarization variation may hold the key to optical loss reduction. Furthermore, Prof. Jiang and graduate student Weiwei Song have developed numerical code that enables efficient, accurate optical loss simulations over a large parameter range. Currently, Song is using this code to search for a low-loss photonic crystal waveguide design.

Once loss is reduced, slow-light photonic crystal waveguides will find a broad range of applications in optical signal processing and in optical delay lines for phased array antennas. The research was conducted in the Center for Silicon Nanomembranes, supported by the Air Force Office of Scientific Research through the Multidisciplinary University Research Initiative.

New NSF Grant for Profs. M. Gruteser, K. Dana and N. Mandayam

Professors M. Gruteser, K. Dana and N. Mandayam have been awarded a grant from the National Science Foundation for the project entitled “Visual MIMO Networks”. This is a 4-year project and was funded in the amount of $685,000.

Below is a brief description of the project.

Visual MIMO Networks

The increasingly ubiquitous use of cameras creates an exciting novel opportunity to build camera-based optical wireless networks. Optical wireless is not only a potential low-cost alternative where it can take advantage of existing cameras and light-emitting devices; its highly directional transmissions can also present advantages over radio-frequency (RF) wireless communications. For example, they render such communications virtually interference-free and hard to eavesdrop on.


While the optical channel differs fundamentally from the RF channel, this project recognizes that it also allows multiple spatially separated channels between an array of transmitter elements and the array of camera pixels, akin to an RF multiple-input multiple-output (MIMO) system. 

This inter-disciplinary project therefore brings together expertise in the areas of mobile networks, communications, and computer vision to analyze, design, and prototype a network stack for such visual MIMO communications. This stack addresses the fundamentally different visual channel and receiver constraints through innovative visual signal acquisition, tracking, interference cancellation, and modulation techniques at the physical layer as well as vision-aware link and MAC layer protocols.
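The basic idea of treating camera pixels as parallel spatial channels can be illustrated with a minimal sketch. This is an assumed toy scheme, not the project's design: the fixed pixel regions, per-frame on/off keying, and the intensity threshold are all hypothetical simplifications.

```python
# Toy sketch of a visual MIMO receiver (assumed scheme, not the project's
# design). A camera frame is a 2-D grid of pixel intensities; each LED in
# a transmitter array occupies a known pixel region, and its on/off state
# in a frame carries one bit.
def decode_frame(frame, regions, threshold=128):
    """frame: 2-D list of pixel intensities (0-255);
    regions: list of (row0, row1, col0, col1) pixel boxes, one per LED
             (end indices exclusive).
    Returns one decoded bit per LED region."""
    bits = []
    for r0, r1, c0, c1 in regions:
        pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        mean = sum(pixels) / len(pixels)   # average brightness in the box
        bits.append(1 if mean >= threshold else 0)  # bright region -> 1
    return bits
```

Because each LED maps to its own pixel box, the regions act like independent parallel channels; a real system would additionally need the acquisition, tracking, and interference cancellation the project describes to keep those boxes aligned as the transmitter or camera moves.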

Visual MIMO networks can potentially support applications ranging from secure communication between cell phones and localization of 911 callers through surveillance cameras to interference-free car-to-car communications.

The project also makes an experimental visual MIMO testbed available to the research community at large. In addition to publications, the project takes advantage of WINLAB's biannual industry meetings to disseminate results and provides a variety of appealing educational activities involving K-12 and undergraduate students.

