Research Highlights



Phase Messaging for Time-of-Flight Cameras

Ubiquitous light-emitting devices and low-cost commercial digital cameras facilitate optical wireless communication systems such as visual MIMO, where handheld cameras communicate with electronic displays. While intensity-based optical communications are more prevalent in camera-display messaging, we present a novel method that uses modulated light phase for messaging and time-of-flight (ToF) cameras as receivers. With intensity-based methods, light signals can be degraded by reflections and ambient illumination. By comparison, communication using ToF cameras is more robust to challenging lighting conditions. Additionally, phase messaging can be combined with intensity messaging for a significant data-rate advantage. In this work, we design and construct a phase messaging array (PMA), the first of its kind, to communicate with a ToF depth camera by manipulating the phase of the camera's infrared light signal. The array varies the message spatially, using a plane of infrared light-emitting diodes, and temporally, by varying the induced phase shift. In this manner, the phase messaging array acts as the transmitter by electronically controlling the light signal phase, and the ToF camera acts as the receiver by observing and recording a time-varying depth. We show a complete implementation of a 3x3 prototype array with custom hardware.

-Wenjia Yuan, Richard Howard, Kristin Dana, Ashwin Ashok, Ramesh Raskar, Marco Gruteser, and Narayan Mandayam, Phase Messaging Method for Time-of-flight Cameras, Proceedings of the IEEE International Conference on Computational Photography (ICCP), 2014 [PDF]  [BIBTEX]
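To make the phase-to-depth relationship concrete, the sketch below simulates how an induced phase shift appears at the receiver. It is a minimal illustration under stated assumptions, not the paper's implementation: the modulation frequency F_MOD, the two-symbol alphabet, and all names are hypothetical. A ToF camera reports distance d = c * phi / (4 * pi * f_mod), so a transmitter-induced phase shift shows up as an apparent depth offset that the receiver can quantize back to symbols.

    import numpy as np

    C = 3.0e8       # speed of light (m/s)
    F_MOD = 20e6    # assumed ToF modulation frequency (Hz), illustrative only

    def phase_to_depth(delta_phi):
        # A ToF camera infers distance from round-trip phase delay:
        # d = c * phi / (4 * pi * f_mod). An induced phase shift at the
        # transmitter therefore appears as an apparent depth offset.
        return C * delta_phi / (4 * np.pi * F_MOD)

    def decode_symbols(depth_trace, baseline, levels):
        # Quantize the time-varying depth offset at one array element
        # back to the nearest transmitted symbol level (in meters).
        offsets = depth_trace - baseline
        return [int(np.argmin(np.abs(levels - o))) for o in offsets]

    # Example: a two-symbol alphabet induced by 0 and pi/2 phase shifts.
    levels = phase_to_depth(np.array([0.0, np.pi / 2]))
    trace = np.array([1.50, 1.50 + levels[1], 1.50, 1.50 + levels[1]])
    print(decode_symbols(trace, baseline=1.50, levels=levels))  # [0, 1, 0, 1]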



Visual MIMO

The growing ubiquity of cameras in hand-held devices and the prevalence of electronic displays in signage create a novel framework for wireless communications. Traditionally, the term MIMO refers to multiple-input multiple-output communications, where the multiple-input component is a set of radio transmitters and the multiple-output component is a set of radio receivers. We employ the concept of visual MIMO, where pixels are transmitters and cameras are receivers. In this manner, techniques from computer vision can be combined with principles from wireless communications to create an optical line-of-sight communications channel. Two major challenges are addressed: (1) the message must be embedded in the observed display so that it is hidden from the observer and the electronic display can simultaneously be used for its originally intended purpose (e.g. signage, advertisements, maps); (2) photometric and geometric distortions during the imaging process corrupt the information channel between the transmitter display and the receiver camera, and these distortions must be modeled and removed. In this paper, we present a real-time messaging paradigm and its implementation in an operational visual MIMO optical system. As part of the system, we develop a novel algorithm for photographic message extraction which includes automatic display detection, message embedding and message retrieval.


-Wenjia Yuan, Kristin Dana, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Spatially Varying Radiometric Calibration for Camera-Display Messaging, IEEE Global Conference on Signal and Information Processing (GlobalSIP), December 2013 [PDF]  [BIBTEX]     

-Wenjia Yuan, Kristin Dana, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Dynamic and Invisible Messaging for Visual MIMO, Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), pp. 345-352, January 2012 [PDF]  [BIBTEX]
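On the receiver side, a system of this kind must undo the geometric distortion between display and camera before any bits can be read. The sketch below shows one generic way to do that with OpenCV; it is an illustrative sketch, not the papers' algorithm. It assumes the four display corners have already been detected, and rectify_display, extract_message, and the frame-differencing embedding are hypothetical names and choices for this example.

    import cv2
    import numpy as np

    def rectify_display(frame, corners, out_w=640, out_h=360):
        # Undo the perspective (geometric) distortion: map the four
        # detected display corners (clockwise from top-left) onto a
        # fronto-parallel rectangle, then warp the camera frame.
        dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
        H, _ = cv2.findHomography(np.float32(corners), dst)
        return cv2.warpPerspective(frame, H, (out_w, out_h))

    def extract_message(frame_a, frame_b, corners, thresh=8):
        # One simple embedding scheme: hide bits as small intensity
        # differences between consecutive frames, imperceptible to a
        # viewer but recoverable by differencing the rectified frames.
        ga = cv2.cvtColor(rectify_display(frame_a, corners),
                          cv2.COLOR_BGR2GRAY).astype(np.int16)
        gb = cv2.cvtColor(rectify_display(frame_b, corners),
                          cv2.COLOR_BGR2GRAY).astype(np.int16)
        return (ga - gb) > thresh   # boolean bit map over the display

    # Usage sketch: bits = extract_message(frame_t, frame_t1, corners)

Photometric correction (e.g. the spatially varying radiometric calibration of the GlobalSIP paper) would precede the differencing step; it is omitted here for brevity.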


Robotic Bridge Inspection

Detection of cracks on bridge decks is a vital task for maintaining the structural health and reliability of concrete bridges. Robotic imaging can be used to obtain bridge surface image sets for automated on-site analysis. We present a novel automated crack detection algorithm, the STRUM (spatially-tuned robust multi-feature) classifier, and demonstrate results on real bridge data using a state-of-the-art robotic bridge scanning system. By using machine learning classification, we eliminate the need for manually tuning threshold parameters. The algorithm uses robust curve fitting to spatially localize potential crack regions even in the presence of noise. Multiple visual features that are spatially tuned to these regions are computed. Feature computation examines the scale space of the local feature in order to represent information at the unknown salient scale of the crack. The classification results are obtained with real bridge data from hundreds of crack regions over two bridges. This comprehensive analysis shows a peak STRUM classifier accuracy of 95%, compared with 69% accuracy from a more typical image-based approach. In order to create a composite global view of a large bridge span, an image sequence from the robot is aligned computationally to create a continuous mosaic. A crack density map for the bridge mosaic provides a computational description as well as a global view of the spatial patterns of bridge deck cracking.
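As a rough illustration of the pipeline structure (robust localization followed by learned classification), the sketch below pairs a RANSAC-style polynomial fit with an off-the-shelf boosted classifier. It is a schematic stand-in, not the STRUM implementation; robust_curve_fit, the synthetic data, and the scikit-learn classifier choice are assumptions for this example.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    def robust_curve_fit(points, degree=2, iters=50, inlier_tol=2.0):
        # RANSAC-style robust polynomial fit: keep the curve supported
        # by the most inlier points, so isolated noise pixels do not
        # pull the localization away from the true crack path.
        best, best_inliers = None, 0
        for _ in range(iters):
            idx = rng.choice(len(points), degree + 1, replace=False)
            coeffs = np.polyfit(points[idx, 0], points[idx, 1], degree)
            resid = np.abs(np.polyval(coeffs, points[:, 0]) - points[:, 1])
            n_in = int((resid < inlier_tol).sum())
            if n_in > best_inliers:
                best, best_inliers = coeffs, n_in
        return best

    # Synthetic demo: a noisy quadratic "crack" plus gross outliers.
    xs = np.linspace(0, 100, 80)
    ys = 0.01 * xs**2 + rng.normal(0, 0.5, xs.size)
    ys[::10] += rng.uniform(20, 40, ys[::10].size)        # outliers
    print(robust_curve_fit(np.column_stack([xs, ys])))    # ~ [0.01, 0, 0]

    # Multi-scale features spatially tuned to the fitted curve would
    # then feed a learned classifier in place of hand-set thresholds:
    clf = AdaBoostClassifier(n_estimators=100)
    # clf.fit(train_features, train_labels)   # labels: crack / no-crack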



Advanced Watermarking with a Texture Camera

We present a method for transparent watermarking using a custom bidirectional imaging device. The two innovative concepts of our approach are reflectance coding and multiview imaging. In reflectance coding, information is embedded in the angular space of the bidirectional reflectance distribution function (BRDF), and this information can vary at each surface point. In order to achieve a transparent watermark, reflectance coding is implemented using a spatial variation of the Brewster angle. The novel multiview imaging method measures the reflectance over a range of viewing and illumination angles in order to instantly reveal the unknown Brewster angle. Unlike typical in-lab measurements of the Brewster angle or the refractive index, this method does not require accurate prior knowledge of the surface normal, so imaging in non-lab conditions is feasible. Furthermore, a range of incident angles is examined simultaneously, eliminating the need for scanning incidence angles. The approach is well-suited for transparent watermarking where the observer cannot see the watermark because it is comprised of spatial variations of refractive index. The transparency and angular coding of the watermark have great utility in deterring counterfeit attempts. In this paper, we present the imaging device and demonstrate its effectiveness in detecting and measuring changes in refractive index. This device acts as the decoder in a transparent watermark system.

-Kristin J. Dana, G. Livescu, R. Makonahalli, Transparent Watermarking using Bidirectional Imaging, IEEE International Workshop on Projector Camera Systems, in conjunction with CVPR, June 2009 [PDF]  [BIBTEX]
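The physics behind the decoder can be illustrated with the Fresnel equations: p-polarized reflectance vanishes at the Brewster angle, where tan(theta_B) = n for air incidence, so locating the reflectance dip recovers the refractive index. The sketch below is a worked example under idealized assumptions (smooth dielectric, air incidence, hypothetical index values), not the device's actual processing.

    import numpy as np

    def brewster_angle(n):
        # At the Brewster angle the p-polarized reflectance vanishes:
        # tan(theta_B) = n2 / n1, with n1 = 1 for air.
        return np.degrees(np.arctan(n))

    def fresnel_rp(theta_i_deg, n):
        # p-polarized Fresnel amplitude reflectance, air -> medium n.
        ti = np.radians(theta_i_deg)
        tt = np.arcsin(np.sin(ti) / n)        # Snell's law
        return (n * np.cos(ti) - np.cos(tt)) / (n * np.cos(ti) + np.cos(tt))

    # A spatial variation of refractive index shifts the angle at which
    # |r_p|^2 dips to zero; locating that dip decodes the watermark.
    angles = np.linspace(40, 70, 301)
    for n in (1.45, 1.60):                    # hypothetical watermark "states"
        rp2 = fresnel_rp(angles, n) ** 2
        print(n, angles[np.argmin(rp2)], brewster_angle(n))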


Texture Camera

Capturing surface appearance is a challenging task because reflectance varies as a function of viewing and illumination direction. In addition, most real-world surfaces have a textured appearance, so reflectance also varies spatially. We present a texture camera that can conveniently capture spatially varying reflectance on a surface. Unlike other bidirectional imaging devices, the design eliminates the need for complex mechanical apparatus to move the light source and the camera over a hemisphere of possible directions. To facilitate fast and convenient measurement, the device uses a curved mirror so that multiple views of the same surface point are captured simultaneously. Simple planar motions of the imaging components also permit change of illumination direction and region imaging. We present the current prototype of this device, imaging results, and an analysis of the important imaging properties.

-Kristin J. Dana and Jing Wang, Device for Convenient Measurement of Spatially Varying Bidirectional Reflectance, Journal of the Optical Society of America A, pp. 1-12, January 2004 [PDF]  [BIBTEX]

-Jing Wang and Kristin J. Dana, Relief Texture Shape from Specularities, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 446-457, March 2006 [PDF]  [BIBTEX]
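To see why a curved mirror multiplexes viewing angles into a single image, consider an idealized parabolic mirror with the surface point at its focus: rays leaving the focus reflect into rays parallel to the axis, so each lateral position in the camera image corresponds to a different viewing angle of the same point. This is illustrative geometry only; the prototype's actual optics are described in the papers above, and the focal length below is hypothetical.

    import numpy as np

    def viewing_angle(x, f):
        # For a parabolic mirror y = x^2 / (4f) with the surface point
        # at the focus (0, f), a ray from the focus to lateral mirror
        # position x makes angle theta = 2 * atan(x / (2f)) with the
        # axis, and reflects into an axis-parallel ray the camera sees.
        return np.degrees(2.0 * np.arctan(x / (2.0 * f)))

    f = 25.0    # hypothetical focal length (mm)
    for x in (0.0, 10.0, 25.0, 50.0):
        print(f"x = {x:5.1f} mm -> viewing angle {viewing_angle(x, f):5.1f} deg")

A modest lateral extent of mirror (here x up to 2f) thus spans viewing angles from 0 to 90 degrees in one exposure, which is the convenience the device exploits.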



Computational Skin Texture

Quantitative characterization of skin appearance is an important but difficult task. The skin surface is a detailed landscape, with complex geometry and local optical properties. In addition, skin features depend on many variables such as body location (e.g. forehead, cheek), subject parameters (age, gender) and imaging parameters (lighting, camera). As with many real-world surfaces, skin appearance is strongly affected by the direction from which it is viewed and illuminated. Computational modeling of skin texture has potential uses in many applications including realistic rendering for computer graphics, robust face models for computer vision, computer-assisted diagnosis for dermatology, topical drug efficacy testing for the pharmaceutical industry and quantitative comparison for consumer products. In this work, we present models and measurements of skin texture with an emphasis on faces. We develop two models for use in skin texture recognition. Both models are image-based representations of skin appearance that are suitably descriptive without the need for prohibitively complex physics-based skin models. Our models take into account the varied appearance of the skin with changes in illumination and viewing direction. We also present a new face texture database comprised of more than 2400 images corresponding to 20 human faces, 4 locations on each face (forehead, cheek, chin and nose) and 32 combinations of imaging angles. The complete database is made publicly available for further research.

-Oana G. Cula, Kristin J. Dana, Frank P. Murphy and Babar K. Rao, Skin Texture Modeling, International Journal of Computer Vision, vol. 62, no. 1-2, pp. 97-119, April-May 2005 [PDF]  [BIBTEX]
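In the spirit of these image-based models (though not their exact construction), a texture patch can be summarized by pooled multi-scale filter responses, which discard exact pixel positions and so tolerate some variation with viewing and illumination. The sketch below is a generic illustration; the filter choices, bin ranges, and function names are assumptions, not the paper's representation.

    import numpy as np
    from scipy import ndimage

    def filter_bank_features(image, sigmas=(1.0, 2.0, 4.0)):
        # Multi-scale oriented-edge and blob responses, pooled into
        # normalized histograms. Input: float grayscale array (0-255).
        feats = []
        for s in sigmas:
            for deriv in ([0, 1], [1, 0]):   # horizontal / vertical edges
                resp = ndimage.gaussian_filter(image, sigma=s, order=deriv)
                hist, _ = np.histogram(resp, bins=16, range=(-50, 50))
                feats.append(hist / hist.sum())
            lap = ndimage.gaussian_laplace(image, sigma=s)   # blob response
            hist, _ = np.histogram(lap, bins=16, range=(-50, 50))
            feats.append(hist / hist.sum())
        return np.concatenate(feats)

    # Demo on a random patch (stand-in for a skin image):
    patch = np.random.default_rng(0).normal(128, 20, (64, 64))
    print(filter_bank_features(patch).shape)   # (144,) = 9 filters x 16 bins

A query patch would then be classified by comparing its histogram vector against training vectors (e.g. nearest neighbor under an L1 or chi-squared distance), with models organized per face region and imaging-angle cluster as in the database above.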


Send me mail at "kdana at ece dot rutgers dot edu"