SIGMOBILE Research Highlights

SIGMOBILE has established a new research highlights selection process. Papers of high quality and broad appeal will be selected from SIGMOBILE-sponsored conferences by a committee that includes representatives of SIGMOBILE's major conferences and elected SIGMOBILE officers. These papers will be recommended for consideration for the Communications of the ACM Research Highlights section and published in ACM GetMobile (if they did not already appear there) and on the web.

Current Awardees

  • ACM MobiSys 2018

    AIM: Acoustic Imaging on a Mobile

Mao, Wenguang and Wang, Mei and Qiu, Lili

    The popularity of smartphones has grown at an unprecedented rate, which makes smartphone based imaging especially appealing. In this paper, we develop a novel acoustic imaging system using only an off-the-shelf smartphone. It is an attractive alternative to camera based imaging under darkness and obstruction. Our system is based on Synthetic Aperture Radar (SAR). To image an object, a user moves a phone along a predefined trajectory to mimic a virtual sensor array. SAR based imaging poses several new challenges in our context, including strong self and background interference, deviation from the desired trajectory due to hand jitters, and severe speaker/microphone distortion. We address these challenges by developing a 2-stage interference cancellation scheme, a new algorithm to compensate trajectory errors, and an effective method to minimize the impact of signal distortion. We implement a proof-of-concept system on Samsung S7. Our results demonstrate the feasibility and effectiveness of acoustic imaging on a mobile.
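The SAR principle the abstract describes, moving the phone to form a virtual sensor array and combining the echoes coherently, can be illustrated with a toy delay-and-sum backprojection. This is a generic textbook sketch, not the paper's system: the array geometry, probe frequencies, and the ideal point reflector are all invented for illustration, and none of the paper's interference cancellation or trajectory compensation is modeled.

```python
import numpy as np

c = 343.0                               # speed of sound (m/s)
freqs = np.linspace(8_000, 16_000, 9)   # toy wideband probe (8 kHz bandwidth)

# Virtual array: phone positions along a 0.3 m trajectory (the synthetic aperture)
xs = np.linspace(-0.15, 0.15, 31)
target = np.array([0.05, 0.40])         # true reflector at (x, depth) in metres

# Simulated echo phases: round-trip delay from each array position, per frequency
dists = np.hypot(xs - target[0], target[1])
echoes = np.exp(-1j * 2 * np.pi * freqs[:, None] * (2 * dists) / c)

# Delay-and-sum backprojection: for each candidate pixel, undo the expected
# round-trip phase at every array position and frequency, then sum coherently.
# The sums only align when the pixel matches the true reflector position.
gx = np.linspace(-0.2, 0.2, 81)
gz = np.linspace(0.2, 0.6, 81)
image = np.zeros((len(gz), len(gx)))
for i, z in enumerate(gz):
    for j, x in enumerate(gx):
        d = np.hypot(xs - x, z)
        steer = np.exp(1j * 2 * np.pi * freqs[:, None] * (2 * d) / c)
        image[i, j] = abs(np.sum(echoes * steer))

zi, xj = np.unravel_index(np.argmax(image), image.shape)
print(f"reconstructed reflector at x={gx[xj]:.3f} m, depth={gz[zi]:.3f} m")
```

The brightest pixel lands on the simulated reflector; hand jitter in the real system perturbs `xs`, which is why the paper needs trajectory compensation.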

  • ACM MobiSys 2018

    RuntimeDroid: Restarting-Free Runtime Change Handling for Android Apps

    Farooq, Umar and Zhao, Zhijia

    Portable devices, like smartphones and tablets, are often subject to runtime configuration changes, such as screen orientation changes, screen resizing, keyboard attachments, and language switching. When handled improperly, such simple changes can cause serious runtime issues, from data loss to app crashes.

    This work presents, to our best knowledge, the first formative study on runtime change handling with 3,567 Android apps. The study not only reveals the current landscape of runtime change handling, but also points out a common cause of various runtime change issues -- activity restarting. On one hand, the restarting facilitates the resource reloading for the new configuration. On the other hand, it may slow down the app, and more critically, it requires developers to manually preserve a set of data in order to recover the user interaction state after restarting.

    Based on the findings of this study, this work further introduces a restarting-free runtime change handling solution -- RuntimeDroid. RuntimeDroid can completely avoid the activity restarting and, at the same time, ensure proper resource updating with user input data preserved. These are achieved with two key components: an online resource loading module, called HotR, and a novel UI component migration technique. The former enables proper resource loading while the activity is still live. The latter ensures that prior user changes are carefully preserved during runtime changes.

    For practical use, this work proposes two implementations of RuntimeDroid: an IDE plugin and an auto-patching tool. The former allows developers to easily adopt restarting-free runtime change handling during app development; the latter can patch released app packages without source code. Finally, evaluation with a set of 72 apps shows that RuntimeDroid successfully fixed all 197 reported runtime change issues, while reducing runtime change handling delays by 9.5X on average.

Past Awardees

  • ACM MobiCom 2017

    Automating Visual Privacy Protection Using a Smart LED

    Zhu, Shilin and Zhang, Chi and Zhang, Xinyu

    The ubiquity of mobile camera devices has been triggering an outcry of privacy concerns, whereas privacy protection still relies on the cooperation of the photographer or camera hardware, which can hardly be guaranteed in practice. In this paper, we introduce LiShield, which automatically protects a physical scene against photographing, by illuminating it with smart LEDs flickering in specialized waveforms. We use a model-driven approach to optimize the waveform, so as to ensure protection against the (uncontrollable) cameras and potential image-processing based attacks. We have also designed mechanisms to unblock authorized cameras and enable graceful degradation under strong ambient light interference. Our prototype implementation and experiments show that LiShield can effectively destroy unauthorized capturing while maintaining robustness against potential attacks.

  • ACM SenSys 2017

    Ultra-Low Power Gaze Tracking for Virtual Reality

    Li, Tianxing and Akosah, Emmanuel S. and Liu, Qiang and Zhou, Xia

    The paper presents LiGaze, a low-power approach to gaze tracking tailored to VR. It relies on a few low-cost photodiodes, eliminating the need for cameras and active infrared emitters. Reusing light emitted from the VR screen, LiGaze leverages photodiodes around a VR lens to measure reflected screen light in different directions. It then infers gaze direction by exploiting pupil's light absorption property. The core of LiGaze is to deal with screen light dynamics and extract changes in reflected light related to pupil movement. We design and fabricate a LiGaze prototype using off-the-shelf photodiodes. Its sensing and computation consume 791μW in total.

  • ACM MobiSys 2017

    BackDoor: Making Microphones Hear Inaudible Sounds

    Roy, Nirupam and Hassanieh, Haitham and Choudhury, Romit Roy

    Consider sounds, say at 40kHz, that are completely outside the human's audible range (20kHz), as well as a microphone's recordable range (24kHz). We show that these high frequency sounds can be designed to become recordable by unmodified microphones, while remaining inaudible to humans. The core idea lies in exploiting non-linearities in microphone hardware. Briefly, we design the sound and play it on a speaker such that, after passing through the microphone's non-linear diaphragm and power-amplifier, the signal creates a "shadow" in the audible frequency range. The shadow can be regulated to carry data bits, thereby enabling an acoustic (but inaudible) communication channel to today's microphones. Other applications include jamming spy microphones in the environment, live watermarking of music in a concert, and even acoustic denial-of-service (DoS) attacks. This paper presents BackDoor, a system that develops the technical building blocks for harnessing this opportunity. Reported results achieve upwards of 4kbps for proximate data communication, as well as room-level privacy protection against electronic eavesdropping.
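The core effect described above, two inaudible tones mixing through a non-linearity into an audible "shadow" at their difference frequency, is easy to reproduce numerically. Below is a minimal sketch assuming a toy quadratic microphone model y = x + a*x^2; the coefficient, tone frequencies, and sample rate are illustrative choices, not values from the paper.

```python
import numpy as np

fs = 192_000                      # sample rate high enough to represent ultrasound
t = np.arange(fs) / fs            # one second of samples
f1, f2 = 40_000, 44_000           # two ultrasonic tones, inaudible to humans

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy model of a non-linear microphone response: y = x + a*x^2.
# The quadratic term multiplies the tones, producing a component at
# |f2 - f1| = 4 kHz, squarely inside the audible/recordable band.
y = x + 0.1 * x ** 2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

shadow_bin = np.argmin(np.abs(freqs - (f2 - f1)))
print(f"energy at {freqs[shadow_bin]:.0f} Hz:", spectrum[shadow_bin])
```

Within the audible band, the strongest component of the distorted signal sits exactly at the 4 kHz difference frequency, even though neither input tone is below 40 kHz; modulating the tone pair is what lets the shadow carry data.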

  • ACM MobiSys 2017

    Matthan: Drone Presence Detection by Identifying Physical Signatures in the Drone's RF Communication

    Nguyen, Phuc and Truong, Hoang and Ravindranathan, Mahesh and Nguyen, Anh and Han, Richard and Vu, Tam

    Drones are increasingly flying in sensitive airspace where their presence may cause harm, such as near airports, forest fires, large crowded events, secure buildings, and even jails. This problem is likely to expand given the rapid proliferation of drones for commerce, monitoring, recreation, and other applications. A cost-effective detection system is needed to warn of the presence of drones in such cases. In this paper, we explore the feasibility of inexpensive RF-based detection of the presence of drones. We examine whether physical characteristics of the drone, such as body vibration and body shifting, can be detected in the wireless signal transmitted by drones during communication. We consider whether the received drone signals are uniquely differentiated from other mobile wireless phenomena such as cars equipped with Wi-Fi or humans carrying a mobile phone. The sensitivity of detection at distances of hundreds of meters, as well as the accuracy of the overall detection system, are evaluated using a software-defined radio (SDR) implementation.

  • ACM MobiCom 2016

    Emotion recognition using wireless signals

    Zhao, Mingmin and Adib, Fadel and Katabi, Dina

    This paper demonstrates a new technology that can infer a person's emotions from RF signals reflected off his body. EQ-Radio transmits an RF signal and analyzes its reflections off a person's body to recognize his emotional state (happy, sad, etc.). The key enabler underlying EQ-Radio is a new algorithm for extracting the individual heartbeats from the wireless signal at an accuracy comparable to on-body ECG monitors. The resulting beats are then used to compute emotion-dependent features which feed a machine-learning emotion classifier. We describe the design and implementation of EQ-Radio, and demonstrate through a user study that its emotion recognition accuracy is on par with state-of-the-art emotion recognition systems that require a person to be hooked to an ECG monitor.

  • ACM SenSys 2016

    A Lightweight and Inexpensive In-ear Sensing System For Automatic Whole-night Sleep Stage Monitoring

    Nguyen, Anh and Alqurashi, Raghda and Raghebi, Zohreh and Banaei-Kashani, Farnoush and Halbower, Ann C. and Vu, Tam

    This paper introduces LIBS, a lightweight and inexpensive wearable sensing system that can capture electrical activities of the human brain, eyes, and facial muscles with two pairs of custom-built flexible electrodes, each of which is embedded in an off-the-shelf foam earplug. A supervised non-negative matrix factorization algorithm is also proposed to adaptively analyze and extract these bioelectrical signals from the single mixed in-ear channel collected by the sensor. While LIBS can enable a wide class of low-cost self-care, human-computer interaction, and health monitoring applications, we demonstrate its medical potential by developing an autonomous whole-night sleep staging system utilizing LIBS's outputs. We constructed a hardware prototype from off-the-shelf electronic components and used it to conduct 38 hours of sleep studies on 8 participants over a period of 30 days. Our evaluation results show that LIBS can monitor biosignals representing brain activities, eye movements, and muscle contractions with excellent fidelity, such that it can be used for sleep stage classification with an average accuracy of more than 95%.
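The separation step the abstract mentions can be sketched with plain, unsupervised non-negative matrix factorization via multiplicative updates. The paper's algorithm is supervised and tailored to physiological signals, so this is only an illustration of the underlying factorization idea, run on synthetic non-negative sources:

```python
import numpy as np

def nmf(X, rank, iters=200, seed=0):
    """Plain NMF via multiplicative updates: X ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + 1e-3
    H = rng.random((rank, X.shape[1])) + 1e-3
    for _ in range(iters):
        # Multiplicative updates keep W and H non-negative by construction
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy observation: two non-negative sources seen through a mixing matrix
t = np.linspace(0, 1, 400)
sources = np.vstack([np.abs(np.sin(8 * np.pi * t)), np.abs(np.sin(3 * np.pi * t))])
mixing = np.array([[0.9, 0.2], [0.3, 0.8], [0.5, 0.5]])
X = mixing @ sources                    # 3 observed mixed channels

W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

With a rank matching the number of sources, the factorization recovers a non-negative basis and activations that reconstruct the mixture closely; the supervision in the paper is what pins the recovered components to specific biosignals.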

  • ACM MobiSys 2016

    Reactive Control of Autonomous Drones

    Bregu, Endri and Casamassima, Nicola and Cantoni, Daniel and Mottola, Luca and Whitehouse, Kamin

    Aerial drones, ground robots, and aquatic rovers enable mobile applications that no other technology can realize with comparable flexibility and costs. In existing platforms, the low-level control enabling a drone's autonomous movement is currently realized in a time-triggered fashion, which simplifies implementations. In contrast, we conceive a notion of reactive control that supersedes the time-triggered approach by leveraging the characteristics of existing control logic and of the hardware it runs on. Using reactive control, control decisions are taken only when the need arises, based on observed changes in the navigation sensors. As a result, the rate of execution dynamically adapts to the circumstances. Compared to time-triggered control, this allows us to: i) attain more timely control decisions, ii) improve hardware utilization, iii) lessen the need to over-provision control rates. Based on 260+ hours of real-world experiments using three aerial drones, three different control algorithms, and three hardware platforms, we demonstrate, for example, up to 41% improvements in control accuracy and up to 22% improvements in flight time.
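The contrast between time-triggered and reactive control can be sketched with a toy loop that invokes the controller only when a simulated navigation reading has drifted past a threshold. The signal shape and threshold here are invented for illustration; the paper's reactive scheme is considerably more sophisticated than this event-triggering rule:

```python
import numpy as np

def time_triggered(readings, period):
    """Classic loop: run the controller every `period` samples, unconditionally."""
    return list(range(0, len(readings), period))

def reactive(readings, threshold):
    """Reactive loop: run the controller only when the navigation sensor
    has changed enough since the last control decision."""
    runs, last = [0], readings[0]
    for i, r in enumerate(readings[1:], start=1):
        if abs(r - last) >= threshold:
            runs.append(i)
            last = r
    return runs

# A toy attitude signal: long steady hover, a brief disturbance, then steady again
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(400), np.linspace(0, 5, 50), 5 * np.ones(350)])
signal += rng.normal(0, 0.01, len(signal))

tt = time_triggered(signal, period=4)
re = reactive(signal, threshold=0.3)
print(len(tt), "time-triggered runs vs", len(re), "reactive runs")
```

During the long steady segments the reactive loop is almost idle, while the fixed-rate loop keeps burning cycles; during the disturbance the reactive loop fires densely, which is where timely control decisions matter.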

  • ACM MobiSys 2016

    Practical Human Sensing in the Light

    Li, Tianxing and Liu, Qiang and Zhou, Xia

    We present StarLight, an infrastructure-based sensing system that reuses light emitted from ceiling LED panels to reconstruct fine-grained user skeleton postures continuously in real time. It relies on only a few (e.g., 20) photodiodes placed at optimized locations to passively capture low-level visual clues (light blockage information), with neither cameras capturing sensitive images, nor on-body devices, nor electromagnetic interference. It then aggregates the blockage information of a large number of light rays from LED panels and identifies best-fit 3D skeleton postures. StarLight greatly advances the prior light-based sensing design by dramatically reducing the number of intrusive sensors, overcoming furniture blockage, and supporting user mobility. We build and deploy StarLight in a 3.6 m x 4.8 m office room, with customized 20 LED panels and 20 photodiodes. Experiments show that StarLight achieves 13.6 degree mean angular error for five body joints and reconstructs a mobile skeleton at a high frame rate (40 FPS). StarLight enables a new unobtrusive sensing paradigm to augment today's mobile sensing for continuous and accurate behavioral monitoring.

Committee Members and SIGMOBILE Conferences (2-year term)

  • Chair: Heather Zheng, University of Chicago
  • Ex Chair: Cecilia Mascolo, University of Cambridge
  • SIGMOBILE EC Chair: Marco Gruteser, Rutgers University
  • HotMobile: Romit Roy Choudhury, University of Illinois Urbana-Champaign
  • MobiCom: Lin Zhong, Rice University
  • MobiHoc: Theodoros Salonidis, IBM Research
  • MobiSys: Landon Cox, Microsoft/Duke University
  • SenSys: Kay Römer, Graz University of Technology
  • UbiComp: Junehwa Song, Korea Advanced Institute of Science and Technology
  • Industry Member: Ranveer Chandra, Microsoft

Nomination Process and Paper Eligibility

Nominations come from three sources:

  • The SIGMOBILE Research Highlights Committee, working closely with the program chairs from the conferences they represent, will review and nominate papers from the SIGMOBILE conferences for consideration by the full committee.
  • Best paper nominees are nominated by default.
  • Any SIGMOBILE member may nominate a paper appearing in a SIGMOBILE conference. Authors may not nominate their own papers. Nominations can be made by emailing title, authors, conference and year of publication to the contact at the bottom of this page.

All papers published in the covered conferences within the previous half year will be considered by the committee. Papers published in conferences not in the above list are also eligible for consideration, but the committee does not take responsibility for proactively considering these venues. Community members may ask the committee to consider papers from such venues through the community nomination process, described below.

Community Candidates

Community members not on the committee may ask for papers to be considered by submitting to the committee chair a nominating proposal. Such a proposal must summarize the contribution of the paper and explain why the paper is suitable for inclusion in the SIGMOBILE Highlights series. Such a nominating proposal should be no more than a page in length. Periodic reminders of nominating deadlines will be sent to the SIGMOBILE mailing list.

Deadlines for Nomination

  • First Deadline: 31st August each year for papers published in HotMobile, MobiHoc, and MobiSys.
  • Second Deadline: 28th February each year for papers published in MobiCom, SenSys, and UbiComp.

Committee Candidates

Each member of the committee covering a particular conference will be responsible for reviewing the papers for that conference in consultation with others from that community. We anticipate that this committee member will at least contact the program chair for the relevant conference and ask the chair to suggest one or two papers for consideration. We encourage the program chair to solicit input from members of the program committee. The committee member will then bring potential papers to the committee for discussion.

Best Paper Candidates

All best papers at conferences in that half year will automatically be considered candidates.

Selection Process

The full committee (minus members with conflicts of interest) will discuss the candidate papers via email, including those arising from the committee, best paper candidates, and any community candidates. The committee will meet electronically twice a year, in early September and in early March.

The committee may solicit input from outside experts on the merits of any candidate papers. The committee will select papers for nomination taking into consideration technical quality, the ability to summarize the results in an 8-page paper, and the likely interest from computer scientists in other areas. For each nominated paper, the committee will send to CACM:

  • a copy of the paper
  • a description of why the paper merits publication in SIGMOBILE Highlights (1/2 page)
  • a list of possible people to write the Technical Perspective
  • consent to the nomination from the authors and prospective technical perspective writers

Conflicts of interest

Committee members may not nominate their own papers for consideration. Committee members may not participate in the discussion of whether one of their papers may be nominated to SIGMOBILE Highlights. Other conflicts of interest should be disclosed to the committee chair, who will decide how to handle them. A committee member may recuse himself or herself from the discussion of any paper for reasons of conflict of interest. Papers by committee members may be nominated through the community nomination process.