PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras enable thermal attacks, in which heat traces left behind by authentication can be used to reconstruct passwords. In this work we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the success rate of thermal attacks from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and for designers of authentication schemes on how to resist thermal attacks.
Research has brought forth a variety of authentication systems to mitigate observation attacks. However, there is little work on shoulder surfing situations in the real world. We provide the first evidence of real-world shoulder surfing by presenting the results of a user survey (N=174) in which we investigate actual stories about shoulder surfing on mobile devices from both users and observers. Our analysis indicates that shoulder surfing mainly occurs in an opportunistic, non-malicious way. It usually does not have serious consequences, but it evokes negative feelings for both parties, resulting in a variety of coping strategies. The observed data was personal in most cases and ranged from information about interests and hobbies to login data and intimate details about third parties and relationships. Thus, our work contributes evidence of shoulder surfing in the real world and informs implications for the design of privacy protection mechanisms.
Although recovering from errors is straightforward on most interfaces, public display systems pose unique design challenges. Namely, public display users interact for very short periods of time and are believed to abandon the display when interrupted or forced to deviate from the main task. To date, it is not well understood whether public display designers should enable users to correct errors (e.g., by asking users to confirm their input or giving them a chance to correct it), or aim for faster interaction and rely on other types of feedback to estimate errors. We conducted a field study in which we investigated users' willingness to correct their input on public displays. We report on our findings from an in-the-wild deployment of a public gaze-based voting system where we intentionally evoked system errors to see whether users would correct them. We found that public display users are willing to correct system errors provided that the correction is fast and straightforward. We discuss how our findings influence the choice of interaction methods for public displays; interaction methods that are highly usable but suffer from low accuracy can still be effective if users can "undo" their interactions.
In this work we show how reading text on a large display can be used to enable gaze interaction in public space. Our research is motivated by the fact that much of the content on public displays includes text. Hence, researchers and practitioners could greatly benefit from users being able to spontaneously interact, as well as to implicitly calibrate an eye tracker, while simply reading this text. In particular, we adapt Pursuits, a technique that correlates users' eye movements with moving on-screen targets. While prior work used abstract objects or dots as targets, we explore the use of Pursuits with text (read-and-pursue). Thereby we address the challenge that eye movements performed while reading interfere with the pursuit movements. Results from two user studies (N=37) show that Pursuits with text is feasible and can achieve accuracy similar to that of non-text-based pursuit approaches. While calibration is less accurate, it integrates smoothly with reading and allows the areas of the display the user is looking at to be identified.
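The core idea behind Pursuits can be sketched in a few lines: gaze samples are correlated with each on-screen target's trajectory over a sliding window, and the target whose motion best matches the gaze is selected. The sketch below is illustrative only (function names, the per-axis correlation, and the 0.8 threshold are assumptions, not the paper's implementation).

```python
# Illustrative sketch of the Pursuits selection principle:
# correlate gaze samples with each moving target's trajectory and
# select the target whose motion best matches the gaze.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_target(gaze, targets, threshold=0.8):
    """gaze: list of (x, y) samples; targets: dict name -> trajectory,
    where a trajectory is a list of (x, y) positions over the same window.
    Returns the best-matching target name, or None if none correlates."""
    best, best_corr = None, threshold
    gx, gy = zip(*gaze)
    for name, traj in targets.items():
        tx, ty = zip(*traj)
        # correlate horizontal and vertical components separately and
        # require both to exceed the threshold (a common Pursuits heuristic)
        corr = min(pearson(gx, tx), pearson(gy, ty))
        if corr > best_corr:
            best, best_corr = name, corr
    return best
```

In practice the correlation runs over a short sliding window of recent samples, and the threshold trades off selection speed against false activations.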
We propose a multimodal scheme, GazeTouchPass, that combines gaze and touch for shoulder-surfing resistant user authentication on mobile devices. GazeTouchPass allows passwords with multiple switches between input modalities during authentication. This requires attackers to simultaneously observe the device screen and the user's eyes to find the password. We evaluate the security and usability of GazeTouchPass in two user studies. Our findings show that GazeTouchPass is usable and significantly more secure than single-modal authentication against basic and even advanced shoulder-surfing attacks.
Smooth pursuits is a promising technique for calibration-free, and thus spontaneous, gaze interaction. We carried out a field study in which we deployed a game on a public display where participants used pursuits to select fish moving in linear and circular trajectories at different speeds. The study ran unattended for two days in a busy computer lab, resulting in a total of 56 interactions. Results show that linear trajectories are significantly faster to select via pursuits than circular trajectories. We also found that pursuits is well perceived by users, who find it fast and responsive.
Despite the variety of senses that humans possess, the vast majority of user interfaces target human vision and hearing. This work examines the suitability of air streams for communicating information alongside personal computers. Our prototype, the AirDisplay, utilizes the intensity and direction of air streams to exploit humans' ability to feel mechanical pressure (mechanoreception).
We carried out an experiment to verify that users can perceive multiple air streams separately, and to determine the appropriate number of air sources, air stream intensity and configuration of the device.
The game asks players to name things associated with a given topic. Players earn higher scores if their submissions match those of many other players. This way, we collect associations from players, ranked by number of occurrences, while offering them a fun game to play. You can try out the game on The Knowledge Test Game website or play The Knowledge Test Game on Facebook.
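The aggregation step described above could look roughly like the following sketch (an illustration of the occurrence-based ranking, not the game's actual code; the function and tuple layout are assumptions):

```python
# Illustrative sketch: aggregate player submissions per topic and rank
# each association by how many players submitted it.
from collections import Counter

def rank_associations(submissions):
    """submissions: list of (player, topic, answer) tuples.
    Returns [((topic, answer), count), ...] sorted by count, descending."""
    counts = Counter()
    for player, topic, answer in submissions:
        # normalize answers so "Cat " and "cat" count as the same association
        counts[(topic, answer.strip().lower())] += 1
    return counts.most_common()
```

The normalization step matters in practice: without it, trivial spelling and casing differences fragment the counts and weaken the collected association data.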
CAPTCHAs are widely used across the web to protect websites from bots that automatically fill online forms. Arabic websites also use CAPTCHAs, even though the displayed characters are not Arabic.
AreCAPTCHA is an Arabic version of reCAPTCHA. It uses distorted Arabic words to determine whether or not the user is a human. One of the two displayed words is scanned from an Arabic book or newspaper that was never digitized, and users do not know which one is which. As users fill in AreCAPTCHAs, they not only verify that they are human but also contribute to digitizing Arabic literature, making AreCAPTCHA an application of human computation.
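The reCAPTCHA-style mechanism behind this can be sketched as follows: the answer to the control word (whose solution is known) decides whether the user passes, and the answer to the scanned word is recorded as a digitization vote. This is an illustrative sketch with hypothetical names, not the project's actual code.

```python
# Illustrative sketch of a two-word CAPTCHA check: one control word with a
# known solution, one scanned word whose transcription is being collected.

def verify(challenge, answer_known, answer_unknown, votes):
    """challenge: dict with 'known_word' (the solution on file) and
    'unknown_id' (an identifier for the not-yet-digitized scanned word).
    votes: dict mapping unknown_id -> list of collected transcriptions."""
    if answer_known != challenge["known_word"]:
        return False  # failed the control word: treat as a bot
    # the user passed; record their reading of the scanned word
    votes.setdefault(challenge["unknown_id"], []).append(answer_unknown)
    return True
```

Once several users agree on a transcription for a given scanned word, it can be promoted to a known word, which is how the digitization effort accumulates.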
Teammates: Menna Bakry.
The forms of Arabic used in daily life differ greatly from one region to another. In fact, studies show that in each of the 22 countries of the Arab world there are at least five levels of Arabic.
Starting with the Egyptian dialect, we implemented a Game With A Purpose (GWAP) that collects phrases in the Egyptian dialect that correspond to Modern Standard Arabic (MSA) ones. We envision several GWAPs that collect mappings from different dialects to MSA, creating a data set that can be used to facilitate communication between people from different regions. For example, it can help in translating one dialect to another, or even in translating non-standard Arabic to other languages.
Prolog Server Faces was one of my early projects. Inspired by Java Server Faces, PSF is a stateful, event-driven web application framework written in Prolog and XML. PSF enforces the MVC design pattern and provides an extensive, easily extensible tag library for compact XML that is transformed into XHTML.