
Understanding what the user is looking at

  • by Lawrence Yau
  • 4 min

[Image: Tobii Pico Neo 3]

In the realm of user interface design, eye tracking technology has emerged as a tantalizing frontier, promising seamless interaction and control with only a glance. However, beneath this allure lies a landscape of challenges unique to gaze-driven interfaces. In our journey through this terrain, we'll explore the intricacies of eye-based input and unveil strategies to surmount its inherent limitations.

At the heart of eye tracking lies the task of discerning what the user is looking at. Yet this seemingly straightforward goal is more complicated than one might imagine. Unlike conventional input methods such as a mouse or touchscreen, where interactions are precise and deliberate, the gaze is in constant motion. Even during periods of fixation, subtle involuntary movements persist, introducing uncertainty into the equation. Discrepancies between the measured gaze and the user's actual focus add yet another layer of intricacy to interface design.
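To make that jitter concrete, here is a minimal sketch of one common mitigation: low-pass filtering raw gaze samples so the reported point of regard does not twitch with every micro-movement. The class name, the (x, y) sample format, and the alpha value are illustrative assumptions for this sketch, not details of any Tobii API.

```python
class GazeSmoother:
    """Exponentially smooth raw gaze samples to damp micro-movements.

    Even during a fixation the eye jitters (tremor, microsaccades), so
    raw samples are too noisy to drive a cursor directly. A small
    low-pass filter steadies the point of regard at the cost of a
    little lag; alpha trades responsiveness against stability, and the
    default here is illustrative, not a recommended value.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        """Feed one (x, y) gaze sample; returns the smoothed point."""
        if self.value is None:
            self.value = sample
        else:
            self.value = tuple(
                self.alpha * s + (1 - self.alpha) * v
                for s, v in zip(sample, self.value)
            )
        return self.value
```

The lag this introduces is the price of stability: a heavier filter feels steadier but makes the cursor trail behind quick glances.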

To harness the potential of gaze input, we must first understand its fundamental principles. The gaze vector, originating from the eye, serves as our guide, directing attention toward interactive elements within the interface. Translating this gaze into actionable input, however, presents a myriad of challenges.
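As a rough illustration of the principle, the gaze vector can be treated as a ray from the eye and intersected with the plane of a UI panel to find the point of regard. Everything below (the use of NumPy, a flat panel, the function name) is an assumption made for this sketch, not part of any particular eye tracking SDK.

```python
import numpy as np

def gaze_hit_point(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with the plane of a UI panel.

    origin, direction: the gaze ray reported by the eye tracker
    (world space). plane_point, plane_normal: any point on the
    panel and its surface normal. Returns the world-space hit
    point, or None if the gaze is parallel to the panel or the
    panel lies behind the user.
    """
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:   # gaze runs parallel to the panel
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:               # panel is behind the gaze origin
        return None
    return origin + t * direction
```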

[Illustration: how eye tracking works]

What is the best way to deal with eye tracker inaccuracy? 

One of the primary hurdles is the disparity in resolution and stability between gaze tracking and traditional input methods. While our eyes possess remarkable acuity, eye tracking technology often falls short, necessitating larger and more forgiving targets to accommodate the imprecision. This shift towards larger targets, while effective, compromises the aesthetic integrity of the interface and consumes valuable screen real estate. 

To address this issue, designers have devised a range of solutions, from expanding hit regions to incorporating machine learning, each with its own advantages and drawbacks. For instance, larger, center-weighted targets enhance accessibility but may detract from the visual appeal of the interface, while machine learning approaches offer greater flexibility at the cost of computational overhead and complexity.
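A minimal sketch of the first two ideas, assuming the gaze point has already been projected into the panel's 2D coordinates: each target's hit region is expanded by an invisible margin, and when more than one region could claim the gaze, the candidate whose center is closest wins. Names, units, and thresholds here are all illustrative.

```python
import numpy as np

def pick_gaze_target(hit_point, targets, max_dist=0.05, margin=0.02):
    """Choose the UI target the user is most plausibly looking at.

    Each target is (name, center, half_size) in panel coordinates.
    The hit region is the visible rectangle expanded by `margin`,
    and overlapping claims are resolved by distance to the target's
    center, so small tracking errors snap to the nearest plausible
    element. Units and thresholds are illustrative only.
    """
    best, best_dist = None, max_dist
    for name, center, half_size in targets:
        offset = np.abs(hit_point - center)
        if np.all(offset <= half_size + margin):  # inside expanded region
            dist = np.linalg.norm(hit_point - center)
            if dist < best_dist:                  # center-weighted tie-break
                best, best_dist = name, dist
    return best
```

Snapping to the nearest center is what lets the visible buttons stay small while their effective targets remain forgiving, which is exactly the trade-off described above.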

In navigating this landscape of solutions, it's essential to weigh the trade-offs carefully. Expanded hit regions integrate seamlessly with existing designs, while a visible gaze cursor offers valuable feedback at the risk of becoming a distraction. Explicit disambiguation, meanwhile, presents a familiar interaction pattern but requires careful implementation to avoid user frustration.
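One common confirmation pattern in gaze interfaces is dwell-time selection: an element activates only after the gaze rests on it for a set interval, filtering out stray glances (the classic "Midas touch" problem). The sketch below is a hypothetical frame-loop helper, not code from any Tobii SDK, and the 0.6-second threshold is illustrative.

```python
import time

class DwellSelector:
    """Require the gaze to rest on a target before committing.

    The element is activated only after `dwell_s` seconds of
    continuous fixation, so stray glances do not trigger actions.
    Too short a threshold causes accidental activations; too long
    feels sluggish.
    """

    def __init__(self, dwell_s=0.6):
        self.dwell_s = dwell_s
        self.current = None
        self.since = None

    def update(self, target):
        """Feed the currently gazed target each frame; returns the
        target once its dwell threshold is reached, else None."""
        now = time.monotonic()
        if target != self.current:
            self.current, self.since = target, now
            return None
        if target is not None and now - self.since >= self.dwell_s:
            self.since = float("inf")  # fire once per fixation
            return target
        return None
```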

Ultimately, the choice of strategy depends on the specific requirements of the interface and the preferences of its users. Whether opting for simplicity or sophistication, designers must remain vigilant, continually refining their approach to meet the evolving needs of gaze-driven interaction. 

[Image: Tobii Pro Lab analysis]

In conclusion, while the path to effective gaze-based interfaces comes with its challenges, it is also brimming with possibilities. By embracing innovation and attending to the nuances of eye tracking technology, designers can unlock new realms of interaction, ushering in a future where control lies at the blink of an eye. For a deeper dive into solutions for dealing with eye tracking inaccuracy, read the full learn article: Building for UX: Connecting Eye Gaze to UI Objects.

Written by

  • Tobii employee

    Lawrence Yau

Sales Solution Architect, Tobii

Lawrence is currently a Solution Architect in Tobii's XR, Screen-based, and Automotive Integration Sales team, where he shares his excitement and know-how about the ways attention computing will fuse technology's capabilities with human intent. At Tobii, Lawrence is captivated by the numerous ways that eye tracking enables natural digital experiences, provides opportunities to improve ourselves and others, and shifts behavior to achieve more satisfying and sustainable lives. With these transformative goals, he is invested in the success of those who are exploring and adopting eye tracking technologies, and he is delighted to share his knowledge and passion with the XR community.

His restless curiosity for humanizing technology has taken his career through facilitating the integration of eye tracking technologies, developing conversational AI agents, designing the user experience for data governance applications, and building e-learning delivery and development tools. Lawrence received his BE in Electrical Engineering at The Cooper Union for the Advancement of Science and Art, and his MHCI at the Human-Computer Interaction Institute of Carnegie Mellon University.
