Face control is an accessibility feature that allows users to control the cursor by moving their head and perform various actions using facial gestures. In celebration of International Day of Persons with Disabilities, we’re sharing more details about how we developed it and highlighting the importance of user-centric design.
What is Face control?
Face control is an AI-powered accessibility feature for Chromebooks that enables people to control their mouse cursor and perform actions using facial movements and gestures. The feature uses a series of machine learning models to generate a 3D mesh of 478 specific facial points, enabling precise, real-time gesture detection for hands-free control. This innovative technology helps ensure people with motor impairments can interact with their Chromebooks without relying solely on traditional input methods like keyboards and mice. Face control will join more than 20 other helpful accessibility features that are already built into Chromebooks, like automatic clicks, ChromeVox and Dictation.
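The post doesn't detail the implementation, but Google's open-source MediaPipe Face Landmarker task produces exactly this kind of 478-point mesh along with per-gesture "blendshape" scores, so it makes a reasonable stand-in for a minimal sketch of the detection step. The model file, image path and landmark index below are illustrative assumptions, not ChromeOS code:

```python
# Minimal sketch: detecting a 478-point face mesh and per-gesture scores with
# MediaPipe's Face Landmarker. Paths and the landmark index are assumptions.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # named facial-movement scores (jawOpen, browInnerUp, ...)
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

frame = mp.Image.create_from_file("webcam_frame.png")  # one camera frame
result = landmarker.detect(frame)

# 478 3D landmarks for the first detected face; the nose tip (index 1 in the
# canonical mesh) is a natural anchor for head-driven cursor movement.
landmarks = result.face_landmarks[0]
nose_tip = landmarks[1]
print(len(landmarks), nose_tip.x, nose_tip.y, nose_tip.z)

# Blendshape scores in [0, 1], one per named facial gesture.
for category in result.face_blendshapes[0]:
    print(category.category_name, round(category.score, 3))
```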
A focus on solving real problems for real people
Inspiration for the project came from Project GameFace and Lance Carr, a gamer with muscular dystrophy, who demonstrated the potential of face control technology for gaming and for areas far beyond. Based on feedback from the community, we know that many existing accessibility options for controlling a computer are often slow and cumbersome. This means that for a large segment of the population, using a laptop effectively can be a daily challenge. Recognizing this, our team set out to create a solution that would make technology more accessible and help people with motor impairments interact with their devices effortlessly.
The future of inclusive education
With 50 million students and educators using Chromebooks every day, Face control has the potential to transform learning for millions of students. By providing a more accessible way to interact with Chromebooks, Face control can help students with diverse learning needs participate more fully and independently in the classroom, while using the same device as their peers. Our team hopes students with limited mobility will be able to easily navigate educational apps, type essays using dictation or even collaborate with peers on group projects, all hands-free.
Face control also has the potential to help people of all backgrounds control their Chromebooks hands-free, whether they want to send a message to the school chat using dictation or advance the slides of a presentation with facial gestures.
From right to left, team members Mark Schillaci, Katy Dektar, Kyungjun Lee, and Jonathan Bernal at Google I/O 2024
A collaborative endeavor
Feedback from the disability community was essential throughout our development process. We initially designed the feature with a limited number of facial gestures. But through user testing, we learned there was a need for a broader range of gestures so people could more fully use their Chromebooks. Now, Face control can recognize up to 18 different gestures, and we remain committed to gathering even more feedback to improve the experience.
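To make "recognizing a gesture" concrete, here is a hypothetical sketch of how per-gesture scores from a detector like the one above could be mapped to actions once they cross a confidence threshold. The gesture names follow MediaPipe's blendshape vocabulary, but the specific action mapping and threshold are assumptions rather than Face control's actual configuration:

```python
# Hypothetical mapping from facial-gesture scores to cursor actions.
# Gesture names match MediaPipe blendshape categories; the actions and
# threshold below are illustrative, not Face control's real settings.
GESTURE_ACTIONS = {
    "jawOpen": "left_click",        # open your mouth
    "browInnerUp": "right_click",   # raise your eyebrows
    "mouthSmileLeft": "scroll_down",
}
TRIGGER_THRESHOLD = 0.6  # score above which a gesture counts as performed


def actions_for_frame(blendshapes) -> list[str]:
    """Return the actions whose gesture score crosses the threshold this frame."""
    scores = {c.category_name: c.score for c in blendshapes}
    return [
        action
        for gesture, action in GESTURE_ACTIONS.items()
        if scores.get(gesture, 0.0) >= TRIGGER_THRESHOLD
    ]


# Example: feed it the blendshapes detected for one frame.
# actions = actions_for_frame(result.face_blendshapes[0])
```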
Face control is more than just a feature: it represents a step toward a future where technology fully empowers everyone to learn, work and play. Face control is available now in ChromeOS beta and will roll out to all users early next year. It requires a minimum of 8GB of RAM for the best experience.
For more information about Google for Education's accessibility features, visit: edu.google.com/accessibility.