
The new iPad Pros are still the best tablets for basics like reading, watching videos, and playing games, but accessories like the Magic Keyboard and Apple Pencil Pro, plus the performance of the M4 chip, let them work more like a laptop or drawing tablet than ever before.

But is that it? Is the iPad’s future only incremental updates that inch it closer to laptop functionality but never morph it fully into a MacBook with a Surface Pro-like form factor, held back by software limitations (iPadOS vs. macOS) and input differences (touchscreen vs. mouse and keyboard)?

That’s the concern, but Apple might have more in store for its magical sheet of glass. As part of Global Accessibility Awareness Day, Apple previewed eye tracking control on the iPad. Though announced as an accessibility feature for iPhones and iPads coming later this year (presumably in iOS 18), eye tracking is also one of the flagship features of the Vision Pro, where it’s enabled by a multitude of cameras and sensors inside the headset. Apple’s new accessibility-focused eye tracking brings similarly granular control to the two-dimensional interfaces of the iPad and iPhone.

Multi-touch transformed Apple into a $3 trillion global powerhouse. Could eye tracking reinvent the iPad or pave the way for smart home devices such as the long-rumored HomePod with a built-in display?

Hands-Free iPad

According to Apple, eye tracking is enabled entirely by on-device machine learning interpreting what the front-facing camera on an iPad (or iPhone) can see. Moving your gaze across iPadOS (or iOS) highlights individual interface elements, and lingering on one for a moment selects it. Apple calls this Dwell Control, and it’s already available for third-party assistive devices.
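To make the dwell mechanic concrete, here’s a minimal sketch in Swift of how dwell-based selection could work in principle. This is not Apple’s implementation: the GazeDwellSelector type, its one-second threshold, and the idea of mapping an estimated gaze point onto element frames are all illustrative assumptions.

```swift
import Foundation
import CoreGraphics

/// Hypothetical dwell-based selector: feed it gaze samples and the frames of
/// on-screen elements, and it "selects" an element once the gaze has rested
/// on it for longer than the dwell threshold.
final class GazeDwellSelector {
    private let dwellThreshold: TimeInterval
    private var currentElementID: String?
    private var dwellStart: TimeInterval?

    /// Called when an element has been dwelled on long enough.
    var onSelect: ((String) -> Void)?

    init(dwellThreshold: TimeInterval = 1.0) {
        self.dwellThreshold = dwellThreshold
    }

    /// `gaze` is an estimated on-screen gaze point, `elements` maps element IDs
    /// to their frames, and `timestamp` is the sample time in seconds.
    func process(gaze: CGPoint, elements: [String: CGRect], timestamp: TimeInterval) {
        // Find whichever element the gaze currently falls inside, if any.
        let hit = elements.first { $0.value.contains(gaze) }?.key

        guard hit == currentElementID else {
            // Gaze moved to a different element (or off all elements): restart the timer.
            currentElementID = hit
            dwellStart = hit != nil ? timestamp : nil
            return
        }

        // Still on the same element: check whether the dwell threshold has elapsed.
        if let start = dwellStart, timestamp - start >= dwellThreshold {
            if let id = currentElementID { onSelect?(id) }
            dwellStart = nil          // Require the gaze to leave and return before re-selecting.
            currentElementID = nil
        }
    }
}

// Example: gaze lingers on a "Play" button for just over a second, triggering selection once.
let selector = GazeDwellSelector(dwellThreshold: 1.0)
selector.onSelect = { id in print("Selected \(id)") }

let elements = ["Play": CGRect(x: 100, y: 100, width: 80, height: 44)]
for t in stride(from: 0.0, through: 1.2, by: 0.1) {
    selector.process(gaze: CGPoint(x: 120, y: 120), elements: elements, timestamp: t)
}
```

The dwell logic itself is simple; the hard part, and presumably where the on-device machine learning comes in, is estimating a reliable gaze point from camera frames in the first place.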

Naturally, because all of the capture and interpretation happens securely on device, Apple can’t see anything the camera records. The real novelty of the feature is that no external hardware is necessary, such as the kind sold by a leading eye-tracking company like Tobii. If that name sounds familiar, it’s because Tobii’s occasionally gimmicky eye-tracking technology has been integrated into consumer laptops and displays over the last decade. The company’s bigger business, however, is assistive tech that not only lets you control devices with your eyes but also lets you communicate through them by generating speech from what you type.

Apple’s eye-tracking tech will have to be vetted by actual users to determine whether it makes for a suitable replacement for existing setups, but it’s clearly been in the works for a while now. Apple purchased SensoMotoric Instruments, a German company working on eye-tracking tech, in 2017, presumably to beef up its development of the Vision Pro, but this new eye-tracking feature could be seen as a trickle-down benefit of that work.

Just as the Apple Watch Series 9’s Double Tap, which senses finger movements to control the smartwatch without touching it, can be traced back to the AssistiveTouch accessibility feature, eye tracking could graduate into a headlining feature years from now. And it doesn’t necessarily need to be limited to just tracking your eyes.

Powering a HomePod Smart Display?

Take, for example, the Pixel 4. Google introduced the Android phone in 2019 with a gesture system called Motion Sense. Using “Project Soli” radar tech (one of several Google research projects that never seemed to go anywhere), the Pixel 4 could not only detect when you were facing the phone and wake the screen, but also recognize hand gestures for playing, pausing, and skipping music tracks. The concept was abandoned the next year, but that doesn’t mean another company couldn’t make it work. It’s not impossible to imagine a similar feature making its way to the iPhone or iPad, powered by machine learning. Apple also acquired PrimeSense, the company behind the motion-sensing technology that powered the Kinect for Xbox, back in 2013, so the idea isn’t out of left field.

Or, if you consider reports that Apple is working on its own smart home display, like the Amazon Echo Show or Google Nest Hub 2, a hands-free control method makes even more sense. Smart home hubs like these were paired with voice assistants because voice lets you interact with them even when your hands are full, dirty, or wet. They still have touchscreens for when you want to tap directly, but they’re just as useful without getting any fingers involved. Eye tracking would make that whole experience even simpler.

If Apple opts to go the Google Pixel Tablet route and release an iPad that can be docked and become a smart display, then maybe eye or hand tracking is only enabled in smart display mode. Or maybe there’s some visionOS-inspired version of iOS, iPadOS, and macOS that leverages hands-free interactions in the same way Apple’s headset does. The point is, there’s a huge amount of untapped potential here that’s just waiting for a giant tech company to take seriously.

Beyond Multi-Touch

I’ve written before that leaning into the things that make the iPad different from Apple’s other products is the best path out of the confusing in-between place the tablet has been forced into, but completely zigging where others have zagged is an option, too.

Just because eye and hand tracking have been rolled out as the multi-touch of Apple’s spatial computing future doesn’t mean they can’t be useful in other ways, on other devices. Eye tracking will make using an iPad or iPhone more accessible for everyone today, and there’s a good chance it becomes the preferred way we interact with a non-Vision Pro device in the future. Who doesn’t want to use hand gestures, Minority Report and Tony Stark-style, if the option is there? And even if eye and hand tracking never become the primary way of controlling Apple devices, they could at least make for good headlining features to sell more iPads.
