Apple unveiled upcoming accessibility features that cater to people with cognitive, vision, hearing, and mobility impairments.
They include "Personal Voice," which enables iPhones and iPads to generate a synthetic replica of a user's voice for use in face-to-face conversations, FaceTime, and audio calls. The features are expected to roll out with iOS 17, the next major iPhone software update.
- To create a Personal Voice, users would read a series of text prompts aloud, totaling 15 minutes of audio recorded on their iPhone or iPad.
- As part of Apple's Live Speech feature, users type the message they want to say and have it spoken aloud to others in their Personal Voice (a sketch of the underlying type-to-speak pattern follows this list).
- To protect user privacy, Apple said the features rely entirely on on-device machine learning.
- Additionally, Apple unveiled other accessibility enhancements, including Point and Speak, a new capability in Magnifier's Detection Mode that combines camera and LiDAR input with on-device machine learning to announce the text a user points to on physical objects, such as household appliance buttons (a rough sketch of the recognize-and-speak pattern also follows this list).
- These features are expected to debut in beta at Apple's WWDC in June before their public release in the fall, coinciding with the launch of the iPhone 15 lineup.
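Apple has not published Live Speech's internals, but its type-to-speak flow resembles what any app can build today with AVFoundation's public speech-synthesis API. The Swift sketch below illustrates that general pattern under that assumption; it is not Apple's implementation, and it speaks with a stock system voice rather than a Personal Voice, which was not exposed to developers at the time of this announcement.

```swift
import AVFoundation

// A minimal sketch of the type-to-speak pattern behind a feature like
// Live Speech, using AVSpeechSynthesizer (Apple's public text-to-speech
// API). Illustration only: the real feature is system-level and speaks
// in the user's Personal Voice, not the stock voice used here.
let synthesizer = AVSpeechSynthesizer()

func speakTypedMessage(_ message: String) {
    let utterance = AVSpeechUtterance(string: message)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US") // stock-voice stand-in
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance) // plays asynchronously through the device's audio output
}

speakTypedMessage("I'd like a tall latte, please.")
```

The synthesizer is deliberately stored outside the function: AVSpeechSynthesizer must stay alive until playback finishes, or the speech is cut off.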
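Similarly, the camera-plus-machine-learning half of Detection Mode can be approximated with the Vision framework's on-device text recognition feeding the same speech synthesizer. This is a rough sketch, not Apple's code; the shipping feature also fuses LiDAR depth data and finger tracking to announce only the text the user is pointing at, both of which are omitted here.

```swift
import Vision
import AVFoundation

// Sketch of recognize-and-speak: detect text in a camera frame on-device,
// then read it aloud. Apple's Point and Speak additionally uses the LiDAR
// scanner and finger tracking to narrow the announcement to what the user
// points at; that logic is not shown here.
let announcer = AVSpeechSynthesizer()

func announceText(in frame: CVPixelBuffer) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Collect the best candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        announcer.speak(AVSpeechUtterance(string: lines.joined(separator: ", ")))
    }
    request.recognitionLevel = .accurate // Vision text recognition runs on-device

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try? handler.perform([request]) // synchronous; run off the main thread in practice
}
```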