Google Rolls Out AI-Powered Accessibility Features for Pixel and Android Devices
Google unveiled on Tuesday new accessibility features powered by artificial intelligence (AI) for Pixel smartphones and Android devices. There are four new features in total: two are available on all Android devices, while the other two are exclusive to Pixel smartphones. They are aimed at people who are deaf, have a speech impairment, or have low vision or vision loss. The additions include Guided Frame, new AI capabilities in the Magnifier app, and improvements to Live Transcribe and Live Captions.
In a blog post, the tech giant said it is committed to working with people with disabilities and wants to introduce new accessibility tools and ideas that make technology more accessible to everyone.
The first feature, Guided Frame, is exclusive to the Pixel Camera app. It gives users spoken guidance to find the best camera angle and position their faces within the frame, and is aimed at people with vision loss or low vision. Google says the feature will prompt users to tilt their faces up or down, or pan left or right, before the camera automatically takes the picture. It will also let the user know when the lighting is insufficient so they can choose a more suitable frame.
Guided Frame can now be enabled directly from the camera settings; previously, it was only accessible through Android's screen reader, TalkBack.
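Google has not published how Guided Frame is implemented, but the basic idea of spoken framing guidance can be sketched with public Android tooling: detect a face in each camera frame, compare its position to the centre of the frame, and speak a correction. The snippet below is a minimal illustration of that pattern using ML Kit face detection and Android's TextToSpeech; the FramingAssistant class, thresholds, and phrases are assumptions made for the example, not Google's code.

```kotlin
// Minimal sketch (not Google's implementation) of spoken framing hints
// built from ML Kit face detection plus Android's TextToSpeech.
// Thresholds and phrases are illustrative assumptions.
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

class FramingAssistant(context: Context) {
    private val detector = FaceDetection.getClient()
    private val tts = TextToSpeech(context) { /* init status ignored in this sketch */ }

    // Analyze one camera frame and speak a hint about how to reposition the face.
    fun analyzeFrame(frame: Bitmap) {
        val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
        detector.process(image).addOnSuccessListener { faces ->
            val hint = if (faces.isEmpty()) {
                "No face detected. Move the phone slowly."
            } else {
                val box = faces[0].boundingBox
                when {
                    box.exactCenterX() < frame.width * 0.35f -> "Pan right"
                    box.exactCenterX() > frame.width * 0.65f -> "Pan left"
                    box.exactCenterY() < frame.height * 0.35f -> "Tilt down"
                    box.exactCenterY() > frame.height * 0.65f -> "Tilt up"
                    else -> "Hold still" // face is roughly centered; ready to capture
                }
            }
            tts.speak(hint, TextToSpeech.QUEUE_FLUSH, null, "framing-hint")
        }
    }
}
```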
Another Pixel-exclusive addition is an upgrade to the Magnifier app. The app, released last year, lets users zoom into their surroundings with the camera to read signboards or find items on a menu board. Google now lets users search for specific words in their environment using AI.
The AI automatically zooms in on the searched word, so users can, for example, find information about their flight at the airport or locate a specific dish in a restaurant. A picture-in-picture mode has also been added: the searched word stays locked in the larger window while a zoomed-out view is shown in a smaller one. Users can also switch between the camera's lenses for different tasks, and the app now supports using the front-facing camera as a mirror.
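Google has not detailed how the Magnifier's word search works under the hood, but the general pattern of finding a word in a camera frame can be sketched with ML Kit's on-device text recognition: recognize the text in the frame, search the recognized elements for the query, and hand the matching bounding box to a zoom and picture-in-picture UI. The findWordRegion helper below is a hypothetical illustration of that idea, not the app's code.

```kotlin
// Minimal sketch: locate a searched word in a camera frame with ML Kit
// text recognition and return the region an app could zoom to.
import android.graphics.Bitmap
import android.graphics.Rect
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

fun findWordRegion(frame: Bitmap, query: String, onFound: (Rect?) -> Unit) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(frame, 0))
        .addOnSuccessListener { result ->
            // Walk blocks -> lines -> elements and take the bounding box of the
            // first element matching the searched word (case-insensitive).
            val match = result.textBlocks
                .flatMap { it.lines }
                .flatMap { it.elements }
                .firstOrNull { it.text.equals(query, ignoreCase = true) }
            onFound(match?.boundingBox) // null if the word is not visible in this frame
        }
        .addOnFailureListener { onFound(null) }
}
```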
There is also a new Live Transcribe update that is only compatible with smartphones with foldable screens. In dual-screen mode, the feature can now display a separate transcription for each speaker. If two people are seated across a table from each other, the smartphone can be placed in the middle so that each half of the screen shows the other person's words; according to Google, this makes it easier for everyone to follow the conversation.
The Live Captions feature is also being upgraded. It now supports seven additional languages: Chinese, Korean, Polish, Portuguese, Russian, Turkish, and Vietnamese. Users will be able to see real-time captions in these languages whenever their device plays audio.
According to Google, these languages will also be available on-device for Live Transcribe, bringing the total number of languages supported offline to 15. Users will no longer need an Internet connection to transcribe in these languages; with a connection, the feature supports 120 languages.
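For developers curious what on-device transcription looks like outside Google's own apps, Android exposes a public offline path through SpeechRecognizer. The sketch below is a general illustration of that API under stated assumptions (Android 12 / API 31 or later, the RECORD_AUDIO permission granted, and an offline language pack installed); it is not how Live Transcribe itself is built.

```kotlin
// Minimal sketch of on-device (offline) speech recognition with Android's
// public SpeechRecognizer API. Assumes API 31+, RECORD_AUDIO permission,
// and a downloaded offline language pack for the requested language.
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

fun startOfflineTranscription(context: Context, languageTag: String, onText: (String) -> Unit) {
    val recognizer = SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onPartialResults(partialResults: Bundle) {
            partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onText) // stream interim text as it arrives
        }
        override fun onResults(results: Bundle) {
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onText) // final text for this utterance
        }
        // Remaining callbacks are not needed for this sketch.
        override fun onReadyForSpeech(params: Bundle) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onEvent(eventType: Int, params: Bundle) {}
    })
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_LANGUAGE, languageTag) // e.g. "ko-KR"
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
    }
    recognizer.startListening(intent)
}
```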