New GPT-4 app could be ‘life-changing’ for visually impaired people
The first app to integrate GPT-4’s image recognition capabilities has been described by visually impaired users as “life-changing.”
Be My Eyes, a Danish startup, has applied the AI model to a new feature for blind and visually impaired people. The image recognition tool, called ‘Virtual Volunteer’, can answer questions about any image it is sent.
For example, imagine a user is hungry. They can simply photograph an ingredient and request related recipes. If they’d rather eat out, they can snap a photo of a map and get directions to a restaurant. Upon arrival, they can take a photo of the menu and hear the options read aloud. Then, when they want to work off the added calories at a gym, they can use their smartphone camera to find a treadmill.
“I know we’re in the middle of an AI hype cycle right now, but several of our beta testers have used the phrase ‘life-changing’ when describing the product,” Mike Buckley, the CEO of Be My Eyes, tells TNW.
“This has an opportunity to be transformative by providing the community with unprecedented resources to better navigate physical environments, address everyday needs and gain greater independence.”
Virtual Volunteer is built on an upgrade to OpenAI’s software. Unlike previous iterations of the company’s vaunted models, GPT-4 is multimodal, meaning it can parse both images and text as inputs.
Be My Eyes seized the opportunity with both hands to test the new functionality. While image-to-text systems are nothing new, the startup had never been convinced of their performance.
“From too many errors to the inability to converse, the tools available in the market were not equipped to solve many of our community’s needs,” says Buckley. “The image recognition that GPT-4 provides is superior, and the analytic and conversational layers powered by OpenAI exponentially increase value and utility.”
Be My Eyes previously supported users exclusively with human volunteers. According to OpenAI, the new feature can generate the same level of context and understanding. But if the user doesn’t get a good answer or simply prefers a human connection, they can still call a volunteer.

Despite the promising initial results, Buckley insists that the free service will be rolled out with caution. The beta testers and wider community will play a central role in determining this process.
Ultimately, Buckley believes the platform will provide users with both support and opportunity. Be My Eyes could soon help companies better serve their customers by putting accessibility first.
“It’s safe to say that the technology can not only empower people who are blind or have low vision, but also provide a platform for the community to share even more of their talents with the rest of the world,” says Buckley. “To me, that’s an incredibly attractive opportunity.”
If you or someone you know is visually impaired and would like to test the Virtual Volunteer, you can register here for the waiting list.