Leaving no one behind
The potential for AI to create a more inclusive world for people with disabilities
Artificial intelligence has dominated public discourse in recent times, with significant advancements that have already begun to revolutionize various fields – accessibility among them. AI has the potential to change the lives of the more than a billion people who live with a disability, empowering those with physical, cognitive, hearing, or visual impairments to access and enjoy the same services as everyone else.
According to the World Health Organization (WHO), 1 in 6 people worldwide experience significant disability, whether temporary or permanent. It’s time to talk about wielding AI for inclusivity, and how we can leverage this technology to ensure accessibility for everyone in our rapidly changing world.
Why inclusive AI is so important
While the AI revolution will likely bring more benefit than harm, we need to consider the risks and implications of AI and big data early and often to ensure that the tools we are building live up to principles of inclusion and equity.
Take streaming and entertainment as an example. Video, music, and gaming platforms have invested heavily in algorithms that recommend the right content to you based on your preferences and current circumstances. This technology has improved the experience for billions of users. But what if you depend on audio descriptions or closed captions? As of today, these are not even considered in the data used to generate recommendations. On most platforms, there’s no way to configure preferences or settings related to visual or hearing impairments. So for people with visual or hearing disabilities, these new features add nothing – they only create noise.
Another example is AI-based video analysis, which is used today to monitor online exams and online job interviews, the idea being to eliminate bias by having all participants monitored by a neutral machine. People who are neurodivergent, or who have had a stroke, however, may react differently and use facial expressions that the AI has not been trained to interpret.
Furthermore, voice recognition and AI-based image descriptions are not as accurate as we’d like to think. Voice recognition error rates have been shown to increase dramatically for minority speakers and strong accents, and these systems can be entirely unusable for people with speech impediments. This often blocks them from interacting with AI-based telephone systems to reach customer service, and from having automatic caption systems transcribe what they’re saying. And while it may be obvious when a voice recognition system is failing to transcribe someone properly and producing gibberish, such failures are much harder to detect in an AI-generated image description. A person with visual impairments may have no way to verify whether the description provided is biased or completely inaccurate.
These glaring omissions mean that, in the end, such AI tools are designed to work properly only for a subset of the population. This is why it’s crucial that people with disabilities are actively involved in the design, training, testing, and implementation of AI systems, to ensure that the final product is built as inclusively and fairly as possible.
How we can use AI to empower people with disabilities
If we’re mindful about making AI inclusive, it has the potential to customize experiences to varying needs and perspectives in ways that used to be difficult or even impossible for many. Some of these concepts are still in the early stages of development – but let’s look at the ideas and how they are being brought to life.
- Facilitating communication
While voice recognition today is far from perfect, the first AI tools have been launched to address these issues and make speech interfaces more accessible to people with speech impairments by translating their speech patterns into fluent conversation. Additionally, captioning glasses that project captions visible only to the wearer are currently being tested. While many hearing-impaired people can lip-read in one-to-one conversations, these AI-powered glasses make it much easier for them to participate in meetings with multiple attendees, or simply enrich their experience when meeting and conversing with people in groups.
- Inclusion in the job market
In some countries, unemployment among people with disabilities is as high as 80%, even as companies urgently seek qualified employees. A cafe in Tokyo, for example, is staffed by robots that serve customers food and drink, controlled remotely by people whose physical disabilities, such as ALS, prevent them from leaving the house, or even the bed, for long periods. Applications like these show that with a little out-of-the-box thinking, it’s possible to develop products that completely flip the script for people with disabilities.
- Supporting independent lifestyles
Self-driving cars are an ideal solution for anyone who is unable to drive and whose mobility is massively reduced as a result. Over 250 million people around the world are legally blind or moderately to severely visually impaired (MSVI). By leveraging AI in autonomous vehicles, we are only steps away from enabling them to travel safely and efficiently to where they need to go, without depending on another person to drive or assist them.
Join the movement
Cognizant Netcentric is driving positive change through our commitment to diversity and inclusion, because the time to address inclusivity in AI development is now, not in the distant future. We believe it is our responsibility to ensure that AI systems meet accessibility standards, and power true inclusion for all individuals.