axe-con 2024: Day Two (Haley)

Day Two was all about AI for me. I watched all of the wildcard talks to learn more about the state of accessibility in terms of AI and how AI will shape the future of accessibility.

How AI Will Help Us Re-Invent Accessibility, Lower Industry Load, and Cover More Disabilities

Speaker: Gregg Vanderheiden

In his talk, Dr. Gregg Vanderheiden shares his vision for the future of AI and accessibility. In our current approach, we attempt to design and build products that are usable and accessible to ALL people with ANY/ALL disabilities. This is costly, time-consuming, and imperfect because it’s very difficult to test every possible type, degree, and combination of disabilities to ensure a product works for every individual. A new (disruptive) approach involves creating an “Info-Bot”: functionality that can understand and operate any ICT interface that a 50th percentile human can. The Info-Bot is combined with Individual User Interface Generators (IUIGs), which take the information from the Info-Bot and generate an interface tailored to each specific individual’s needs, abilities, and preferences. So, rather than designing a single system that works for everyone (and probably fails at doing so), we design a system that can customize its interface to accommodate each individual.
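
To make that split concrete, here is a minimal sketch of how the Info-Bot/IUIG separation might be modeled. All of the names and types here are hypothetical, my own invention to illustrate the idea; the talk doesn’t prescribe an implementation:

```typescript
// Hypothetical model of the Info-Bot / IUIG split. The Info-Bot reduces any
// product's interface to an abstract description; a per-user generator (IUIG)
// decides how to present that description to one specific person.

// What the Info-Bot extracts from a product it can "understand and operate."
interface AbstractControl {
  id: string;
  label: string; // e.g. "Power level"
  kind: "button" | "dial" | "toggle" | "text";
  value?: string | number;
}

interface AbstractInterface {
  product: string; // e.g. "Microwave"
  controls: AbstractControl[];
}

// One person's needs, abilities, and preferences, however they're captured.
interface UserProfile {
  largeText: boolean;
  speechOutput: boolean;
  simplifiedLayout: boolean;
}

// An IUIG turns the abstract model into a presentation for one individual.
interface InterfaceGenerator {
  render(model: AbstractInterface, profile: UserProfile): void;
}

// One of many possible generators; another might produce a Braille display
// or a voice-only interface from the exact same abstract model.
class LargePrintGenerator implements InterfaceGenerator {
  render(model: AbstractInterface, profile: UserProfile): void {
    const size = profile.largeText ? "24pt" : "12pt";
    console.log(`${model.product} controls:`);
    for (const control of model.controls) {
      console.log(`  [${size}] ${control.label} (${control.kind})`);
    }
  }
}
```

The property that makes this disruptive is that the product only has to be operable by the Info-Bot; every per-person accessibility decision moves into the generators, which is also why the same microwave could present the same familiar interface regardless of brand.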

There are lots of advantages to this type of approach:

  • Ubiquity of access for all products - anyone will be able to use any product, even if the company that created the product didn’t think about accessibility
  • Interfaces are designed for each individual (even if that same interface is not optimal for others)
  • Familiar interface - for example, all microwaves will display the same interface regardless of brand
  • Adaptive interface - as abilities change over time/day-to-day, the interface can respond
  • Especially useful for people with cognitive, language, and learning disabilities and/or multiple disabilities
  • Works on closed products - if a human can understand it, so can the Info-Bot
  • For developers/companies
    • No need to train for all types, degrees, and combinations of disabilities
    • No longer hundreds of guidelines/provisions to follow (interfaces will be accessible automatically)
    • Higher compliance and reduced litigation
    • More market reach
  • For policy makers
    • Shifts focus from lobbying/advocacy to figuring out the ideal interface for each type, degree, and combination of disabilities
    • Fewer/simpler regulations
    • Fewer lawsuits

Of course, there are also limitations to this approach:

  • Difficult - it will be extremely difficult and time-consuming to create this new technology
  • Imperfect - for example, how do you create a system that gives a blind person a way to experience the Mona Lisa in the exact same way as a sighted person?
  • Disruptive - there will be a transition period that must be navigated with care. We shouldn’t replace the old way of doing things until the new way is truly ready.
  • Privacy - until the Info-Bot can run locally, it will leak information it learns (cloud AI isn’t careful enough)
  • Distributive Justice - who gets a good IUIG? Are there free options? Are the more expensive options better? (This is a problem that we currently have with assistive technology)
  • Funding - it will be expensive to create this technology. Who will fund it?

The Impact of AI on People with Disabilities

Speaker: Matthew Johnston

The impact that AI has had on people with disabilities has been enormous. Matthew Johnston started his talk by sharing his lifelong journey as a deaf individual with hearing aids. At the beginning, he needed a body aid, a bulky device worn on the body, to carry his hearing aid. The device helped him hear, but it was far from perfect: it was clunky, and he felt embarrassed wearing it. As time progressed, hearing aid technology advanced and became much smaller and more discreet. Now, cochlear implants use AI to adapt to the surrounding environment and adjust based on the noise level. He has only had his new AI-enabled device for a few months, but he has already been blown away by the improvement he has personally experienced.

There are many other ways that AI can improve not only the lives of people with disabilities, but everyone:

  • Automated live captions - not just useful for deaf people, but also helpful for understanding people with thick accents
  • Speech to text - not just useful for people with low/no vision, but also useful for people who are busy using their hands but need to send a text
  • Text to speech - not just useful for people who can’t speak, but also useful for people with anxiety about talking on the phone (this and the previous bullet are sketched in code after this list)
  • AI Smart Glasses - not just made for people with low/no vision, but also useful for translating signage in other languages, seeing a new piece of furniture in your home at scale, etc.
  • AI Smart Homes - not just useful for people with limited mobility to be able to turn on/off a light, but useful for able-bodied people as well
  • AI for cognitive support - not just useful for people with learning disabilities to understand complex concepts, but also useful for someone learning a new skill to translate jargon into more understandable language
  • AI Assistants - not just useful for helping someone who is deaf to take notes while lip reading on a video call, but also useful for anyone needing to multitask
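
Several of these capabilities are already available to web developers today. As a concrete illustration of the speech-to-text and text-to-speech bullets above, here is a minimal browser sketch using the standard Web Speech API. This is my example, not the speaker’s, and recognition support varies by browser (it is still vendor-prefixed in Chromium):

```typescript
// Text to speech: read a message aloud with the built-in speech synthesizer.
function speak(message: string): void {
  const utterance = new SpeechSynthesisUtterance(message);
  utterance.rate = 0.9; // slightly slower than default for clarity
  window.speechSynthesis.speak(utterance);
}

// Speech to text: dictate instead of typing. SpeechRecognition is
// vendor-prefixed in Chromium-based browsers, hence the fallback lookup.
function dictate(onResult: (text: string) => void): void {
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.onresult = (event: any) => {
    onResult(event.results[0][0].transcript);
  };
  recognition.start();
}

speak("Your message was sent.");
dictate((text) => console.log("You said:", text));
```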

Of course, Matthew cautions that we should be aware of AI and its limitations — it can be biased/discriminatory and it can provide false information confidently. As humans, it is our job to teach AI using inclusive data and strong ethical frameworks. AI shouldn’t replace humans, but it should support them.

Advocating for Change

Speaker: Jonah Berger

Getting people to change is difficult, especially if they’ve been doing things the same way for a long time. In this talk, Professor Jonah Berger introduces his REDUCE framework for being a catalyst for change, and highlights a few key actions you can take to persuade people to change their ways.

“Catalysts REDUCE Roadblocks”

  • Reactance - When pushed, people tend to push back. Encourage people to convince themselves.
  • Endowment - People are wedded to what they’re already doing. Highlight the cost of inaction.
  • Distance - If a goal feels too far away, people tend towards inaction. Break up a large task into smaller steps.
  • Uncertainty - Change almost always involves uncertainty. Take some of the uncertainty away when possible.
  • Corroborating Evidence - Sometimes one person isn’t enough to be convincing. Find reinforcement and use multiple sources.

I was particularly interested in Jonah’s description of “Reactance.” He gave three strategies that anyone can use when trying to advocate for change:

  • Provide people with a menu of multiple choices and ask them to pick their favorite. (e.g. “Do you want to deal with accessibility compliance now or later?”)
  • Ask people questions to get them talking and sharing their opinions. This allows you to collect information about current barriers to change so you can mitigate them. (e.g. “Why haven’t we improved our website’s accessibility?”)
  • Highlight a gap between attitudes and actions, but don’t just tell people they’re being inconsistent. Encourage them to realize it for themselves. (e.g. “Is web accessibility important to you? What actions is this company taking to move towards accessibility?”)

The takeaway from this talk: Identifying the barriers to change and then figuring out how to mitigate them will result in more change and less resistance.

AI as an A11y

Speakers: Kelly Goto and Colin Wong

In their talk, Kelly Goto and Colin Wong discuss a project that they worked on together for a glucose monitoring product/application targeted toward people with diabetes. They highlighted the importance of involving real users with disabilities in testing and really listening to their stories, which allowed them to discover an issue with screen reader accessibility that they otherwise would not have caught. I found their story both interesting and inspiring!

Their process involved recruiting a small group of individuals with diabetic retinopathy (the leading cause of blindness in working-age adults). Before the actual discussion, they built a rapport with each individual in the group in order to prepare them and ease the anxiety that comes with sharing sensitive healthcare information. During the focus group, they learned that there was an issue with the way screen readers read the information within the app. One of the participants shared that, while the screen reader announced that the user’s blood sugar level was steady, a graph on the screen (not exposed to the screen reader) showed a downward trend. A blind user relying on the screen reader would not know that their sugar level was dropping until the app decided to alert them that it was too low.
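
The gap they found, a visual trend that assistive technology cannot perceive, maps onto a well-established remediation pattern: pair the graph with a text alternative that carries the same information. Here is a minimal sketch of that pattern with hypothetical element IDs and readings, not the actual app’s code:

```typescript
// Hypothetical sketch: expose the trend shown in the glucose graph as text
// that a screen reader will announce. IDs and values are illustrative only.
interface GlucoseReading {
  value: number; // mg/dL
  minutesAgo: number;
}

// Newest reading first, e.g. [{value: 92, minutesAgo: 0}, {value: 118, minutesAgo: 30}]
function describeTrend(readings: GlucoseReading[]): string {
  const latest = readings[0];
  const earliest = readings[readings.length - 1];
  const delta = latest.value - earliest.value;
  const direction = delta < 0 ? "falling" : delta > 0 ? "rising" : "steady";
  return `Blood sugar is ${latest.value} mg/dL and ${direction} over the last ${earliest.minutesAgo} minutes.`;
}

function renderGlucoseView(readings: GlucoseReading[]): void {
  // The visual chart is hidden from assistive technology...
  document.getElementById("glucose-graph")!.setAttribute("aria-hidden", "true");

  // ...because the same information is provided as text. role="status" makes
  // screen readers announce updates without stealing focus.
  const summary = document.getElementById("glucose-summary")!;
  summary.setAttribute("role", "status");
  summary.textContent = describeTrend(readings);
}
```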

They asked the participants how they would like this feature to work in an ideal world. They learned that there was a desire for a voice command that would allow users to ask for their sugar levels within a configurable time period, letting them get their current blood sugar level as well as learn about trends over time. Kelly and Colin used this information to create a rapid prototype of how this voice command conversation could work. They used ChatGPT to generate a script to follow, and they used VoiceFlow to create an interactive voice control prototype. They also utilized natural language processing to allow for variations in language/prompts, removing the need for users to learn and remember specific voice commands to converse with the system. Finally, they tested this prototype with their focus group to ensure they had resolved the issue appropriately before implementing the change.
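
As a rough illustration of that NLP piece, here is how a prototype might map free-form utterances onto a single “glucose trend” intent with an LLM, so no exact command has to be memorized. This is a sketch of my own using the OpenAI Node SDK, not the team’s actual VoiceFlow setup; the intent shape and prompt are assumptions:

```typescript
import OpenAI from "openai";

// Hypothetical intent extraction for the voice prototype.
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface GlucoseQuery {
  intent: "glucose_trend";
  minutes: number; // how far back the user asked to look
}

async function parseUtterance(utterance: string): Promise<GlucoseQuery | null> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Extract the time window from a request about blood sugar trends. " +
          'Reply with JSON like {"intent":"glucose_trend","minutes":90} ' +
          "or null if the request is about something else.",
      },
      { role: "user", content: utterance },
    ],
  });
  return JSON.parse(response.choices[0].message.content ?? "null");
}

// Different phrasings resolve to the same intent, so users don't have to
// memorize an exact voice command.
parseUtterance("How has my sugar been trending for the last hour and a half?")
  .then((query) => console.log(query)); // { intent: "glucose_trend", minutes: 90 }
```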

Using AI allowed them to create and test these prototypes in a matter of days. Their story highlights the incredible possibilities of using AI to prototype solutions to accessibility problems!

From Exclusion to Inclusion: Advancing Accessible Gaming Together

Speaker: Morgan Baker

“Nothing About Us Without Us” is a quote that Morgan Baker shared several times throughout her talk, and it perfectly summarizes her points. In her presentation, she talks about the evolution of accessibility in video game design, but her insights are applicable to all industries. I found her recommended strategies for supporting the disabled community in accessibility efforts particularly helpful. Those strategies included:

Provide community support and engagement

  • Provide a way for the disabled community to engage with you (e.g. engage with/boost disabled creators on social media)
  • Provide dedicated accessibility support (e.g. support videos/help content)
  • Provide dedicated channels for accessibility feedback (e.g. forums/bug report forms)
  • Engage with the disabled community at events (e.g. ensuring product demos are accessible)
  • Support organizations aimed at helping those with disabilities

Host activities and workshops

  • Invite disabled community members to share their stories/demonstrate their experiences (e.g. host a disabled speaker)
  • Invite disabled community members to participate in workshops (e.g. recruit people to demonstrate how they use assistive technology)

User research, insights, and consultation

  • Recruit people with disabilities to test your product
  • Recruit people with disabilities to participate in focus groups
  • Hire full-time accessibility consultants/experts

Her hopes for the future? Companies and developers building muscle memory around involving the disabled community and embedding accessibility in development processes, expanding accessibility resources and services, improving opportunities for the disabled community, and increasing inclusive hiring practices. That sounds like a great future to me!

Improving Accessibility Through Leveraging Large Language Models (LLMs)

Speaker: Sheri Byrne-Haber

The key takeaway from Sheri Byrne-Haber’s talk is to utilize AI and ChatGPT to increase the efficiency of mundane and tedious tasks so you can focus on the things that AI can’t do. She highlights some tasks that AI is great for; three stood out to me as a UX designer and certified accessibility professional:

Generate Test Plans

We all know the importance of testing website/app features, but writing test plans can be a tedious and time-consuming effort. We can ask ChatGPT to generate a usability test plan for a specific feature that includes Objectives, Scope, Tools, and Test Cases for keyboard navigation, focus control, screen reader, visual design, readability, messaging, errors, and interactions. We, as humans, can then focus more time and energy on actually running the test.
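
For example, a prompt template along these lines (my wording, not Sheri’s) is enough to get a workable first draft:

```typescript
// Hypothetical prompt template; the feature name is a placeholder.
const feature = "date-range picker on the reports page";

const testPlanPrompt = `
Generate a usability test plan for the following feature: ${feature}.
Include these sections: Objectives, Scope, Tools, and Test Cases.
The test cases must cover: keyboard navigation, focus control, screen reader
behavior, visual design, readability, messaging, errors, and interactions.
Format the output as a markdown document.
`;

// Paste the prompt into ChatGPT, or send it through the chat completions API.
```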

Turn Complicated Language into Plain Language

On average, people in the United States read at an 8th-grade level. Knowing this, we can use ChatGPT to rewrite text containing jargon, advanced vocabulary, or complex sentence structure in a way that is more consumable for the majority of people. That way, we can use text in our user interfaces that doesn’t exclude people who otherwise might not be able to read or understand it.
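
The same prompt pattern works for plain-language rewrites; a hypothetical example:

```typescript
// Hypothetical example; the source copy is made up for illustration.
const originalCopy =
  "Authenticate prior to initiating the funds disbursement workflow.";

const plainLanguagePrompt = `
Rewrite the following text at an 8th grade reading level. Keep the meaning,
remove jargon, and prefer short sentences:

${originalCopy}
`;
// A typical rewrite comes back as something like: "Sign in before you send money."
```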

Generate Voluntary Product Accessibility Templates (VPATs)

Because VPATs have a standard format, it is possible to feed that format to ChatGPT, give it a list of tasks/accessibility issues, and then ask the system to generate a VPAT that highlights the individual WCAG criterion violated by each issue. To take this even further, you could use a localized LLM system that has access to your company’s ticketing system (e.g. Jira) and have the LLM summarize the open tickets for use in the VPAT’s content, rather than manually providing accessibility issues to ChatGPT.
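
That localized variant might look something like the sketch below: pull the open accessibility tickets from Jira’s REST search endpoint and fold their summaries into the VPAT prompt. The base URL, label, and auth variable are assumptions about a typical Jira Cloud setup, not a tested integration:

```typescript
// Hypothetical sketch: collect open accessibility issues from Jira to feed
// into a VPAT-generating prompt. The JQL label and base URL are assumptions.
const JIRA_BASE = "https://your-company.atlassian.net";
const JQL = 'labels = "accessibility" AND statusCategory != Done';

async function fetchAccessibilityIssues(): Promise<string[]> {
  const url = `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(JQL)}&fields=summary`;
  const response = await fetch(url, {
    headers: { Authorization: `Basic ${process.env.JIRA_AUTH}` },
  });
  const data = (await response.json()) as any;
  return data.issues.map(
    (issue: any) => `${issue.key}: ${issue.fields.summary}`
  );
}

async function buildVpatPrompt(): Promise<string> {
  const issues = await fetchAccessibilityIssues();
  return `
Here is the standard VPAT table format: [paste template here].
For each issue below, identify the WCAG success criterion it violates and
fill in the corresponding VPAT rows:

${issues.join("\n")}
`;
}
```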

As always, AI makes mistakes, so everything generated by AI needs human review.