Event Recap

axe-con 2026: Keynotes


Hello, Ashley here, your friendly neighborhood Accessibility Lead. It's a new year and a new axe-con! This time around, RDG has enough virtual attendees to be able to cover all four tracks of the conference, so we'll be leaving no stone unturned as our (now fully CPACC certified) team continues to further our accessibility knowledge so we can keep making better products.

This year, we're going to organize our articles by the different tracks, with this fancy one here summarizing the keynotes. Five articles to cover two days full of talks. To read our thoughts on the different conference tracks, click on any of the following links:

And now without further ado, here are our thoughts on this year's keynotes!

Human-Centric AI for Digital Accessibility: Agency, Inclusion, and the Future of Interfaces

Speaker: Rana el Kaliouby, AI scientist and thought leader, Founder of Affectiva

Summary/Insights: Riley Rittenhouse

The start of axe-con is always so exciting, and this year is no different. I really enjoy seeing the number of people joining from around the world; it really shows the impact of community and of those looking to learn more about accessibility.

Rana's vision: a personal AI assistant for everyone, similar to Steve Jobs's statement, “a personal computer for everyone.”

Rana explained that AI has a high IQ but no empathy or emotional intelligence, and that we aren’t focused enough on this side of AI. There isn’t currently a benchmark for how it impacts us socially or morally, but we should be having more conversations around it.

  • AI generated code can make things worse, especially if accessibility is an afterthought.
  • 90-97% of websites fail to meet basic accessibility standards for users with disabilities.
  • AI has a diversity problem - being more inclusive about who is at the table making decisions about AI will benefit everyone.

Agentic AI - the rise of AI agents that act on our behalf. Rana talked about the opportunity for AI-generated code that adheres to accessibility guidelines, and about integrating axe-core and Deque’s benchmarks directly into the AI development workflow. Someone later asked how we approach accountability if something like this goes wrong: is the AI agent at fault, or the person who implemented it?
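As a concrete sketch of what "integrating axe-core into the development workflow" can look like: axe tooling can emit machine-readable results (for example, `npx @axe-core/cli --save results.json <url>` saves axe's standard results object, whose `violations` array lists each failed rule's `id`, `impact`, and affected `nodes`). A minimal CI gate could then parse that file and fail the build when violations are present. The file name and CI wiring here are illustrative assumptions, not anything Deque demoed.

```python
import json

def count_violations(results_path):
    """Count accessibility violations in an axe results JSON file.

    Assumes the file holds either a single axe results object or a
    list of them (one per scanned URL), each with a "violations"
    array whose entries carry "id", "impact", and affected "nodes".
    """
    with open(results_path) as f:
        data = json.load(f)
    results = data if isinstance(data, list) else [data]
    total = 0
    for result in results:
        for violation in result.get("violations", []):
            node_count = len(violation.get("nodes", []))
            print(f"{violation['id']} ({violation.get('impact')}): "
                  f"{node_count} affected node(s)")
            total += node_count
    return total
```

A CI step could run the axe CLI scan first, then fail the pipeline whenever `count_violations("results.json")` returns a nonzero count, which is exactly the kind of guardrail that keeps AI-generated code from shipping with regressions.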

OpenClaw - an open-source AI assistant that runs locally and connects to messaging platforms. Someone paired it with another AI tool, and it was one of the first instances of two AI agents communicating with each other. While it was interesting to see what happened, it was also concerning, because there aren’t any security or privacy restrictions in place.

Ethics and safety - thinking about and addressing bias, respecting privacy, and giving people the option to opt out. She really stressed how important it is that we build guardrails around these models, rather than focusing solely on revenue.

Towards the end, she also shared a podcast called Pioneers of AI that dives deeper into this discussion of accountability, privacy, and security.

Accessibility at an inflection point: Regulation, AI agents, and what comes next

Speakers:
Dylan Barrell, CTO and Author, Deque Systems
Preety Kumar, CEO, Founder & Board Member, Deque Systems

Summary/Insights: Haley Troyer

The second keynote of the morning was an endorsement of Deque’s new AI agent/MCP server, which prioritizes, and is specifically trained on, accessibility guidelines. The presenters shared a demo of the product: a push notification from the server alerted the user to a few accessibility errors detected on a website, and the user responded with instructions to fix the errors and submit a pull request. Shortly after, a new PR was submitted that resolved the errors.

There were concerns from the chat during the session. At the beginning, the presenters had emphasized a few concerning statistics about the state of web and mobile accessibility:

  • 94.8% of home pages still have WCAG 2 failures
  • 90+% of mobile apps fail basic axe tests

If that’s the case, then how can we really trust AI to solve accessibility errors? I think these concerns are valid. However, I like to think that, by using a model developed by an accessibility-focused company like Deque, the model’s knowledge of accessibility can (hopefully) be trusted more than that of other models on the market. That said, all fixes still need to be validated by a human trained in accessibility to ensure the issues are properly resolved. For example, in the demo, one of the issues fixed by AI was missing alt text on an image. While the AI agent technically fixed the issue, the alt text it added wasn’t very helpful: it simply repeated the title of the page the image was on, rather than actually describing the contents of the image.

I think we’re still a long way away from AI solving all of our problems. However, I do appreciate that Deque is trying to solve the problem that many other AI agents have right now, which is lack of accessibility knowledge and training.

Building a stronger, more innovative accessibility community

Speaker: Haben Girma, Lawyer, disability rights advocate, and author

Summary/Insights: Ashley Helminiak

I remember hearing Haben Girma speak at a previous axe-con, and was more than excited to see her name again as a keynote speaker. Right out of the gate she made an excellent point about unlearning assumptions, and how easily we make them based on our own individual perception of the world. I was actually a little surprised when she started talking about technology helping her communicate: she described a Bluetooth setup that let someone type and have their words translated into braille on a device she reads, yet people were resistant to using it to communicate at first. But Haben, ever a personal advocate, got more people to use it so she could access their words, even President Obama.

In recent years, technology has been a great foundation for making it easier for people with disabilities to get jobs. Not, as Haben points out, because they somehow gained talent through technology. Rather, many people have an ableist mindset and automatically disqualify candidates based on their disability alone, and technology helps people share their talents in a greater variety of ways.

I think, when seeing the different ways that Haben communicates, working with deaf interpreters, technology, and more, it's easy to fall into the trap of "that's so inspiring!" But she's a human being, with the same need for communication and a fulfilling life as anyone else, and she's found ways to get it, through travel, salsa dancing, advocacy, and many other things. To me, this is not meant to be an inspiration, but an eye-opener, to other ways of living besides what we experience in our usual day-to-day lives. The point is that alternative ways to communicate and experience the world should have greater awareness, more advocacy, and more solutions. Haben even makes connections between the word "inspiring" and pity or guilt, and urges people to use that feeling only to drive action and change.

She then showed some examples of ableism in ChatGPT. She had it read a sign in Italian and translate it to English. Then she had it read the braille on the same plaque, and it produced a completely different interpretation: proof that ChatGPT doesn't know braille. Nobody has taught it a reading system that should be very teachable. She closed by urging us to continue to fight ableism in technology and to challenge negativity around disability. I know I've seen other examples of ableism and discrimination in AI systems recently, so her call to action rings very true to me. AI systems need to learn from somewhere, and it's up to us to teach them, or help them unlearn, and help create inclusive technology that works for everyone.

Need a fresh perspective on a tough project?

Let’s talk about how RDG can help.
