Event Recap

axe-con 2026: Development Track


The Development Track focuses on real implementation strategies for finding, fixing, and testing accessibility issues. Some talks demonstrate software that can be integrated into your workflow, and some get into the nitty-gritty of code. Here are our thoughts on two days of development talks. 

Building for a New Next Billion Users

Speaker: Ire Aderinokun, Investor, Entrepreneur, and Frontend Software Engineer

Summary/Insights: Haley Troyer

Ire started this talk by jumping on the 2016-throwback trend. Ten years ago, over 50% of the world was still offline, and over 90% of those people were from less developed regions due to low coverage and high costs. Today, only around 30% of the world remains offline, thanks to a major focus over the last decade on improving connectivity, device affordability, localization, and mobile-first design.

Now, when we think about the next 10 years and the next billion new users, what will bring the most impact? In addition to improving on the previous four areas, we should also look at internationalization, representation, usability, and, of course, accessibility.

Ire then shared her five pillars of digital accessibility, which I felt were very well thought out:

  • Performance: Be as lean as possible by using frameworks sensibly, optimizing every resource, minimizing render-blocking resources, and lazy-loading assets.
  • User experience: Work with the platform, not against it: use semantic HTML for its baked-in accessibility features, adapt to user preferences for data, contrast, color scheme, motion, and transparency, and stay conscious of alternate modes of interaction like screen readers, keyboards, and switch controls.
  • Robustness: Use progressive enhancement and optimize for offline-first viewing.
  • Content: Be conscious of the content you’re publishing to ensure it’s free from bias. This includes website content, text alternatives to non-text content, and even code.
  • Empathy: Don’t treat accessibility like a technical checklist. Test with real people, and make accessibility everyone’s job!

Overall, I thought this was a very engaging talk to kick off the Development track!

Shift Left Without Shifting Gears: Accessibility in Your Existing Workflow

Speaker: Harris Schneiderman, Director of Product Management, Deque Systems

Summary/Insights: Riley Rittenhouse

Harris walked through updating a project with a form for booking train tickets. Several requirements were listed, along with a mockup showing a new section for specifying the number of adults and children. Through this demo, he shared three axe tools for accessibility testing.

axe DevTools Linter - a free VS Code plugin that identifies accessibility issues. I haven’t tried this tool, but I’m looking forward to adding it to my workflow. In his example, it identified a link whose text was aria-hidden as well as an image that was missing alt text. It’s worth noting that it only inspects the code and doesn’t test fully rendered sites.
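As a rough illustration of the kind of static check a linter like this performs (this is my own hypothetical sketch over a simplified element tree, not the linter's actual rule logic), consider flagging links whose only text is aria-hidden and images without alt attributes:

```javascript
// Hypothetical sketch of linter-style static checks, not axe's real rules.
// A "node" is a simplified parsed element: { tag, attrs, children }.

function findIssues(node, issues = []) {
  const attrs = node.attrs || {};
  const children = node.children || [];

  // Rule 1: an <img> with no alt attribute has no text alternative.
  if (node.tag === 'img' && !('alt' in attrs)) {
    issues.push({ tag: 'img', problem: 'missing alt attribute' });
  }

  // Rule 2: an <a> whose content is entirely aria-hidden has no
  // accessible name for screen reader users.
  if (node.tag === 'a') {
    const hasVisibleText = children.some(
      (c) => typeof c === 'string' || (c.attrs || {})['aria-hidden'] !== 'true'
    );
    if (children.length > 0 && !hasVisibleText) {
      issues.push({ tag: 'a', problem: 'link text is aria-hidden' });
    }
  }

  // Recurse into child elements (strings are text nodes).
  for (const child of children) {
    if (typeof child !== 'string') findIssues(child, issues);
  }
  return issues;
}

const doc = {
  tag: 'main',
  children: [
    { tag: 'img', attrs: { src: 'train.png' } },
    {
      tag: 'a',
      attrs: { href: '/book' },
      children: [{ tag: 'span', attrs: { 'aria-hidden': 'true' }, children: ['Book'] }],
    },
  ],
};

console.log(findIssues(doc)); // flags both the missing alt and the hidden link text
```

Real linters run checks like these over the actual syntax tree, which is why they can run on unrendered source but cannot catch issues that only appear at runtime.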

axe MCP Server - requires a subscription to axe DevTools for Web, but utilizes more sophisticated checks. He later used Copilot to scan for accessibility issues and fix any problems that were found. Through a copilot-instructions.md file, he walked through a verification phase that rescans and verifies that all issues are fixed and everything works correctly. In his example, it fixed an input with a missing label and added a header around the banner image. Changes can then be kept or declined.

axe DevTools Browser Extension - runs webpages through a set of tests to identify accessibility issues. I’m more familiar with this tool and have used it to run accessibility tests before. I believe it is a paid tool with a free trial available. I really like the guided/interactive tests, which provide explanations for accessibility issues and offer some options for how to fix them. Even when everything passes, the AI provides some information about why it believes everything passes.

Integrating Axe for Automated Testing in a Distributed Engineering Environment

Speakers:
Peter Bossley, Sr. Manager, Accessibility, Thomson Reuters
Corey Hinshaw, Lead Accessibility Specialist, Thomson Reuters
Pavan Mudigonda, Lead Developer, Experience Engineer, Thomson Reuters

Summary/Insights: Riley Rittenhouse

The speakers are all employees of Thomson Reuters, a company with locations worldwide that focuses on legal, tax and accounting, news, government, and print.

They explained that their company has adopted a hybrid approach to manual and automated testing: it adopts an accessibility-first mindset, empowers designers and developers with a11y training and materials, incorporates automated testing often, and reserves manual tests for major releases.

Axe Developer Hub - integrates axe-core with unit and e2e testing frameworks, collects results in a project dashboard
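axe-core, which Developer Hub builds on, reports results as a list of violations, each with an id, an impact level, and the affected nodes. A dashboard roll-up like the one they described might boil those down per project; here is a sketch (the result shape follows axe-core's documented output, but the aggregation logic is my own illustration):

```javascript
// Summarize axe-core-style results by impact for a dashboard view.
// The { violations: [{ id, impact, nodes }] } shape matches axe-core
// output; the summary logic itself is an illustrative sketch.

function summarize(results) {
  const counts = { critical: 0, serious: 0, moderate: 0, minor: 0 };
  let affectedNodes = 0;
  for (const v of results.violations) {
    counts[v.impact] += 1;          // one entry per failed rule
    affectedNodes += v.nodes.length; // every element that failed the rule
  }
  return { rules: results.violations.length, affectedNodes, counts };
}

const results = {
  violations: [
    { id: 'image-alt', impact: 'critical', nodes: [{}, {}] },
    { id: 'label', impact: 'critical', nodes: [{}] },
    { id: 'color-contrast', impact: 'serious', nodes: [{}, {}, {}] },
  ],
};

console.log(summarize(results));
// { rules: 3, affectedNodes: 6, counts: { critical: 2, serious: 1, moderate: 0, minor: 0 } }
```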

Axe Linter - can flag issues in code editors

They discussed the challenge of multiple team members using different tools and how they needed to be flexible in how testing was conducted. They shared some charts to help illustrate this, but honestly, they were pretty difficult to read on the slide.

At an organizational level they explained that one size did not fit all, so some aspects had to be reworked for how often tests were done, and how they were integrated into different workflows. They were able to integrate axe Developer Hub into over 250 applications and overall reduced the number of accessibility issues across projects.

Proactive Inclusion: Embedding Accessibility into the AI Revolution at Coinbase

Speaker: Sam Smith, Senior Staff Accessibility Lead, Coinbase

Summary/Insights: Haley Troyer

Amongst all of the other AI talks I watched today, it was refreshing that this presenter was very aware of the fact that AI makes mistakes. Rather than hyping up AI’s capabilities and glossing over its limitations, he made it clear that AI should be treated as a companion, and that it still needs humans to give it clear instructions and guide it through bias, hallucinations, and other pitfalls.

The biggest takeaway from this talk for me is that, when using AI in your development workflow, it’s important to have instructions, guidelines, and standards that your model of choice can work with. For Sam, this meant creating shared documentation files, known as Multi-Document Context (MDC), that their agents could pull from, including (but not limited to) common development standards that all code should adhere to, accessibility guidelines to test against, and even how to output code for human review. These files were added to each code repository so that all projects would be evaluated in the same way, without needing to provide context every time a prompt is written.

One example of building with AI mentioned in this talk that I could see Rapid Development Group implementing is the idea of having an AI coding assistant that can automatically open pull requests named according to an established convention for resolving quick issues. While I don’t see this working well for full features or more complex issues, I think it could save a lot of time on resolving simple bugs or fixing typos.

Accessibility-First App Development: iOS and Android Essentials

Speakers:
Megan Pletzer, Senior Product Manager - Mobile, Deque Systems, Inc.
Jatin Vaishnav, Technical Manager, Native App Accessibility, Deque Systems, Inc.
Nitya Baddam, iOS Engineer, Deque Systems, Inc.
Tyler Williams, Android Engineer, Deque Systems, Inc.

Summary/Insights: Megan James

The team at Deque dove right into the world of mobile accessibility for both iOS and Android systems. They gave helpful tips, tricks, and demonstrations of how you can test accessibility as you work with automated tools and testing, and for native mobile apps, they highlighted the tools they use to run their mobile accessibility tests.

In addition to the live demos, they covered common accessibility issues and dove deeper into code adjustments specific to Info and Relationships. Across both operating systems, common issues to look out for include:

  • WCAG SC 1.3.1 Info and Relationships (Level A): Heading - Visual heading text is not marked as heading
  • WCAG SC 4.1.2 Name, Role, Value (Level A): Form control is not associated with its visible label
  • WCAG SC 4.1.2 Name, Role, Value (Level A): Button does not have a role

For anyone curious about learning more about the tests, this would be a good watch to walk through the demonstrations!

Accessible by Default: Scaling Design Systems with AI-Assisted Development

Speaker: Jesse Beach, Software Engineering Manager, Meta

Summary/Insights: Riley Rittenhouse

Design systems make AI coding faster, more consistent, and higher quality for product building tasks.

Jesse started out by explaining that AI was trained on the internet, where most code is inaccessible, which means most of the code it produces is also inaccessible. AI optimizes for code that works, not necessarily code that is accessible. Instead of relying on the internet, we should be training these models on design systems with accessibility built in from the beginning.

They shared some examples from Grammarly, Dew Design, and others; in each case, accessibility standards were built into their components from the start. This reduced the amount of rework and bug fixes later on, and it also ensured that everything being built was accessible.

I am curious how we would start to train AI on these standards. Do we point it to WCAG documentation and tell it to follow that, or would it require creating our own standards and using our components? In any case I think this would involve a lot of manual testing to make sure that any code produced by AI is accessible.

I know this was more of a dev talk, but this makes me want to revisit our Figma library for any improvements we can make so that all of our designs follow WCAG guidelines from the start.

Axe Innovations: Harnessing AI Responsibly to Automate Accessibility Testing

Speakers:
Wilco Fiers, Director of Accessibility Automation, Deque Systems
Noé Barrell, Machine Learning Engineer, Deque Systems

Summary/Insights: Riley Rittenhouse

They started off this session by talking about the advancements in AI, while also calling out some of the concerns surrounding it. I think they were the first presenters to acknowledge there are still many issues without jumping into all the hype. They shared a few principles they follow for responsible AI:

  • Only use it where it adds value
  • Uncertainty and confidence are reported
  • AI decisions are transparent and traceable
  • Human review and corrections are easy

Intelligent Guided Tests - a tool available in the axe DevTools extension that enables non-experts to test more. They demonstrated how these tests identify color contrast issues, images missing alt text, and other accessibility problems. You can select specific elements to test and receive information on which items pass or fail, along with additional detail on what needs to be updated and suggestions for basic patterns and structure.

axe MCP Server - integrates with an AI coding agent, tests and remediates issues, and creates guardrails for these AI solutions. In this example, they tested a form with the axe MCP Server and Copilot: the analyze step runs tests using axe DevTools in the browser, and the remediate step explains exactly what needs to be changed in the code for any issues. I’m interested in how well this would work across an entire site that uses different modules and shared components. I think this would be worth investigating as we explore how AI fits into our workflow at RDG.

Accessibility Adventures: The Lost Secrets Of Forced Colors Mode

Speaker: Daniel Yuschick, Lead Design System Developer, Posti

Summary/Insights: Riley Rittenhouse

Forced colors mode automatically overrides system and web content colors with a limited, user-defined palette. Daniel pointed out that a box-shadow used for focus states gets completely removed in forced colors mode, which means there wouldn’t be any focus styles on interactive elements.

Using the media query (forced-colors: active), we can access CSS system colors that are mapped to the contrast theme. I attended his session last year, and he always uses really engaging storytelling. In his example, he used forced colors mode to read a treasure map and find the treasure.

Use more than color to distinguish elements

Often we remove borders from buttons, but when we do, we lose a lot of the visual value of the element: in forced colors mode, buttons don’t show a background and are left as just text. In CSS, we can set the border color to transparent instead; this allows forced colors mode to override the button border. Other elements to consider are dialogs, modals, cookie consent banners, and other popovers.

Don’t remove link underlines completely

Text underlines are often removed in favor of a more customized underline that introduces some kind of animation. Setting text-decoration-color to transparent hides the underline but still allows forced colors mode to restore it.

SVGs and icons

When using svg elements for icons, we can use currentColor so that the svg inherits whatever color is set on the button by the active mode.

Transparent image fallbacks

Dark logos designed for a light background become difficult to see when forced colors mode applies a dark background. Using a picture element with a media query for forced-colors: active, we can load a version of the image with a solid background instead.
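The same swap can also be driven from script, since the forced-colors media feature is exposed to JavaScript through matchMedia. A minimal sketch, with hypothetical asset names:

```javascript
// Pick a logo variant based on whether forced colors mode is active.
// Asset filenames are hypothetical; in a browser, the flag comes from
// window.matchMedia('(forced-colors: active)').matches.

function pickLogo(forcedColorsActive) {
  return forcedColorsActive
    ? 'logo-solid-background.png' // solid background stays legible on a forced dark theme
    : 'logo-transparent.png';     // transparent version for normal rendering
}

// Browser wiring would look like:
//   const active = window.matchMedia('(forced-colors: active)').matches;
//   img.src = pickLogo(active);

console.log(pickLogo(true)); // logo-solid-background.png
```

The declarative picture-element approach is still preferable where it works, since it needs no JavaScript and responds automatically when the user toggles the mode.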

To be clear, forced colors mode is mainly a Windows feature, but it’s something to be aware of so that websites remain accessible for Windows users. He did mention that you can emulate it in Chrome, but a Windows machine will give the best results.

The Accessibility Imposters Game Show

Speaker: Jennifer Gorfine, Senior Software Engineer, Product Accessibility, Zendesk

Summary/Insights: Ashley Helminiak

I made a point of not reading the description for this talk going in, so I didn't know what to expect. Little did I know that the audience and I would be thrown into an actual game show, where we used our testing tools to find the offending UI elements from different sets of options. It very much reminded me of our resident game expert, Jonathan Chaffer. It started off easy but got more difficult, with proof that there are countless cases where an accessibility issue is not flagged by an automated test: markup that is incorrect, JavaScript influencing behavior so something works for a mouse but not a keyboard, and so on.

While the exercises themselves were nothing new to me, they were an extremely concrete reminder of several things:

  • Many issues can pass automated accessibility tests. Because of how the underlying markup is structured, many issues aren't detectable by automated testing, since the tools look for particular elements or flags that can be missed if the DOM isn't giving them the right information.
  • Not everything is as it appears. Code languages are powerful, and they can be used to hijack elements so that they appear to be something else, especially if you're navigating with mouse clicks. However, keyboard and other manual testing techniques can help diagnose these, as masquerading imposter elements are almost never a truly robust replacement for their intended accessible counterparts.
  • In the end, it comes down to user experience. We don't want to get wrapped up in automated tests and checking off boxes. We still need to pay attention to the user experience as a whole, and from multiple perspectives. We want our products to be delightful, after all. 

Overall, a fun session, and definitely a refreshing format change. Glad to see some nice walkthroughs of testing examples!

Testing Web Experiences with Your Keyboard

Speaker: Greg Gibson, Principal UX Producer, Accessibility Testing and QA, Red Hat

Summary/Insights: Megan James

This session was a practical, live demonstration on how to test web experiences using only your keyboard. Some tips for most operating systems included:

  • Tab moves focus to the next interactive element.
  • Shift-tab moves focus to the previous interactive item.
  • Arrow keys scroll the page or navigate within form elements and complex components.
  • Return follows links or triggers buttons.
  • Spacebar triggers buttons or operates other form elements.
  • Esc closes certain elements, like modals.
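The Tab behavior above can be modeled as a tiny traversal sketch (my own toy model, not from the talk): elements removed from the tab order or disabled get skipped, and everything else is visited in DOM order.

```javascript
// Toy model of Tab / Shift+Tab focus movement over a flat list of elements.
// Real browsers also handle positive tabindex values and visibility; this
// sketch covers only the common case of DOM-order traversal, and assumes
// at least one element in the list is focusable.

function nextFocus(elements, currentIndex, shiftKey = false) {
  const step = shiftKey ? -1 : 1;
  let i = currentIndex;
  do {
    i = (i + step + elements.length) % elements.length; // wrap around the page
  } while (elements[i].tabindex === -1 || elements[i].disabled);
  return i;
}

const els = [
  { name: 'skip-link' },
  { name: 'nav-home' },
  { name: 'decorative', tabindex: -1 }, // removed from the tab order
  { name: 'search', disabled: true },   // disabled controls are skipped
  { name: 'submit' },
];

console.log(els[nextFocus(els, 1)].name);       // submit
console.log(els[nextFocus(els, 0, true)].name); // submit (wraps backwards)
```

This is also why the issues Greg demonstrates are so easy to find by hand: if the DOM order or tabindex values are wrong, pressing Tab immediately lands somewhere surprising.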

Greg created a great page as a resource as we walked through different types of tests, including both good and bad examples. I'd recommend checking it out if you'd like to do testing on your own!

Keyboard testing: https://hellogreg.org/axe26/

Using Cloudflare as a case study, he also demonstrated some common problems: invisible focus states, navigation that only works on hover, and carousels that trap or lose focus. This drove home yet again that keyboard testing is simple, powerful, and a great way to spot issues automated scans miss.

I'll leave you with one final takeaway: “If everyone would tab through their sites from top to bottom, the web would be a better place.”

Making Platform React Chart Components Accessible

Speaker: Ambika Yadav, Visualization Engineer, Atlassian

Summary/Insights: Haley Troyer

Charts are important tools for understanding lots of different types of data, from election results to product usage trends and everything in between. However, many chart components are not built in an accessible way, meaning lots of people are unable to access the data and stories told by the numbers. In this session, Ambika shared many insights into improving the accessibility of charts, but here are a few of my favorites.

Colors and patterns

A common recommendation in accessibility is to avoid using color alone to represent data. In charts, color is often the primary way of distinguishing the different data sets compared within the same chart. For users with no visual impairments, this works just fine. However, for users with color blindness, it can be difficult, if not impossible, to differentiate between the colors used in charts. One proposed solution was the use of patterns in addition to color fills. That way, a user who can’t see the different colors in a bar chart could still see the difference between a striped bar and a dotted bar.
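In SVG charts, this usually means defining a pattern in the defs section and filling bars with it alongside the color. A sketch of generating such markup (the stripe geometry and ids are my own arbitrary choices, not from the talk):

```javascript
// Build an SVG <pattern> so a bar is distinguishable by texture as well as
// color. The stripe geometry and the pattern id are illustrative choices.

function stripePattern(id, color) {
  return [
    `<pattern id="${id}" width="6" height="6" patternUnits="userSpaceOnUse" patternTransform="rotate(45)">`,
    `  <rect width="6" height="6" fill="${color}" />`,
    `  <line x1="0" y1="0" x2="0" y2="6" stroke="white" stroke-width="2" />`,
    `</pattern>`,
  ].join('\n');
}

// A bar then references the pattern instead of a flat color fill:
//   <rect x="10" y="20" width="30" height="80" fill="url(#series-a)" />
console.log(stripePattern('series-a', '#1f6feb'));
```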

Focus management

Having proper focus management is important for charts because a chart can represent many data points. If each mark had its own tab stop, a user would have to press the Tab key many times just to get to the data they want. To mitigate this, the chart container itself should be the only element in the main tab order of the page. Other elements within the container should be accessible by using the arrow keys to navigate between data points across the x and y axes. This lets people get to the data they want quickly, or skip over the chart entirely!
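This pattern amounts to a roving cursor inside the chart: one tab stop on the container, with arrow keys moving an internal pointer between data points. A sketch of the key-handling logic (my own illustration, not Atlassian's implementation):

```javascript
// Roving navigation between chart data points: the container is the only
// tab stop, and arrow keys move an internal cursor between points (x axis)
// and series (y axis). Illustrative sketch only.

function moveCursor(cursor, key, seriesCount, pointCount) {
  const { series, point } = cursor;
  switch (key) {
    case 'ArrowRight': return { series, point: Math.min(point + 1, pointCount - 1) };
    case 'ArrowLeft':  return { series, point: Math.max(point - 1, 0) };
    case 'ArrowDown':  return { series: Math.min(series + 1, seriesCount - 1), point };
    case 'ArrowUp':    return { series: Math.max(series - 1, 0), point };
    default:           return cursor; // Tab etc. leaves the chart entirely
  }
}

let cursor = { series: 0, point: 0 };
cursor = moveCursor(cursor, 'ArrowRight', 2, 12); // { series: 0, point: 1 }
cursor = moveCursor(cursor, 'ArrowDown', 2, 12);  // { series: 1, point: 1 }
console.log(cursor);
```

In a real chart component, each cursor move would also update aria-activedescendant (or move focus) and announce the focused point's value to assistive technology.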

Sonification

Sonification refers to a method of using sound and pitch to represent data in a non-visual way. The speaker played a demo showing a line chart with a “Play” button that, when pressed, played sounds corresponding to the data: higher pitches for higher numbers, lower pitches for lower numbers. It essentially played a song of the data, which was very cool to hear!
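Sonification like this typically maps each data value linearly onto a pitch range. A small sketch of that mapping (the 220 to 880 Hz range is an arbitrary choice of mine; in a browser, a Web Audio API OscillatorNode would actually play each tone):

```javascript
// Map data values onto a pitch range for sonification. Playing the tones
// would use the Web Audio API in a browser; here we only compute frequencies.

function toFrequency(value, min, max, lowHz = 220, highHz = 880) {
  const t = (value - min) / (max - min); // normalize value into 0..1
  return lowHz + t * (highHz - lowHz);   // higher values -> higher pitch
}

const series = [0, 25, 50, 100];
console.log(series.map((v) => toFrequency(v, 0, 100)));
// [ 220, 385, 550, 880 ]
```

The chosen range spans two octaves (220 Hz is the A below middle C, 880 Hz the A above), which keeps the tones comfortably audible while leaving room to hear differences between nearby values.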

At Rapid Development Group, we support a few applications that utilize charts for data visualization, so I’m eager to test our charts using these new insights. What a fascinating final talk to end Day 2 of axe-con 2026!

Read our insights from the other tracks.

Need a fresh perspective on a tough project?

Let’s talk about how RDG can help.

Contact Us