Event Recap

axe-con 2024: Day Three (Ashley)

Team Insights

Day three is here! As always, it is bittersweet to come to the end of a conference. By now the conference fatigue has set in a bit, but at the same time I always enjoy learning new things and connecting with like-minded professionals. Hopefully we go out with a bang today with some talks that have promising titles and the closing keynote!

ARIA (are ya) afraid of the dark?: Unmasking common HTML monsters to create better user experiences

Speaker: KJ Schmidt, Lead Accessibility Designer, ADP

KJ had some cute analogies to kick off the morning with this code-oriented talk. Did you know that HTML isn't a monster, it's just a sandwich? Just not the one you eat. But HTML tags are the 'bread' that house the contents of our site. Just make sure you're using the right kind of bread for your sandwich! And no open-face sammies here. Close those opening tags!

Good thing I already ate breakfast.

The point of all this sandwich talk wasn't really to take it back to basics, but to point out that proper use of semantics and nesting helps screen readers make programmatic associations, such as items in a menu or form options. The sandwich analogy continued with the talk of attributes, which, wouldn't you know, are really just ways to make fancier bread. Bring on the marbled rye! Or maybe multigrain? Either way, attributes can allow you to associate one element with another, relay the state of an element, and more.

Overall, this talk was probably a bit too beginner for me. But it was nice to see a talk out there that aims to instruct UX designers in the basics and make them easier to understand. I am a firm believer that one has no business designing for the digital space if they're not aware of how the basics work, or how accessibility is applied. We've had to break the "sorry, this isn't accessible" news many a time when provided a third-party design, and it takes time and effort to navigate those conversations and find a solution. So UXers, web designers? Learn. This. Stuff. You don't have to be an expert, but if you're designing an interface, knowing a bit about how it will be coded, how it might be navigated via keyboard, or how it might be announced to a screen reader will only help you.

How to get stakeholder buy-in for your product's accessibility

Speaker: Noorul Ameen, Director of Design, Mitratech Holdings Inc.

Noorul had some great tips on building buy-in to digital accessibility. As many of us know, sometimes accessibility can be a difficult sell. Reasons for this range from not believing users with disabilities are part of the company or product audience to not wanting to pay the extra costs associated with it. But by now we also know that it can be far more expensive not to include accessibility in our design, development, and maintenance processes.

There are several different groups to consider as stakeholders for a product. Designers, developers, product owners, C-suite, and customers were groups that he mentioned. For groups such as designers and developers, ways to gain buy-in include providing tools and best practices. If accessibility becomes incorporated into these processes, then we've already made great strides in how the creation of a product can be guided in the right direction.

Product owners are a bit different. They have different goals, and for them accessibility is often only a small part of a bigger picture. So the best way to get them to buy in to accessibility? Relate it to other parts of the bigger picture. Showing how accessibility can improve brand image, reduce poor user experience, and decrease chances of a lawsuit can help them see how widespread the effect is. Showing them metrics for their own product over time can also help prove the return on investment. C-suite stakeholders may need a similar approach, emphasizing financials and the costs of not shifting left with regards to accessibility.

Customer buy-in often comes down to product use. Is it a delightful, efficient experience? Have you outlined your accessibility efforts with a statement or VPAT? Are you transparent about any issues and plans for the future? Customer buy-in can make or break you, after all.

This was a good talk overall, and has some overlap with an agency environment. We just have more people to gain buy-in from! And as we've shifted left in our own accessibility journey, it's been an interesting road gaining buy-in from long-standing customers. Sometimes that goes great! Other times, they can't/won't spend the money on it. A bit of a gamble, if you ask me. But we'll keep having the conversation!

The Power of User Research to Increase Fitness Accessibility

Speakers: Emma Torres, User Researcher, Andrea Sutyak, Senior Manager, User Research, both of Peloton

Not being a person prone to buying expensive home gym equipment, I was nonetheless interested in the accessibility initiatives that help to craft Peloton's physical and digital products. They created their own dedicated accessibility team in 2020, and have now incorporated accessibility into the processes of nearly all of their product teams.

I liked that they were transparent about where they are now (they said they're not perfect, but really, nobody is!), and that the screens on their products like the bike and rowing machine now offer screen reader capability for blind and low vision users (or whoever wants to use a screen reader, really). What I'm curious about is something that they've built called Form Assist. It helps rowers make sure they're doing the exercise correctly and helps to correct issues or give tips and praise. They admitted some shortcomings, such as issues with calibration, and how it can't replace a real coach, but I was curious about how it handles unique body types. What about people with missing limbs, or growth disorders that shape their bodies differently? I suppose I'll have to ask.

They outlined how they conducted in-person user research, from recruitment, to planning, to data collection. It was interesting to hear about it, as their research was a blend of testing both hardware and software. Gaining some familiarity with their own product before testers arrived helped them to hone their research questions and even discover their own pain points along the way.

This was an interesting case study, especially with the complexity of hardware and software being factors in the development of the product. They covered many points about user testing, from how to recruit to making not just an accessible product, but an accessible, safe testing space as well. While I have not had the privilege of conducting usability testing with people that have disabilities, there were definitely good tips in this talk to keep in mind. Also, kudos to Peloton for grasping early on that anyone from any walk of life might want to have the exercise experience with their products, not just a certain type of person! 

What is "equivalent" text alternatives for artistic and cultural visual content?

Speakers: Willa Armstrong, Digital Accessibility Specialist, Library of Congress, Dr. Rachael Bradley Montgomery, Founder and Executive Director, Accessible Community, Elizabeth Bottner, Assistive Technology Specialist, Library of Congress

I was excited for this talk because it reminded me of an incident last year, when I was talking to someone who has been in the design industry for decades, and they wanted to know why they should even bother composing alternative text for photos of graphic design content for people who can't see them. After quelling the rage within, I think I ended up telling them that they shouldn't assume people don't want to consume that content just because they can't see it. Unfortunately, they didn't seem to put much stock in my words. But that's sort of what it comes down to, right? We shouldn't purposely narrow our audience because we've assumed they won't get anything out of it the way users without disabilities would. As this talk points out, for images in particular, we should present alternative text that serves an equivalent purpose. Hello, it's only WCAG 1.1.1, one of the first things they thought of!

There are many ways to provide equivalent purpose for an image. This could be alternative text, a caption, associated text, or even a linked file. There are also many ways that you can describe the contents of an image, such as generic or detailed content, instructions, aesthetics, emotive content, etc. I always say that composing alt text is an art form, and this talk only reaffirmed my opinion.

One of the concepts that this talk discussed was the power of supporting text around an image. Sometimes we forget that there is relevant visible text around an image that lessens the need to create all-encompassing alt text. If you have a site of photography or graphic design pieces and each one has its own page with a title of the piece at the top, you hardly need to repeat the title in the image alt text, do you? That sounds just a tad redundant to me.

What the Library of Congress was researching was what type of image alt text or captions people want when they're looking through images within the library. There were significant differences in the information they wanted to know about known people like politicians and celebrities vs. unknown people, and between information known by the artist or photographer vs. assumed information. They've already conducted moderated surveys asking people to rank image descriptions for a set of images in order of relevance/importance, and are continuing with surveys and focus groups to further inform how they should formulate alternative and supporting descriptive text for images. Their results did show that while testers were okay with fairly brief alternative text directly on an image, they wanted a higher volume of descriptive text in the HTML outside of the image. Things like image context, personal preference, interest in detail, disability, and user task all had an impact on how image alt text should be crafted. That's a lot to consider!

Overall, this seems like such a daunting task that they've undertaken, although it's a laudable one. I'm curious how AI might play into this, or some other feature that would allow them to program multiple categorical descriptions for each image, so that it's up to the user to decide which image description(s) they want to hear. After all, sighted users make many different observations about a single image, yes? Why shouldn't a user with a vision disability be afforded that same opportunity?

Demystifying the APCA (Advanced Perceptual Contrast Algorithm)

Speaker: James Sullivan, Product Designer, Willowtree

I was excited for this talk since I'm enthusiastic about the APCA. Why? It's the proposed candidate for color contrast evaluation in WCAG 3.0! Not only that, it seeks to address several pain points we come across when testing for contrast.

Current guidelines in WCAG 2.2 only look at the text foreground and background, with different ratio thresholds of conformance for large text or regular text. APCA will take into consideration which color is the text and which is the background, as well as the font size and weight. 
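For reference, the WCAG 2.2 math works like this: each color gets a relative luminance, and the ratio between the two (lighter over darker) is compared against 4.5:1 for regular text or 3:1 for large text at level AA. A minimal sketch (my own illustration, not code from the talk):

```javascript
// Relative luminance per the WCAG 2.x definition: convert each
// 0-255 sRGB channel to linear light, then weight the channels.
function relativeLuminance([r, g, b]) {
  const lin = [r, g, b].map((c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2];
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white: the maximum possible ratio of 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

Note that swapping the arguments gives the same ratio, which is exactly the 2.2 limitation the speaker called out: it can't tell which color is the text and which is the background, while APCA can.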

After giving a great overview of how WCAG 2.2 contrast is calculated, he showed how some color combinations that pass WCAG 2.2 would fail APCA standards, due to font weight and size being inadequate. In many cases, I see the added nuances resulting in encouraging larger font sizes, or less usage of thin or light-weight fonts.

Like the proposed WCAG 3.0 evaluation system, APCA will have levels of conformance of bronze, silver, and gold. Since gold is maybe closer to seeking an AAA pass by WCAG 2.2 standards, silver is a good goal to have. The APCA standard is currently based on the following:

  1. The combination of foreground and background colors determines the lightness contrast (Lc) value, which is on a scale of -108 to +106.
    • Foreground and background matter: reversing the text and background colors does not mean the pair will have the same Lc value.
  2. The minimum Lc value you need is based on the use case for your text.
    • 90: level 1 - good for thin fonts
    • 75: level 2 - use as a minimum for larger text where readability is important
    • 60: level 3 - good minimum threshold for body copy
    • 45: level 4 - minimum for headlines, etc.
    • 30: level 5 - minimum for disabled text, placeholder text, copyright, etc.
  3. The minimum font-size requirement is based on the Lc value of your text and the font weight you are using.
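The five levels above boil down to a simple lookup. A sketch of that idea (the function and use-case names are my own shorthand, not the official APCA API):

```javascript
// Minimum Lc threshold per use case, taken from the five levels above.
const MIN_LC = {
  thinFonts: 90,          // level 1
  largeImportantText: 75, // level 2
  bodyCopy: 60,           // level 3
  headlines: 45,          // level 4
  placeholderText: 30,    // level 5
};

// Lc values can be negative (light text on a dark background falls on
// the negative side of the -108 to +106 scale), so compare magnitude.
function meetsMinimum(lc, useCase) {
  return Math.abs(lc) >= MIN_LC[useCase];
}

console.log(meetsMinimum(-68, "bodyCopy"));          // true  (68 >= 60)
console.log(meetsMinimum(52, "largeImportantText")); // false (52 < 75)
```

That second example is the shift in mindset: a pairing that might be fine for a headline can still fall short for smaller, readability-critical text, because the threshold moves with the use case.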

James (or Sully, his nickname) shared some crazy work he did with color hue swatches to test the APCA calculations and show how their thresholds differ from WCAG 2.2. At first glance, it appears that APCA gives us greater leeway than the WCAG 2.2 methods. However, when we take font size and weight into account, there are definitely passing 2.2 combinations that would fail APCA.

Keeping the future direction of WCAG in mind and its proposal to use APCA, I know I'll be nudging designs to steer clear of small fonts (even more than I already do), and if I am able to select fonts, I will start avoiding lighter weights for body copy. Because we want to keep our options open! And the icing on the cake? Willowtree has a Figma contrast plugin that already has an APCA beta evaluator built in. And more features for APCA evaluation will be released in the next couple of months. Can't wait!

Helping Debunk Disability Stigma

Speakers: Squirmy and Grubs (Shane and Hannah Burcaw), Interabled YouTube Couple, Disability Rights Advocates

We're finally to the end! For more in-depth insights on the closing keynote, take a look at Riley's day three axe-con article. All I have to say is that they were a charming and humorous couple, and have an adorable first-date story. Despite the many encounters with ableism they've faced (wow, some of those stories were unbelievable), they haven't let it bring them down. A perfect way to end the conference!