The LCA Spotlights are posted in 3 places: the LinkedIn community, as part of the LCA Newsletter and here, on the LCA Spotlight website. (If you haven't yet signed up for the newsletter, you can do so from the Home page. Remember, they're delivered twice a month on Tuesdays.)
If you're only just joining now, you can find all the past spotlights here, listed in reverse chronological order.
Spotlight 19: Headings (1.3.1, 2.4.2, 2.4.6 & 2.4.10)
It’s time for another spotlight, this time on headings! This is not too complex, so let’s start!
According to WCAG 2.4.2 Page Titled (Level A), each page of the learning content should have a descriptive title so users can understand what the page is about. A descriptive title listed in the menu also helps users navigate to specific content when looking for it. It also benefits users of assistive technology like screen readers, which announce the title before reading the rest of the content on the page.
WCAG 2.4.6 Headings and Labels (AA) simply states that “headings and labels describe topic or purpose”. This means that the page/slide and section headings should give a clear indication of the topic or purpose of the content, allowing users to “skim” the headings and quickly locate the specific content they need. For example, the headings “Part 1”, “Part 2”, etc. are not descriptive enough; instead, an article or training about accessibility might use headings such as “Intro to accessibility” and “What is WCAG?”.
While the Headings and Labels criterion (WCAG 2.4.6, AA) only applies if headings and section headings are used, the advanced guideline WCAG 2.4.10 Section Headings (AAA) actively encourages the use of headings and section headings. This indicates how the content is organized and makes it easier to navigate and comprehend.
Finally, we must mention WCAG 1.3.1 Info and Relationships (A) as well. This criterion is not specifically about headings, but about “programmatically determined” elements, such as tables, lists, links, input fields, etc. Simply put, it is about making sure that assistive technologies can recognize the nature of an element from the code, not just from visual cues. For example, a heading may look like a heading to the eye because it's bolded or larger in size, but unless it's marked up properly as a heading, screen readers will not be able to communicate that to the user.
Marking up a heading simply means choosing one of the heading formatting options authoring tools offer, usually from a dropdown menu. There are six heading levels altogether, and while the number and levels of headings you need depend on the complexity of the content, it's best to keep them limited to important signposts and keep them logical. For example, it's best practice to have only one Level 1 heading (<h1>) per page and then a few big sections (<h2>). In addition, <h3> shouldn't come straight after <h1> and should not precede <h2>; in other words, don't skip levels. Equally, heading formatting should not be used for content that is not meant to be a heading, regardless of how appealing the visual style of that heading format might be.
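As a minimal sketch of what this looks like in HTML (the page and heading titles are invented for the example, and the <title> element would normally sit in the page's <head>):

```html
<!-- A descriptive page title (WCAG 2.4.2) plus a logical heading
     structure with no skipped levels (WCAG 1.3.1 & 2.4.6) -->
<title>Intro to accessibility – Module 1</title>

<h1>Intro to accessibility</h1>

<h2>What is WCAG?</h2>
<h3>Conformance levels</h3>
<h3>The POUR principles</h3>

<h2>Why accessibility matters</h2>

<!-- Avoid: bold, enlarged body text posing as a heading; without real
     heading markup, screen readers won't announce it as one -->
<p><strong>Why accessibility matters</strong></p>
```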
We gathered some resources for you for this topic.
Read more about the different heading levels: Heading structure and accessibility (bighack.org)
Read the full criteria for WCAG 2.4.6 with examples: 2.4.6 Headings and labels (w3.org)
Follow this link from W3C for more details on how to comply with WCAG 2.4.6: Providing descriptive headings (w3.org)
Read the full criteria for WCAG 2.4.10: 2.4.10 Section headings (w3.org)
Read the full criteria for WCAG 1.3.1: 1.3.1 Info and relationships (w3.org)
Read W3C’s definition of programmatically determined: Definition of programmatically determined (w3.org)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
What challenges do you anticipate with these guidelines?
How do you decide whether a heading or section heading is descriptive enough?
Where do you see the main benefits of these guidelines?
When you post in the community, use the hashtag #LCASpotlightHeadings
Spotlight 18: Accessibility in Storyline 360
In this spotlight, we put the guidelines into practice with Articulate 360.
Elizabeth Pawlicki from Articulate shares how to create accessible e-learning courses with Storyline 360. She explains the built-in accessibility features, the best practices for designing inclusive content and the future updates on the roadmap.
Learn what you should start doing right now and discover the specific features of Storyline 360 that support accessibility and the best practices for leveraging them in your courses.
Check out the conversation on our YouTube channel: Accessibility in Storyline 360
Read the transcript: Accessibility in Storyline 360 transcript
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Have you explored all the accessibility features in Storyline 360?
What steps do you take to ensure your courses are accessible in Storyline 360?
What other questions do you have about accessibility in Storyline 360?
When you post in the community, use the hashtag #LCASpotlightStoryline360Accessibility
Spotlight 17: Audio content (1.4.2 & 1.4.7)
Imagine that you are reviewing a training, and when you select "Next", a video starts playing automatically. You frantically start looking for a pause or stop button but find none. Now, imagine the same situation but with someone using a screen reader, trying to find a pause or stop button while the screen reader and video sound play simultaneously, interfering with each other.
WCAG 1.4.2: Audio Control (A) states that the user should be able to stop or pause any audio that plays automatically for more than 3 seconds. One easy way to comply is to provide a pause or stop button for any multimedia with sound that plays automatically for more than 3 seconds. However, it's generally considered a better solution to let users start multimedia containing audio themselves after landing on the content page, rather than only giving them the option to stop it. In short, it's best not to have multimedia with audio auto-start, but to start it at the user's control.
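As a rough HTML sketch of the difference (the file name is a placeholder):

```html
<!-- Avoid: media with sound that starts playing by itself -->
<video src="welcome.mp4" autoplay></video>

<!-- Better: the learner starts the media themselves and always has
     controls to pause or stop it (WCAG 1.4.2) -->
<video src="welcome.mp4" controls></video>
```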
Speaking of audio content, there is an advanced criterion that relates to the relationship between foreground and background audio within a track.
WCAG 1.4.7: Low or No Background Audio (AAA) states that any pre-recorded audio that contains speech in the foreground should not contain background audio, or users should be able to turn the background audio off. Any background audio, if present, should be at least 20 dB lower than the foreground speech.
We gathered some resources for you for this topic.
Read about why autoplay is not good for accessibility: Why Autoplay Is an Accessibility No-No (boia.org)
Learn more about how background noise can affect your learners: How Background Noise Affects Accessibility (boia.org)
Follow this link to read the full WCAG criteria of 1.4.2: 1.4.2 Audio control (w3.org)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Have you experienced media that auto-plays and why do you think several trainings have media that auto-plays?
Have you created training that required foreground and background audio? How did you ensure that it was accessible?
Does your authoring tool provide a way to let learners control autoplay?
When you post in the community, use the hashtag #LCASpotlightAudioContent
Spotlight 16: Images of text (1.4.5 & 1.4.9)
In this spotlight, let's discuss images of text and how they may impact learners accessing your content.
WCAG 1.4.5: Images of Text (AA) states that if an author can use text to achieve the same visual effect, they should present the information as text rather than as an image. Images of text can be images of styled headings, quotations, logos, letters with important content, diagrams with text, or infographics with text. Not complying with this criterion and using images of text (text presented inside an image) can impact people with low vision, as images can become blurry when enlarged. It can also impact people using smaller screens to access the content, as these images don't scale to the screen the way native text does. In addition, unless alt text is used, screen reader users will not be able to access the written content.
One exception to this rule is when the user can customize the text in the image, by changing the font or the colour, for example. Another exception is when the text in the image is essential, such as with logos, charts and graphs. Note, however, that it is best practice to provide alt text with images at all times.
The advanced version of this criterion, 1.4.9: Images of Text (No Exception) (AAA), states that images of text should be used ONLY for decorative purposes or when they are considered essential, like logos, charts and diagrams.
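As a small illustrative sketch (the quotation and styling are made up), real text with some styling usually achieves the same visual effect as an exported image while staying scalable and readable by assistive technology:

```html
<!-- Avoid: a picture of styled text; it blurs when zoomed and is only
     available to screen readers through its alt text -->
<img src="quote.png" alt="Good design is accessible design.">

<!-- Better: real, styled text scales, reflows and can be read directly (WCAG 1.4.5) -->
<blockquote style="font-size: 1.5rem; font-style: italic; color: #1a1a1a;">
  Good design is accessible design.
</blockquote>
```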
We’ve gathered some resources for you for this topic.
Review this link to learn why images of text can be a barrier to different types of users: Images of text | Web Accessibility (duke.edu)
Learn why it’s important to use text in this article: Why Is It Important for Accessibility to Use Actual Text Instead of Images of Text? (boia.org)
Follow this link to read the full WCAG criteria of 1.4.5: 1.4.5 Images of text (w3.org)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
In what instances would it be better to use images of text?
What methods do you use to make images of text accessible?
What would you say to a stakeholder who insists on using images of text?
When you post in the community, use the hashtag #LCASpotlightImagesOfText
Spotlight 15: Contrast (1.4.3, 1.4.6 & 1.4.11)
In Spotlight 13, we talked about complementing colours. In this spotlight, we continue talking about colours, but from a different perspective.
Visibility is important for any user, but especially for people with low vision or colour vision deficiency. There are 3 WCAG guidelines outlining the requirements to create enough contrast between the different elements on the screen.
Before we dive in, let's quickly define the colour contrast ratio. The colour contrast ratio measures the difference in brightness (relative luminance) between two colours. For example, white text on a white background would be invisible, giving a ratio of 1:1, while the contrast ratio between black and white is 21:1.
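For reference, WCAG defines the contrast ratio with the formula below, where L1 is the relative luminance of the lighter colour and L2 that of the darker colour (both range from 0 for black to 1 for white):

\[ \text{contrast ratio} = \frac{L_1 + 0.05}{L_2 + 0.05} \]

Identical colours give (L + 0.05) / (L + 0.05) = 1:1, and pure black on pure white gives (1 + 0.05) / (0 + 0.05) = 21:1, matching the examples above.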
WCAG 1.4.3 Contrast (Minimum) (AA) criterion states that the contrast between the text and the background has to be at least 4.5:1. This applies to any text on the screen, including hover states, regardless of whether it’s in the body text, the navigation, or a button.
However, there are several things to consider here. For text that is 18 points (24 pixels) in size or larger, or bolded text of 14 points (approximately 18.66 pixels) and above, a colour contrast ratio of 3:1 suffices. That is because the size or the bolding makes it easier to read. In addition, text in logos and text that doesn't need to be legible is exempt. For example, disabled buttons, text that incidentally appears in images but has no significance, or text that is part of a decorative image doesn't have to meet the contrast ratio.
To comply at a higher level, the extension of this criterion, WCAG 1.4.6 Contrast (Enhanced) (AAA) requires the contrast ratio to be 7:1 for standard text and 4.5:1 for large or bolded text.
Another contrast-related criterion is 1.4.11 Non-text Contrast (AA), which extends the requirement to elements on the screen beyond text, such as user interface components and graphical objects. So, for example, it applies to icons, graphics, links, buttons, input items, visual focus indicators, maps, charts, and their states. The contrast between these elements and any adjacent colours has to be at least 3:1. The phrase “adjacent colours” means that the colour contrast requirement applies not only between the non-text element and the background, but also to anything next to it or on top of it. For example, a segment of a pie chart has to have 3:1 contrast against the background, the adjacent segments, and any text on top of the segment. The same is true for buttons and links within a paragraph, to mention just a few.
Finally, while it’s not a WCAG requirement, it’s important to mention that many people with dyslexia prefer non-white backgrounds.
We gathered some resources for you for this topic.
Follow this link to access WebAIM’s Contrast Checker: WebAIM contrast checker
This article explains the contrast-related guidelines in depth and with examples: WebAIM's article on contrast
Follow this link for the British Dyslexia Association’s Dyslexia-friendly guide: British Dyslexia Association’s Dyslexia-friendly guide
Read W3C’s definition of user interface components: Definition of interface components (w3.org)
Follow this link for a collection of contrast checker tools: Color Contrast Analyzers & WCAG Color Contrast Generators (digitala11y.com)
This tool can tell you which combinations of your brand colours can work together (limited to text size) : Accessible colour matrix
This tool tells you which combinations of your brand colours can work together and labels them: Contrast grid
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Does your authoring tool make it easy to check the colour contrast within the tool itself or do you have to use external contrast checkers?
What’s your preferred tool for checking colour contrast?
Can you give specific examples of when text is exempt from the criteria?
Can you give examples of when measuring the colour contrast may prove difficult?
When you post in the community, use the hashtag #LCASpotlightContrast
Spotlight 14: Writing accessible copy
This spotlight is all about writing accessible copy. To help with this episode, we asked Kayleen Holt, Instructional Designer Consultant at Scissortail Creative Services (soon to be Inclusive LXD) to give us an overview of the different things that can help ensure that people can understand and perceive written text.
Check out the conversation on our YouTube channel: Writing accessible copy (YouTube video)
Or read the transcript: Writing accessible copy transcript
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Do you have anything to add to the things discussed in the video?
How do you ensure that topics about complex subject matters are still accessible to non-native speakers?
Do you have a solution for making puns accessible in copy, not just alt text?
Do you have any tips for checking whether your copy is accessible?
When you post in the community, use the hashtag #LCASpotlightAccessibleCopy
Spotlight 13: Complementing colours (1.4.1)
As learning designers, we probably all love colour. However, people with low vision, colour blindness, or even elderly people with reduced eyesight might have trouble accessing information where colours carry meaning.
WCAG 1.4.1 (Use of colour - level A) is about making sure that colours alone do not convey meaning and other visual means are added to ensure that all sighted users can still perceive the information.
For example, you may have seen error messages where red means incorrect and green means correct. To make them more accessible for people who don't perceive the colours well, you can add icons and text cues as well. Another example is an Excel spreadsheet where the text or background colour of certain cells is changed to highlight information or trends. Because many people might not even notice that different colours are used, you could add other visual cues, such as different fonts or patterned backgrounds.
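As a hypothetical sketch of the error-message example (the wording, field, and styling are made up), colour can be paired with an icon and a text cue so the meaning doesn't depend on colour alone:

```html
<!-- Relies on colour alone: only the red border signals that something is wrong -->
<input type="email" style="border: 2px solid #c00;">

<!-- Better: colour plus an icon and explicit text (WCAG 1.4.1) -->
<input type="email" aria-describedby="email-error" style="border: 2px solid #c00;">
<p id="email-error">&#10060; Error: please enter a valid email address.</p>
```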
We gathered some resources for you for this topic.
Follow this link to read the full WCAG criteria of 1.4.1: 1.4.1: Use of Colour (w3.org)
Check out this article with very clear examples of improving accessibility by using icons and patterns: Accessibility considerations for colours
To experience how people with colour blindness might perceive colours, use this simulator: Colour blindness simulator (Coblis)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Do you have more common examples of conveying meaning by colour differences in everyday life (online or offline)?
What should we consider when using icons to deliver information?
Do you have a QA process to ensure that your information doesn’t rely only on colour cues? What else do you recommend?
When you post in the community, use the hashtag #LCASpotlightComplementingColours
Spotlight 12: Flashes (2.3.1 & 2.3.2)
Let's talk about flashy content.
There are two WCAG standards about safety, aimed at preventing people with photosensitive epilepsy from having seizures. People with attention deficit disorders can also benefit from limiting flashing content.
According to WCAG 2.3.1: Three Flashes or Below Threshold (Level A), you shouldn’t include any content that flashes more than three times a second unless the flashes fall below a defined threshold. However, WCAG 2.3.2: Three Flashes (Level AAA) does not allow more than three flashes a second, even if these are below the threshold.
In general, it is safer to avoid flashes whenever possible. To comply, avoid text and animations that create a flickering effect, and avoid blinking effects alongside text. And, as with animations and other effects, give people the option to pause, stop or hide this kind of content.
We gathered some resources for you for this topic.
Follow this link to read the full WCAG criteria of 2.3.1, including what the threshold is: 2.3.1: Three Flashes or Below Threshold (w3.org)
See a bad example of a flashy Start button in an eLearning game: Bad example of flashes (eLaHub)
The Photosensitive Epilepsy Analysis Tool (PEAT) is a free tool to identify seizure risks in web content and software: PEAT tool
Follow this link to read the full WCAG criteria of 2.3.2: 2.3.2: Three Flashes (w3.org)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
Why do you think people create flashing content, and how could you achieve the same effect without flashes?
What else other than flashes could cause seizures?
If you use animated objects in your content, which criteria would you follow to ensure that your design is still accessible?
Do you have an example where you have revamped flashy content?
When you post in the community, use the hashtag #LCASpotlightFlashes
Spotlight 11: Using links (2.4.4 & 2.4.9)
In this spotlight, we’ll talk about using links.
WCAG 2.4.4 Link Purpose (In Context) - level A and 2.4.9 Link Purpose (Link Only) - level AAA both relate to making links meaningful to users. The difference is that with 2.4.4, users should be able to work out where the link will take them from the context surrounding the link, while according to 2.4.9, they should be able to do that from the link text alone.
Links are usually visually distinct from standard text. Screen readers mimic that functionality by allowing users to pull up a list of links and navigate through that list quickly. However, if the list has 3 “Click here” and 4 “Find out more” links, users have to investigate the surrounding content to find out what each link specifically refers to. If they can work out the meaning from the context surrounding the link, the learning content complies with the level A standard. However, in general, it's considered best practice to comply with the level AAA guideline and make the purpose or destination of the link clear from the link text alone.
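In HTML terms, the difference looks something like this (the URL is a placeholder):

```html
<!-- Level A at best: the destination is only clear from the surrounding sentence -->
<p>Our accessibility policy has been updated. <a href="/policy">Click here</a>.</p>

<!-- Level AAA: the link text alone describes the destination (WCAG 2.4.9) -->
<p>Read the <a href="/policy">updated accessibility policy</a>.</p>
```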
Although not specifically mentioned in the WCAG guidelines, here are some more considerations when using links:
Use hyperlinked text instead of the raw URL wherever possible. Screen readers read out the full URL, which can be time-consuming, and a chain of random numbers and letters is meaningless to the listener.
Try to keep the link text short and the same as, or similar to, the title of the destination page.
Because links opening in a new tab or window can be disorienting for some learners using assistive technology, and it also breaks back-button navigation, it's recommended that links open in the same window. If users want to open links in a new window, they can do so using their mouse or keyboard functions.
According to WCAG 1.4.11 Non-text Contrast (AA), the colour contrast between links and surrounding elements should be at least 3:1. (In most cases, the surrounding element is the surrounding text, but it may also refer to the background colour.)
Finally, "click" excludes users who do not use a mouse. Instead, "select" or "follow" are better options.
We gathered some resources for you for this topic.
Follow this link to read the full WCAG criteria of 2.4.4: 2.4.4: Link purpose in context (w3.org)
Follow this link to read the full WCAG criteria of 2.4.9: 2.4.9: Link purpose link only (w3.org)
Also, this article talks about some additional considerations related to using links: Links (UNC School of Medicine)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
How can we make links more accessible if the authoring tool doesn’t allow hyperlinking or when referencing links in printed format?
What other things should we consider when adding links to learning content?
According to the WCAG wording, 2.4.4 and 2.4.9 should be applied “except where the purpose of the link would be ambiguous to users in general”. What could we do to avoid ambiguity in general?
When you post in the community, use the hashtag #LCASpotlightLinks
Spotlight 10: No keyboard trap (WCAG 2.1.2)
In this spotlight, let’s look at interface components and navigation.
Users with visual or physical disabilities may navigate and access content using only the keys on the keyboard.
According to success criterion 2.1.2: No Keyboard Trap (Level A), keyboard-only users should be able to enter, interact with, and exit any content that receives focus using only standard keyboard keys such as Tab, Shift+Tab, or the arrow keys. A keyboard trap occurs when learners can navigate to an item but then cannot move away from it. If the user needs non-standard keys to exit the content, those instructions must be provided to the learner to avoid a keyboard trap.
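Keyboard traps are usually created by scripting rather than by the markup itself. As a hypothetical illustration of the anti-pattern (not taken from any real authoring tool), a widget that cancels the Tab key without moving focus anywhere leaves keyboard users stuck:

```html
<div id="custom-widget" tabindex="0">Custom widget</div>
<script>
  // Anti-pattern: swallowing Tab without managing focus traps
  // keyboard-only users inside this element (WCAG 2.1.2)
  document.getElementById('custom-widget').addEventListener('keydown', (event) => {
    if (event.key === 'Tab') {
      event.preventDefault(); // focus can never leave the widget
    }
  });
</script>
```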
Because the way interactive elements work is determined by the authoring tool, it's unusual to come across a keyboard trap in eLearning. However, it's always advisable to test the content created with your authoring tool to ensure there are no traps.
It’s also possible to inadvertently create a trap for learners through certain design choices, such as a navigation button that disappears or a drag and drop activity that is required to be completed to move ahead in the training.
We gathered some resources for you for this topic.
Check out this article about why keyboard traps are frustrating for users: Why Keyboard Traps Are One of the Most Frustrating Accessibility Issues (boia.org)
Experience some examples of being stuck in a keyboard trap: Web Accessibility Criteria - Keyboard Traps (California State University)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
What common interactions might be more prone to keyboard traps?
What kind of QA process do you follow to ensure that the user is not stuck in a keyboard trap?
Does your authoring tool provide a way to avoid keyboard traps?
Thanks to Susi Miller and Diane Elkins for contributing to this post.
When you post in the community, use the hashtag #LCASpotlightKeyboardTrap
Spotlight 9: AAA guidelines related to multimedia (WCAG 1.2.6-1.2.9)
In the previous two spotlights, we looked at the A and AA requirements related to making time-based media, such as audio and video content, accessible (WCAG 1.2.1-1.2.5).
In this spotlight, let’s review the remaining requirements (1.2.6, 1.2.7, 1.2.8 & 1.2.9) that are required for AAA conformance. These are somewhat more advanced requirements and therefore are harder to meet.
According to success criterion 1.2.6 (AAA), sign language interpretation should be provided for all pre-recorded audio in synchronized media.
People who are deaf or hard of hearing may use sign language as their first language and may have limited reading ability, which makes it harder to read and comprehend captions in synchronized media. For them, therefore, sign language is faster to interpret and conveys intonation and emotion better than captions. However, note that sign language differs from country to country, and not everyone with a hearing impairment understands sign language.
To comply with the criterion, you could add sign language interpretation to the video that is presented to all users, or provide a link to a video that has sign language interpretation.
Success criterion 1.2.7 (AAA) talks about Extended audio descriptions and is an extension of 1.2.5 (AA) Standard audio description.
According to WCAG 1.2.5 (Level AA), pre-recorded videos that have sound should have an audio description. However, sometimes the content doesn't have long enough natural pauses for the audio narration to be added. In this case, to comply with AAA, you may need to pause the video to allow the extended audio description to be added. The video then can resume once the description ends.
Success criterion 1.2.8 (AAA) states that an alternative in text form (most commonly a transcript) should be provided for any audio-visual content. This is important so that people who cannot see the visuals or hear the sound still have access to the same information. The text transcript should include full descriptions of the audio and visuals, including visual context, the actions and expressions of actors, and any other key visual material. In addition, all audio content, such as spoken dialogue, laughter and off-screen dialogue, as well as any on-screen text, should be included in the transcript.
Success criterion 1.2.9 (AAA) states that a text alternative should be provided for information conveyed by live audio such as meetings, conferences, podcasts, etc. This can be achieved through a real-time captioning service, or through a transcript if a prepared script exists, for example, a pre-written script for a live press release.
We gathered some resources for you for this topic.
Learn more about sign language through this W3C article: Sign Languages (W3C)
Review this resource to learn about Extended audio descriptions: What is Extended Audio Description, and When Do You Need It? (thetechblock.com)
Check out an example of extended audio descriptions: Extended Audio Description (YouTube)
Have you wondered if your video needs standard or extended audio descriptions? This resource will tell you more: Do You Need Standard or Extended Audio Description? (3playmedia.com)
A resource from W3C that tells you how to meet WCAG 1.2.8: Understanding Success Criterion 1.2.8: Media Alternative (Prerecorded) (w3.org)
Read about the benefits of captions and transcripts: Transcripts on the Web (uiAccess)
Here’s how you can enable live captions in Teams Meetings: Use live captions in a Teams meeting (microsoft.com)
Here’s how you can enable captions and live transcriptions in Zoom Meetings: Managing closed captioning and live transcription (Zoom support)
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
What could be some barriers to providing sign language interpretation in our content and how can we overcome them?
What type of content might benefit from extended audio descriptions?
Does your authoring tool provide user-friendly features to add a full transcript to media alternatives?
Do you find captions or transcripts helpful during online meetings? How might the benefits of transcripts extend once the meetings have ended?
When you post in the community, use the hashtag #LCASpotlightAdvancedMultimedia
Spotlight 8: Transcripts and audio descriptions (WCAG 1.2.1, 1.2.3 & 1.2.5)
In spotlights 7-9, we look at the 9 WCAG guidelines (1.2.1 - 1.2.9) relating to making time-based media, such as audio and video content, accessible. In the previous spotlight, we covered the A and AA requirements related to using captions (WCAG 1.2.2 & 1.2.4). This week, we bring the A and AA requirements related to using transcripts and audio descriptions (WCAG 1.2.1, 1.2.3 & 1.2.5). Next week, we’ll look at all the AAA requirements related to time-based media together (WCAG 1.2.6-1.2.9).
So, in this spotlight, let’s have a look at the A and AA requirements related to transcripts and audio descriptions.
According to WCAG 1.2.1 (level A), with pre-recorded audio-only content, such as podcasts, a transcript should be provided. With videos that have no sound, but have on-screen information or non-decorative visuals, either a transcript or an audio description track should be provided.
When it comes to providing audio description and/or a media alternative such as a transcript for videos with sound, there are different WCAG requirements depending on the conformance level.
The level A guideline (1.2.3) requires that for pre-recorded videos that have sound, either an audio description or a media alternative is provided. However, to conform at level AA (1.2.5), an audio description must be provided. That means that by providing an audio description, you conform at level AA, whereas if you only provide a transcript, you only conform at level A. Exceptions are videos where there are no visuals used to enhance the spoken content or where those visuals are explained in the narration.
In general, captions, audio descriptions, and transcripts are all similar because they provide an alternative output method for the content in the video. The main difference is that captions give sighted users information about the dialogues and the sounds in the video that they need to understand the content. On the other hand, audio descriptions give non-visual users information about the visuals, such as the setting and any action in the video. This typically involves an additional voiceover layer explaining the visuals in the pauses between the existing sound. Finally, transcripts are text versions of the two combined. Transcripts should include both the necessary auditory and visual information users need to understand the content without hearing or seeing it.
There are 3 main ways to provide audio descriptions with videos: using the video including the audio description as the default option for all users, providing a separate video or a link for the video that includes the audio description, or using a plug-in or the built-in audio description track option available in certain authoring tools. Note, however, that not all authoring or streaming tools have the last option.
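For the plug-in/built-in track option, HTML's <track> element has a descriptions kind, although player support varies, so treat this as a sketch rather than a universal solution (the file names are placeholders):

```html
<video src="course-intro.mp4" controls>
  <!-- A timed text track of audio descriptions; some players can voice
       this track, but support is inconsistent across browsers and tools -->
  <track kind="descriptions" src="course-intro-descriptions.vtt"
         srclang="en" label="Audio descriptions">
</video>
```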
For transcripts, there is no set requirement for how they should be structured or provided; the main requirement is that they're easy to find. Note, though, that if the video has any interaction, such as taking the learner to a web page, then the transcript should also provide that functionality.
Finally, we’d like to mention that there are additional AAA guidelines that relate to transcripts and audio descriptions. We’ll cover these in the next spotlight.
We gathered some resources for you for this topic.
Follow this link for a brief, yet comprehensive guide on audio descriptions: Why Audio Descriptions Need to Be a Priority for Your Content
Check out this quick guide on how to write audio descriptions: Audio Description Tip Sheet
For detailed information on audio descriptions, follow the link to this W3 page: Description of Visual Information
This article also gives comprehensive and easy-to-read details about audio description: The Ultimate Guide to Audio Description
For detailed information on transcripts, follow the link to this W3 page: Transcripts
Follow this link to read reviews of transcription services for eLearning: 13 Best Transcription Software for Audio and Video to Text
For more tips on writing audio descriptions with examples, check out this article: Tips for Writing Descriptive Scripts
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
What method do you use to provide audio descriptions to your videos in your learning content?
How do you structure your transcripts?
What features are available in your authoring tool to facilitate using transcripts and audio descriptions?
When you post in the community, use the hashtag #LCASpotlightTranscripts
Spotlight 7: Captions (WCAG 1.2.2 & 1.2.4)
In the next few spotlights, we’ll be looking at the 9 WCAG guidelines (1.2.1 - 1.2.9) relating to making time-based media, such as audio and video content, accessible. Note that these guidelines solely focus on making the content within the audio or video accessible and these don’t include standards relating to how audio and video content are to be used (eg 1.4.2).
Because this is an extensive topic with 9 guidelines, we’ve divided the topic into 3 spotlights:
The A and AA requirements related to using captions (WCAG 1.2.2 & 1.2.4) - It’s the topic of this spotlight.
The A and AA requirements related to using audio descriptions and transcripts (WCAG 1.2.1, 1.2.3 & 1.2.5) - It’s the topic of spotlight 8.
The AAA requirements related to time-based media (WCAG 1.2.6-1.2.9) - We’ll cover these in spotlight 9.
So, in this spotlight, let’s have a look at the A and AA requirements related to captions.
Providing captions is an A (basic) requirement for pre-recorded videos that have sound (1.2.2) and it’s an AA (intermediate) requirement for live content such as a webinar (1.2.4).
Captions are similar to subtitles in that they should include dialogue synchronized to the spoken words and the action happening in the video. However, the difference is that, unlike subtitles, captions also need to include any information necessary to fully understand the video without sound. For that reason, they should also indicate who is speaking and include non-speech information like changes in voice or background sounds.
To comply, either open or closed captions can be used. The difference is that open captioning burns the captions into the video, so there's no way to turn them off, whereas closed captions (CC) can be switched off. In general, it's best to use closed captioning because it gives viewers the flexibility to switch the captions off if they find them distracting. If, however, the video platform doesn't allow caption files to be added or edited, it's better to use open captioning than no captions at all.
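On the web, closed captions are typically attached as a separate timed-text file that the player can toggle, as in this minimal sketch (the file names are placeholders):

```html
<video src="webinar-recording.mp4" controls>
  <!-- Closed captions: learners can switch them on or off in the player (WCAG 1.2.2) -->
  <track kind="captions" src="webinar-recording.vtt"
         srclang="en" label="English captions" default>
</video>
```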
There are two ways to add subtitles to video content: manually and automatically. Manual captioning works great if you have a copy of the script used in the video that you can copy and paste in. This might not always be the case, though, so automatic subtitling can help generate the script quickly. Note that I used the word “subtitles” and not “captions”. That is because, in most cases, scripts and auto-generated “captions” only capture the speech, and you still need to add any additional notes, such as speaker names and non-speech information. In addition, if you use automatic captioning, make sure to check the accuracy of the content and fix any mistakes, as auto-generated captions often include misheard words and irregular punctuation.
Note that while it’s best practice to use captioning whenever possible, video content that serves as an alternative to text (and is labelled as such) and two-way video conferencing are exempt from these guidelines. Also note that there are some additional AAA guidelines that recommend providing captions or an alternative for live audio-only content such as podcasts, and providing sign language interpretation for pre-recorded video content. But we’ll cover these in spotlight 9.
We gathered some resources for you for this topic.
Meryl Evans provides a lot of content about captions. Follow this link to check out her 10 rules of good captioning accompanied with short example videos: 10 rules of good captioning
For more content from Meryl, check out the 6 most common captioning mistakes with examples: Common caption mistakes
This article lists 10 free transcribing and video captioning tools: 10 free transcribing and captioning tools
If you’re using YouTube, follow this link to find out more about the captioning options and processes: Captioning in YouTube
If you’re creating captions in Storyline, this page explains the process: Captioning in Storyline 360
If you’re using Camtasia, follow this link to learn about how to add captions manually and automatically (Note that automatic captioning is only available in the Windows version): Captioning in Camtasia
If you’re using Zoom for live workshops etc., this article details how you can enable automatic captioning: Captioning in Zoom
If you’re using Google Meet for live workshops etc., this article details how you can enable automatic captioning (Note that captions are not captured in recordings): Captioning in Google Meet
If you need to create subtitle files to manually add to videos, this article explains the different subtitle file formats: Overview of subtitle formats
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
What’s your preferred method of creating captions, creating them manually, or using a transcribing or automatic captioning tool?
What’s your preferred tool for adding and editing captions and creating live captions?
What limitations should we consider when adding captions either to prerecorded or live video content?
What features are available in your authoring tool to facilitate using captions?
When you post in the community, use the hashtag #LCASpotlightCaptions
Spotlight 6: Meaningful sequence and focus order (WCAG 1.3.2, 2.4.3 & 2.4.7)
This week, the highlight is meaningful sequence and focus order.
WCAG 1.3.2 Meaningful Sequence (level A) and 2.4.3 Focus Order (level A) both relate to logical order in content. They are similar and aim to ensure that assistive technologies don't access content in a confusing way. The difference is mainly that 1.3.2 applies to all content and is essential for screen readers so that they read out the content in a logical and meaningful way, whereas 2.4.3 refers to interactive items and is also important for keyboard users who don't necessarily use screen readers but need to activate interactive items such as links, video players or search bars. Focus order ensures that, for example, keyboard users can tab through, access and input content in a meaningful way.
To comply, make sure to present content in a logical and intuitive order. In addition, use the authoring tool's focus order function to align the focus order with the reading order as much as possible.
One more guideline that relates to focus order is WCAG 2.4.7 Focus Visible (level AA). It states that when users navigate with the keyboard, the focus indicator should be visible. This is often a coloured border around, or a change in the appearance of, the element currently selected by the keyboard. It is a function that is usually governed by the authoring tool, so there's usually nothing extra you need to do.
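As a brief sketch, keyboard focus follows the order of the interactive elements in the source unless it is overridden, and the focus indicator can be styled rather than removed (the element labels and colour below are invented):

```html
<!-- Focus moves in source order: Play, then Transcript, then Next (WCAG 2.4.3) -->
<button>Play video</button>
<a href="transcript.html">Read the transcript</a>
<button>Next slide</button>

<style>
  /* Keep the indicator clearly visible (WCAG 2.4.7); never remove the
     outline without providing an alternative focus style */
  button:focus-visible,
  a:focus-visible {
    outline: 3px solid #005a9c;
  }
</style>
```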
We gathered some resources for you for this topic.
Follow the link here to read a summary of 1.3.2: Understanding SC 1.3.2 Meaningful sequence
Follow the link here to read a summary of 2.4.3: Understanding SC 2.4.3 Focus order
Follow the link here to read a summary of 2.4.7: Understanding SC 2.4.7 Focus visible
Then why not try the screen reader you used in the last spotlight (Spotlight 5: Screen readers) and navigate through a website to see how it applies the meaningful sequence guideline? You can also test the focus order without a screen reader: use the Tab key to move between the interactive items and activate them with Enter or the space bar.
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
How was your experience navigating through the interactive items on the website with only your keyboard?
How do you set the focus order in your authoring tool?
How do you verify the right focus order in your content?
When you post in the community, use the hashtag #LCASpotlightOrder
Spotlight 5: Screen readers
This week, the highlight is on screen readers. This assistive technology converts on-screen text into spoken words or braille and also allows users to navigate the content.
We gathered some resources for you for this topic.
The best way to understand how screen readers work is to watch one in action.
Follow the link here to see how Marc Sutton, a blind person, uses a screen reader to navigate content: Screen Reader Demo for Digital Accessibility
Have you never used a screen reader before? This week is your chance to try one!
As a Mac user, you can try VoiceOver. The following link includes instructions on how to use VoiceOver as well as the link to access the VoiceOver app: Using VoiceOver to Evaluate Web Accessibility
If you're a Windows user, try NVDA. The following link includes instructions about how to use NVDA as well as the link to access it: Using NVDA to Evaluate Web Accessibility (Note that NVDA works best with Chrome and Firefox.)
It’s generally advised that you test your learning content with a screen reader, and it's even better if you can test with someone with lived experience of a disability who is an expert at using the software. According to Susi Miller, having an experienced tester can help avoid what she calls “screen reader rabbit holes”, where the screen reader flags issues that are caused by, for example, incompatibility with certain browsers but would not, in fact, affect the learning experience.
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
How was your experience using a screen reader for the first time?
Besides people with visual impairment, in which other contexts/situations can people benefit from using a screen reader?
What do you take into account in developing your content to be screen reader-friendly? What would you like to improve?
When you post in the community, use the hashtag #LCASpotlightScreenReader
Spotlight 4: Time limits (WCAG 2.2.1 & 2.2.3)
This week we'll look at the WCAG guideline concerning time limits.
According to WCAG 2.2.1 - Timing adjustable (Level A), learners shouldn’t be given a time limit unless they can extend it or turn it off. The advanced version of this guideline (WCAG 2.2.3 - No timing) goes beyond that and recommends not setting time limits even if the learners can control them.
There are many reasons behind these guidelines, the most obvious being that some people need more time to process what's on the screen. People who use assistive technologies, for example, may need more time to get a feel for the navigation of the page, or to listen to the screen reader read out the instructions, text and anything else on the screen. Also, people with cognitive impairments or people with English as a second language may need more time to read the information on the screen. But it can also be about situational impairments: imagine that you're interrupted by the doorbell and miss some crucial information or lose precious time on the task; you're equally disadvantaged. Finally, time constraints can make learning experiences stressful and can cause anxiety and moments of mind blank.
While many people may assume that this only relates to timed activities or assessments, it applies to anything with a time limit. Other examples where people can miss out on content if this guideline is not followed are slides that automatically advance or update, and animated content that automatically disappears from the screen.
To comply at the AAA level, avoid imposing time limits completely. If that's not possible, to comply with the basic level of this guideline, allow learners either to turn off the time limit, or to adjust or extend it to at least 10 times the default length.
We’ve gathered some resources for you.
Follow the link here to read a summary of the guideline related to timing: Understanding SC 2.2.1 Timing Adjustable
Follow this link to read the full WCAG criteria of 2.2.3: WCAG 2.2.3 overview
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
How do you handle conversations with stakeholders who insist on having time limits for quizzes or learners who like the competitive element of time restraints?
According to the WCAG guideline, time limits are acceptable if the learner can turn them off or adjust them. How does your authoring tool allow you to do that?
Have you seen examples of eLearning courses that used timed activities and complied with this guideline?
When you post in the community, use the hashtag #LCASpotlightTiming
Spotlight 3: Alt text (WCAG 1.1.1)
This week the spotlight is on alt text.
According to WCAG 1.1.1 - Non-text Content (Level A), non-text content such as images, infographics or diagrams, and functional items such as custom buttons and logos, should have alternative text added to them so that screen readers can recognise them and read them out.
Screen readers basically convert text into speech. That is easy with written text such as headings and body text; however, it gets more complicated with images. Alt text, which is short for alternative text, aims to ensure that people who use screen readers get the full experience and aren't left confused by incomplete explanations such as "The next image shows how much crime in the UK has increased in the last decade". Alt text can also benefit people with slow internet connections, as images that cannot load are replaced with the alternative text.
Most authoring tools have an alt text field. If for some reason you cannot add alt text to a specific format, an alternative is to add the alt text to the caption or the body text near the image, or even create a separate link to the alt text.
The first rule of alt text is knowing when to use it. If the caption already gives a full description, or if the image is purely decorative and the learners would not miss anything if it were removed, it's best not to use alt text. In these cases, you should use an empty (also called null) alternative text by adding alt="" in the alt text field. That prevents assistive technologies from announcing the content and wasting people's time.
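In HTML, the two cases look like this (the file names and the chart description are invented for illustration):

```html
<!-- Informative image: the alt text conveys what the learner would otherwise miss -->
<img src="uk-crime-chart.png"
     alt="Line chart showing recorded crime in the UK rising over the last decade.">

<!-- Decorative image: an empty (null) alt attribute tells screen readers to skip it -->
<img src="divider-swirl.png" alt="">
```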
The next important thing to consider is how to write effective alt text, and it's not easy. As they say, a picture paints a thousand words, but neither do you have that many characters in the alt text field, nor do you want to waste the learners' time. So, you need to consider what purpose the image has and therefore what elements of that image are important to the learners. In fact, one image can have many different alt text descriptions based on the audience, the context etc.
Finally, make sure your alt text is accurate but succinct. Use standard punctuation, but avoid special characters that screen readers might read out at length. Also, avoid redundant details such as "image of", as screen readers already announce that the element is an image along with the alt text, so you'd just be repeating it.
Note that when talking about alt text, people usually mean images, screenshots, maps, diagrams, charts, etc. However, according to the guideline, it also applies to interactive items such as audio, video, or buttons.
We’ve gathered some resources for you.
Check out the following link for more information about alt text, including examples: eLaHub eLearning accessibility article on alt tags
Also, read more about alternative text on the WebAIm website: WebAIM alternative text
This link allows you to practise your alt-tag writing skills: Image Description Practice Form
Finally, this link shows how to use alt text with screenshots related to software training: Accessibility with screen captures
Get Involved: Come to the LCA Spotlight LinkedIn group and join the conversation.
How do you add or hide alt text in your authoring tool?
What is your method to test that all applicable non-text content has alternative text?
What are your best practices for writing alternative text for complex images like charts, graphs, screen captures and diagrams?
How do you decide which images are decorative and don’t need alternative text?
When you post in the community, use the hashtag #LCASpotlightAltText
Spotlight 2: Assistive technology
This week, the highlight is on assistive technologies.
Assistive technology is an umbrella term covering tools and services that can increase, maintain, or improve the functional capabilities of persons with disabilities. According to the World Health Organization, more than 1 billion people need assistive products (with that number increasing to 2 billion by 2030). (Source: WHO fact sheet)
Assistive technologies include a wide range of devices that help people who have difficulties carrying out certain tasks to be more functional and independent. These can be everyday tools such as glasses, or simple devices such as automatic pill dispensers for people who have difficulty remembering what pills to take and when; or they can be sophisticated software such as head trackers for people who cannot use a mouse or keyboard and also have difficulties with oral communication. Basically, assistive technology can support people with speaking, typing, writing, remembering, pointing, seeing, hearing, learning, walking, and many other things.
In the context of online learning content, the most relevant assistive technologies are those that relate to hearing, seeing, and using a mouse or a keyboard. Examples of these include screen readers, screen magnifiers, braille displays, hearing aids, switches, mouth sticks, modified keyboards, speech recognition software etc.
We gathered some resources for you for this topic.
The following YouTube playlist introduces people who rely on different kinds of assistive technology: Assistive Technology in Action
If you're already familiar with the topic or when you're finished, come to the LCA Spotlight LinkedIn group and join the conversation.
Have you worked with someone who uses assistive technology?
Which assistive technologies is the content you create compatible with?
How would you raise awareness about assistive technology users with stakeholders?
When you post in the community, use the hashtag #LCASpotlightTechnology
Spotlight 1: Overview of the WCAG guidelines
For the first spotlight, we thought we'd start with an overview of the WCAG guidelines.
Created by the World Wide Web Consortium (W3C), the Web Content Accessibility Guidelines (known as WCAG) are an internationally recognised set of recommendations for improving content accessibility on the World Wide Web (i.e. the internet).
Even though not all organisations are required to be WCAG compliant, following these guidelines helps create content that is accessible to a wide audience.
The WCAG guidelines are grouped into 4 categories known as the POUR principles. POUR stands for:
Perceivable
Operable
Understandable
Robust
All the guidelines are grouped into one of these 4 categories and each guideline is further categorized according to its conformance level:
Level A is the most basic
Level AA is intermediate
Level AAA is desired
In the latest version of WCAG (WCAG 2.1), there are 78 standards altogether. While that may sound like a lot, some of these are related to each other, and also not all of these are applicable to learning content creation.
In the spotlights, we'll go over the relevant guidelines. In this spotlight, let's just get a brief overview of them in general.
We gathered some resources for you for this topic.
Follow the link to watch a 15-minute overview of WCAG guidelines: Web Content Accessibility Guidelines (WCAG) 2.1 and 2.0 Explained
For a more visual overview, check out this map: WCAG 2.1 map by Intopia (Licence link, Original source)
If you're already familiar with the topic, or when you're finished, come to the LCA Spotlight LinkedIn group and join the conversation.
Which WCAG guideline are you the most comfortable with?
Which guideline do you find the most difficult to get right?
Which WCAG guidelines does your authoring tool not support?
When you post in the community, use the hashtag #LCASpotlightWCAG