6 Accessibility

Overview

This chapter focuses on accessibility metadata for physical and digital resources. It explores accessibility metadata in MARC and highlights the emerging importance of web accessibility. Examples include best practices for accessibility metadata, such as using alt text and extended description for images; audio description, captions, and transcripts for audiovisual resources; optical character recognition (OCR) for digitized text; and language information for digital resources.

Accessibility and accessibility metadata

Accessibility refers to providing flexibility (i.e., multiple modalities) to enable users, including those with disabilities, to readily access a resource. It entails several concepts, such as “flexibility, customization, universality, usability, interoperability, reusability, and navigability.”[1] Accessibility is integral for creating inclusive environments within libraries and ensuring equitable access to diverse user groups. This chapter covers contemporary practices for enhancing accessibility in library environments; it differs from other chapters in that it includes both MARC and non-MARC metadata examples. It also provides key considerations, best practices, and concrete examples of accessibility metadata through the lens of diversity, equity, and inclusion (DEI).

Accessibility metadata is information that provides details about the accessibility of resources, both digital and physical.[2] This metadata is critical for users with disabilities, including people who are blind or have low vision, those who are deaf or hard of hearing, and those with neurodiversity needs. The inclusion of accessibility metadata ensures that people can effectively search for, locate, access, and use resources in various formats. While the significance of accessibility metadata has been particularly evident in the digital realm, it is equally essential for physical materials, such as books, magazines, journals, newspapers, maps, and audiovisual materials, to enhance inclusivity across all types of library collections. As more emphasis is placed on making digital content that is born-accessible, the need for comprehensive accessibility metadata for both digital and physical resources becomes increasingly apparent.

Accessibility metadata in MARC

Metadata for physical and electronic library resources is traditionally encoded in the MARC 21 (Machine-Readable Cataloging) format. In 2018, two new fields were introduced in MARC 21 specifically for accessibility: 341 for Accessibility Content and 532 for Accessibility Note.

The MARC 341 field, which can be repeated, provides details regarding how the content of a resource can be accessed through textual, visual, auditory, and/or tactile means. Thus, it is important to employ controlled language within the field. Currently, the terms acceptable in subfield $a are restricted to textual, visual, auditory, or tactile. On the other hand, the MARC 532 field includes written details about a resource’s accessibility features, potential risks, and shortcomings, encompassing technical specifics related to accessibility. This field can be employed to provide additional information or to clarify details found in the MARC 341 field, which focuses on the content’s accessibility.[3]

Both MARC fields are essential components of bibliographic records designed to capture information about the accessibility features of a resource. In addition, these two fields are instrumental in enhancing the discoverability and usability of library collections for all users.[4]

Figures 6.1 and 6.2 show examples of MARC fields 341 and 532 that outline the accessibility features of a DVD.

Figure 6.1. MARC field 341—Accessibility Content
341 0# $a visual $b closed-caption $b audio-description $2 w3c

In Figure 6.1:

  • “$a visual” indicates that the accessibility feature pertains to the visual aspect.
  • “$b closed-caption” specifies the presence of closed captioning.
  • “$b audio-description” indicates the availability of audio descriptions.
  • “$2 w3c” identifies the source of the accessibility terms according to W3C (World Wide Web Consortium) standards.

Figure 6.2. MARC field 532—Accessibility Note
532 1# $a Closed captioning in English

In Figure 6.2, “$a” indicates the main content of the field, which is the accessibility note, and the phrase “Closed captioning in English” is the actual content of the accessibility note. It specifies that the described item has closed captioning available, and the captions are provided in the English language.

It is worth noting that accessibility information was present in MARC records before the introduction of MARC fields 341 and 532. For instance, while MARC field 546 (Language Note) was primarily used to indicate language and script, it can also include information about accessibility features related to language, such as sign language, and subtitles or captions for people who are deaf or hard of hearing.[5]

Figure 6.3. MARC field 546—Language Note
546 ## $a Open signed in American Sign Language.

In Figure 6.3, “$a” carries the content of the language note. The note “Open signed in American Sign Language” indicates that the described audiovisual resource is presented or performed in American Sign Language (ASL).

Another important MARC field related to accessibility is 655, which specifies genre/form terms associated with various types of materials.[6]

Figure 6.4. MARC field 655—Index Term-Genre/Form
655 #7 $a Video recordings for the hearing impaired $2 lcgft

In the above example,

  • “#7” indicates that the term used is taken from a specific controlled vocabulary or thesaurus. In this case, it’s referring to the Library of Congress Genre/Form Terms (LCGFT).[7]
  • “$a Video recordings for the hearing impaired” contains the actual genre/form term, which refers to video recordings specifically created for individuals with hearing impairments.
  • “$2 lcgft” specifies the source of the term, which is the LCGFT.

While the 655 field may not directly relate to accessibility, catalogers could still use it to provide additional information about the nature or format of resources, which would indirectly contribute to understanding the accessibility of those resources.

Historically, MARC records could include accessibility-related notes or statements about the resource, such as indications that a resource was available in Braille or as a talking book. These notes served as early forms of accessibility metadata in library cataloging.

To illustrate this historical accessibility information in MARC records, you can refer to examples in the Library of Congress’s National Library Service for the Blind and Print Disabled (NLS) catalog (https://nlscatalog.loc.gov/), such as records that mention “Braille” or “talking book.” Each result in the NLS catalog provides a MARC view, allowing users to examine the accessibility-related information contained within the records.[8]

Figure 6.5. Example of accessibility information for Braille edition
530 ## $a Also available for download from BARD/Web-Braille as digital braille. $c Users must register with their cooperating library.

In the above example, MARC field 530 (Additional Physical Form Available Note) records the availability of a digital braille edition. This demonstrates that libraries have a longstanding commitment to providing accessibility information in their records, and the introduction of MARC 341 and 532 in 2018 further formalizes and standardizes this practice, enhancing the accessibility of library collections for all users. In 2023, significant progress was made in this area, including the publication of a new vocabulary of accessibility properties for discoverability.[9]
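
To give a sense of how this newer vocabulary appears outside of MARC, the following is a minimal, hypothetical sketch of accessibility properties embedded in a web page using schema.org microdata. The property names (accessMode, accessibilityFeature, accessibilitySummary) and the values shown are drawn from schema.org and the W3C vocabulary cited above; the surrounding markup and the resource itself are illustrative only.

<div itemscope itemtype="https://schema.org/Book">
  <span itemprop="name">Sample e-book record</span>
  <!-- Access modes describe how the content is perceived -->
  <meta itemprop="accessMode" content="textual">
  <meta itemprop="accessMode" content="visual">
  <!-- Features that make the content more accessible -->
  <meta itemprop="accessibilityFeature" content="alternativeText">
  <meta itemprop="accessibilityFeature" content="displayTransformability">
  <!-- A human-readable summary for catalog or discovery displays -->
  <meta itemprop="accessibilitySummary" content="Images include alternative text; text can be resized and reflowed.">
</div>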

Note: Examples of MARC records related to digital content can be found in the “Accessibility metadata for resources on the web” section below.

Accessibility metadata for resources on the web

With the rapid proliferation of content available on the web, accessibility metadata for digital resources is increasingly important. In some cases, as discussed in the Legal Compliance section, web accessibility is required by law.

As with metadata for traditional library resources, metadata for resources on the web can be encoded in MARC for discovery in a library catalog. However, digital resource metadata is often recorded using non-MARC metadata schemas such as Dublin Core and Metadata Object Description Schema (MODS). This section includes both MARC and non-MARC examples of accessibility metadata.

Legal compliance

In the European Union (EU), the Web Accessibility Directive has established policies for web accessibility, particularly for public-sector websites and mobile apps. Compliance with this directive is crucial for promoting inclusivity across the EU.[10]

In the United States, the Department of Justice (DOJ) has issued guidance on web accessibility under the Americans with Disabilities Act (ADA).[11] On April 8, 2024, the DOJ finalized a rule aimed at improving accessibility for web content and mobile applications for individuals with disabilities. This new rule sets clear and consistent standards for accessibility that state and local governments must follow for their websites and mobile apps. This initiative is part of the ongoing effort to ensure that people with disabilities have equal access to essential public services, highlighting the importance of accessibility in the U.S. legal context.[12]

Additionally, many organizations, including educational institutions, have adopted policies and requirements aligned with international standards like the Web Content Accessibility Guidelines (WCAG) 2.1. These guidelines offer a comprehensive framework for achieving web accessibility and are widely recognized and adopted globally.[13]

Ensuring that digital content is accessible to all individuals, regardless of their abilities, is a critical consideration for organizations and institutions.

Best practices

The following are some best practices to promote accessibility in metadata for non-physical resources, including e-books, e-journals, digital images, digital audiobooks, and streaming videos. The availability of these features/elements depends on the platform you or your institution uses.

  1. Alt text for images
  2. Extended description for images
  3. Audio description for audiovisual content
  4. Captions and subtitles for audiovisual content
  5. Transcripts for audiovisual content
  6. Optical Character Recognition (OCR) for digitized text
  7. Language information for digital resources

Alternative text (alt text) for images

Alternative Text (alt text) is a short description that conveys the “why” of an image. Its purpose is to provide descriptions of images, graphs, and other non-text content that can be read aloud by screen readers or other assistive technologies. It aids people with vision loss, including people with low vision and color blindness.

In addition to aiding accessibility, alt text contributes to search engine optimization (SEO). Search engines use alt text to understand the content of images, which can affect the ranking of your content in search results.[14]

Best practices for alt text

Do:

  • Be specific and succinct.
  • Provide enough context.
  • Indicate the purpose of logos, symbols, and buttons in the image.
  • Know the different types of images to add an appropriate description. See Figure 6.6.
  • Check for spelling errors, because the screen reader reads the alt text aloud exactly as written.
  • Use proper grammar to enhance user experience.
  • Capitalize the first letter.
  • End with a period so that the screen reader pauses after reading alt text.

Figure 6.6. Types of images
Informative images Add to the context of a page—when removed, the context suffers. The alt text needs to have a concise description—preferably no more than 100 characters. When a lengthier description is necessary, describe the image in the content and provide a shorter alt text.
Functional images (linked images) Used to initiate actions instead of conveying information (e.g., buttons, links, and interactive elements). Alt text must convey the action that will be initiated rather than the actual image description.
Images with text Have text embedded in the image. It is best for the alt text to repeat the exact text that appears in the image.
Decorative images Serve no specific purpose (i.e., convey nonessential information) and are not meant to convey meaning. It is best to use null (empty) alt text.

Avoid:

  • Starting with “Photo of” or “Image of,” as screen readers automatically announce an image as an image.[15]
  • Repeating information.
  • Using the file name as an alt text.[16]
  • Using ampersands (&).[17]
  • Using all caps.[18]
  • Using technical terms and jargon.[19]

Having discussed best practices for writing alt text to enhance accessibility, let’s now delve into a few approaches for incorporating alt text into images within digital content.

Alt text in HTML

In HTML, alt text is added to an image using the “alt” attribute on the “<img>” tag. See Figure 6.7 for an example.

Figure 6.7. Alt text in HTML
<img src="example.jpg" alt="A smiling young woman holding a book in her hands, standing in front of a bookshelf filled with books.">

In this example:

  • img tag: This is the HTML tag for embedding images.
  • src attribute: Specifies the source (file path or URL) of the image.
  • alt attribute: Provides alternative text for the image.

The alt text is added directly to the image element in the HTML code. When the image is displayed on the web page, the alt text will be read by screen readers, providing an image description for users with vision loss and low vision.
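
The same mechanism covers the other image types listed in Figure 6.6. The following hypothetical sketch shows a decorative image with empty (null) alt text, which screen readers skip, and a functional (linked) image whose alt text names the action rather than describing the picture; the file names and link target are placeholders.

<!-- Decorative image: empty alt text so screen readers ignore it -->
<img src="divider.png" alt="">

<!-- Functional image: the alt text conveys the action the link performs -->
<a href="search.html">
  <img src="magnifying-glass.png" alt="Search the catalog">
</a>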

Alt text in content management systems (CMS)

Adding alt text is a straightforward process on most CMS platforms. When you integrate an image into a webpage, there is typically a designated field where you can input your description. Some CMS platforms go a step further, offering additional resources or tips on creating effective alt text.[20]

Figure 6.8. Alt text in a content management system
Title Smiling Woman with Books
Alt Text A smiling young woman holding a book in her hands, standing in front of a bookshelf filled with books.

In Figure 6.8, the alt text is included as a specific field within the CMS, ensuring that the image is accompanied by a descriptive textual representation of its content.

Additionally, it is worth noting that some platforms may extract alt text from metadata embedded in the image file itself. In such cases, if the alt text field is left blank, the platform may use alternative sources, such as the filename, image URL, or other embedded metadata, to provide a descriptive label for the image.[21]

Extended description for images

Extended descriptions, also referred to as long descriptions, are used to provide a detailed explanation of complex images (e.g., infographic images, charts, maps), videos, or other visual content.[22] Extended descriptions may include details about the content, context, and other relevant information that would not otherwise be available to people who are blind or have low vision.

Unlike alt text, extended descriptions do not have a character limit, so they can be used to supplement the information provided in the alt text. See Figure 6.9.

Figure 6.9. Alt text vs. extended description
Alt text “Figure A”
Extended description “Figure A shows that …”

Extended descriptions must be formatted using headings to facilitate organization and logical flow of information. Creating tables from graphs and charts is an alternative way of providing extended descriptions.[23]

Alt text for images must be provided even when an extended description is included, where the alt text serves as a summary of information in the extended description.[24]
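
On a web page, one common way to pair a short alt text with an extended description is the aria-describedby attribute, which points from the image to the element that holds the longer text. The following is a minimal sketch; the chart, the identifier circ-desc, and the wording are hypothetical.

<img src="circulation-chart.png"
     alt="Bar chart of monthly circulation; extended description below."
     aria-describedby="circ-desc">

<div id="circ-desc">
  <!-- Extended description: organized prose that could also be replaced by a data table -->
  <p>The chart compares checkouts for January through June. Checkouts rise
     steadily from about 1,200 in January to about 2,050 in June, with the
     largest increase between April and May.</p>
</div>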

Alt text and extended description in the IPTC metadata standard

In a rapidly evolving digital landscape, accessibility and inclusivity have taken center stage. A recent development in this pursuit is the International Press Telecommunications Council (IPTC)’s Photo Metadata Standard, which introduces two crucial elements.

On October 27, 2021, the IPTC unveiled an updated Photo Metadata Standard featuring two essential properties: Alt Text (Accessibility) and Extended Description (Accessibility). These accessibility features will streamline the process for various platforms and software to meet WCAG standards and provide images that are accessible to all users. Integrating or embedding inclusive image descriptions into the photo’s metadata will enable alt text and extended descriptions to accompany the image wherever it appears on the internet, in books, or within electronic publication (EPUB) documents.[25]

Audio description for audiovisual resources

Audio description, also known as described video, is a form of narration that describes and provides additional information about visual details or onscreen movements, such as facial expressions, that are not conveyed through dialogue or sound effects.[26] It is mainly intended to enable people who are blind or have low vision to experience and enjoy the content as sighted individuals do.[27] Audio descriptions can also be beneficial for auditory learners or people with autism.[28] Figure 6.10 lists different types of audio description.

Figure 6.10. Types of audio description
Standard audio description Describes the visual elements of media in a concise and objective manner. As this description is integrated into the natural audio breaks, no additional time is added to the original version to accommodate the audio description.[29]
Extended audio description Provides additional details and contextual information beyond the standard audio description about the visual elements. It involves creating a version with more time to include detailed descriptions. In the extended version, the narrator’s voice interrupts the natural audio breaks to provide descriptions of the on-screen action.[30]
Built-in audio description Involves a speaker or narrator incorporating the audio description of visual elements and significant onscreen action directly into their script or talking points during the presentation or recording. This is considered a cost-effective approach as it eliminates adding a separate audio track (i.e., an extended audio description) or interrupting the audio during natural breaks (i.e., a standard audio description).[31]

As mentioned at the beginning of this chapter, MARC 21 field 341 is used to provide information about accessibility features in library resources. This field is crucial for making library resources accessible to individuals with disabilities, as it allows catalogers and users to easily identify and comprehend the accessibility features available in audiovisual materials. These features may include closed captions, subtitles, sign language, audio descriptions, and other forms of content that enhance the experience for individuals with sensory disabilities.[32] Figure 6.11 shows a MARC example using field 341 and a non-MARC example using MODS.

Figure 6.11. MARC and non-MARC metadata indicating the availability of audio descriptions for a resource
MARC 341 0# $a visual $d audioDescription $2 w3c
MODS <accessCondition>Audio Descriptions Available</accessCondition>

In the MARC example in Figure 6.11, subfield $a indicates the main content related to the sense of sight. In this case, it specifies a visual aspect of the resource. Subfield $d provides additional details or qualifiers related to the main content; it indicates that the resource has an audio description. Subfield $2 specifies the source of information. In the non-MARC example, the MODS <accessCondition> element indicates a specific condition related to access, namely that audio descriptions are available for the described resource.
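
For web-delivered video, HTML5 also offers a related, text-based mechanism: a track element with kind="descriptions" supplies timed description cues (in WebVTT format) that a media player or assistive technology can voice during natural pauses. The sketch below is illustrative only and is not a substitute for a professionally narrated audio description track; the file names are placeholders.

<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- Timed, text-based descriptions of on-screen action -->
  <track kind="descriptions" src="lecture-descriptions.vtt" srclang="en" label="English descriptions">
</video>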

Captions for audiovisual resources

Captions provide a text-based representation of the spoken and non-spoken audio elements that are necessary to comprehend the content. They are primarily intended for people who are deaf and hard of hearing and those who prefer written information rather than listening to audio. Captions are timed to match the audio and are typically displayed in the media player when users enable them.[33]

While the terms “captions” and “subtitles” are often used interchangeably, there is a subtle difference between the two. Captions refer to transcriptions in the same language as the spoken audio (e.g., English to English). Subtitles involve the translation of the spoken audio into a different language (e.g., English to Spanish).[34] Figure 6.12 shows an example of MARC field 341 and a non-MARC element from Dublin Core.

Figure 6.12. MARC and non-MARC metadata indicating the availability of captions for a resource
MARC 341 0# $a auditory $b captions $2 w3c
Dublin Core <dc:description>This item was captioned by Rev.com in conformance with WCAG 2.1 AA accessibility guidelines.</dc:description>

In the MARC example, $a refers to the mode required to access the content of the resource, $b refers to textual assistive features and adaptations to access the content of the resource, and $2 refers to the identification of the source of terms in subfield $b. The Dublin Core example conveys that the described item (the resource associated with this metadata record) has been captioned by a service called Rev.com, and the captioning was done in accordance with the Web Content Accessibility Guidelines 2.1 AA.
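
In web-delivered video, this distinction is visible in the markup itself: HTML5 track elements use kind="captions" for same-language captions and kind="subtitles" for translations. The following is a brief sketch with placeholder file names.

<video controls>
  <source src="orientation.mp4" type="video/mp4">
  <!-- Captions: English text for English audio -->
  <track kind="captions" src="orientation-en.vtt" srclang="en" label="English captions">
  <!-- Subtitles: Spanish translation of the English audio -->
  <track kind="subtitles" src="orientation-es.vtt" srclang="es" label="Español">
</video>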

Transcripts for audiovisual resources

A transcript is a textual version of the spoken and non-spoken content in an audio or video file. It is used to provide accessibility to people who are deaf or hard of hearing and people with blindness or low vision. A transcript can also be helpful for people who prefer reading rather than listening to the content or those who may not have access to headphones or speakers.[35] There are three common types of transcripts: basic, descriptive, and interactive.[36] See Figure 6.13 for a summary of these types.

Figure 6.13. Types of transcripts
Basic transcript Written form of all the audio information, including both spoken and non-spoken content, that is necessary for comprehending the material.
Descriptive transcript Extension of the basic transcript that provides a more detailed description of the important audio (like laughter) and visual information (such as someone entering the room) that are relevant to the content—the text equivalent to the extended audio description. Descriptive transcripts are important for creating a more inclusive and accessible experience for a wider audience.
Interactive transcript Highlights the text as it is spoken in the video or audio file. This feature is built into the media player and allows users to select specific phrases in the transcript and jump to the corresponding point in the video. To enable this feature, the media player relies on the captions file.

While MARC does not have a specific field exclusively for transcripts, you can use various MARC fields to include information about the availability of transcripts in your catalog records. Figure 6.14 shows some examples of MARC fields you might use to describe the presence of transcripts and their language.

Figure 6.14. MARC and non-MARC metadata regarding transcripts
MARC 500 ## $a Transcripts available for each episode.

546 ## $a Transcripts in English and Spanish.

Dublin Core <dc:description>The Presidential Transcripts: The Complete Transcripts of the Nixon Tapes-the Most Extraordinary and Revealing White House Document ever Made Public. Other views of same object available.</dc:description>[37]

The 500 General Note field can be used to convey that transcripts are available for each episode of the described resource. Field 500 is used to include additional information that doesn’t fit into other specific fields in the MARC record, making the catalog record more informative for library staff and users.[38] The 546 Language Note field conveys that the transcripts of the described resource are available in both English and Spanish. Field 546 is used to provide information about the language or languages used in the resource, helping catalog users understand the linguistic aspects of the resource, which can be important for individuals who are looking for materials in specific languages or need access to transcripts in a language they understand.

Figure 6.14 also shows how transcript information can be recorded in non-MARC metadata. The Dublin Core metadata in this example uses the <dc:description> element to convey a detailed and informative textual description of a resource, specifically a collection of transcripts related to the Nixon Tapes. The description highlights the significance of the transcripts and hints at the availability of alternative views or presentations of the same content.

Note: Remember, the choice of fields and the specific information you include may depend on your cataloging practices and the nature of the resource. The goal is to ensure that users searching your catalog can easily identify resources that offer transcripts to enhance accessibility for people who are deaf or hard of hearing.
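
Beyond the catalog record, a transcript can also be made discoverable on the web page that hosts the media by publishing or linking it there and declaring its availability in embedded metadata. The following is a hypothetical sketch; the accessibilityFeature value "transcript" comes from the schema.org accessibility vocabulary, while the file names and link are placeholders.

<div itemscope itemtype="https://schema.org/VideoObject">
  <meta itemprop="accessibilityFeature" content="transcript">
  <video controls src="oral-history.mp4"></video>
  <!-- A visible link so all users can find the transcript -->
  <p><a href="oral-history-transcript.html">Read the transcript of this interview</a></p>
</div>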

Optical character recognition (OCR) for digitized text

OCR is a technology that automatically recognizes and transcribes text from images and scanned documents into machine-readable text.[39] Hence, OCR is considered an efficient and valuable accessibility metadata feature as it allows people with blindness and low vision who may rely on screen readers to access the content.[40]

OCR algorithms are designed to analyze and interpret text from various sources, including handwritten text. However, recognizing handwritten text accurately can be more difficult than recognizing printed text because of the variations in handwriting styles and the lack of consistency.[41]

Metadata information regarding OCR availability and its quality enhances the accessibility and usability of digital content to a wider range of users.[42]

As with transcripts, MARC does not have a specific field solely dedicated to OCR. You can use various MARC fields to indicate that OCR has been applied to a resource, especially when describing digitized content. Figure 6.15 shows some MARC fields that can be relevant when discussing OCR in MARC records.

Figure 6.15. MARC and non-MARC metadata regarding OCR
MARC 500 ## $a Text generated through OCR; some errors may be present.

538 ## $a Text converted to machine-readable format using OCR; PDF format.

Dublin Core <dc:description>Text converted to machine-readable format using OCR </dc:description>

In this example, field 500 General Note is used to communicate that the text within the described resource was created through OCR, a technology that converts printed or handwritten text into machine-readable text. Since OCR is not error-proof, it is acknowledged that there might be some errors or inaccuracies in the converted text. This general note helps library staff and users understand the potential limitations of the text content in the resource.

In addition, field 538 System Details Note can be used to convey technical information, including accessibility information, about the resource. The example in Figure 6.15 mentions that the text content was converted into a machine-readable format using OCR and that the resource is in PDF format. This information is useful for understanding how the resource is presented and accessed in terms of its technical features.[43]

The Dublin Core metadata in Figure 6.15 also provides information about the accessibility of the resource by indicating that the text has undergone OCR conversion. Similarly, metadata can be used to indicate that a handwritten item has been transcribed into machine-readable text. For example, Villanova University has inserted a value of “Transcribed” in the Format field of its digital library to indicate documents that include machine-readable transcriptions. This use of the Dublin Core <dc:format> element to record accessibility information contributes to a more user-friendly experience. Users can refine their searches efficiently using facets, making their interactions with the digital library more productive and satisfying.[44]

Language information for digital resources

Language metadata refers to information about the language used in the digital content. This information helps users to find content in their preferred language. It can be used to ensure that screen readers and other assistive technologies are configured to read the content in the appropriate language.
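
In HTML, this is handled with the lang attribute: a document-level lang value tells assistive technologies which language (and, where supported, which synthesizer voice and pronunciation rules) to use, and an element-level lang value flags passages in another language. The following is a minimal sketch with invented sample text.

<html lang="en">
  <body>
    <p>The library's motto appears on its seal:</p>
    <!-- lang="la" declares that this passage is in Latin -->
    <p lang="la">Scientia potentia est.</p>
  </body>
</html>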

MARC field 041 Language Code is a versatile tool used to specify the languages associated with the content of a resource. Depending on the specific subfield codes used, it can convey various types of language-related information, including the original language, translations, subtitles, captions, and more.

MARC adheres to specific language code standards, typically using three-letter language codes, such as “eng” for English. MARC field 041 serves as a valuable tool for precisely identifying the languages used in a resource, facilitating the organization and access of multilingual materials.[45] Figure 6.16 shows an example of MARC field 041, where $a refers to the language code of the text/sound track or separate title. This example indicates that the item is available in English, French, and Swahili languages.

Figure 6.16. MARC and non-MARC metadata indicating languages of the resource
MARC 041 ## $a eng $a fre $a swa
Dublin Core <dc:language>eng</dc:language>

In the non-MARC example in Figure 6.16, the Dublin Core language element specifies that the language of the described resource is English.

Conclusion

In summary, by incorporating the relevant metadata features, we not only enhance accessibility but also become advocates for social justice and equity. Besides improving accessibility, a meticulous approach to metadata serves as a powerful tool to promote inclusivity and foster an equitable and just information environment for diverse communities. A librarian’s commitment to accessibility metadata is not just a technical necessity but an active step toward creating a more inclusive and fair society where every individual’s access to knowledge is acknowledged, respected, and facilitated through the conscientious curation of metadata.

Resources

  1. Paola Ingavélez-Guerra, Salvador Otón-Tortosa, José Hilera-González, and Mary Sánchez-Gordón, “The Use of Accessibility Metadata in E-Learning Environments: A Systematic Literature Review,” Universal Access in the Information Society 22 (2023): 445–461, https://doi.org/10.1007/s10209-021-00851-x.
  2. Hans Beerens, “Accessibility Metadata from a User’s Perspective,” (webinar, Accessible Books Consortium, November 23, 2022), https://www.wipo.int/meetings/en/doc_details.jsp?doc_id=593072.
  3. “Creating and Editing Accessibility Metadata MARC Tags for Library Staff,” Accessible Libraries, last modified October 17, 2023, https://accessiblelibraries.ca/resources/accessibility-metadata-for-library-staff/.
  4. “341 - Accessibility Content (R),” MARC 21 Format for Bibliographic Data, Library of Congress, last modified June 7, 2024, https://www.loc.gov/marc/bibliographic/bd341.html.
  5. “546 - Language Note (R),” MARC 21 Bibliographic, last modified July 7, 2022, https://www.loc.gov/marc/bibliographic/bd546.html.
  6. “655 - Index Term-Genre/Form (R),” MARC 21 Bibliographic, last modified July 7, 2022, https://www.loc.gov/marc/bibliographic/bd655.html.
  7. “Video Recordings for the Hearing Impaired,” Library of Congress Genre/Form Terms (LCGFT), Library of Congress, last modified January 23, 2019, https://id.loc.gov/authorities/genreForms/gf2011026725.html.
  8. “The Cat Who Smelled a Rat—MARC Tags,” NLS Catalog, National Library Service for the Blind and Print Disabled, Library of Congress, accessed November 9, 2023, https://nlscatalog.loc.gov/vwebv/staffView?searchId=943&recPointer=2&recCount=25&bibId=21898.
  9. Charles LaPierre, Madeleine Rothberg, and Matt Garrish, eds., Schema.Org Accessibility Properties for Discoverability Vocabulary, World Wide Web Consortium (W3C), July 18, 2023, https://w3c.github.io/cg-reports/a11y-discov-vocab/CG-FINAL-vocabulary-20230718/.
  10. “Web Accessibility,” Shaping Europe's Digital Future, European Commission, last modified October 10, 2023, https://digital-strategy.ec.europa.eu/en/policies/web-accessibility.
  11. ADA National Network, Guidelines for Writing about People With Disabilities, 2018, https://adata.org/factsheet/ADANN-writing.
  12. Office of Public Affairs, U.S. Department of Justice, “Justice Department to Publish Final Rule to Strengthen Web and Mobile App Access for People with Disabilities,” April 8, 2024, https://www.justice.gov/opa/pr/justice-department-publish-final-rule-strengthen-web-and-mobile-app-access-people.
  13. Web Content Accessibility Guidelines (WCAG) 2.1, ed. Andrew Kirkpatrick, Joshue O Connor, Alastair Campbell, and Michael Cooper (W3C, September 21, 2023), https://www.w3.org/TR/WCAG21/.
  14. “Accessibility: Image Alt Text Best Practices,” Siteimprove Help Center, last modified February 3, 2023, https://help.siteimprove.com/support/solutions/articles/80000863904-accessibility-image-alt-text-best-practices.
  15. “Accessibility,” Siteimprove Help Center.
  16. “Image, Video, and Audio Accessibility,” The Ultimate Guide to Accessible Web Design, AudioEye, accessed May 8, 2023, https://www.audioeye.com/accessible-web-design/video-audio-image/.
  17. Debbie Emmitt, “Ampersand (&) or And?” Debbie Emmitt (blog), September 8, 2022, https://www.debbie-emmitt.com/ampersand-or-and/.
  18. “Capitalisation,” A11Y-101, accessed May 11, 2023, https://a11y-101.com/design/capitalisation.
  19. “Simple Language,” A11Y-101, accessed May 11, 2023, https://a11y-101.com/design/simple-language.
  20. “Alt Text for Accessibility,” Level Access, May 4, 2023, https://www.levelaccess.com/blog/alt-text-for-accessibility/.
  21. “Alt Text for Accessibility,” Level Access.
  22. Mick Orlosky, “Alt Text: New Accessibility Metadata Fields in Photo Mechanic,” Camera Bits (blog), November 24, 2022, https://home.camerabits.com/2022/11/23/alt-text-new-accessibility-metadata-fields-in-photo-mechanic/.
  23. “What Is Long Description?” Accessibility by Design, Colorado State University, accessed April 30, 2023, https://www.chhs.colostate.edu/accessibility/best-practices-how-tos/long-description/.
  24. Caroline Desrosiers, “Image Description” (presentation, Accessibility Virtual Conference, NISO, March 29, 2023), https://niso.org/events/accessibility.
  25. Brendan Quinn, “IPTC Announces New Properties in Photo Metadata to Make Images More Accessible,” IPTC (blog), October 27, 2021, https://iptc.org/news/iptc-announces-new-properties-in-photo-metadata-to-make-images-more-accessible/.
  26. “Our Services,” Described Video Canada, accessed April 16, 2024, https://describedvideocanada.com/services/.
  27. “Accessibility Metadata Sets,” Accessibility Metadata Project, accessed May 8, 2023, http://www.a11ymetadata.org/resources/accessibility-metadata-sets/.
  28. “Audio Descriptions Accessibility,” Universal Design Center, California State University, Northridge, https://www.csun.edu/universal-design-center/audio-descriptions-accessibility.
  29. “Important Audio Description Tips: Techniques to Make Visuals Heard,” Minnesota IT Services, January 23, 2023, https://mn.gov/mnit/media/blog/?id=38-560867.
  30. “Important Audio Description Tips,” Minnesota IT Services.
  31. “Audio Descriptions Accessibility,” Universal Design Center.
  32. “341 - Accessibility Content (R),” MARC 21 Bibliographic.
  33. Shawn Lawton Henry, ed., “Captions/Subtitles,” W3C Web Accessibility Initiative (WAI), last modified July 14, 2022, https://www.w3.org/WAI/media/av/captions/.
  34. “accessibilityFeature,” DAISY Accessible Publishing Knowledge Base, accessed May 8, 2023, http://kb.daisy.org/publishing/docs/metadata/schema.org/accessibilityFeature.html#captions.
  35. “Captions, Transcripts, and Audio Descriptions,” WebAIM, last modified July 1, 2020, https://webaim.org/techniques/captions/#ad.
  36. Shawn Lawton Henry, ed., “Transcripts,” WAI, last modified April 12, 2021, https://www.w3.org/WAI/media/av/transcripts/.
  37. “The Presidential Transcripts,” Chez Baldwin Writer’s House Digital Collection, University of Michigan Library Digital Collections, accessed September 25, 2023, https://quod.lib.umich.edu/cgi/i/image/image-idx?id=S-BALDWIN1IC-X-269%5DJB00597.
  38. “500 - General Note (R),” MARC 21 Bibliographic, last modified July 7, 2022, https://www.loc.gov/marc/bibliographic/bd500.html.
  39. “What Is OCR (Optical Character Recognition)?” Content Management, TechTarget, last modified November 2022, https://www.techtarget.com/searchcontentmanagement/definition/OCR-optical-character-recognition.
  40. “Metadata Creation,” ScienceDirect, accessed May 10, 2023, https://www.sciencedirect.com/topics/computer-science/metadata-creation; and Peya Mowar, Tanuja Ganu, and Saikat Guha, “Towards Optimizing OCR for Accessibility,” (extended abstract, Accessibility, Vision, and Autonomy Meet, CVPR 2022, New Orleans, LA, 2022), https://doi.org/10.48550/arXiv.2206.10254.
  41. Jamshed Memon, Maira Sami, Rizwan Ahmed Khan, and Mueen Uddin, “Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR),” IEEE Access 8 (2020): 142642–68, https://doi.org/10.1109/ACCESS.2020.3012542.
  42. “Accessibility,” HathiTrust, accessed May 10, 2023, https://www.hathitrust.org/accessibility#accessibility-of-books.
  43. “538 - System Details Note (R),” MARC 21 Bibliographic, last modified April 9, 2008, https://www.loc.gov/marc/bibliographic/bd538.html.
  44. Rebecca Oviedo, “Transcribing History in Villanova University's Digital Library,” Falvey Library, Villanova University, August 2020, https://exhibits.library.villanova.edu/mini-exhibits/transcribing-history.
  45. “041 - Language Code (R),” MARC 21 Bibliographic, last modified June 21, 2023, https://www.loc.gov/marc/bibliographic/bd041.html.

License


The DEI Metadata Handbook Copyright © 2024 by H. E. Wintermute, Heather M. Campbell, Christopher S. Dieckman, Nausicaa L. Rose, and Hema Thulsidhos is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.