Closed Captions vs. Subtitles: Why the Difference Matters

The two terms are similar, but not completely interchangeable.

Most video players allow users to read the words spoken in a video as on-screen text. It’s a feature that people have come to expect from virtually all video content they interact with.

There are good reasons for this. Not everyone is able to hear every word spoken in your video content, and as much as 85% of Facebook video is watched without sound. This means that videos without captions or subtitles are losing valuable viewers and engagement.

Both captions and subtitles positively impact viewer engagement, but in slightly different ways. Although they look almost identical on screen, they solve two different problems.

What Are Captions For?

Captions are timed snippets of text that transcribe a video’s dialogue. They are primarily designed to help people who are deaf or hard of hearing. As a result, they often include non-dialogue information as well.

For instance, most professional captions will specify who is speaking, especially if the character is off-screen. This is because a deaf or hard-of-hearing viewer may not be able to distinguish between two or more speakers based on the timbre of an actor’s voice.

It’s also common for captions to describe ambient sounds, environmental noises, and even the musical score. This information is important for interpreting the scene being shown, and captioning ensures that viewers can process it without needing to hear it – even if the words “suspenseful music” aren’t quite as thrilling as the composer’s orchestral score.
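To make that concrete, here is a minimal sketch of how speaker labels and sound descriptions can travel alongside dialogue in a web player, using the browser’s standard TextTrack API. The element id, timings, and cue text are illustrative assumptions, not a reference implementation.

```typescript
// Minimal sketch: a closed-caption track built with the browser's
// TextTrack API. Cue text and timings are made up for illustration.
const video = document.getElementById("player") as HTMLVideoElement;

// addTextTrack(kind, label, language) creates an in-memory text track.
const captions = video.addTextTrack("captions", "English (CC)", "en");

// Professional captions identify off-screen speakers...
captions.addCue(new VTTCue(0, 3.5, "NARRATOR (off-screen): It began at midnight."));

// ...and describe non-dialogue audio such as the score or ambient noise.
captions.addCue(new VTTCue(3.5, 7, "[suspenseful music]"));
captions.addCue(new VTTCue(7, 9, "[door creaks open]"));

// "showing" renders the cues on top of the video.
captions.mode = "showing";
```

In practice the same cues are usually shipped as a sidecar WebVTT or SRT file rather than created in code; the point is simply that speaker labels and sound descriptions ride along with the dialogue text.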

Closed vs. Open Captions

Captions can be either closed or open. These terms describe how the captions are embedded in the video, and whether viewers can turn them off or not.

  • Closed captioning (CC) is not hard coded into the video file itself. This means that viewers can choose to display captions or turn them off.
  • Open captions are embedded into the video file permanently. You cannot turn them off without painstakingly editing them out with professional software.

Of the two, closed captions are far more common in today’s web-based media landscape. Viewers appreciate having the ability to choose whether they view captions or not based on their personal preference.
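That viewer choice is possible precisely because closed captions live in a separate text track rather than in the video frames themselves. As a rough sketch – assuming an HTML5 video whose caption track is already loaded – a player’s “CC” button needs to do little more than this:

```typescript
// Sketch of a "CC" toggle. Because closed captions are a separate text
// track, hiding them never touches the underlying video frames. Open
// (burned-in) captions offer no such switch: the text is part of the picture.
function toggleClosedCaptions(video: HTMLVideoElement): void {
  for (const track of Array.from(video.textTracks)) {
    if (track.kind === "captions") {
      track.mode = track.mode === "showing" ? "hidden" : "showing";
    }
  }
}
```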

Open captioning is the older technology. It was first broadcast in 1972 and quickly became the norm for news broadcasts. It wasn’t until later that decade that the FCC approved a television broadcast frequency to carry closed captioning signals.

Even then, television viewers needed to purchase a special decoder box to receive and process captions. It wasn’t until the 1990s that televisions included caption-decoding technology as a built-in feature.

As of 2012, the FCC requires consumer electronics to support closed captioning for Internet video. Whether you create content for Netflix, upload videos to YouTube, or deploy a custom video hosting solution, closed captions can be expected to display correctly on virtually every modern device.

Subtitles: Translations of Transcriptions

Subtitles assume that viewers can hear everything happening on-screen. Instead of addressing the deaf and hard of hearing, they address people who can hear perfectly well – but don’t understand the language being spoken.

Subtitles are translations of transcriptions – including of closed caption text. These translations allow people from other cultures and territories to understand a video as if it were spoken in their own language. They typically do not describe sounds, speakers, or ambient noises.

While closed captioning did not become commonplace until the end of the 20th century, subtitles were first patented in the 1930s. The popularity of Hollywood movies in the world’s most far-flung places ensured a high demand for quality translations – often stamped directly onto the film prints sent to those countries.

How Subtitle Technology Developed Over Time

For much of the 20th century, subtitle technology competed with foreign-language overdubbing for prominence. Dubbing – as it is often called – was a useful tool for reaching audiences in the developing world, where low literacy rates could limit the success of a subtitled film or video.

Overdubbing became less popular through the second half of the 20th century as literacy rates surged across the globe. At the same time, increasingly sophisticated subtitling made it faster, cheaper, and easier for content creators to reach new audiences without having to hire an entire cast of foreign-language voice actors.

Nevertheless, overdubbing has made a surprise comeback thanks to Netflix. The streaming giant employs studios and artists around the world to translate audio content into more than 31 languages. Yet at the same time, it continues to raise the bar on subtitle quality as well – rather than seeing the two formats as mutually exclusive.

This approach frames the value of subtitles in a new way. Netflix does not have to worry about literacy rates the way Hollywood producers did during the industry’s Golden and Silver Ages. Instead, it uses subtitles and dubbing as complementary technologies for capturing the widest possible audience.

In terms of technology, today’s subtitle tracks use the same FCC-mandated digital encoding that closed caption text uses. This is the main reason why it’s so easy to confuse the two.
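The confusion is easy to see on the web, where both formats are often delivered as identical sidecar text files and attached to the player in exactly the same way – only the track’s declared kind and language differ. The helper function and file names below are hypothetical, shown purely to illustrate that similarity.

```typescript
// Sketch: captions and subtitles attached to an HTML5 video as <track>
// elements. Apart from "kind" and language, the delivery is identical.
function attachTrack(video: HTMLVideoElement, kind: "captions" | "subtitles",
                     src: string, srclang: string, label: string): void {
  const track = document.createElement("track");
  track.kind = kind;       // "captions" = same-language text plus sound cues
  track.src = src;         // e.g. a WebVTT sidecar file
  track.srclang = srclang; // language code reported to the player
  track.label = label;     // name that appears in the player's menu
  video.appendChild(track);
}

const video = document.querySelector("video")!;
attachTrack(video, "captions", "/tracks/episode1.en.cc.vtt", "en", "English (CC)");
attachTrack(video, "subtitles", "/tracks/episode1.es.vtt", "es", "Español");
```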

When to Use Closed Captions and Subtitles

The case of Netflix underlines a useful point about the value of closed captions and subtitles. The two technologies serve slightly different purposes, and knowing which one to use – and when – is important for any content creator.

  • Closed Captioning is one of the services guaranteed to the public under the Americans with Disabilities Act (ADA) for most forms of public visual content. Content creators can bypass the need to create their own captions by using sign language interpreters.
  • Subtitles are the easiest way to present content to an audience beyond the community that understands its language. Content creators can also use overdubbing to address these audiences.

Both closed captioning and subtitles can help reach viewers watching with the sound muted. They can also help audiences decipher technical dialogue and regional dialects in languages they already know.

These technologies are starting to play a wider role outside of film and television. In education, they help students understand difficult technical language and better memorize important terms. This is especially helpful in academic disciplines like medicine, mathematics, and engineering.

Professors, instructors, and trainers who deliver content to students through video can use captions and subtitles to maximize student performance. Captions have been shown to increase attention, memorization, and comprehension of video content in more than 100 empirical studies.

Similarly, the availability of subtitles can help drive the value of video content in communities that may not otherwise have access to it. Even if the majority of viewers are proficient with the video’s primary audio language, that doesn’t always mean it’s their preferred language.

This is especially important for reaching communities who speak different languages at home and in public. Proficiency in a widely spoken language doesn’t always mean that language is the best one to use.

Introducing Subtitles for the Deaf and Hard of Hearing (SDH)

In some media environments, you may also see subtitles for the Deaf and Hard of Hearing (SDH). These are subtitles that carry the non-dialogue descriptions and speaker identification of closed captions, translated into another language. The format is meant to provide accessibility to deaf and hard-of-hearing viewers in other languages.

Video Subtitling and Captioning Outside North America

Outside the United States and Canada, the term “video subtitling” refers both to closed captioning for the deaf and hard of hearing and to subtitles for foreign-language speakers. This is the term you will find most frequently in the UK, Ireland, and other English-speaking countries. Many non-English languages also do not differentiate between captions and subtitles.

Since relatively few territories distinguish between closed captioning and subtitles, content creators should pay extra attention to the way they present content to communities outside North America. Deafness is not a uniquely American trait. Be sure to communicate the degree of accessibility your content offers to its audience.

 

Originally published on May 10th, 2021, updated on July 28th, 2022
by Austin Jesse Mitchell