
An analysis of eye movements as people read subtitles? That sounds like scientific experimentation, but in fact, researchers are using eye tracking technology to study viewing patterns and make subtitling more effective. This article provides an overview of how researchers use this information to better understand what viewers focus on while watching a screen, including text, images, and other visual cues, and how they apply that knowledge to improve subtitle placement, formatting, timing and more!
First, let’s explore how eye tracking is actually measured. An eye tracking assessment is usually done in a lab, under controlled conditions, with specialized equipment, and in several stages. As with any other type of scientific assessment, the goal is to collect data that is reliable and actionable (with immediate practical applications, such as for subtitling).
The first stage consists of informing the people whose eye movements will be measured about the purpose of the assessment, including how the eye tracking procedure works and any potential risks or discomfort. Because these assessments are typically treated as research on human subjects, participants must provide informed consent before the procedure begins.
The eye tracking equipment used for these assessments can be a head-mounted device, a remote eye tracker mounted on a monitor, or a virtual reality headset with integrated eye tracking. The device is connected to a computer.
Controlled conditions in a lab environment include things like adjusting lighting and blocking distractions to ensure accurate data collection. People whose gaze is being tracked may be standing, sitting or lying down, depending on the specific experimental setup.
Once the assessment is in progress, the following happens:
People are asked to follow a series of visual targets or stimuli on a screen or within their field of view
The eye tracking device records their eye movements and uses this data to establish a baseline (a reference value) for subsequent measurements
The eye tracking device is calibrated as needed to ensure that it accurately maps the person's gaze to the visual content (stimuli) presented
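To make the calibration step in the list above more concrete, here is a minimal Python sketch, with hypothetical coordinates, of how a calibration check might be scored: the average distance between where the tracker reports the gaze and where the known targets actually sit on screen. The 30-pixel tolerance is an illustrative assumption, not a standard.

```python
import math

# Hypothetical calibration targets (known screen positions, in pixels)
targets = [(160, 120), (960, 120), (1760, 120),
           (160, 960), (960, 960), (1760, 960)]

# Gaze positions the tracker reported while the person fixated each target
reported = [(152, 131), (955, 118), (1771, 127),
            (170, 948), (963, 966), (1748, 955)]

def calibration_error(targets, reported):
    """Mean Euclidean distance (pixels) between targets and reported gaze."""
    errors = [math.dist(t, r) for t, r in zip(targets, reported)]
    return sum(errors) / len(errors)

error = calibration_error(targets, reported)
print(f"Mean calibration error: {error:.1f} px")
if error > 30:  # assumed tolerance; real setups often use angular error instead
    print("Recalibration needed")
```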
The types of visual content participants are exposed to include images, videos, websites, or real-world scenarios, depending on the goal of the research. For subtitling research, the eye tracker continuously records reading patterns, pupil dilation, and other relevant metrics from which researchers infer things like cognitive load.
In some cases, researchers may ask people to perform specific tasks while their eye movements are monitored, like interacting with a user interface. During those tasks, various eye movement metrics are calculated, including fixation duration (the time a person’s gaze is sustained on a focal point), saccade amplitude (the distance covered by a rapid eye movement that shifts the gaze from one focal point to another), pupil dilation, and blink rate. These metrics provide insights into cognitive load, emotional responses, and decision-making processes.
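As an illustration, here is a small Python sketch, using made-up fixation data, of how two of these metrics, fixation duration and saccade amplitude, could be computed. Real pipelines first detect fixations from raw gaze samples (with algorithms such as I-DT or I-VT); that step is assumed to have already happened here.

```python
import math

# Hypothetical fixations: (x, y) in pixels, start and end times in ms
fixations = [
    {"x": 300, "y": 900, "start": 0,   "end": 220},
    {"x": 420, "y": 905, "start": 260, "end": 430},
    {"x": 610, "y": 898, "start": 470, "end": 700},
]

# Fixation duration: how long the gaze is sustained on each focal point
durations = [f["end"] - f["start"] for f in fixations]
print("Mean fixation duration:", sum(durations) / len(durations), "ms")

# Saccade amplitude: distance covered by each jump between fixations
amplitudes = [
    math.dist((a["x"], a["y"]), (b["x"], b["y"]))
    for a, b in zip(fixations, fixations[1:])
]
print("Mean saccade amplitude:", sum(amplitudes) / len(amplitudes), "px")
```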
When data collection is complete, the recorded data is used to create “gaze maps,” which are visual representations of where people looked and for how long. This is how eye tracking technology helps reveal areas of interest, attention distribution (more on that below), and potential reading difficulties.
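A gaze map can be as simple as a grid that accumulates how long the gaze rested on each region of the screen. The sketch below builds such a grid from hypothetical fixations using Python and numpy; real tools typically smooth the grid and render it as a color heatmap over the video frame.

```python
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080
CELL = 120  # grid cell size in pixels

# Hypothetical fixations: (x, y, duration_ms)
fixations = [(960, 980, 300), (980, 990, 250), (400, 300, 180), (950, 985, 400)]

# Accumulate fixation time per grid cell
grid = np.zeros((SCREEN_H // CELL, SCREEN_W // CELL))
for x, y, duration in fixations:
    grid[y // CELL, x // CELL] += duration

# The hottest cell is where attention concentrated (here, the subtitle strip)
row, col = np.unravel_index(grid.argmax(), grid.shape)
print(f"Most-viewed region: cell ({row}, {col}), {grid.max():.0f} ms of gaze")
```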
Researchers then use statistical methods to compare the compiled data across participants, conditions, or time points, identify significant patterns and differences, and draw conclusions.
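For example, a researcher might test whether mean fixation durations differ between two subtitle conditions, say one-line versus two-line subtitles. Here is a minimal sketch with invented numbers, using a t-test from scipy (the appropriate test depends on the actual study design):

```python
from scipy import stats

# Hypothetical mean fixation durations (ms) per participant, per condition
one_line_subtitles = [215, 230, 198, 241, 225, 207, 219, 233]
two_line_subtitles = [244, 251, 238, 262, 240, 255, 249, 246]

# Independent-samples t-test: do the two conditions differ significantly?
t_stat, p_value = stats.ttest_ind(one_line_subtitles, two_line_subtitles)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Fixation durations differ significantly between conditions")
```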
There are several ways in which data collected through eye tracking technology informs subtitle generation. For example, eye tracking reveals how viewers read text on screen: the order in which they look at words, the time they spend reading each word, and the number of times they look back to re-read earlier words (known as “regressions”). These reading patterns help determine subtitle readability and timing.
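One readability signal that falls out of these reading patterns is the number of regressions. If each fixation has already been mapped to a word index within the subtitle, counting them is straightforward, as this toy Python sketch shows:

```python
# Hypothetical sequence of fixated word indices within one subtitle
# (0 = first word). A drop in the index means the viewer looked back.
fixated_words = [0, 1, 2, 1, 3, 4, 2, 5]

regressions = sum(
    1 for prev, cur in zip(fixated_words, fixated_words[1:]) if cur < prev
)
print(f"Regressions: {regressions}")  # 2: the jumps back to words 1 and 2
# Frequent regressions suggest the subtitle is hard to read or disappears too soon
```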
Eye tracking technology also helps researchers understand other factors useful for subtitling. One of those factors is “attention distribution,” which shows how viewers divide their attention between subtitles and the rest of the visual content. Attention distribution information helps engineers make better decisions about subtitle placement and formatting.
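A common way to quantify attention distribution is to define the subtitle region as an “area of interest” (AOI) and measure what share of total gaze time falls inside it. A minimal sketch with hypothetical fixations and an assumed AOI covering the bottom strip of a 1920x1080 screen:

```python
# Subtitle AOI: bottom strip of the screen (assumed coordinates)
AOI = {"left": 0, "top": 900, "right": 1920, "bottom": 1080}

# Hypothetical fixations: (x, y, duration_ms)
fixations = [(960, 970, 300), (500, 400, 450), (940, 990, 280), (700, 350, 500)]

def in_aoi(x, y, aoi):
    return aoi["left"] <= x <= aoi["right"] and aoi["top"] <= y <= aoi["bottom"]

subtitle_time = sum(d for x, y, d in fixations if in_aoi(x, y, AOI))
total_time = sum(d for _, _, d in fixations)
print(f"Share of gaze time on subtitles: {subtitle_time / total_time:.0%}")
```

If that share is very high, viewers may be missing the action on screen, which is one signal that placement, timing, or text length needs adjusting.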
Another key factor is “cognitive load,” the mental effort a person expends while processing what they see. To estimate it, scientists analyze gaze patterns, i.e., the different positions of the eyes as people read. Human eyes do not just passively scan the surrounding environment; eye movement is closely linked to attention and to the mental effort required to understand and process information. The metrics mentioned above, such as pupil dilation, fixation duration, and blink rate, also help measure cognitive load and inform improvements like subtitle display speed.
Combining the data on subtitle readability and timing, attention distribution, and cognitive load allows researchers to find a comfortable viewing balance or to validate decisions such as displaying subtitles faster or changing the font color.
For multilingual subtitling, achieving a balance is a big deal. For example, when text in a target language expands by 20-35% compared to the source language, it is generally accepted that straight translation will not work. Expanded target text instead undergoes adaptation and multiple rounds of additional adjustments to account for readability, time on screen, and other nuances.
In that context, eye tracking data can reveal whether viewers are struggling to keep up with subtitles due to excessive text, complex vocabulary, or a lack of synchronization between the audio and the text on screen. This data can inform decisions about how much text to include in subtitles for a given target language and how to simplify the language to reduce cognitive load.
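To see how such decisions might be operationalized, here is a hedged Python sketch that flags a translated subtitle when it expands well beyond the source text or exceeds a reading-speed ceiling. The 30% expansion threshold and the 17 characters-per-second limit are illustrative assumptions; published guidelines vary roughly between 12 and 20 CPS depending on audience and language.

```python
def check_subtitle(source: str, target: str, duration_s: float) -> list[str]:
    """Flag readability risks for a translated subtitle (illustrative limits)."""
    warnings = []

    # Expansion: how much longer the translation is than the source
    expansion = (len(target) - len(source)) / len(source)
    if expansion > 0.30:  # assumed threshold
        warnings.append(f"Expands {expansion:.0%}; consider adapting the text")

    # Reading speed: characters per second on screen
    cps = len(target) / duration_s
    if cps > 17:  # assumed ceiling; guidelines vary by audience and language
        warnings.append(f"{cps:.1f} CPS is too fast; shorten or extend duration")

    return warnings

print(check_subtitle(
    source="I'll call you tomorrow.",
    target="Te llamaré mañana por la mañana, sin falta.",
    duration_s=2.0,
))
```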
Viewers are known to process subtitles in different languages differently, and eye tracking has helped establish how reading patterns differ between a viewer’s native language and foreign languages.
Eye tracking has also been shown to help assess how viewers process subtitles that are broken up into different lines or segments. This can help optimize the placement of line breaks for easier reading and understanding, particularly in fast-paced scenes or with longer sentences.
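One way to frame line-break placement is as a small optimization problem: break at a space so that the two lines are as balanced as possible without exceeding a maximum line length. The sketch below assumes a 42-character limit, a figure used in some style guides but by no means universal:

```python
def break_subtitle(text: str, max_len: int = 42) -> tuple[str, str]:
    """Split text at the space that yields the most balanced two lines."""
    spaces = [i for i, ch in enumerate(text) if ch == " "]
    best = None
    for i in spaces:
        line1, line2 = text[:i], text[i + 1:]
        if len(line1) <= max_len and len(line2) <= max_len:
            imbalance = abs(len(line1) - len(line2))
            if best is None or imbalance < best[0]:
                best = (imbalance, line1, line2)
    if best is None:
        raise ValueError("No valid break point found")
    return best[1], best[2]

top, bottom = break_subtitle(
    "He said he would meet us at the station after the last train arrived"
)
print(top)
print(bottom)
```

Real subtitling tools also weigh syntax, keeping an article with its noun, for example, rather than balancing line length alone.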
Analyzing viewers’ gaze patterns in relation to the video content helps researchers fine-tune the timing and synchronization of subtitles. This in turn ensures that translated subtitles appear and disappear at the most opportune moments, minimizing the need for viewers to switch between subtitles and other visual elements.
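Timing rules derived from this kind of research can be encoded as simple checks. The sketch below validates that each subtitle stays on screen long enough to be read and that consecutive subtitles do not crowd each other; the specific limits (1 to 7 seconds on screen, at least 80 ms between cues) are illustrative assumptions rather than established standards.

```python
# Hypothetical cue list: (start_s, end_s, text)
cues = [
    (0.0, 2.5, "Where were you last night?"),
    (2.55, 3.1, "At home."),
    (3.12, 10.5, "Then someone must have borrowed your car."),
]

MIN_DURATION, MAX_DURATION, MIN_GAP = 1.0, 7.0, 0.08  # assumed limits

for i, (start, end, text) in enumerate(cues):
    duration = end - start
    if duration < MIN_DURATION:
        print(f"Cue {i}: too short ({duration:.2f}s): {text!r}")
    if duration > MAX_DURATION:
        print(f"Cue {i}: too long ({duration:.2f}s): {text!r}")
    if i > 0 and start - cues[i - 1][1] < MIN_GAP:
        print(f"Cue {i}: gap before it is under {MIN_GAP * 1000:.0f} ms")
```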
Eye tracking studies can also reveal variations in mental effort among viewers, based on factors such as language proficiency, familiarity with subtitles, and cognitive abilities. This helps audiovisual production teams offer personalized subtitling options that accommodate individual needs and preferences.
Measuring this mental effort also has important applications for subtitles in contexts such as learning interfaces. The data obtained can help user interface designers improve performance and reduce errors when making these interfaces accessible to deaf and hard-of-hearing users or to speakers of other languages.