Moments Lab Announces Built-In Subtitles Feature in Cloud Media Hub

News & Events

By Moments Lab Content Team
September 12, 2022

The newest feature speeds up production workflows and gives users the ability to add and amend subtitles that closely mimic human-generated text, thanks to Multimodal AI.

Users of the Moments Lab (formerly Newsbridge) cloud media hub can now generate subtitles for their video content within the platform itself. Built on the same disruptive Multimodal AI technology that powers speech-to-text, Moments Lab's built-in subtitles feature speeds up media production workflows. And its adherence to key elements of the BBC Subtitle Guidelines makes it easier for editors to implement subtitling best practices.

The subtitles can be burned in for publication on social media, and the new feature also integrates with Avid, Adobe and Final Cut workstations. The feature's initial rollout includes the ability to add, edit or delete text segments and to generate an SRT file. Future deployments will automatically generate the name of the person speaking, allow multiple users to work in the subtitle editor at the same time, and give them the ability to set colors and text types and change the placement of the subtitles on screen.
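For context, SRT is a simple plain-text format: numbered cues, each with a start and end timestamp and one or two lines of text. The sketch below (an illustration of the format, not Moments Lab's implementation) shows how timed segments map to an SRT file:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """segments: list of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}"
        )
    return "\n\n".join(blocks) + "\n"

print(to_srt([
    (0.0, 2.5, "Welcome to the media hub."),
    (2.5, 5.0, "Subtitles, built in."),
]))
```

A file like this can then be imported into Avid, Adobe or Final Cut, or burned into the video for social publishing.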

Moments Lab Co-Founder and CTO Frederic Petitpont explains:

“We know that as much as 85 percent of online video views are happening with the sound off, and subtitling is an essential but time-consuming step in post production. Our customers tell us that one hour of video equals 10 hours of manual subtitling work. With a subtitle editor now available in the Moments Lab Media Hub, our customers can speed up their workflows and ensure maximum audience engagement with their content.”
“What’s unique about our subtitles feature is how closely it emulates human-generated captions. This is thanks to Moments Lab’s multimodal AI diarization, which identifies who speaks and when by recognizing that there are people on screen and that their lips are moving. Unlike traditional, unimodal AI, multimodal AI analyzes multiple data types, such as images, objects, speech and context, when adding text to a video file. In this way, it operates more like the human mind and produces more logical and accurate results.”
Moments Lab's new Subtitle Editor.

Petitpont continued:

“Speech-to-text technologies have matured in recent years, and our customers love our automatic transcription and translation feature. However, a video transcript is not the same as subtitling, and the desire to go further really motivated us to build the Moments Lab subtitles feature. The BBC is considered a global leader in accessible subtitle practice, so our subtitles adhere to key BBC rules for editing, style, presentation and timing.” 
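Guideline adherence of this kind typically means validating each subtitle segment against limits on line length, line count and reading speed. The sketch below illustrates that idea with commonly cited threshold values; these are assumptions for illustration, not the BBC's exact rules or Moments Lab's implementation:

```python
# Illustrative thresholds (assumptions, not the BBC's exact values):
MAX_CHARS_PER_LINE = 37   # typical line-length limit for broadcast subtitles
MAX_LINES = 2             # at most two lines per subtitle
MAX_WORDS_PER_SECOND = 3  # roughly 180 words per minute

def check_subtitle(text: str, start: float, end: float) -> list:
    """Return a list of guideline violations for one subtitle segment."""
    issues = []
    lines = text.split("\n")
    if len(lines) > MAX_LINES:
        issues.append(f"too many lines ({len(lines)})")
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            issues.append(f"line too long ({len(line)} chars)")
    duration = end - start
    words = len(text.split())
    if duration > 0 and words / duration > MAX_WORDS_PER_SECOND:
        issues.append(f"reading speed too high ({words / duration:.1f} words/s)")
    return issues
```

An editor can run checks like these on every segment and flag violations for manual correction before export.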

The subtitles feature is available to demo at IBC 2022, Hall 7 Stand B09.

Moments Lab for your organization

Contact us for a demo and a free 7-day trial.

Let's go →