Leading AI video discovery company Moments Lab has today unveiled MXT-2, the latest version of its multimodal AI indexing technology that helps producers find the right moments in their large video libraries for fast content creation, repurposing, and monetization.
At 1.5 billion images and counting, MXT-2’s training dataset is more than three times larger than that of its predecessor, enabling the AI to generate even more detailed and accurate humanlike descriptions of video content.
New features include Custom Moments, which are time-coded descriptions of key video moments that are tailored to user needs — such as pinpointing the most compelling scenes in unscripted TV shows to build viral shorts, trailers, or best-of compilations.
The new Custom Insights feature enables MXT-2 users to automatically generate text of any length, including titles, teaser descriptions, and full-length articles. Custom Insights also automates the classification of videos by theme or editorial topic.
“We’re proud to say that MXT-2 is the best technology in the world to sequence and describe video content,” said Frederic Petitpont, CTO and co-founder of Moments Lab. “In fact, MXT-2 outperforms Google Research’s Vid2Seq on video sequencing by 47%.”
“Our mission is to remove the roadblocks to better video storytelling by making it simpler to search through and sort ever-growing media libraries,” said Philippe Petitpont, CEO and co-founder of Moments Lab. “MXT-2 provides our users with even deeper editorial insights and video searchability at scale to maximize content creation, reuse, and repurposing and deliver tangible ROI.”
MXT-2 is available now on the Moments Lab video discovery platform, or it can be integrated into users’ existing tools, workflows, and platforms via API.
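As a rough illustration of what such an integration might look like, the sketch below submits a video for indexing and retrieves time-coded moment descriptions over a REST-style API. The base URL, endpoint paths, field names, and credentials here are hypothetical placeholders for illustration only, not the documented Moments Lab API.

```python
# Hypothetical integration sketch. The base URL, endpoints, and response
# fields are assumptions for illustration, not the actual Moments Lab API.
import time
import requests

API_BASE = "https://api.example-momentslab.com/v1"  # placeholder base URL
API_KEY = "YOUR_API_KEY"                            # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def index_video(video_url: str) -> str:
    """Submit a video for indexing and return a job ID (hypothetical endpoint)."""
    resp = requests.post(
        f"{API_BASE}/index",
        headers=HEADERS,
        json={"source_url": video_url},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]


def wait_for_moments(job_id: str, poll_seconds: int = 10) -> list[dict]:
    """Poll until indexing finishes, then return time-coded moment descriptions."""
    while True:
        resp = requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        if payload["status"] == "done":
            # Each moment is assumed to carry start/end timecodes and a description.
            return payload["moments"]
        time.sleep(poll_seconds)


if __name__ == "__main__":
    job = index_video("https://example.com/media/episode-01.mp4")
    for moment in wait_for_moments(job):
        print(moment["start"], moment["end"], moment["description"])
```

In practice, the returned moment descriptions could feed directly into an existing media asset management workflow, which is the kind of tool and platform integration the announcement refers to.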
MXT-2 will be available to demo at the 2025 NAB Show, April 6-9 in Las Vegas, at the Moments Lab booth (SL12213). Book a meeting, or contact us for more information.