Moments Lab (formerly Newsbridge), a leading AI company tackling the video searchability problem, today announced MXT-1, a new generative AI indexing technology. Moments Lab’s core AI indexing technology uses natural language models to generate human-like descriptions of video content. Capable of indexing more than 500 hours of video per minute, MXT-1 is a game changer for organizations working with media and sports content. With it, users can index vast amounts of content in record time and search their large video collections as easily and intuitively as searching the web.
“With MXT-1, Moments Lab has achieved an important breakthrough in content indexing and discovery,” said Frederic Petitpont, co-founder and CTO at Moments Lab. “Our new core technology combines multiple AI modalities, including computer vision and speech processing, with natural language models. MXT-1 aligns video perception with the power of language models to accomplish incredible results.”
Built on transformer models and trained specifically on hundreds of thousands of hours of media, entertainment, and sports audiovisual content, MXT-1 is particularly good at describing content for these industries’ indexing and search use cases. Fast and scalable, it enables users to quickly start enhancing, sharing, and monetizing their archives.
“Until now, broadcasters faced the tough question of what footage to fully index due to limited media logging resources to transcribe, describe, and summarize content,” said Philippe Petitpont, co-founder and CEO at Moments Lab. “MXT-1 dramatically reduces the cost of using AI at scale, making mass indexing of media assets a business reality. With MXT-1, content owners know exactly what is in their files and can shine a light on the hidden gems in their archives.”
The prohibitive cost of traditional AI services holds companies back from fully automating their archive and live indexing operations. With its dramatically reduced energy consumption, MXT-1 makes AI indexing seven times more cost-efficient than unimodal AI systems.
MXT-1 marks a significant leap forward in AI indexing technology thanks to its ability to describe scenes in natural language. Moments Lab’s language model links raw modalities (detection of faces, text, logos, landmarks, objects, actions, and shot types, plus transcription) to generate a semantic description for increased searchability. This improves on the current state of AI indexing, which produces a jumble of tags that fails to give content owners the level of information they need. MXT-1 bundles the latest evolutions of Moments Lab’s multifaceted AI and can be trained and fine-tuned by an end user with multimodal rules and a custom thesaurus.
MXT-1 is available now in beta and is being progressively deployed across all of Moments Lab’s cloud solutions, including Just Index, Media Hub, Live Asset Manager, and Media Marketplace.
Moments Lab will showcase its new MXT-1 AI indexing technology at the 2023 NAB Show, April 16-19 in Las Vegas, at booth W2073. Book your demo meeting.
For more information, please contact us.