MXT-2 combines what it sees, hears and knows about your videos to generate rich and accurate time-coded descriptions.
Forget about manual logging. MXT-2 breaks down video footage and describes what’s happening the way a human would.
Identify who is on screen, what they’re doing and even where they are. You can easily train MXT-2 to detect the people, places and logos relevant to you.
Customize MXT-2 to break down videos into the moments your users need, for whatever story format they’re crafting, from trailers and highlight reels to compilations and more.
Provide a taxonomy and ask MXT-2 to sort your video content accordingly. Classify by theme, content type or any other parameter that matters to you.
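As a rough illustration, a taxonomy can be as simple as a mapping of themes to content types. The Python sketch below is hypothetical: the endpoint URL, field names and response shape are placeholder assumptions for illustration, not the documented MXT-2 API.

```python
# Illustrative only: the endpoint, field names and response shape are
# hypothetical placeholders, not the documented MXT-2 API.
import requests

API_URL = "https://api.example.com/v1/classify"   # placeholder URL
API_KEY = "YOUR_API_KEY"                          # placeholder credential

# A custom taxonomy: top-level themes mapped to the content types they cover.
taxonomy = {
    "Sports": ["match highlights", "press conference", "fan reaction"],
    "News": ["breaking news", "interview", "field report"],
    "Entertainment": ["red carpet", "trailer", "behind the scenes"],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"video_id": "match_2024_final.mp4", "taxonomy": taxonomy},
    timeout=30,
)
response.raise_for_status()

# Hypothetical response: one label per detected segment, with time codes.
for segment in response.json().get("segments", []):
    print(segment["start"], segment["end"], segment["label"])
```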
Create custom titles and descriptions for digital and social. Highlight diversity insights or even suggest articles and plot lines.
Turn speech into text, even in footage where multiple languages are spoken. MXT-2 will automatically highlight the best soundbites so users don’t need to scrub through videos to find the quotes they need.
Multiple AI technologies are unified for contextually aware indexing, available under a single contract.
MXT-2 is trained on more than 1.5 billion data points, reducing hallucinations and biases.
Under 40 minutes' processing time for a one-hour video, regardless of whether you're processing one video or 100 simultaneously.
Train MXT-2 to detect your talent and logos. Break down videos depending on what your teams are looking for.
All metadata generated is text, ensuring compatibility across tools now and in the future—no proprietary embeddings here.
Plug MXT’s AI-generated metadata into your existing tools: DAM, MAM, CMS, or anything in between. Integrate via our API, or through low- and no-code integration platforms like Qibb, Embrace or Tedial.
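Because the output is plain text, wiring it into an existing asset manager can come down to a short script. The sketch below is purely illustrative: the exported file, the MAM endpoint and the payload schema are assumptions to adapt to your own DAM/MAM/CMS API, not MXT-2’s documented interface.

```python
# Illustrative only: "mxt_output.json", the MAM endpoint and the payload schema
# are hypothetical placeholders; adapt them to your own DAM/MAM/CMS API.
import json
import requests

MAM_URL = "https://mam.example.com/api/assets/{asset_id}/metadata"  # placeholder
MAM_TOKEN = "YOUR_MAM_TOKEN"                                        # placeholder

# Time-coded descriptions exported as plain text/JSON (no proprietary embeddings),
# so they can be mapped onto whatever schema your asset manager expects.
with open("mxt_output.json", encoding="utf-8") as f:
    moments = json.load(f)  # e.g. [{"start": "00:01:12", "end": "00:01:34", "description": "..."}]

payload = {
    "custom_fields": {
        "ai_descriptions": [
            f'{m["start"]}-{m["end"]}: {m["description"]}' for m in moments
        ]
    }
}

resp = requests.put(
    MAM_URL.format(asset_id="abc123"),
    headers={"Authorization": f"Bearer {MAM_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Metadata attached:", resp.status_code)
```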