Moments Lab’s MXT-2 video indexing technology combines what it sees and hears to turn video into descriptive metadata that can be added to media workflows using Embrace’s low-code platform. This allows media organizations to quickly build custom applications, automate processes, and create tailored integrations with simple drag-and-drop tools.
Use Embrace’s low-code platform to integrate MXT-2 metadata into your existing MAM, DAM, or CMS and improve media search and accessibility. MXT-2 breaks video down into key shots and sequences, describing what is happening: who and what appears, what they may be saying, where they are, and even which shot type is used. It transcribes audio, detects the best soundbites, and can be trained to recognize people relevant to your business.
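To make the idea of shot-level descriptive metadata concrete, here is a minimal sketch of what an exported record might look like and how a downstream MAM/DAM script could search it. The field names and structure are illustrative assumptions, not the actual MXT-2 schema or API.

```python
# Hypothetical example: shot-level metadata records (fields are assumptions,
# not the real MXT-2 output format) and a simple search over them.

shots = [
    {
        "start": "00:00:00:00",
        "end": "00:00:04:12",
        "shot_type": "wide",
        "people": ["anchor"],
        "transcript": "Welcome back to the studio.",
        "is_soundbite": False,
    },
    {
        "start": "00:00:04:12",
        "end": "00:00:11:03",
        "shot_type": "close-up",
        "people": ["guest"],
        "transcript": "This season changed everything for the team.",
        "is_soundbite": True,
    },
]

def find_moments(shots, person=None, shot_type=None, soundbites_only=False):
    """Return the shots matching the given search criteria."""
    results = []
    for shot in shots:
        if person and person not in shot["people"]:
            continue
        if shot_type and shot["shot_type"] != shot_type:
            continue
        if soundbites_only and not shot["is_soundbite"]:
            continue
        results.append(shot)
    return results

# e.g. pull every soundbite featuring the guest for a highlight package
guest_bites = find_moments(shots, person="guest", soundbites_only=True)
print(len(guest_bites))  # 1
```

In practice a low-code flow in Embrace would play the role of `find_moments`, routing matching shots into the target MAM, DAM, or CMS fields.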
MXT-2 can be customized to pinpoint the best moments in specific video content at scale, whether retrieving the top actions from a sports event to create highlight packages or beauty shots from a documentary for use in trailers, saving significant time on manual editing.
Custom descriptions and summaries for social media and streaming platforms can be generated at scale and, via Embrace, fed into content management systems and social publishing tools to help increase engagement and views across digital platforms.
Descriptive metadata can also be surfaced in frame-accurate players such as Codemill’s Accurate Player, helping quality control and compliance professionals quickly retrieve the particular moments that require attention and speeding up the QC workflow.