
Making Contextual Video Targeting Work at Scale: an Introduction to the IAB and IPTC Taxonomies

By Philippe Petitpont, co-founder & CEO
December 14, 2023

What are the IAB and IPTC’s taxonomies and how do they relate to media indexing, especially in this era of AI and machine learning?

Humans know around 100,000 different words, which helps us to generally make ourselves understood. But, making things more complex, we can use different words for the same concept: a ‘vehicle’ could be a car, lorry, automobile, truck, ute, train, bus, and so on. To figure out which vehicle is meant, we draw on layers of subconscious context gleaned over years of living in the world.

Machines don’t have this context, so they don’t know the difference; they just see metal on wheels and go “car”. So when we’re talking to machines, or asking machines to speak with each other, conversation gets more complicated. We need to give a lot more context than we’re used to, just to make sure the machines understand what we’re actually looking for.

This means that when we want machines to exchange information, we need to think beyond words. We need information standards and a process known as contextual targeting.

The Go-To Source of Information Standards in Broadcasting 

Yet, it’s still not simple: to make contextual targeting work at scale, there needs to be a shared semantic framework in which every system speaks the same language, regardless of source or location. Organizations like AFP, Reuters, the New York Times, and Sky News all use tags maintained by the International Press Telecommunications Council, or IPTC; they’ve done so since the mid-20th century, when press wires shared the news in near-real time between machines using electrical signals and frequencies. In the modern digital age, these tags help media organizations make their assets discoverable across different platforms, including search engines.

But the IPTC is not the only organization working on global information standards: the Interactive Advertising Bureau (IAB) does something similar for digital advertising, getting media assets in front of the right people at the right time.

These taxonomies are about content discovery and advertising classification. They help end users to find what they are looking for easily—and in the case of the IAB and digital advertising, they help advertisers target the right audience and personas for their products and services at the right time and in the right place. It’s contextual targeting for the modern age. 

And while the IAB and IPTC’s taxonomies are nowhere near as large as the average human’s word knowledge, they have proven an efficient way to share information between systems. They’ve also evolved with the digital world, and as such they are often used as best practices in digital storage and asset management—especially now, with AI starting to take a leading role in archiving platforms and content delivery. 

What Does This Mean, In Context?

Let’s take the IAB’s tags as an example. Its series of taxonomies and frameworks (an ontology of around 700 terms used to describe different concepts across multiple fields) helps marketing leaders to tag and define their advertising and media assets.

For example, a shoe company might use IAB tags to target its ads to people who have shown an interest in footwear in the past. The machines can tell because the user’s profile has been tagged, based on their browsing history, as interested in footwear, and the shoe company has tagged its digital advertising assets against the same reference taxonomy. The company could also use IAB tags for location, such as looking for users in urban areas if its sneakers are designed to be worn in a city environment.

Our intrepid shoe company uploads its digital assets with Tag ID IAB-27, indicating they’re for users interested in “Footwear - Athletic”, perhaps alongside other related tags to narrow the context further. In combination, these tags ensure the digital advertising is shown to sneakerheads. By using IAB tags, the shoe company can ensure that its ads are seen by the people most likely to buy its sneakers, helping it maximize its return on investment (ROI).
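To make that concrete, here is a minimal sketch (in Python, not any official IAB tooling) of how such a match might be evaluated: an ad asset carries taxonomy tags, a user or content profile carries tags from the same vocabulary, and the ad is served only when they overlap. The urban context tag and the exact tag IDs below are illustrative placeholders, not entries copied from the official taxonomy.

```python
# A minimal sketch: matching an ad's taxonomy tags against the tags attached
# to a user profile. Tag IDs and labels below are illustrative placeholders.

ad_asset = {
    "name": "city-sneaker-video-15s",
    "iab_tags": {"IAB-27"},          # "Footwear - Athletic", per the example above
    "context_tags": {"urban"},       # hypothetical location/context tag
}

user_profile = {
    "interest_tags": {"IAB-27", "IAB-18"},  # tags inferred from browsing history
    "context_tags": {"urban"},
}

def is_relevant(ad: dict, profile: dict) -> bool:
    """Serve the ad only if interest tags and context tags both overlap."""
    interest_match = bool(ad["iab_tags"] & profile["interest_tags"])
    context_match = bool(ad["context_tags"] & profile["context_tags"])
    return interest_match and context_match

print(is_relevant(ad_asset, user_profile))  # True: shared footwear + urban tags
```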

How Does This Relate to Media Indexing?

Just as with advertisers looking to target the right users for their assets, broadcasters need to have their own media assets tagged and stored in the right way for discoverability—so let’s look at how the IPTC’s taxonomies can be used to describe videos in an archive. 

The IPTC’s tags are known as Media Topics: nearly 3,000 terms organized into tiers. Your tier 1 concept could be Arts, Culture, Entertainment and Media, but that’s a very broad, wide-ranging theme. As you move down through the tiers, the concepts get more detailed and more niche: you might go from Arts and Entertainment (tier 2) to Animation (tier 3) to Cartoon (tier 4), getting more descriptive at each step. One machine can say: hey, I’m looking for some Disney stuff. The other machine then says: how about these? Or do you want to go deeper, maybe just look at the animated films from the 1990s? It’s like moving through a Russian doll set, trying to get to the smallest item at the center. The semantics encoded in these tags help to really target a search and get to the right asset quickly and efficiently.
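Here is a minimal sketch of how that tiered structure might be represented in code, assuming a hypothetical topic tree with made-up codes (these are not real Media Topic identifiers): an asset tagged with the most specific concept remains discoverable through every broader tier above it.

```python
# A minimal sketch of a tiered topic tree, loosely modeled on the tiers
# described above. The codes below are hypothetical placeholders.

topics = {
    "topic:arts":      {"label": "Arts, Culture, Entertainment and Media", "parent": None},
    "topic:entertain": {"label": "Arts and Entertainment",                 "parent": "topic:arts"},
    "topic:animation": {"label": "Animation",                              "parent": "topic:entertain"},
    "topic:cartoon":   {"label": "Cartoon",                                "parent": "topic:animation"},
}

def ancestors(code: str) -> list[str]:
    """Walk up the tiers from a specific tag to its broadest ancestor."""
    chain = []
    while code is not None:
        chain.append(topics[code]["label"])
        code = topics[code]["parent"]
    return chain

# An asset tagged at tier 4 is still discoverable through every broader tier.
print(ancestors("topic:cartoon"))
# ['Cartoon', 'Animation', 'Arts and Entertainment',
#  'Arts, Culture, Entertainment and Media']
```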

[Graphic: The IPTC’s tiers of tags become more detailed and more niche, like moving through a Russian doll set.]

By employing these tags in your own asset management, you can enable searching and discoverability at both a high level and a much more detailed, targeted level. When you’re considering how best to organize and manage your (no doubt huge) media asset archive, there are a number of things to consider, and perhaps not top of mind are portability and compliance. You might just be thinking about usability and accessibility for your staff and clients, about getting all of your material imported and sorted. When only humans were involved, that was time-consuming but relatively manageable.

Now, though? Enter AI and its promise to simplify the uploading and indexing of your assets. How can you be sure it’s tagging things the way you want or need it to? These AI systems need to speak a language that is compatible with the way humans speak. But AI doesn’t speak in words; it speaks in vectors. Those vectors are then transformed into text for humans to read, which again is fine in a silo. But what about when you’re sharing these assets outside of your organization or platform? You need a way to make that data interoperable without losing any of the raw AI output.

This is where information standards come into play. By using the IPTC and/or IAB tags, you ensure multiple systems can access that raw data and parse it for local systems and local humans. In short, these information standards ensure traceability, accessibility, and understandability from system to system. Employing an industry standard rather than inventing your own ensures your assets can be used, shared, and appreciated by clients, audiences, and content platforms alike.
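As a rough illustration, here is a minimal sketch assuming a hypothetical AI indexer that returns free-text labels with confidence scores: each label is mapped onto a shared taxonomy code so other systems can parse it, while the raw label and score travel alongside so nothing is lost. The lookup table and field names are invented for the example.

```python
# A minimal sketch: mapping a hypothetical AI indexer's free-text labels to
# shared taxonomy codes while preserving the raw output.

LABEL_TO_TOPIC = {                 # hypothetical lookup table, maintained per platform
    "cartoon": "topic:cartoon",
    "animated film": "topic:animation",
}

def to_interoperable(ai_output: list[dict]) -> list[dict]:
    """Attach a standard topic code to each AI label, keeping the raw score."""
    enriched = []
    for item in ai_output:
        enriched.append({
            "raw_label": item["label"],
            "raw_score": item["score"],                       # nothing is lost
            "topic_code": LABEL_TO_TOPIC.get(item["label"]),  # shared vocabulary
        })
    return enriched

print(to_interoperable([{"label": "cartoon", "score": 0.93}]))
```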

Imagine a luxury perfume ad showing mid-roll on a documentary about wastewater treatment: that wouldn’t be a great look for the perfume brand, and viewers may well roll their eyes at the lack of self-awareness. These sorts of classifications and standardizations help you ensure that the appropriate category of ads shows before, during, or after like-minded content on a FAST or OTT platform. With programming now automated and running at scale, tags are becoming essential to delivering a quality experience.
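A minimal sketch of that kind of suitability check, with hypothetical category names and an invented exclusion list (not drawn from the IAB or IPTC vocabularies): before an ad slot is filled, the ad’s category is compared with the surrounding content’s category, and clashing pairs are blocked.

```python
# A minimal sketch of a pre-roll/mid-roll suitability check: block ad/content
# pairings that clash. Category names and the exclusion list are hypothetical.

UNSUITABLE_PAIRS = {
    ("luxury perfume", "wastewater treatment"),
    ("airline travel", "aviation accident"),
}

def can_place(ad_category: str, content_category: str) -> bool:
    """Return False when the ad/content pairing is on the exclusion list."""
    return (ad_category, content_category) not in UNSUITABLE_PAIRS

print(can_place("luxury perfume", "wastewater treatment"))  # False
print(can_place("luxury perfume", "fashion week recap"))    # True
```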

Where Do We Go From Here?

It’s easy to stay focused on the here and now, on what we need platforms to manage right this minute. But technology moves fast, and nowhere is that more true than in the world of AI and machine learning. We all need to be thinking about the next level of both human and machine interaction so that content continues to be easily discovered and served. If you have a good description of your assets, large language models can help make sure it stays compliant; and the more descriptive your description, the better those models will be able to translate it for other systems.

What’s on the horizon, then? We’re now looking not just at pure text, but at the elements around it. The IPTC is developing descriptors for editorial tone and for sentiment and emotion in content, though we need to make sure we take a rational approach to this. Machines will struggle to judge how emotional a piece of content is, whereas a human can do that much more easily because it’s wired into our brains: we’ve been designed to feel emotion, while for machines it can introduce bias. We still need to teach machines to normalize sentiment, so this isn’t quite ready to be unleashed.

Ultimately, the end goal is to create a single source of truth for your media assets, and to avoid any AI hallucinations in the targeting and tagging. Teaching AI and large language models how to use the IPTC and IAB tags and principles helps to avoid clashing metadata, or descriptions that other platforms can’t use. In an age where organizations are looking to share their content with as many people as possible, it’s important to consider the worldwide implications of your metadata and descriptions. We firmly believe the IPTC principles are the way to go for the media industry in terms of standards, which is why we’ve adopted them for the Moments Lab platform.

Get in touch—we’d be happy to show you around.
