14th April 2025


Is it fake or for real? Trusting the content we’re watching with watermarking technologies

By William Del Strother, VP of Innovation, FMTS  

Thanks to colour-coded ‘traffic light’ food labelling, we can quickly and easily make healthy and safe choices. If only it were as easy to validate the video content we see, to establish whether what we’re watching is trustworthy, real and not putting our digital health at risk.

We all have front row seats for the impending perfect storm: AI-manipulated video becoming hard to detect; an increasingly polarised political landscape; and the growth of extreme content as big tech drives up user engagement while cutting back its fact-checking teams. We are seeing a rise in misleading and unreliable news stories, videos and images on social media and news sites. The threat of misinformation and manipulation, such as election interference from deepfakes and AI-enhanced content, whether intentional or unintentional, will grow as AI becomes more sophisticated.

We are at a tipping point where content creators, studios and governments, as well as misguided individuals, have access to tools that can generate video which is fully convincing and indistinguishable from ‘real’ content, eroding trust in video as a source of truth.

Broadcasters and content owners need to establish their level of tolerance for manipulation before they can decide what action to take. While they might turn a blind eye to memes that have the upside of extending the reach of their content, their red lines could be crossed by a minor change to a crucial scene, or by more serious manipulation such as modifying lip movements to alter what someone says, or re-editing dialogue to twist its meaning and make it more incriminating.

News organisations will have their work cut out evaluating whether video contributions from their sources in the field have been altered. Even if the content itself has not been tampered with, the context in which it is distributed can easily be manipulated: real footage of a historic terrorist incident could be posted with a misleading date and/or location to sway opinion about another country, or old photos of a celebrity's baby bump could be reposted later to falsely report another pregnancy, simply as clickbait. Techniques and tools to establish the validity of content will need to consider both the content itself and its context in order to give a rounded picture.

Consumers will need to rebuild their trust in news services, social platforms and influencers. Some might offer mechanisms to indicate the trustworthiness of a video or news report, based on some combination of manual fact-checking, community-driven flags and/or automated solutions. For some licence-funded broadcasters, demonstrating robust anti-deepfake validation could be a compliance necessity to maintain their broadcasting rights. Meanwhile, others will actively embrace the lack of fact-checking and let misinformation run wild. 

Friend MTS has been on the side of broadcasters, content owners, rights holders and consumers for over 25 years, ever since we first began protecting video content against pirates and unauthorised users. Over that time we have developed a very detailed understanding of how pirates tamper with video content. Our portfolio of video security solutions includes proprietary forensic watermarking technologies that ensure pirates can’t tamper with the video to remove watermarks, and our Emmy® award-winning fingerprint-augmented content monitoring, which can establish the origins of content and trace it back to its original source, such as a specific user, device or distribution channel.
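For readers curious about the general idea behind fingerprint-based matching, here is a deliberately simplified, hypothetical sketch; it is not Friend MTS’s proprietary technology, and the frame data and functions are invented purely for illustration. It derives a compact perceptual fingerprint (a difference hash) from a small grayscale frame and compares fingerprints using a Hamming distance, so a lightly re-encoded copy matches closely while a heavily manipulated frame does not.

```python
# Illustrative sketch of perceptual fingerprinting -- NOT Friend MTS's
# proprietary technology. Idea: derive a compact, robust "fingerprint"
# from frame pixels, then compare fingerprints with a Hamming distance
# to judge whether two frames share the same underlying content.

from typing import List


def dhash(gray: List[List[int]]) -> int:
    """Difference hash: compare each pixel with its right-hand neighbour.

    `gray` is a grid of 0-255 grayscale values, assumed to be already
    downscaled so the hash stays small and tolerant of re-encoding,
    resizing and mild noise.
    """
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Example: an original frame, a lightly re-encoded copy (small pixel
# changes) and a heavily manipulated frame (structure changed).
original    = [[10, 40, 90, 200], [20, 60, 120, 180], [30, 80, 140, 210]]
reencoded   = [[12, 38, 91, 198], [19, 63, 118, 182], [31, 79, 141, 208]]
manipulated = [[200, 90, 40, 10], [180, 120, 60, 20], [210, 140, 80, 30]]

f_orig = dhash(original)
print("re-encoded distance: ", hamming(f_orig, dhash(reencoded)))    # small -> same content
print("manipulated distance:", hamming(f_orig, dhash(manipulated)))  # large -> likely altered
```

Real-world systems are of course far more sophisticated, working across many frames and combining fingerprint matching with watermark extraction, but the matching principle is similar: small distances suggest the same underlying content, large distances suggest alteration or different content.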

In today’s ever-evolving media landscape, it’s reassuring to know that our team of content security experts and the big brains in our R&D department are focused on mitigating the impact of deepfakes. The team is continually working behind the scenes, expanding the scope of our watermarking, fingerprinting and monitoring technologies to rapidly detect whether a video has been tampered with, and to establish the extent and scale of any misrepresentation, so that consumers can put their trust back in video.

It's just a matter of time before judging the trustworthiness of content is as easy as checking how much saturated fat is in your favourite snack.

Watch this space…

In the meantime, to find out more about our watermarking and monitoring solutions, please get in touch.
