Seeing is not believing...?

by Albert Silver
3/17/2021 – That is how the saying goes, doesn’t it? Doctored photos are anything but new, so much so that the name of the program Photoshop has become an actual verb in the English language, much as Xerox did in its heyday. However, while sophisticated modified videos were once the domain of special effects studios and the technically gifted, free mobile apps now make it possible for even Morphy, Carlsen and Anand to sing with just a pic and a click.


The idea of modifying images is nothing new, and has been a staple of the fashion and beauty industry for decades. Many such tools are now standard, and so ubiquitous even in our phones’ selfies, that to say a picture was ‘photoshopped’ requires no further explanation, except perhaps to our grandparents. Video modification has been much slower to develop: facial changes or outright creations remained the domain of make-up artists, while visual effects concerned themselves mostly with the environments.

In recent years, the combination of technology and AI has allowed greater freedom not only in modifying live actors but in actually bringing back the dead to the silver screen. Martin Scorsese’s 2019 film “The Irishman” famously rejuvenated Robert De Niro by decades to portray his character’s life as a gangster. Traditionally, younger actors are aged up with gradual layers of makeup, since making a 75-year-old look like he is 25 is rather more complicated. To be fair, while De Niro did look younger, he did not quite move like a 25-year-old, so some things have yet to be solved.

Here is a making-of video of the de-aging process in Martin Scorsese's "The Irishman", and the special tri-camera setup used to make it happen.

Then we had the Star Wars spinoff Rogue One, which used computer animation to add the characters of two dead actors: Peter Cushing as Grand Moff Tarkin, and Carrie Fisher as Princess Leia. All this has made waves as we are forced to realize that, little by not-so-little, we face ever more numerous and higher-quality digital fantasies of people passed off as real.

In this news report by ABC News, we are shown the efforts and challenges in bringing back the characters played by Peter Cushing and Carrie Fisher.

Deepfakes

While the term ‘deepfakes’ originated around the end of 2017 with a Reddit user named "deepfakes", the underlying technology goes back at least 20 years.

This seminal paper, Video Rewrite, laid the foundation for the technology and algorithms used in Deepfakes to this day. [link to the project's page]

The father of the current technology can be traced back to the landmark Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track. It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.

This is one of the most important works in the development of deepfakes. In fact, many of today’s common video effects that are bundled into programs like Premiere Pro or Final Cut use upgraded algorithm philosophies from this paper.
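The core idea in Video Rewrite — matching units of speech in a new audio track to mouth shapes already filmed, then splicing those frames back into the video — can be sketched in miniature. The phoneme labels and frame "library" below are invented for illustration; the real system matched triphones and morphed smoothly between frames rather than copying them wholesale.

```python
# Toy sketch of the Video Rewrite idea: reuse mouth shapes
# from existing footage to "dub" a new audio track.
# All data here is invented for illustration.

# A "library" built from the original footage: for each phoneme,
# the frame indices where the speaker was producing that sound.
library = {
    "AA": [3, 17, 42],   # open-mouth frames
    "M":  [8, 25],       # closed-lip frames
    "IY": [11, 30, 55],  # spread-lip frames
}

def rewrite(phonemes, library):
    """Pick a stored mouth frame for each phoneme of the new audio.
    The real system matched triphones and morphed between frames;
    here we simply take the first stored example of each sound."""
    frames = []
    for p in phonemes:
        candidates = library.get(p)
        if not candidates:           # unseen phoneme: hold the last frame
            frames.append(frames[-1] if frames else 0)
        else:
            frames.append(candidates[0])
    return frames

new_audio = ["M", "AA", "IY", "M"]        # phonemes of the dubbed line
print(rewrite(new_audio, library))        # → [8, 3, 11, 8]
```

The output is the sequence of source frames to splice together, which is why the technique needs only ordinary footage of the subject speaking, not a studio capture session.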

There are plenty of deepfake projects on Github, with some containing prebuilt executables ready for immediate use. It’s easy for even an amateur to make deepfakes today with the biggest hurdle being patience. That said, the incoming efficiency gains from hardware and software development are only going to make them more prevalent.
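Most of these projects share one architecture: a single shared encoder that compresses any face into a compact code, plus one decoder per person. Training teaches decoder A to rebuild person A's face and decoder B person B's; the swap happens when a frame of A is encoded and then decoded with B's decoder. A minimal sketch of that wiring — the trivial "networks" and four-pixel faces below are stand-ins invented for illustration, where real projects train convolutional autoencoders:

```python
# Toy wiring of a deepfake face-swap autoencoder: one shared
# encoder, one decoder per person. The "networks" here are
# trivial functions; real systems learn them from thousands
# of face crops of each person.

def encoder(face):
    """Compress a face to a compact code (here: its average)."""
    return sum(face) / len(face)

def decoder_a(code):
    """Rebuild a face in person A's style (a flat pattern)."""
    return [code] * 4

def decoder_b(code):
    """Rebuild a face in person B's style (a ramp pattern)."""
    return [code + i for i in range(4)]

def swap_face(frame, target_decoder):
    """The deepfake step: encode one person's frame with the
    shared encoder, then decode the code with the OTHER
    person's decoder, yielding that person's face wearing the
    source frame's expression."""
    return target_decoder(encoder(frame))

frame_of_a = [2.0, 4.0, 6.0, 8.0]        # one frame of person A
print(swap_face(frame_of_a, decoder_b))  # → [5.0, 6.0, 7.0, 8.0]
```

Because the encoder is shared, the code captures pose and expression rather than identity, which is what makes the cross-decoding trick work.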

One of the latest examples of the evolution of this technology could recently be seen in the prank/demonstration of the Tom Cruise TikTok cameos, which had many believing the famous actor had decided to join the popular video social media platform. It was then revealed to be an elaborate stunt meant to demonstrate how advanced deepfakes had become, and to showcase the author’s skills. The creator came forward and even demonstrated how he had done it.

Interestingly, in the title of this article by The Verge, the author of the Tom Cruise deepfakes somehow underestimates the very technology he used to pull off his prank.

An excellent video showing how the Tom Cruise deepfakes were done with photorealistic quality.

However, rather ironically, the headline of the otherwise excellent piece in The Verge stated, “Tom Cruise deepfake creator says public shouldn’t be worried about ‘one-click fakes’”. That was dated March 5th of this year, yet just one week later a new free mobile app, Wombo, appeared, allowing the user to take a selfie or use a photo of their choice to create animated videos of the face singing well-known, over-the-top songs with astonishing levels of animation. This goes far beyond simply pasting someone’s facial features over another’s: the software animates head and body movement, eyebrows, mouth and eyes, and even turns the head, recreating ears and hair where the photo showed none.

Notice how each person was modified with different eyes and mouths, even if they follow a similar pattern of where they look or how they move.

We share some examples with a titan of the past, Paul Morphy, as well as former and current champions Vishy Anand and Magnus Carlsen, along with the images they were made from. It took all of one minute from choosing the image to pressing the button. Even if you plug your ears, observe just how impressive this is for a basic ‘one-click’ phone app.

Paul Morphy

Original image

...cleaned and coloured

...then animated

...and in song (click to start)

Vishy Anand and Magnus Carlsen

Excellent portrait of Vishy Anand,
taken as a standard 2D image.

Also an excellent portrait of Magnus
taken from the banner ad for his event.

...then animated with bobbing shoulders,
moving mouth, eyebrows and eyes.
(click to play)

Notice how the AI has his head turning
side-to-side, filling in the gaps. Impressive.
(click to start)

Now, questionable music aside, this simple and still-early app, demonstrating where all this is going, should serve as a wake-up call to anyone who receives such videos (or audio) on social media as 'revelations' or 'news': it is clear that seeing is not believing anymore.

Where this will lead and what we will see in the next few years is worrisome to say the least.
 


Born in the US, he grew up in Paris, France, where he completed his Baccalaureat, and after college moved to Rio de Janeiro, Brazil. He had a peak rating of 2240 FIDE, and was a key designer of Chess Assistant 6. In 2010 he joined the ChessBase family as an editor and writer at ChessBase News. He is also a passionate photographer with work appearing in numerous publications, and the content creator of the YouTube channel, Chess & Tech.
