How do I identify AI-generated images? – DW – 04/09/2023

It has never been easier to create images that look shockingly realistic but are actually fake.

Anyone with an internet connection and access to a tool that uses artificial intelligence (AI) can create realistic images in a matter of seconds, and then post them to social networks at breakneck speed.

In the past few days, many such photos have surfaced and gone viral: Vladimir Putin apparently being arrested, or Elon Musk seemingly holding hands with GM CEO Mary Barra, to name just two.

The problem is that both AI images show events that never happened. Even supposed paparazzi photos have turned out to be AI-generated.

And while some of these images may be funny, they can also pose real risks in terms of misinformation and propaganda, according to the experts DW consulted.

This viral image, generated by artificial intelligence, purports to show Elon Musk with General Motors CEO Mary Barra. It is fake.

An earthquake never happened

Pictures showing the arrest of politicians such as Russian President Vladimir Putin or former US President Donald Trump can be verified fairly quickly by users who check reputable media sources.

Other images are more difficult to verify, such as those in which the people shown are not recognizable, AI expert Henry Ajder told DW.

One such example: a member of parliament from Germany's far-right Alternative for Germany (AfD) party posted an AI-generated image of screaming men on his Instagram account to signal his opposition to the arrival of refugees.

And according to Ajder, it’s not just AI-generated images of people that can spread misinformation.

He says there are examples of users creating events that never happened.

Such was the case with a powerful earthquake said to have shaken the Pacific Northwest of the United States and Canada in 2001.

But this earthquake never happened, and the pictures shared on Reddit were created by artificial intelligence.

This can be a problem, according to Ajder. “If you’re creating a landscape rather than a portrait of a human being, it can be hard to spot,” he explains.


However, AI tools make mistakes, even though they are evolving quickly. As of April 2023, programs like Midjourney, DALL-E, and DeepAI suffer from glitches, especially with images that show people.

DW’s fact-checking team has put together some suggestions that can help you gauge whether a photo is fake. But an initial word of warning: AI tools are evolving so quickly that this advice only reflects the current state of things.

1. Zoom in and look carefully

Many AI-generated images look real at first glance.

That’s why our first suggestion is to look closely at the image. To do this, find the image in the highest possible resolution and then zoom in on the details.

Enlarging the image will reveal inconsistencies and errors that may not have been detected at first glance.

2. Find the image source

If you are not sure if the image is real or AI generated, try to find its source.

You may be able to see some information about where the photo was first posted by reading the comments other users have posted below the photo.

Or you can do a reverse image search. To do this, upload the image to tools such as Google Image Reverse Search, TinEye or Yandex, and you may find the original source of the image.

The results of these searches may also show links to fact-checks conducted by reputable media that provide additional context.
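Reverse image search engines generally match images by perceptual fingerprints rather than exact file bytes, so they can find a picture even after it has been resized or re-compressed. As a rough illustration of that idea only, not the actual algorithm used by Google, TinEye, or Yandex, here is a minimal sketch of a "difference hash" (dHash) computed on a tiny hand-written grayscale pixel grid; in practice you would decode a real image file with a library such as Pillow first.

```python
# Sketch of a perceptual "difference hash" (dHash): each bit records
# whether a pixel is brighter than its right-hand neighbour. Visually
# similar images yield hashes with a small Hamming distance, which is
# how search engines can match re-encoded or resized copies.

def dhash(gray):
    """gray: 2D list of grayscale values; each row yields one bit per
    adjacent pixel pair."""
    bits = []
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two tiny 4x5 "images": the second is the first with a uniform
# brightness shift, as a re-compressed copy might show.
img_a = [
    [10, 20, 30, 40, 50],
    [50, 40, 30, 20, 10],
    [10, 10, 90, 10, 10],
    [90, 10, 10, 10, 90],
]
img_b = [[v + 2 for v in row] for row in img_a]

h_a, h_b = dhash(img_a), dhash(img_b)
print(hamming(h_a, h_b))  # 0: the brightness shift leaves gradients intact
```

A brightness shift changes every pixel but no left/right comparison, so the hashes stay identical, while a genuinely different image produces many differing bits.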

In this photo, Putin appears to kneel before Xi Jinping, but a closer look shows that the image is fake. Photo: Twitter/DW

3. Pay attention to body proportions

Do the people pictured have the correct body proportions?

It is not uncommon for AI-generated images to show inconsistencies when it comes to proportions. The hands may be too small or the fingers too long. Or the head and feet may not match the rest of the body.


Such is the case with the photo above, which supposedly shows Putin kneeling before Xi Jinping. The kneeling man’s shoes are disproportionately large and wide, and his calf appears elongated. The half-covered head is also very large and out of proportion with the rest of the body.

More about this fake in our dedicated fact check.

4. Watch out for typical AI errors

Hands are currently the main source of errors in AI image software such as Midjourney or DALL-E.

People pictured often have a sixth finger, like the policeman to Putin’s left in our photo above.

The same goes for these pictures of Pope Francis, which you may have seen.

But did you notice that Pope Francis appears to have only four fingers in the right-hand photo? And that the fingers on his left hand are unusually long? These pictures are fake.

Other common mistakes in AI-generated images include people with too many teeth, strangely deformed glasses frames, or ears with unrealistic shapes, as in the aforementioned fake photo of Xi and Putin.

Reflective surfaces, such as helmet visors, also cause problems for AI programs, sometimes appearing to disintegrate, as in the images of Putin’s alleged arrest.

However, AI expert Henry Ajder warns that newer versions of programs like Midjourney are getting better at generating hands, which means users won’t be able to rely on spotting these kinds of errors for much longer.

5. Does the image look artificial and soft?

Midjourney in particular creates many images that seem too good to be true.

Follow your gut feeling here: Could this perfect picture with flawless people really be real?

“The faces are very pure, and the textiles on display are very harmonious,” Andreas Dengel of the German Research Center for Artificial Intelligence told DW.


The skin of the people in many AI images is often smooth and free of any blemishes, and even their hair and teeth are flawless. This is rarely the case in real life.

Many images also have an artistic, glossy, and shiny look that is difficult for professional photographers to achieve in studio photography.

AI tools often seem designed to produce idealized images meant to please as many people as possible.

6. Background check

The background of a photo often reveals whether it has been manipulated.

Here too, objects can appear distorted; street lamps, for example.

In a few cases, AI programs clone people and objects and use them twice. It is not uncommon for the background of AI images to be blurred.

But even this blurring can contain errors, as in the example above, which purports to show an angry Will Smith at the Academy Awards. The background is not just out of focus; it looks artificially blurred.

Conclusion

Many AI-generated images can still be exposed today with a little research. But the technology is improving, and errors are likely to become rarer in the future. Can AI detectors, such as those hosted on Hugging Face, help us spot manipulation?

Based on our findings, detectors provide clues, but nothing more than that.

The experts we interviewed tend to advise against using them, saying the tools are not sufficiently developed: genuine photos were sometimes declared fake, and vice versa.
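The cautious way to use such detectors is to treat their score as one signal among several, not a verdict. The sketch below assumes a hypothetical detector that returns a probability that an image is AI-generated; the function name and thresholds are illustrative inventions, not any real tool's API.

```python
# Illustrative only: turn a detector's raw score into a cautious label
# rather than a binary fake/real verdict. The thresholds are arbitrary
# assumptions, not values taken from any real detector.

def interpret_score(p_ai_generated):
    """p_ai_generated: the detector's estimated probability in [0, 1]
    that the image was AI-generated."""
    if p_ai_generated >= 0.9:
        return "strong hint of AI generation: verify with other checks"
    if p_ai_generated >= 0.5:
        return "inconclusive: use reverse search and manual inspection"
    return "no strong hint, but still not proof the image is authentic"

print(interpret_score(0.97))
print(interpret_score(0.60))
print(interpret_score(0.10))
```

Note that even the low-score branch refuses to declare the image real, matching the experts' point that these tools provide clues, nothing more.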

Therefore, when in doubt, the best thing users can do to distinguish real events from fake ones is to use common sense, rely on reputable media, and refrain from sharing dubious images.

Graphical list of how you can spot fake photos
