Looking back at CVPR 2023

21 July, 2023

By Fred Lichtenstein and Shaunagh Downing

The 2023 Conference on Computer Vision and Pattern Recognition (CVPR) has come to an end, and after returning from Vancouver, we’ve spent the last few days reflecting on what an incredible event it was. With more than 6,500 in-person attendees, we were lucky enough to meet some amazing people and gain insights from leaders in the field of computer vision.

Below, we’re unpacking our top takeaways from the event.

“Attending CVPR was a great experience. I had the opportunity to witness all the cutting-edge research going on in the field of computer vision and engage in interesting discussions with a diverse group of people.” - Shaunagh Downing

“CVPR is a rare opportunity for academia and industry to exchange perspectives – essential for driving ideas forward. The only way we combat emerging online threats is to act as a community, and CVPR is a vital part of making that happen.” - Fred Lichtenstein

AI is evolving, fast

CVPR 2023 was full of high-quality, exciting research and provided the perfect opportunity to gain valuable insight into the current state of the field: which research topics are popular, and where the field is heading. It is important to stay on top of this rapidly developing research, both to ensure we are harnessing current tools to the best of our ability and to assess the potential threats of new technology.

Traditional forensic methods are just as important as ever

How can investigators quickly determine whether an image is AI-generated?

We were lucky enough to attend some great workshops during our time there, including an insightful media forensics workshop led by Hany Farid (a world-renowned professor in image analysis and digital forensics).

One school of thought holds that traditional image forensics techniques remain useful for classifying whether an image is AI-generated.

Currently, generative AI, while able to create detailed and complex images, still makes telltale mistakes – such as rendering multiple sources of sunlight in the same image. While returning to traditional analysis techniques may help determine whether an image is generated for now, there’s no telling how long these techniques will stay relevant. AI-based detectors are another promising option for determining the authenticity of an image.
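To make this concrete, below is a minimal, illustrative Python sketch of one traditional forensic signal: inspecting an image’s Fourier power spectrum, where the upsampling layers in many generative models tend to leave periodic artifacts. The filename, the 0.75 band cutoff, and the function itself are our own assumptions for illustration – a weak heuristic, not a production detector or any specific method presented at the workshop.

import numpy as np
from PIL import Image

def spectrum_peak_ratio(path: str) -> float:
    """Return the ratio of high-frequency energy to total spectral energy.

    Generated images often show unusual energy in the high-frequency band
    of their power spectrum; a high ratio is a (weak) red flag.
    """
    # Load as grayscale and compute the centred 2D power spectrum.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Distance of every frequency bin from the spectrum's centre (DC).
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)

    # Fraction of energy in the outer (high-frequency) band.
    # The 0.75 cutoff is an arbitrary illustrative choice.
    high_band = spectrum[radius > 0.75 * min(cy, cx)].sum()
    return float(high_band / spectrum.sum())

ratio = spectrum_peak_ratio("suspect.jpg")  # hypothetical input file
print(f"High-frequency energy ratio: {ratio:.4f}")

In practice a forensic tool would compare this ratio against statistics from known camera-original images rather than rely on any fixed threshold.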

The Content Authenticity Initiative

Image authenticity was a big focus throughout the event. Founded by Adobe, the Content Authenticity Initiative is a worldwide effort to create a standardised practice for authenticating the source of an image. This would consist of adding a ‘layer of verifiable trust to all types of digital content’, with a stamp attached to each image showing where it originated – whether through photography, digital art, or AI generation.

Image courtesy of the Content Authenticity Initiative

The result? A universally accepted stamp of image provenance that can help eradicate some of the uncertainty currently plaguing online media. We’d love to see this initiative continue to gain traction, in the hope of making online media more trusted, transparent, and authentic.
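As a rough illustration of what such a provenance stamp looks like on disk, the sketch below scans a file for the ‘c2pa’ label that typically appears inside the embedded JUMBF container used by the C2PA standard underpinning Content Credentials. This is a deliberately naive presence check of our own devising; real verification requires validating the manifest’s cryptographic signatures with a proper C2PA library, and the filename is hypothetical.

from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Naively check whether a file appears to embed a C2PA manifest."""
    data = Path(path).read_bytes()
    # Presence of the manifest-store label only - no signature validation.
    return b"c2pa" in data

print(has_c2pa_marker("photo.jpg"))  # hypothetical input file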

A spotlight on misinformation

With a significant US presidential election next year and the ongoing war between Russia and Ukraine, AI-generated images have already been recognised for the role they can play in spreading misinformation and destabilising online news.

We previously touched on this in our blog introducing AI-generated imagery, but it was great to see this threat acknowledged during the conference. A Meta spokesperson agreed that misinformation was a concern, and said their teams would seek to run AI-detection classification on images once they go viral. However, we believe that an AI-generated image can cause a lot of harm even without going viral, and that earlier intervention would be better.

Generative AI: a pattern of harm and misuse

The CVPR conference also highlighted some of the other dangers of AI image generation, including its bias towards producing inappropriate images – content that is hateful, sexual, or violent in nature.

One key insight that caught our attention is that, because AI image generation models are trained on large datasets scraped from the internet, they inherit biased behaviour and can produce inappropriate images even when not prompted to do so. One study found that Stable Diffusion had a 90% probability of generating images containing nudity when the prompt “body” was associated with Japanese ethnicity, highlighting how AI can reinforce the biased representation of Asian women present in the training set.

The need for further attention on ethics

CVPR featured a large focus on the development of AI tools, with far fewer papers addressing the ethics or responsibility behind them. We expect, and hope, this will become a bigger topic at future CVPRs.

Rather than simply focusing on the capabilities of generative AI tools, it’s important that we also focus on the morality, ethics, and responsibility driving their use. With equal attention, we can create tools that enable accurate and detailed content creation without encouraging a pattern of exploitation and harm.

Closing thoughts

Generative AI continues to evolve at an extremely rapid pace and shows no sign of slowing down. It was fascinating to learn more about current AI sentiments, focuses, and capabilities, and the things we have learned will inform how we interact with AI-generated imagery as it becomes more and more widespread.

Now, more than ever, a global focus on ensuring authenticity and introducing responsibility is essential. It will help us mitigate potentially harmful content before it becomes a contributing factor to online exploitation and harm.

Want to learn more? Read our blog on understanding AI-generated imagery.

