Deepfake Damages Worth $40 Billion? — The Impact of Generative AI on Identity and Countermeasures

The Improving Realism of Deepfakes

Deepfake technology has developed rapidly thanks to advances in AI. Early deepfakes were low quality and easy to identify as fake. From 2018 to 2019, however, the advent of GAN-based image generation significantly improved image quality. Since 2020, Transformer-based models have improved consistency in long-duration videos. By 2023, the volume of deepfake content had increased by 3,000% compared to the previous year.

The Damage from the Misuse of Deepfakes

According to Deloitte’s estimates, fraud losses due to deepfake misuse amounted to $12.3 billion in 2023 and are projected to reach $40 billion by 2027. That is an average annual growth rate of 32%, with losses more than tripling in four years. New generative AI tools have made deepfakes cheap to produce, and the financial services industry has been a particular target: deepfake incidents in the fintech sector increased by 700% in 2023, and annual losses from voice deepfakes in contact-center fraud are estimated at around $5 billion. Deepfake-related incidents are predicted to grow by 60% year on year in 2024, reaching 150,000 cases globally. Concerns are also rising over non-consensual sexual content and forged identification documents, and an illicit industry selling fraud software has emerged on the dark web.

Real Cases of Fraud Due to Deepfakes

Deepfake fraud targeting corporate executives is on the rise. One example is a WhatsApp scam that impersonated the CEO of WPP, the world’s largest advertising agency group. In another case, in Hong Kong, fraudsters impersonating corporate executives on a video call caused losses of tens of millions of dollars. Reports of cyberattacks exploiting AI are also increasing.

Cyberattacks Using AI Beyond Deepfakes

According to research by Ivanti, many companies report an increase in AI-driven cyberattacks, and such attacks are expected to grow further. The threats respondents find most concerning are phishing (45%), attacks targeting software vulnerabilities (38%), ransomware (37%), and attacks targeting API vulnerabilities (34%).

Current Status of Deepfake Countermeasures

Banks and other financial institutions are introducing fraud detection systems based on AI and machine learning. JPMorgan uses large language models to detect email fraud, and Mastercard has developed a “Decision Intelligence” tool that predicts whether a transaction is legitimate. However, existing risk management frameworks may not fully cope with new AI technologies.
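
As a purely illustrative sketch of what such machine-learning fraud detection can look like (not a description of JPMorgan’s or Mastercard’s actual systems), the example below scores transactions with an Isolation Forest; the feature names and training data are invented for the example.

```python
# Toy anomaly scorer for card transactions using an Isolation Forest.
# Features and thresholds are assumptions for the sketch only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Assumed features: [amount_usd, hour_of_day, km_from_home, merchant_risk]
normal = np.column_stack([
    rng.lognormal(3.5, 0.8, 5000),   # typical purchase amounts
    rng.integers(7, 23, 5000),       # daytime activity
    rng.exponential(10, 5000),       # usually close to home
    rng.uniform(0.0, 0.3, 5000),     # low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large 3 a.m. purchase far from home at a high-risk merchant.
suspicious = np.array([[9500.0, 3, 8200.0, 0.9]])
print(model.decision_function(suspicious))  # negative => anomalous
print(model.predict(suspicious))            # -1 => flag for review
```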

National Efforts to Counter Deepfakes

There are concerns that deepfakes are becoming ever harder to distinguish visually. OpenAI plans to offer a tool for detecting content generated by its own AI. However, since deepfakes are rarely created with a single tool, the effectiveness of such detectors is limited. The C2PA initiative is developing standards that disclose how AI-generated content was produced, akin to food ingredient labels. The UK government has launched a “Deepfake Detection Challenge,” and public awareness campaigns are also being promoted.

Thoughts from an Identity Perspective

The impact of generative AI on identity is extensive, and deepfakes are just one aspect. As deepfake management strategies, the following countermeasures come to mind:

  1. Digital sender authentication for organizational use cases
  2. Provenance transparency for information disseminated through the web and social media
  3. Human measures

Sender Authentication

Rather than relying on humans to judge by voice or facial appearance, the sender of information should be authenticated at a high level of assurance before any important transaction (a technical measure). One example is using CIBA (Client Initiated Backchannel Authentication) to push a notification to a pre-registered device and authenticate the user whenever a request arrives by phone or video.
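
A minimal sketch of this flow, using CIBA in poll mode, is shown below. The OpenID Provider URL, endpoint paths, client credentials, and login hint are placeholders for illustration, not a real deployment.

```python
# Sketch of OpenID Connect CIBA (poll mode) to confirm a high-risk
# request out of band. URLs and credentials below are placeholders.
import time
import requests

OP = "https://op.example.com"            # hypothetical OpenID Provider
CLIENT_AUTH = ("finance-app", "secret")  # placeholder client credentials

# 1. Back-channel authentication request: pushes a confirmation prompt
#    to the pre-registered device of the purported requester.
resp = requests.post(
    f"{OP}/bc-authorize",
    auth=CLIENT_AUTH,
    data={
        "scope": "openid",
        "login_hint": "ceo@example.com",
        "binding_message": "Approve wire transfer #4711?",
    },
).json()
auth_req_id = resp["auth_req_id"]

# 2. Poll the token endpoint until the user approves or denies on device.
while True:
    time.sleep(resp.get("interval", 5))
    token = requests.post(
        f"{OP}/token",
        auth=CLIENT_AUTH,
        data={
            "grant_type": "urn:openid:params:grant-type:ciba",
            "auth_req_id": auth_req_id,
        },
    )
    if token.status_code == 200:
        print("Sender authenticated; ID token received.")
        break
    if token.json().get("error") != "authorization_pending":
        raise SystemExit("Request denied or expired")
```

Only after the token endpoint returns successfully, proving that the registered device holder approved the binding message, should the transaction proceed.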

Organizational measures are needed to make this stick. In particular, it is important to guarantee that employees will not lose their jobs for insisting on such verification. A typical scam tactic applies pressure: “The company’s survival depends on this. If you don’t act immediately, you’ll be fired.” Protecting employees from that pressure is crucial, and it requires not just technical measures but also organizational rules within the company.

For deepfake-based forgery of identification documents, moving to digitally signed documents is effective. Fortunately, Japan already has the Public Personal Authentication service (JPKI) and the Digital Agency’s digital authentication app, so high-assurance identity verification should rely on these.
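
The sketch below shows, in deliberately simplified form, why digitally signed documents resist forgery: any change to the signed data invalidates the signature. Real schemes such as JPKI rely on certified smart-card keys and a full PKI, so treat the key handling here as a toy stand-in.

```python
# Simplified illustration of signed identity documents using Ed25519.
# Real systems (e.g., JPKI) use certified keys and a PKI; this only
# demonstrates the core tamper-evidence property.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held by the document issuer
issuer_pub = issuer_key.public_key()        # distributed to verifiers

document = b'{"name": "Taro Yamada", "dob": "1990-01-01"}'
signature = issuer_key.sign(document)       # issued with the document

# Verification succeeds for the authentic document...
issuer_pub.verify(signature, document)

# ...but fails for a forgery with a single altered field.
forged = b'{"name": "Taro Yamada", "dob": "1985-05-05"}'
try:
    issuer_pub.verify(signature, forged)
except InvalidSignature:
    print("Forgery detected: signature does not match document")
```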

Provenance Transparency

It is essential to clarify both how disseminated information was generated and who the source is. This is crucial for maintaining the consistency of identity. Consider what would happen if someone created and spread non-consensual sexual content, or a video of a person apparently committing a crime. If believed, it would damage others’ perceptions of that person and lead to a loss of trust.

The role of C2PA and Originator Profile is crucial here: they help identify whether a video or image was generated by AI and who the sender is. However, care must be taken regarding freedom of speech.
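
Before turning to those concerns, the toy sketch below illustrates the basic mechanism: a claim about an asset, including its hash and how it was produced, is signed, and a verifier re-hashes the asset and checks the signature. This is not the actual C2PA data model or SDK; the field names and key handling are simplifications invented for illustration.

```python
# Toy provenance check in the spirit of C2PA's signed claims and
# content-hash binding. NOT the real C2PA data model or SDK.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()  # stands in for a tool's credential
signer_pub = signer.public_key()

asset = b"...image bytes..."
claim = {
    "generator": "ExampleCam 1.0",           # assumed claim generator name
    "actions": ["created"],                  # how the asset was produced
    "asset_sha256": hashlib.sha256(asset).hexdigest(),
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()
manifest = {"claim": claim, "signature": signer.sign(claim_bytes)}

def provenance_intact(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the claim's signature, then re-hash the asset against it."""
    claim = manifest["claim"]
    serialized = json.dumps(claim, sort_keys=True).encode()
    try:
        signer_pub.verify(manifest["signature"], serialized)
    except InvalidSignature:
        return False  # the claim itself was tampered with
    return hashlib.sha256(asset_bytes).hexdigest() == claim["asset_sha256"]

print(provenance_intact(asset, manifest))            # True: untouched
print(provenance_intact(asset + b"edit", manifest))  # False: modified
```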

C2PA and Freedom of Speech

C2PA is a technology designed to prove the origin and editing history of digital content, with the aim of preventing the spread of fake news and deepfakes. If misused, however, it could restrict freedom of speech. For instance, the C2PA system could be used to identify journalists, allowing governments to suppress their reporting. There is also concern that C2PA could be used as a mechanism for enforcing specific laws.

Originator Profile and Freedom of Speech

Originator Profile is a technology for verifying the authenticity and trustworthiness of web content senders, aimed at preventing misinformation and ad fraud. Because it identifies the sender, however, anonymity may be lost, which could restrict freedom of speech; if sender information is misused, it could also lead to self-censorship.

Impact on Freedom of Speech

  1. Privacy Concerns: Both technologies collect and manage sender information, raising privacy concerns. This could make it difficult for senders to freely express their opinions.
  2. Risk of Misuse: If these technologies are misused by governments or other authorities, there is a risk that freedom of speech will be restricted, especially targeting journalists or activists.
  3. Transparency and Accountability: Transparency in how these technologies are used and how data is managed is necessary. Without appropriate accountability, freedom of speech could be threatened.

These technologies are important for improving the reliability of digital content, but careful consideration of how they are used and managed is essential to protect freedom of speech.

Human Measures

Finally, human measures are also critical. Even the best technical measures are meaningless if nobody uses them. This is challenging, however: while an organization can mandate training and impose penalties on its members, the same cannot be done for the general public. For the public we would have to rely on school education and awareness campaigns, and that remains an area of concern.

Conclusion

While the capabilities of attackers’ tools are evolving exponentially, human skills do not evolve the same way, so attacks cannot be countered by skill alone; technical support is indispensable. Technical measures therefore need to be promoted strongly.

At the same time, where social communication is concerned, the relationship with freedom of speech matters, so it is essential not to overreach. The difficulty of human measures must also be kept in mind.

Considering all these factors comprehensively, it is crucial to implement balanced measures.
