By Dr Selasi Kwashie, Senior Lecturer and Teaching Lead in Executive Education at the Charles Sturt University Artificial Intelligence and Cyber Futures Institute.
The recent ABC Four Corners program ‘AI rising: The new reality of artificial life’, about the growing risks of generative artificial intelligence (AI), is an excellent, timely and much-needed contribution to bringing public attention to the issue.
As we learn to live in the era of generative AI, there is an urgent need to better understand the risks that come with it.
One of the main challenges for today’s digitally enhanced society is the rise of ‘deepfakes’ (a portmanteau of ‘deep learning’ and ‘fake’): synthetic media that have been digitally manipulated to convincingly replace one person’s likeness with that of another.
Indeed, as AI-generated images, videos, and audio continue to evolve, their applications have expanded into both positive and negative realms.
Through our Responsible AI program within the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University, we are contributing to a comprehensive approach to tackling this complex issue: promoting responsible AI and human rights by design, and educating people about the potential risks associated with emerging technologies.
Deepfakes for good
It is important to understand that deepfakes are not inherently harmful. They can be used for good. Here are several notable examples:
- Automating routine tasks: Deepfakes can take over routine work. Think of ‘Jill Watson’, a virtual teaching assistant created by Professor Ashok Goel at Georgia Tech to answer routine student questions about a popular university course. By efficiently and accurately answering queries about the course syllabus and instructional materials, ‘Jill Watson’ enhanced the learning experience for students and eased the workload for instructors. The system answered questions so effectively that in its first year of use, students, not even realising they were talking to an algorithm, nominated it for a ‘best teaching assistant’ award.
- Historical re-enactments: Deepfakes can breathe life into historical figures, enabling museums and educational institutions to create engaging, immersive experiences that foster a deeper understanding of the past: what life was like centuries ago, what people looked like, what languages they spoke and what challenges they faced.
- Entertainment and fashion: Actors’ performances can be seamlessly integrated into films or television shows even when they are unavailable, deceased or ageing, and the technology can also facilitate diverse casting, bringing more representation to the screen. In Rogue One: A Star Wars Story (2016), deepfake-style technology was employed to bring the late British actor Peter Cushing back to the screen as the iconic villain Grand Moff Tarkin. Despite Cushing’s death in 1994, the visual effects team at Industrial Light & Magic recreated his likeness, enabling the seamless continuation of his character’s role in the Star Wars saga. This showcases the technology’s potential for enhancing storytelling and preserving the legacy of beloved actors. Deepfakes can also be used in creative fashion design.
- Medical training: Deepfakes can generate realistic patient scenarios for doctors, nurses, and other medical professionals to practice diagnosis and treatment, ultimately enhancing their skills and patient care.
Deepfakes for evil
Yet, deepfakes can also be used for evil.
- Political misinformation: Fabricated videos or audio of politicians can be used to spread false information, deceive the public and manipulate opinion, undermining the democratic process. For example, early in the current Russia-Ukraine war, a deepfake video widely attributed to Russia showed Ukrainian President Zelensky calling on Ukrainians to surrender. The video did not have the intended effect on the Ukrainian public, but under different circumstances it could have been far more damaging.
- Cyberbullying and revenge porn: Deepfakes can be utilised to create explicit content featuring individuals without their consent, leading to severe emotional and psychological harm.
- Corporate espionage: Fraudsters can use deepfakes to impersonate business executives or employees in order to siphon off sensitive information or cause financial losses. Voice deepfakes, for example, are frequently used in financial scams.
- Promoting harmful standards: Fake Instagram influencers (such as Shudu) have hundreds of thousands of followers and can propagate unrealistic standards of beauty among young women, some of whom become unhealthily thin trying to emulate their favourite influencer.
A comprehensive approach for Australia
To combat the malicious use of deepfakes, it is necessary to approach the problem holistically, not just from the regulatory angle, but also from the educational standpoint. What are the key ingredients of this holistic approach?
- Responsible AI and human rights by design: In Australia, we need to encourage the development and deployment of AI in a manner that respects human rights, values, and ethics. This includes promoting transparency, explainability, and accountability within the AI community. We need to incorporate human rights considerations in the design, development, and deployment of AI technologies, ensuring that privacy and dignity are protected throughout the process.
- Legal frameworks: We need to review and update existing laws to address the challenges posed by deepfakes, including defamation, privacy and intellectual property. The European Union’s AI Act is a good example in this regard, and it would be useful to catch up with European legislation while keeping our local specifics in mind.
- Public-private partnerships: We need to foster collaboration between government, industry, and academia to develop and implement robust solutions, including deepfake detection and verification tools (a simplified sketch of the detection idea appears after this list).
- Respecting our roots: Incorporating Indigenous values and thinking in AI development is crucial for fostering inclusive, diverse, and ethical technology. Embracing Indigenous perspectives enriches our understanding, promotes cultural preservation, and ensures more holistic approaches to AI. A country-centric approach to AI, which considers local values, cultures and priorities, further emphasises the importance of contextualising AI development to address the unique needs and aspirations of diverse communities while mitigating biases and discrimination.
- Awareness and education: None of the above measures will work if we fail to educate people about deepfakes and the risks and benefits of generative AI. We need to empower the public with the knowledge and resources to identify and report deepfakes, fostering a more discerning and informed society. If people can recognise deepfakes more reliably, society may become far less vulnerable to the dangers of the current ‘post-truth’ world. Accordingly, at the Charles Sturt Artificial Intelligence and Cyber Futures Institute we are working on a comprehensive educational program that will help people better understand how these algorithms function and what issues they raise for society.
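To make the idea of detection tools a little more concrete, here is a minimal sketch of one common approach: treating detection as binary classification over individual video frames. The ResNet backbone and the checkpoint name are illustrative assumptions, not a description of any particular tool; production detectors are considerably more sophisticated.

```python
# Minimal sketch of frame-level deepfake detection: score each video frame
# with a binary "real vs synthetic" classifier. The backbone choice and the
# checkpoint name below are hypothetical, for illustration only.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing, reused here for simplicity.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_detector() -> torch.nn.Module:
    """A ResNet-18 whose final layer emits one logit: is the frame synthetic?"""
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 1)
    return model

def score_frame(model: torch.nn.Module, frame: Image.Image) -> float:
    """Return the model's estimated probability that a frame is synthetic."""
    x = preprocess(frame).unsqueeze(0)   # a batch of one: (1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        logit = model(x)                 # shape (1, 1)
    return torch.sigmoid(logit).item()

# Usage with a (hypothetical) trained checkpoint:
# model = build_detector()
# model.load_state_dict(torch.load("deepfake_detector.pt"))
# print(f"P(synthetic) = {score_frame(model, Image.open('frame.png')):.2f}")
```

Averaging such per-frame scores over a whole clip, and combining them with audio and metadata checks, is roughly how many practical verification pipelines are assembled.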
It is only by combining legislation, private sector standards and education that we can harness the power of deepfakes for good while minimising their potential for harm.
By promoting responsible AI and human rights by design, Australia can pave the way for a more ethical and secure digital future.
The Charles Sturt Artificial Intelligence and Cyber Futures Institute aims to help Australia build this future.