Project Description
Advancements in artificial intelligence have led to the rise of deepfake technologies. Using neural network models such as Generative Adversarial Networks (GANs) and video autoencoders, computer users can manipulate existing photos and videos of people, creating the appearance of words the subject never said or actions they never took. With software for deepfake detection and methods for harm mitigation still in development, the rise of deepfake technology raises questions of computing ethics: how is society working to protect vulnerable populations, and how should industry leaders and authority figures regulate the open-source niche of deepfake technology to best combat the misinformation and exploitation that spread along with it? The reality is that anyone's photos can be used in deepfake videos without their knowledge, and an analysis of the implications of open-source development is long overdue.
Name of Student
Caroline Frisiras
Majors
Media Management
Philosophy
Minor
Business Law