
Deconstructing Deepfakes with an AI Expert

Join us this week as we delve into the enigmatic world of deepfakes with Dr. Federica Fornaciari, who brings a wealth of knowledge and expertise to the conversation. We take a hard look at how this technology, which can convincingly manipulate appearances and actions through AI and deep learning, is capable of causing reputational damage, emotional distress, and even legal consequences. Dr. Fornaciari walks us through how deepfakes can undermine trust in media and create a climate of skepticism, but also emphasizes the importance of fostering a climate of ethical use and reasonable creation.

In our thought-provoking chat with Dr. Fornaciari, we look beyond the challenges and delve into the potential benefits of deepfakes, touching on their use in entertainment and historical preservation. We explore the ethical implications that come with using this technology, underlining the importance of ethical guidelines, transparency, and media literacy initiatives. As Dr. Fornaciari aptly reminds us, technology itself is neutral - it's how we use it that determines its ethical implications.

And we don't stop there. Together with Dr. Fornaciari, we explore strategies to detect deepfakes and differentiate them from authentic content. We highlight how deepfakes can perpetuate harmful stereotypes, but also underscore their potential to provide opportunities for underrepresented groups. Through this discussion, we learn how to spot unnatural movements, inconsistencies, and contextual anomalies in deepfakes. We peek behind the curtain of deepfakes, navigating this complex landscape with a keen focus on collaboration and transparency. An episode you won't want to miss!


Show Notes

  • 0:00:11 - Rise of Deepfakes Concerns (42 Seconds)
  • 0:03:17 - Understanding Deepfakes and Their Creation (117 Seconds)
  • 0:05:58 - Concerns of Deepfakes (87 Seconds)
  • 0:15:42 - The Profound Implications of Deepfakes (70 Seconds)
  • 0:18:22 - Transparency and Accountability in Media (88 Seconds)
  • 0:29:38 - The Impact of Technology on Privacy (85 Seconds)
  • 0:36:09 - Deepfakes and Representation in Media (155 Seconds)
  • 0:44:16 - Regulating Deepfake Technology and Freedom (89 Seconds)

0:00:01 - Announcer

You are listening to the National University Podcast.

0:00:10 - Kimberly King

Hello, I'm Kimberly King. Welcome to the National University Podcast, where we offer a holistic approach to student support, well-being and success- the whole human education. We put passion into practice by offering accessible, achievable higher education to lifelong learners. Today, we're talking about deepfakes. According to a recent article in the Guardian, AI-generated fake videos are becoming more common and convincing- and that's why we should be worried. Today's guest discusses fostering a climate of ethics, trust and reasonable creation, and encourages content creators to act with integrity.

On today's episode, we're talking about deepfakes and how to spot them, and joining us is Dr. Federica Fornaciari. She's a professor, a researcher, a children's book author and the academic program director for the MA in Strategic Communications at National University. She received a doctorate in communication, with a concentration in electronic security and privacy, from the University of Illinois at Chicago, and a Master of Arts in Journalism and Mass Communication from Marshall University. Her research and teaching revolve around emerging technologies, privacy issues, digital identities, frame theory and media representation. Wow, interesting. Federica has published several peer-reviewed articles and book chapters, and her multifaceted journey embodies the spirit of exploration, nurturing intellectual growth and fostering connection amidst the ever-evolving landscape of communication. We welcome her to the podcast. Dr. Fornaciari, how are you?

0:01:51 - Doctor Federica Fornaciari

I'm great. Thank you so much, Kim, for your introduction. It's a pleasure to be here.

0:01:57 - Kimberly King

Wow, what a fascinating area that you're studying and so relevant. And so today I want to see if you could fill our audience in a little bit on your mission and your work before we get to today's show topic.

0:02:09 - Doctor Federica Fornaciari

Absolutely, yeah. So I am passionate about social justice, for sure, and passionate about human behavior. I'm passionate about helping people, ultimately, to navigate through the media landscape in a successful way, and we will talk a lot about media literacy today, I believe. We'll see how the conversation goes, but with my students, I try to help them understand the complexity of today's environment- handling technology successfully, with social media, artificial intelligence, how the media frame and talk about issues, what they include in an issue, what they leave out. It's all about media literacy, and it's all about trying to make the world a better place, ultimately.

0:03:09 - Kimberly King

I love it. And boy, again, no time like the present, right? It's just gone to the wild, wild west, it feels like. Today we're talking about how deepfakes work, and what is a deepfake? What is that technology? And so, Doctor, could you provide a brief explanation of what deepfakes are and how they're created?

0:03:28 - Doctor Federica Fornaciari

Yeah, absolutely, Kim. You know, deepfakes are, for sure, a fascinating aspect of modern media, though they are also a very concerning one. Essentially, they are synthetic videos or audios or images that use the power of artificial intelligence and deep learning- that's why they're called deepfakes, because it has to do with deep learning- to convincingly manipulate someone's appearance and actions. Right? So picture this.

Through cutting-edge neural networks, deepfakes analyze enormous data sets of a specific person. Typically, you know, it can be a celebrity or a public figure, because there is obviously way more content about public figures and celebrities, but it can be, you know, a regular person too. And they learn every minute detail of their facial expressions, of their mannerisms, of their speech patterns. So, as a result of this pretty sophisticated learning process, the technology can seamlessly superimpose the likeness of that target individual onto someone else, who we can refer to as the actor, right? So the outcome is a pretty deceptively realistic video or audio or image that makes it look as if the target person is doing or saying something they've actually never done or never said, right? So it is pretty astonishing to witness how realistic these deepfakes can be, especially as the technology evolves at an incredible speed, but it obviously also raises critical concerns. You know, the potential to infringe on privacy, to manipulate identities- they've become a hotbed for ethical and societal discussion. You know, from tarnishing the reputation of public figures to spreading misinformation, they can pose pretty significant challenges for media integrity and for public trust, ultimately.
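For the technically curious: the face-swap recipe Dr. Fornaciari describes is classically built on a shared encoder with one decoder per identity. Here is a minimal sketch of that idea in Python (PyTorch); the layer sizes, names and image resolution are illustrative assumptions, not any particular tool's implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Compresses a face image into a compact latent code capturing
        # expression and pose, shared across both identities.
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
                nn.Linear(1024, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        # One decoder is trained per identity: it learns to render that
        # specific person's face from any latent code.
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z).view(-1, 3, 64, 64)

    encoder = Encoder()
    decoder_target = Decoder()  # trained only on the target person's faces
    decoder_actor = Decoder()   # trained only on the actor's faces

    # Training reconstructs each person's own faces through the shared
    # encoder. The swap happens at inference: encode an actor frame,
    # then decode it with the *target's* decoder, so the target appears
    # to perform the actor's expressions and movements.
    actor_frame = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
    swapped = decoder_target(encoder(actor_frame))

Real systems add convolutional networks, face alignment and blending, but the shared-encoder, per-identity-decoder structure is the core trick.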

0:05:46 - Kimberly King

That's really frightening to think what they can do and, I guess, the concerns that have been raised about privacy, as you mentioned, and identity manipulation. How do you see this technology impacting individuals and society as a whole?

0:05:52 - Doctor Federica Fornaciari

It's pretty scary, honestly. Though, you know I'm a positive thinker too. You know, deepfakes have emerged as a huge concern when it comes to privacy, when it comes to identity manipulation. The ramifications of this technology can be far-reaching, as you can imagine. They can impact both individuals and society in very significant ways. So you know, on an individual level, deepfakes can have devastating effects, whether it's for a public figure or me or you, possibly, who knows?

With the ability to so convincingly create fabricated video, fabricated audio that impersonates someone, innocent individuals might find themselves entangled in false narratives or compromising situations they've never experienced.

So these can lead to so many consequences- you know, reputational damage, emotional distress, legal consequences possibly, potential harm to personal relationships, you name it. Even more so, deepfakes can erode, if you will, the very essence of trust in media information. As the technology evolves, which it does day in and day out, it becomes more accessible. You and I could fairly easily have access to apps that create deepfakes. So distinguishing authentic content, genuine content, from manipulated content becomes increasingly challenging, and, this goes without saying, it can lead to skepticism, where people question whether every piece of media they encounter is authentic or fabricated. So society may witness a pretty concerning decline in the credibility of video evidence, which can have consequences, you know, in the legal context, or create obstacles in determining the truth, hindering the pursuit of justice. So, you know, this can have implications for criminal investigations, court proceedings, the confidence in the judicial system- whether there is any left at this point anyway. (Laughter.) Which, I mean, we don't need to make it worse.

0:08:23 - Kimberly King

Seriously. I mean, even going back to what you were saying about the trust in the media- it doesn't matter what side you're on, this day and age it's hard to find the media trustworthy because of how they adjust the content. And, you know, I don't have to go into detail here, but you know where I'm going with this. Again, it doesn't matter what side- politics has come in and played a part. And I guess, who's held accountable when it becomes an AI issue and you do want to take it to the legal process? Who's accountable for that?

0:08:57 - Doctor Federica Fornaciari

Yeah, that's, you know, that's something that we all need to work on- policymakers, the legal systems, individuals, you know, educators, the technology companies. Everyone needs to work together to make sure that deepfakes- that this technology- is used in ethical ways, that this technology fulfills a need rather than creates a problem.

0:09:29 - Kimberly King

Wow. Well, when considering the impact of deepfakes, which individuals or groups are particularly vulnerable to the potential consequences of these manipulative technologies?

0:09:40 - Doctor Federica Fornaciari

Oh, you know, I mean, potentially everyone, right? The reach of deepfakes, unfortunately, is very wide, but certain groups, for sure, tend to be more significantly affected and more vulnerable. Marginalized communities are particularly at risk. You know, deepfakes can exacerbate existing inequalities. They can target those who are already disproportionately disadvantaged. Manipulated content can, you know, amplify hate speech, it can fuel discrimination, it can perpetuate false narratives and make it even more problematic for social progress to happen. Right, so we've seen the consequences of deepfakes in so many domains. In humanitarian crises or emergency situations, deepfakes can hinder relief efforts. They can exacerbate confusion. Humanitarian aid workers and those who are affected by the crisis could encounter challenges in delivering and receiving accurate information. It is scary, you know.

0:10:53 - Kimberly King

You know, and on that point- and I don't mean to interrupt you, but because you brought up the humanitarian crisis and relief efforts- how do you protect yourself? How do you know, if you want to donate to a relief effort, that it's not a deepfake?

0:11:06 - Doctor Federica Fornaciari

Yeah, I mean, it's complicated, for sure. You need to double-check your sources, you need to do some research. Try and use, you know, probably the fire department, or organizations or entities that are easier to check, and we can always make phone calls. You can always... It's hard, it's complicated, especially for those who, you know, may feel the urgency to act. They may feel the urgency to help. They may not think through the need to protect themselves as well and to make sure that their money goes in the right direction.

0:11:53 - Kimberly King

So that's such great advice, and especially, as you say, for those that have been marginalized, or even the elderly, who always seem to get involved- often that seems to be a huge target, you know, for financial elder abuse. Absolutely, absolutely.

0:12:07 - Doctor Federica Fornaciari

You know, like, media literacy- the elderly grew up before the crazy technologies that we're used to today, right? So they were used to having complete trust in traditional media, right? And now they haven't- not all of them, at least- been able to update their literacy to understand the implications of current technologies and to understand that we cannot trust them 100%. So, you know, it used to be a physical problem of someone knocking at the door and trying to get money from them. Now it's even more problematic because, you know, they are trusting the device, they are trusting the internet. It's, you know- someone wrote it there, someone let them publish it, so it must be true. It needs to be double-checked, fact-checked. So that's why media literacy is such an important component of helping.

0:13:10 - Kimberly King

I love what you're doing, and again, it's so timely. As deepfake technology becomes more accessible, what are some potential ethical implications that media consumers and creators should be aware of?

0:13:26 - Doctor Federica Fornaciari

Well, there are many, for sure. One primary concern revolves around the potential for misinformation and deception. As we have said, deepfakes can pretty convincingly fabricate content. They blur the line between reality and unreality. So media consumers need to exercise caution in trusting sources. They need to develop critical thinking skills. They need to verify the authenticity of the content that they encounter and that they share, because there is another component here to remember. You know, maybe I wouldn't trust information coming from, I don't know, a name that I haven't seen, but if my friend or someone that I know shared that information, I use their sharing as, you know, a confirmation that it may be true. So before we share, we need to think about what we're sharing as well, because we are endorsing a statement, in a way.

Right. Privacy violations are another critical consideration there. You know, the ease of creating deepfakes raises questions about individuals' consent and control over their own likeness. So creators must responsibly seek proper consent when they're using someone's image or identity in their work. They have to respect individuals' autonomy. The risk of abuse, harassment, cyberbullying- again, you name it.

These are all troubling concerns. You know, deepfakes can so easily be exploited to target and harm different communities, different individuals- again, particularly public figures, but also vulnerable communities. So there are very serious consequences for the individual and for society. Media authenticity is at stake here. Consumers that rely on accurate information and credible sources to make their informed decisions can encounter problems there, of course. Reputation and identity can be at stake. You know, when deepfakes portray someone engaging in actions they never did, these manipulations can have far-reaching personal and professional consequences, obviously. So intellectual property rights too, and, if we want to look at a bigger picture, other consequences that we may not immediately think about- deepfakes can have profound cultural and social implications. You know, they can distort historical events, they can distort cultural figures, so they may alter the collective memory and the perception of our shared past, which is, you know, another layer of scariness, I think, or something that's pretty concerning, for sure.

0:16:32 - Kimberly King

Absolutely. I mean, we've already seen a little bit of this just with, you know, wanting to alter history, and it really is a very important aspect of our culture. We've seen attempts to rewrite history and erase the past, and being that it's in a digital format rather than just in books, it's really frightening to think of where we can go or what's happening. Deepfakes have also been used to create fake news- we just talked about media literacy and misinformation- so how can we combat the spread of malicious deepfake content and also protect the integrity of the media?

0:17:07 - Doctor Federica Fornaciari

Yeah, you know, Kim, that's a complex, layered question. Again, you know, fighting the spread of malicious deepfake content to make sure that the integrity of the media is preserved is a critical challenge, and it requires a multifaceted approach, for sure. First of all, raising awareness, promoting media literacy, is crucial. You know, as media consumers, we must be equipped with the knowledge and with the tools to identify potential deepfakes. By fostering critical thinking skills, we can better discern what's authentic and what's manipulated. Collaboration between technology companies, researchers, policymakers- everyone involved- is also crucial: being able to create sophisticated deepfake detection technology that, you know, could be key in identifying and flagging malicious content across different platforms. You know, working together would be a way to stay one step ahead of those who seek to exploit the technology for harmful purposes. Transparency and accountability for content creators are essential. As media experts, right, if we're on the other side, we must make sure that we're adhering to ethical standards even more than we ever did. We must disclose any use of deepfakes that may be displayed in our work. We must provide clear labels, watermarks perhaps, that can help distinguish authentic content from manipulated content. You know, fostering a climate of transparency and trust would be necessary to help media consumers, you know, keep reading and stay there to gather information, trusting that the sources that they're using are authentic, right?
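As a concrete illustration of the "clear labels, watermarks" idea, here is a minimal sketch in Python using the Pillow imaging library. The file names and banner text are hypothetical, and real disclosure schemes (embedded provenance metadata, for instance) are more robust than a visible stamp- this just shows the simplest version.

    from PIL import Image, ImageDraw

    def label_synthetic(path_in, path_out, text="AI-GENERATED / SYNTHETIC MEDIA"):
        # Stamp a semi-transparent disclosure banner along the bottom edge.
        img = Image.open(path_in).convert("RGBA")
        overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        banner_h = max(24, img.height // 12)
        draw.rectangle(
            [(0, img.height - banner_h), (img.width, img.height)],
            fill=(0, 0, 0, 160),  # translucent black band
        )
        draw.text((10, img.height - banner_h + 5), text, fill=(255, 255, 255, 255))
        Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

    # Hypothetical usage:
    # label_synthetic("deepfake_frame.png", "deepfake_frame_labeled.png")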

Obviously, regulatory measures might be necessary to curb the misuse of deepfakes, too. Technology is always evolving faster than the legal system and the regulations, so it can be a little difficult for the legal system to keep up. But governments could explore policies that require explicit disclosure of deepfake content, for instance, and penalties for those who create and disseminate malicious deepfakes with harmful intent. But again, it's a very fine line. It's complicated to create regulations that don't hinder creation while protecting against possible harm.

Ultimately, I think that ethics is often the answer, especially because the technology evolves so fast. So creating a culture of ethical and responsible creation and sharing, again, of media content is essential. By emphasizing the ethical use of deepfakes, encouraging content creators to prioritize and check accuracy and authenticity, we can hope to combat the spread of malicious content. Fingers crossed, always, right? You know, but it's so important to preserve the integrity of the media, especially in the face of fake generated content, fake news, misinformation. It needs to be a collective effort there, right? So media literacy, education, detection technologies, responsible content creation- through this multifaceted approach, we can hopefully empower ourselves to navigate the complex landscape in a more responsible way and hopefully safeguard the credibility of our media ecosystem. Even so, credibility and media are not going hand in hand that much anymore. It's unfortunate.

0:21:37 - Kimberly King

Right. It is unfortunate, but at least this is language we've been hearing more and more, with, you know, fake news and just the integrity that is lacking in the media today. With the rise of deepfake videos, do you think that there's a risk of undermining the credibility of real video evidence in legal contexts? And then, how can this challenge be addressed?

0:22:02 - Doctor Federica Fornaciari

I mean without question, for sure.

You know, as deepfake technology becomes more sophisticated and accessible, the potential to create convincingly fake videos that are increasingly difficult to distinguish from real footage raises serious concerns for the justice system and for the integrity of the evidence that we present in courts.

One of the primary challenges, I believe, lies in the difficulty of detecting deepfakes, especially as they become more and more realistic, right? So courts and legal professionals have always heavily relied on video evidence to establish facts, to determine guilt versus innocence, right, and to deliver just verdicts, ultimately. So if deepfake videos are mistakenly admitted as authentic evidence, it can lead to wrongful judgments, ultimately compromising the fairness and the accuracy of the judicial process. So this is scary for our society, for sure. And again, I believe that addressing this challenge requires a multi-pronged approach. First, research and development- investing in new technologies, advanced deepfake detection tools, is critical. You know, the collaboration between technology experts and legal professionals could lead, hopefully, to the creation of reliable and efficient methods to identify manipulated content, whether videos or audios. So these detection techniques could help courts to scrutinize video evidence effectively, hopefully, and distinguish between real footage and deepfakes.

Media literacy, again, among media professionals and jurors is vital. We need to ensure that everyone involved in legal proceedings understands the challenges of deepfakes and acts with that understanding.

0:24:32 - Kimberly King

This is such an important conversation, and we have to take a quick break. It's so interesting, Doctor. Thank you- stay with us, we'll have more in just a moment, don't go away. And now back to our interview with National University professor Dr. Federica Fornaciari, and we are discussing deepfakes and how to spot them. And, Doctor, this has been such a fascinating conversation. Some argue that deepfakes could be used positively, for entertainment or historical preservation, but it doesn't sound that way when we're discussing what you've mentioned already. How can we strike a balance between the creative use of this technology and, of course, its potential for harm?

0:25:10 - Doctor Federica Fornaciari

Yeah, you know, there's always hope, to begin with. I believe that embracing transparency and responsible use of deepfakes is essential- though it may sound like a fairytale at this point, but there's hope, again. As content creators, we should be forthright about the use of this technology, and this includes, you know, the contexts of entertainment and historical representation as well. If we transparently label deepfake content, we can help audiences to differentiate between what's authentic and what's manipulated media, fostering, you know, hope for trust and informed consumption. So, as we were discussing earlier, implementing ethical guidelines and industry standards as well, obviously, can help mitigate harm. By, you know, establishing best practices for deepfake creation and dissemination, we can hope to make sure that this technology is used responsibly- at least within, you know, the part of the industry that we can control- in ways that prioritize respect for individual privacy, individual rights, identity rights. So, as always, I am an advocate for bringing all the stakeholders on board. I believe, again, that collaboration between technology companies, researchers, educators, policymakers and artists is crucial in shaping the responsible use of deepfakes. You know, if we work together, we can address challenges, we can share insights, we can hopefully develop comprehensive solutions that safeguard against the possible negative impacts of the technology.

You know, every time a new technology comes about, we are faced with a dual nature- you know, a blend of potential benefits and ethical concerns. This phenomenon is often referred to in the scholarship as the technological sublime, which encapsulates the awe-inspiring aspects of innovation. However, it's important to remember that technology itself is neutral, right? Technology itself is not inherently good or bad. It's the intention behind it, it's the applications, that determine its ethical implications.

So, again, education and media literacy initiatives can play a very important role in empowering individuals to critically evaluate deepfake content. Teaching the public how to spot and how to verify manipulated media can foster a discerning audience that can engage with this technology more responsibly, right? So, you know, if we want to embrace the creative potential of deepfakes for entertainment and historical preservation while also safeguarding against the potential harm, we need to have a multifaceted approach. Again, we need to promote transparency, we need to promote ethical practices and teach ethical practices, we need to promote responsible use, coupled with advancements, again, in detection technology, media literacy again. So with all these components together, I think we can strike the right balance between harnessing the benefits of deepfakes- because there are potential benefits- while mitigating the risks. You know, again, it's always through a collaborative effort, a collective commitment, that we can hope for responsible innovation.

0:29:38 - Kimberly King

And that's what you said earlier, to your point about how, you know, anytime there's an introduction of a new technology- this is the era of media advances in tech. You know, the digital forecast of everything we've seen, from typewriters to computers, you know, to where we had a pager to now this phone does everything, right? So we're living this to see it, and the checks and balances- it's going by at a rapid rate, isn't it?

0:30:09 - Doctor Federica Fornaciari

Oh, absolutely. It's fascinating, and you know, like, when I was going through my PhD, when I was working on my dissertation, I was looking at how the, you know, public understanding of privacy changes with technology, and it was fascinating to see that. You know, I was starting to research how people reacted to the telegraph, and you know what they were writing in the media about the telegraph? "Privacy is dead. Get over it."

0:30:40 - Kimberly King

So that narrative has been around for a while- that should be the beginning of your book, by the way, because it's fascinating. I was thinking about back in, what, the fifties? They would have the party line or whatever, and everybody could come on board, so they were probably also talking about privacy during that point in time too. But my, how we've changed.

0:31:00 - Doctor Federica Fornaciari

Oh, absolutely, yeah, it's fascinating for sure.

0:31:04 - Kimberly King

Well, so you've been talking about the deepfakes and how often they target public figures and celebrities and how this impacts public trust in media and the authenticity of information. Do you have any specific examples of, or anybody that you can think of, any cases?

0:31:22 - Doctor Federica Fornaciari

Oh, I mean, there are countless. You know, we've seen deepfakes with Mark Zuckerberg, we've seen them with President Trump, we've seen them with many other political figures, and the fake news during the political elections in 2016. There is a huge number of examples that we could bring up, and, you know, it's often a media war, right- the creation of misinformation to direct political elections, to direct support and endorsements, to direct the public understanding of current events.

0:32:06 - Kimberly King

Right, and that is true, I mean, especially as we're coming up on another political season, you know- stand by.

0:32:11 - Doctor Federica Fornaciari

Yes.

0:32:14 - Kimberly King

You know, back in the day, just even with television news and sensational ratings- whoever was going to put eyes on a specific station because they've got that interview or whatever. But everything is so sensationalized, right?

0:32:28 - Doctor Federica Fornaciari

Oh, absolutely.

0:32:31 - Kimberly King

Yeah, it's just, wow. Well, okay. So how are social media platforms and tech companies responding to the threat posed by deepfakes? Are their current efforts adequate, and what more can be done?

0:32:45 - Doctor Federica Fornaciari

You know, social media platforms and tech companies have been actively responding to the threat of deepfakes. They have obviously recognized the potential for harm that deepfakes can cause to their users and to the integrity of the platforms. So, you know, while the efforts have been commendable, for sure, the ever-evolving nature of the technology- the ever-evolving nature of deepfake technology especially- calls for ongoing vigilance and continuous improvement. Many social media platforms have implemented policies and guidelines, community guidelines, that prohibit the sharing of deceptive or harmful deepfake content, right? The guidelines typically aim to prevent the spread of deepfakes targeting public figures, celebrities, private individuals, as mentioned earlier. You know, the technology is often also the answer. So platforms are investing in artificial intelligence and machine learning technologies to try and detect and remove deepfake content from the platforms. They're collaborating with researchers, with academics, industry partners to develop detection algorithms. However, you know, I mean, there are many efforts, but there are also remaining challenges, right? The technology advances so fast. So this rapid advancement means that new and more convincing forms of manipulation will always emerge. So it's crucial for social media platforms and tech companies to try and stay ahead of the curve, keep updating the detection technologies that they have. You know, it's like the black hat, white hat fight here.
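One building block platforms commonly use for content that has already been identified as manipulated is perceptual hashing: a compact fingerprint that stays nearly identical under re-encoding or resizing, so re-uploads of known fakes can be flagged automatically. A minimal sketch in Python with Pillow, assuming a hypothetical set of hashes gathered from prior takedowns; note this only catches copies of known content- novel deepfakes require learned detectors.

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to an 8x8 grayscale thumbnail; each bit records whether
        # a pixel is brighter than the mean. Re-encoded or resized copies
        # of the same image yield nearly identical bits.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming(h1, h2):
        # Number of differing bits between two fingerprints.
        return bin(h1 ^ h2).count("1")

    KNOWN_FAKE_HASHES = set()  # hypothetical: filled from prior takedowns

    def looks_like_known_fake(path, threshold=5):
        h = average_hash(path)
        return any(hamming(h, known) <= threshold for known in KNOWN_FAKE_HASHES)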

I believe that a collective, industry-wide approach is necessary to be able to, you know, create standardized practices that help them and us handle deepfakes. Collaboration among competitors to address this shared challenge would, you know, help lead to more robust solutions. Greater transparency- I can never emphasize that enough- in content moderation is essential, because we don't always know how content is moderated. We don't always understand how these platforms remove content or discriminate between what content can stay and what content has to go. So it'd be helpful if these social media platforms were as forthcoming as possible in disclosing their strategies for tackling deepfakes, including, you know, detection mechanisms and content removal, because obviously this transparency would help to build trust with the users and with the wider public. So understanding how content is removed would help us.

0:35:59 - Kimberly King

Right, kind of having that bird's-eye view or that, you know, behind-the-scenes look. What are the ramifications? In what ways might deepfakes exacerbate existing issues of representation and diversity in the media, and how can we mitigate these effects?

0:36:19 - Doctor Federica Fornaciari

Oh yeah, Kim, this is another very interesting problem. You know, as we said earlier, when there's something that's already weak, technology can, you know, feed that weakness or, like, in a way, exploit it. You know, deepfakes can be manipulative in nature, so they obviously do run the risk of perpetuating harmful stereotypes and misrepresentation, right? They can further marginalize diverse groups. They can reinforce power imbalances in the media landscape.

One concern is the potential of deepfakes to reinforce biases, right, and biased narratives, by altering the appearance or behavior of individuals from different racial, ethnic or cultural backgrounds, for instance. You know, so this kind of misrepresentation can be very damaging. It can distort the authentic experiences of these diverse communities, right? So I believe that we also need to remember that the use of deepfakes for tokenistic representation is a worrisome trend. You know, when creators use manipulated content to appear inclusive without genuinely representing diverse voices, it only serves to exploit these identities for superficial gains, if you want, while authentic opportunities for underrepresented groups to actually be there may remain limited. So, you know, again, to address these challenges, it's crucial that media creators and platforms prioritize authentic representation.

You know, amplifying diverse voices and experiences is a way, hopefully, to counteract the negative effects of misrepresentation. So, you know, supporting underrepresented voices, investing in platforms that promote diversity, can go a long way, I believe, in creating a more inclusive media environment that celebrates the richness of human experiences instead of, you know, distorting it. In the end, I think that by staying proactive and, again, by working together, we have the opportunity- we will have the ability- to navigate the challenges of deepfakes and to ensure that our media landscape becomes a place of authentic representation again, and authenticity again. You know, again, technology is not inherently bad. We have to keep this in mind. As long as we all work together for the common good, we can celebrate the opportunities that technology creates.

0:39:21 - Kimberly King

Yeah, well, and in this evolving landscape of increasingly convincing deepfakes, what strategies can individuals employ to effectively identify these manipulative creations and differentiate them from authentic content?

0:39:37 - Doctor Federica Fornaciari

Did I mention media literacy before? Right, you know, Kim, I mean, I think that navigating this intricate terrain requires a combination of acute observation and critical analysis, right? So there are, for sure, guidelines that we can follow to try and detect deepfakes, though we also, again, need to keep in mind that the pace of technological evolution is so fast that these guidelines will probably quickly become outdated, right? So we'll have to keep working on our media literacy. But just to, you know, give some ideas of what we can do: first, when we look at audio-visual content, we can pay attention to facial expressions, vocal modulation. Deepfakes often have subtle incongruities, especially in these areas, due to the complexity, obviously, of mimicking human nuances. So, you know, for instance, in lower-quality deepfakes- and I believe they call them cheap fakes- people don't blink, right? But there are, you know, more sophisticated technologies that have learned how to make people blink. So, again, the line is always blurring. They are evolving so fast that we need to update our literacy almost daily. Another thing that we can do is pay attention to inconsistencies, right? We become detectives, so we can try and make it fun, right? We can look at lighting variations, we can look at shadow distortions, background incongruities- you know, there are little, hard-to-notice irregularities that may betray the artificial nature of the content. Again, especially for deepfakes, we can look at unnatural movements that might give away cheap fakes- again, anomalies in gestures, hand rotations, other motions that can be a sign of digital manipulation. So, you know, that would be the detective way to go about it. And also, you know, we can think a little more broadly about contextual analysis, which is also a helpful strategy. So we can evaluate the content that we see in relation to the typical behavior of the individual depicted, right, and discrepancies could unveil the presence of a deepfake as well.
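The blink cue Dr. Fornaciari mentions can be made concrete with the eye aspect ratio (EAR) from Soukupová and Čech's blink-detection work: the ratio of vertical to horizontal eye-landmark distances collapses during a blink, so a long talking-head clip with no blinks is a classic cheap-fake tell. A minimal sketch in Python, assuming eye landmarks are supplied by an upstream face-landmark detector; the landmark ordering and threshold here are illustrative assumptions.

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def eye_aspect_ratio(eye):
        # 'eye' is six (x, y) landmarks around one eye, ordered as in the
        # common 68-point layout; landmark detection happens upstream.
        vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
        horizontal = dist(eye[0], eye[3])
        return vertical / (2.0 * horizontal)

    def blink_count(ear_per_frame, threshold=0.2):
        # Count downward crossings of the EAR threshold (eye closures).
        blinks, closed = 0, False
        for ear in ear_per_frame:
            if ear < threshold and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= threshold:
                closed = False
        return blinks

    # A minute of talking-head video with blink_count(...) == 0 would be
    # unusual for a real person and worth a closer look.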

You know, vetting the source's authenticity is another strategy that we can use- tracing the origin of the content to make sure that it's legitimate, or hopefully legitimate. You know, the New York Times is a more reliable source than, you know, someone's blog that I've never heard of before. And also, when technology is the problem, technology is often the solution too, right? So, again, leveraging cutting-edge technology can help us vet whether content is fake or authentic. We can use artificial intelligence-driven tools designed to identify deepfakes. You know, sometimes when my students submit their work- and I know their voice, so I can tell whether they have, you know, played around with ChatGPT or not- you can just copy a piece of text, go to ChatGPT and say, hey, did you write this? And it will tell you. So, you know, sometimes you can use it for detection too.

0:43:26 - Kimberly King

It can, yeah. It can turn around and bite you. (Laughter.)

0:43:32 - Doctor Federica Fornaciari

Right? Exactly. But yeah, you know, ultimately, as we try and figure out what's real and what's not, you know, we can also trust our gut sometimes, and intuitive sense plays a pivotal role there. You know, if a piece of media appears very sensational, you know, or suspicious, we may need to engage in a little more examination there. So skepticism is often a good defense mechanism too.

0:44:02 - Kimberly King

There is no AI that can capture that, right? Skepticism- that's, you know, very human.

0:44:10 - Doctor Federica Fornaciari

Absolutely, and hopefully it'll stay that way. Who knows.

0:44:15 - Kimberly King

Oh my gosh. Well, we've come down to the last question, and this has just been so fascinating. What role do policymakers and governments have in regulating this kind of deepfake technology while preserving freedom of expression and creativity? And I know we talked a little bit about this, but is there anything else you want to say about that?

0:44:35 - Doctor Federica Fornaciari

Yeah, you know, I mean, striking the balance is tricky, for sure. You know, policymakers and governments play a very important role in regulating the use of deepfake technology while preserving freedom of expression and creativity.

So, again, there are several factors that we can consider- and should consider- there. On one hand, there is the need, the imperative, to safeguard the integrity of information and to protect individuals from the potential harm that deepfakes can cause. So this might involve, again, crafting regulations that, you know, outline the permissible uses of deepfakes, especially in contexts where deception can lead to very meaningful consequences, such as fraud or misinformation, you know, or, yeah, legal consequences as well. Though these regulations should be crafted with, I want to say, a keen awareness of the importance of freedom of expression, right, and artistic creativity. So deepfake technology has the potential to be a tool for innovative storytelling, for entertainment, for education, for satire, you know. So, again, the line is blurry there. Heavy-handed regulations could work against these forms of expression and so, you know, hinder the good potential of these technologies.

So, you know, policymakers will need to tread carefully, you know, perhaps even considering a case-by-case scenario. In certain circumstances, I believe that they might focus on measures such as, you know, transparent labeling, again, of deepfake content- making sure that the viewers or the readers are aware that what they are watching is fabricated content. You know, again, it strikes a balance between allowing creative uses of the technology while also ensuring that consumers can distinguish between fact and fiction. Again, you know, we need to work together. Collaboration between governments, technology developers, content creators, advocacy groups, educators and artists is key, I think. You know, an open dialogue can, I believe, lead to more nuanced regulations that protect against the negative uses and the malicious uses of deepfakes, while also upholding creative freedom and artistic freedom.

There, you know, again, education plays a role. Media literacy programs are needed. So, you know, as you can see, the role of policymakers and governments in regulating deepfake technology is, again, multifaceted. It has to do with very delicately navigating that fine line between safeguarding trust and ensuring creative expression, right? So forging, if you want, a path where innovation can succeed and can flourish, where society can remain resilient, if you want, in the face of technological challenges. You know, I've always been a positive thinker, so I believe that if we foster collaboration, if we embrace transparency, if we promote education and media literacy, I think that we can collectively shape a future where deepfake technology enhances our world without compromising the essence of our shared values and aspirations.

0:48:44 - Kimberly King

Wow, well, this is fascinating. Please get in touch with me when you write your book- this is fascinating. Thank you for your time, Doctor. If you want more information, you can visit National University's website at nu.edu, and we really look forward to your next visit. Thank you so much.

0:49:02 - Doctor Federica Fornaciari

Thank you, Kim. It's been a pleasure talking to you. I appreciate it.

0:49:07 - Kimberly King

Thank you.

0:49:08 - Doctor Federica Fornaciari

Bye, thank you.

0:49:12 - Kimberly King

You've been listening to the National University podcast. For updates on future or past guests, visit us at nu.edu. You can also follow us on social media. Thanks for listening.

Show Quotables

"Deepfakes can erode, if you will, the very essence of trust... So distinguishing authentic content - genuine content - from manipulated content becomes increasingly challenging. - Federica Fornaciari https://shorturl.at/koE24" Click to Tweet
"Deepfakes can pretty convincingly fabricate content. They blur the line between reality and unreality. So media consumers need to exercise caution in trusting sources... They need to verify the authenticity of the content. - Federica Fornaciari https://shorturl.at/koE24" Click to Tweet