Protective Measures for Your Business Against AI-Generated Deepfakes
Recently, cybercriminals used ‘deepfake’ videos of executives of a multinational company to convince the company’s Hong Kong-based employees to wire out US$25.6 million. Based on a video conference call featuring multiple deepfakes, the employees believed that their UK-based chief financial officer had requested the transfer. Police have reportedly arrested six people in connection with the scam. Without proper guidelines and frameworks in place, more organizations risk falling victim to manipulative AI scams like this one.
Deepfakes are forms of digitally altered media — including photos, videos and audio clips — that appear to depict a real person. They are created by training an AI system on genuine clips of that person, then using the system to generate realistic (yet inauthentic) new media. Deepfake use is becoming more common, and the Hong Kong case was only the latest in a series of high-profile incidents in recent weeks: fake, explicit images of Taylor Swift circulated on social media; the political party of an imprisoned election candidate in Pakistan used a deepfake video of him to deliver a speech; and a deepfake ‘voice clone’ of President Biden called primary voters to tell them not to vote.
Less high-profile cases of deepfake use by cybercriminals have also been rising in both scale and sophistication. In the banking sector, cybercriminals are now attempting to overcome voice authentication by using voice clones of people to impersonate users and gain access to their funds. Banks have responded by improving their abilities to identify deepfake use and increasing authentication requirements.
Cybercriminals also use deepfakes in ‘spear phishing’ attacks that target individuals directly. A common tactic is to deceive a person’s friends and family by using voice cloning to imitate a loved one on a phone call and ask for money to be sent to an unfamiliar account. A survey conducted by McAfee last year found that 70% of respondents were not confident they could distinguish a cloned voice from the real one. Furthermore, nearly half said they would respond to a request for funds if the caller claimed to be in distress — for example, having been robbed or involved in a car accident.
Cybercriminals have also used deepfake voices to impersonate officials from tax authorities, banks, health service providers, and insurance companies in order to obtain individuals’ personal and financial details.
The challenges posed by deepfakes attracted the attention of the Federal Communications Commission in February, when it ruled that phone calls using AI-generated replicas of human voices are illegal unless the called party has given explicit prior consent. Similarly, the Federal Trade Commission approved a rule barring AI impersonation of government organizations and businesses, and it is considering a further rule banning AI impersonation of individuals. These developments mark a growing trend of legal and regulatory measures being implemented worldwide to combat the risks associated with deepfakes.
Leaders should take proactive steps to safeguard their employees and their brands’ reputations against deepfakes.
Though deepfakes are a cybersecurity threat, it is crucial for companies to treat them as a complex and evolving issue with wider implications. A proactive, well-informed strategy for tackling deepfakes can help educate stakeholders and ensure that the countermeasures adopted are responsible, balanced and proportionate.