Q&A: ChatGPT and the Great Cybersecurity Risk

Digital Journal

Published April 11, 2023

There has been considerable discussion about ChatGPT and other similar solutions. While these could be business game changers, they also present a potential cybersecurity risk.

How are fraudsters using generative AI to boost the effectiveness of their activity? To unpick the concerns, Digital Journal spoke with Oliver Tearle, Head of Innovation Technology at The ai Corporation (ai).

Digital Journal: Synthetic identity fraud appears to be on the rise – how worried should we be?

Oliver Tearle: Synthetic identity fraud is already a major problem for many banks and financial institutions, and fraudsters are using generative AI and synthetic data to help them bypass verification checks. Fraudsters can now generate impressive fake identity documentation that is realistic enough to pass many of the standard checks involved in opening a bank account, for example. Once a fraudulent bank account has been opened, fraudsters can apply for credit, take advantage of buy now, pay later offers, or worse.

Generative AI is helping to make this sort of fraud easier, quicker, and more scalable. Fraudsters can launch multiple fraudulent attacks with the click of a button, automating the production of fake identities and personas and dramatically increasing their chances of success. Each attack adds to a fraudster's growing corpus of real data, which improves their models' training and boosts the accuracy of any subsequent attack. Some fraudsters are even using generative AI tools to generate synthetic identity data for sale on the dark web rather than using it themselves.
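To illustrate how low the barrier to producing such data has become, here is a minimal sketch using the open-source Faker library, a legitimate test-data tool often used to exercise onboarding and verification systems. The fields, locale, and record count are illustrative assumptions, and real synthetic-identity kits are considerably more elaborate.

```python
# Minimal sketch: mass-producing plausible synthetic identity records
# with the open-source Faker library (legitimately used to test
# onboarding and verification systems). Fields and locale are
# illustrative assumptions, not a recipe taken from a real kit.
from faker import Faker

fake = Faker("en_GB")

def synthetic_identity() -> dict:
    return {
        "name": fake.name(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=70).isoformat(),
        "address": fake.address().replace("\n", ", "),
        "phone": fake.phone_number(),
        "email": fake.email(),
        "iban": fake.iban(),
    }

# One loop, a thousand personas: the scalability problem in a nutshell.
identities = [synthetic_identity() for _ in range(1000)]
print(identities[0])
```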

DJ: What challenges could this pose for fraud teams?

Tearle: Conversational AI’s ability to respond to questions in much the same way a human would poses a unique problem for fraud investigators, and generative AI is enhancing fraudsters’ ability to develop models that act and sound like a real person. Those models can also learn from, and quickly understand, the context and history of a conversation, improving a fraudster’s ability to pass more stringent verification checks, for example. Hypothetically, a fraudster armed with a stolen identity could use generative AI to make a fraudulent insurance claim and then argue the case on their behalf.

DJ: Can you tell us a bit about your role at The ai Corporation?

Tearle: I am the Head of Innovation Technology at The ai Corporation (ai), a specialist in online payments and fraud prevention. Reporting to the Head of Product, I am responsible for the technology strategy of the business, researching and integrating technology into ai’s solutions, internal tools, and processes, with a focus on operational efficiency through automation.

I designed and co-developed aiAutoPilotML, a world first for automated financial fraud rule creation and fraud strategy management, which uses custom-developed machine learning techniques to reduce the average human effort of manually managing a fraud strategy by 90% and detect 25% more fraud. I also pioneered the cloud application development, systems design, and deployment approach for AutoPilotML, which is now the standard on which ai’s other cloud applications are built.
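By way of illustration only, the sketch below shows roughly what a machine-proposed fraud rule can look like: a conjunction of feature conditions plus a risk score. This is a hypothetical toy, not ai’s implementation; every field name and threshold is invented.

```python
# Hypothetical sketch of an automatically generated fraud rule:
# a machine-proposed conjunction of feature conditions with a risk
# score. Not ai's implementation; all names and thresholds invented.
from dataclasses import dataclass

@dataclass
class Condition:
    feature: str   # transaction attribute, e.g. "amount_gbp"
    op: str        # comparison operator: ">", "<" or "=="
    value: float

@dataclass
class FraudRule:
    conditions: list
    score: int     # risk points added when every condition matches

    def matches(self, txn: dict) -> bool:
        ops = {
            ">":  lambda a, b: a > b,
            "<":  lambda a, b: a < b,
            "==": lambda a, b: a == b,
        }
        return all(ops[c.op](txn.get(c.feature), c.value) for c in self.conditions)

# A rule a model might propose: high-value card-not-present spend
# from a device first seen today is scored as risky.
rule = FraudRule(
    conditions=[
        Condition("amount_gbp", ">", 500.0),
        Condition("card_present", "==", 0),
        Condition("device_age_days", "<", 1),
    ],
    score=80,
)

txn = {"amount_gbp": 750.0, "card_present": 0, "device_age_days": 0}
print(rule.matches(txn))  # True -> add 80 risk points, flag for review
```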

DJ: Can you describe some of the ways that fraudsters are using GPT to threaten the payments industry?

Tearle: Generative artificial intelligence (AI) has already had a pervasive impact on our lives, with many experts sharing their opinions on the technology’s industry-changing commercial potential and how it might be used in future. While there are many positive applications for generative AI in our sector, there are, unfortunately, also many people looking to use the technology for fraudulent purposes.

Generative AI is a very powerful tool for fraudsters, and it’s clear that all fraud case investigations, credit applications, and insurance claims will now require a much higher level of scrutiny. Many firms, including OpenAI, are launching tools to detect AI-generated content, which could be instrumental in detecting generative AI’s use in fraud. However, rapid education on the nefarious uses of generative AI, and warning businesses and individuals about the threats, is now essential for protecting the payments industry against this powerful new technology.
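One widely known detection heuristic, shown in the minimal sketch below, is that machine-generated text tends to have lower perplexity under a language model than human writing. This is an assumption-laden illustration, not any vendor’s product; the model choice and threshold are arbitrary.

```python
# Minimal sketch of one common AI-text detection heuristic:
# machine-generated text often scores lower perplexity under a
# language model than human writing. Model and threshold are
# illustrative assumptions, not a production detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

THRESHOLD = 40.0  # arbitrary cut-off; real detectors are far more nuanced
claim = "The claimant's vehicle sustained significant damage in the collision."
if perplexity(claim) < THRESHOLD:
    print("possibly machine-generated")
else:
    print("likely human-written")
```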

In short, generative AI is making fraud easier, quicker, and more scalable: attacks can be automated and launched at the click of a button, and each one adds to the corpus of data that makes the next more accurate.

DJ: What about phishing? What are some of the trends you see there?

Tearle: We are already seeing more sophisticated attacks being launched. Indeed, as recently discovered by Cyble, fraudsters are even creating fake websites that offer free downloads of the ChatGPT tool. This type of hyper-reactive approach is typical of fraudsters, who will often take advantage of new trends to attract people’s attention. In this case, fraudsters are offering fake ChatGPT downloads as cover for installing malware on their victims’ devices, which enables them to access personal data, passwords, or even cash.

This particular example may not use generative AI itself, but the technology is also boosting the effectiveness of these types of scams by helping fraudsters compose far more genuine-looking fake emails, websites, and other forms of communication.

Fraudsters can generate convincing, legitimate-looking emails by using generative AI models trained on officially produced communications, better mimicking the approach, language, grammar, and even the way links are presented, to make their phishing attempts appear more legitimate. The more convincing the fake, the higher the success rate. If a fraudster gains access to someone’s email account, they can then phish for information, deploy malware, or cause further damage by training their tools on real emails so that any future communication more closely resembles the real thing.
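On the defensive side, one classic link-handling signal, visible anchor text naming a trusted domain while the underlying href points elsewhere, can be checked mechanically. The sketch below is a simplified heuristic, not a production email filter.

```python
# Simplified sketch of one classic phishing signal: the visible link
# text names a trusted domain while the underlying href points
# somewhere else. Heuristic only; real email-security stacks combine
# many such signals.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []          # (href, visible text) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html: str) -> list:
    parser = LinkExtractor()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        href_domain = urlparse(href).netloc.lower()
        # Flag when the anchor text looks like a domain that differs
        # from where the link actually goes.
        m = re.search(r"([a-z0-9-]+\.[a-z]{2,})", text.lower())
        if m and m.group(1) not in href_domain:
            flagged.append((href, text))
    return flagged

html = '<a href="http://evil.example.net/login">www.mybank.com</a>'
print(suspicious_links(html))  # [('http://evil.example.net/login', 'www.mybank.com')]
```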

DJ: Fraudsters are often looking to steal sensitive data – how is generative AI helping them to do that?

Tearle: Fraudsters already use huge libraries of tools for collecting sensitive information, but now they are using generative-AI-powered scams to improve their ability to gain access to, and then take over, social media accounts. Taking over an account enables a fraudster to target a victim’s contacts in a trusted environment, where they may have the opportunity to ask the victim’s close friends or family for money in a synthesised authorised push payment (APP) attack. They may also use social engineering techniques to access further sensitive information, teasing out possible passwords and security-question answers used for online banking logins, for example. In each scenario, generative AI is constantly learning the victim’s communication style, which makes any future attack even more effective and harder to detect.

DJ: GPT is capable of posting extremely convincing fake online content – can you give us an example?

Tearle: Fraudsters are using generative AI to generate and post extremely convincing fake job advertisements on official job boards. The CVs submitted in response contain highly personal information, and scams like this typically target vulnerable people, especially those who have been made redundant or are just starting out in their careers, who are more likely to hand personal information to recruiters. Upon successfully ‘hiring’ an employee, fraudsters can even ask for their victim’s ID and bank details, knowing it would take at least a month before payday, ample time to clone the identity and use it.

DJ: How are fraudsters able to take advantage of natural disasters, such as the recent earthquakes in Turkey and Syria?

Tearle: Fraudsters have always tried to take advantage of natural disasters, such as the recent earthquakes in Turkey and Syria, but these attacks have become more sophisticated with the use of AI image generation. Fraudsters are using generative AI to create fake images of real-world situations as backgrounds for fake charitable pages. The recent earthquake quickly attracted the attention of fraudsters, who began setting up fake donation pages on TikTok and other channels. Many of these pages used AI to generate fake images and produce convincing messaging to coax money out of victims.

Read the full article in Digital Journal: http://bit.ly/3KUidN4