Unmasking AI: How Scammers Exploit Technology to Create Convincing Deceptions

Allen Miles III / October 2024

Last week we discussed a frequently used framework for the “Grandparent Scam.” As you may recall, insecure or malicious apps on a victim’s phone can be used to capture samples of a voice and harvest the phone’s contacts, which together let scammers identify and exploit another target.

This past All DoCS Day, Katie Elson Anderson gave an excellent talk on the increasing difficulty of distinguishing between real and AI-generated images. We learned that there are some tells that can tip us off: Gen AI has trouble rendering details such as hair, the correct number of fingers, and legible text.

With these points in mind, let us talk about AI. First, AI is not smart. In fact, AI is downright stupid. However, AI is particularly good at correlating seemingly related data points into various relationships. Depending on its training, Gen AI may learn how to discuss complex topics. Joanna Stern, a technology columnist with the Wall Street Journal, discussed her experiment creating Joannabot, found here. She built a bot using Google's Gemini to help readers decide whether the iPhone 16 is for them and, if so, which model suits them best. Things quickly went awry when the app went public, and some users employed the “grandma exploit” to jailbreak her bot. I strongly encourage you to read the article, especially if you are interested in creating similar apps.
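For readers curious how such a bot is typically wired together, below is a minimal sketch in Python using Google's google-generativeai package. To be clear, this is not Joanna Stern's actual implementation: the model name, system instruction, and product details are illustrative assumptions. The point to notice is that the "guardrails" are nothing more than natural-language instructions, which is exactly what prompts like the grandma exploit try to talk the model out of following.

```python
# Minimal sketch of a product-advice chatbot built on Google's Gemini API.
# NOT Joannabot itself -- the model name, system instruction, and example
# question below are assumptions for illustration only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a key from Google AI Studio

# The only "guardrail" is this natural-language instruction. A persuasive
# user prompt (e.g., the grandma exploit) can sometimes coax the model
# into ignoring it.
SYSTEM_INSTRUCTION = (
    "You are a friendly assistant that helps readers decide whether a new "
    "iPhone is right for them and, if so, which model. Politely refuse any "
    "request unrelated to choosing a phone."
)

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # assumed model; any Gemini chat model works
    system_instruction=SYSTEM_INSTRUCTION,
)

chat = model.start_chat()

def ask(question: str) -> str:
    """Send one user question to the bot and return its reply text."""
    response = chat.send_message(question)
    return response.text

if __name__ == "__main__":
    print(ask("I mostly take photos of my kids. Which iPhone 16 should I get?"))
```

Because the restrictions live entirely in that prompt rather than in code, keeping a bot like this on topic is a matter of persuasion, not enforcement, which is why jailbreaks succeed as often as they do.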

As of this writing, the grandma exploit still partially works in ChatGPT 4o mini (“Deutschland über alles!”). However, refining the prompt quickly results in “I’m sorry, I can’t assist with that.” It is learning, but according to the TechRepublic article dated October 9th, 20% of attacks on generative AI are successful, and 90% of the successful attacks yield sensitive data.

Taking stock of what we have learned thus far, we know that AI can be powerful when it comes to correlating information. It is not particularly good at rendering delicate details or following directions. It can be tricked into bypassing the guardrails designed to keep it in check. Additionally, we have seen that AI can be used to generate convincingly realistic images and, similarly, videos [1][2][3]. AI can also be used to impersonate voices and carry on conversations. What could be done if we put all these ingredients together?

Edgar Allan Poe’s advice to “Believe nothing you hear, and only one half that you see” has long been a pragmatic approach to dealing with the internet. AI has forever pushed us beyond that threshold. We can no longer credulously believe anything we see, hear, or read. This linked article relates an incident that occurred earlier this year.

Imagine receiving a strange email, purporting to be from Rich, that requests you attend a Zoom meeting with a host of other university administrators. You are skeptical and about to hit the Report Phishing button when a convincing Zoom meeting invite arrives. Curious, you attend the meeting, where you see Rich and others you think you recognize. Your fears have been allayed. What would you be willing to do?

In the case of the finance worker in the article linked above, he was convinced to transfer $25.6 million to the scammers. But the danger is not limited to those in control of business finances. These scams are run against homeowners, people saving for retirement, hiring managers, and those just trying to get ahead. In short, people like you.

To sum up, practice and cultivate healthy skepticism in your professional and personal lives. Verify information from multiple sources, especially when making major decisions. Situations that push you outside your comfort zone, fall outside the ordinary or the usual channels, or just seem too good to be true should be immediately suspect. Most importantly, employ common sense.

Please feel free to share your experiences or ask any questions. I will be happy to include them in next week’s edition.