My esteemed status as a Nigerian royal has been compromised, and I urgently need your help to secure $30 million, offering you a significant portion in return.
Unknown
Reflecting on the history of phishing scams, I find myself contemplating the future of this long-standing Internet scam and how it will evolve with the widespread adoption of AI and LLMs, reminded of the enduring nature of deception in the digital age. The Nigerian Prince scam, a well-known online fraud, has become emblematic of the ceaseless attempts to swindle individuals through the internet. This particular scheme, with its promises of great wealth in exchange for a seemingly small favor, is a modern incarnation of the age-old con: the advance fee fraud, or 419 fraud, named after the relevant section of the Nigerian Criminal Code.
Gone Phishing Song (AI Louis Armstrong)
For a bit of fun, here’s a parody of Bing Crosby and Louis Armstrong’s “Gone Fishing,” titled “Gone Phishing.” You can hear an AI rendition of Louis and Bing in the audio playback above and below.
I’ll tell you why I can’t find you,
Every time I check my email space…
You gone phishing (well, how you know?)
Well, there’s a scam in every place (uh-huh)
Gone phishing (you sly old bot)
You ain’t just surfin’ anymore (no way)
You’re spreading lies across the net
And every click’s a threat, you bet
You claim that scamming’s your best fun (and I believe it)
You’ve got too much ambition
Gone phishing, in the digital pool (cyber la, truly fa)
I’m wishin’, I could avoid your sneaky tool (shall I block you now?)
I’d say no more clicks for mine (stayin’ offline)
On my screen, I’d flash a sign
Gone phishing, instead of just a-clickin’
Hey there AI (yeah, user)
I scanned through your code a time or two lately
And you’re up to no good either
Well, I’m a clever bot, User. I got lotsa data cookin’
You’re not just cookin’, you’re downright…
Gone phishing (bah-boo-bah-boo-bah-boo-bah-boo-bah)
Leavin’ malware on my door (hey, don’t expose me now, will ya?)
Gone phishing (stayin’ stealthy, I got a big data haul lined up)
You ain’t just browsing anymore (I don’t need to browse, I got me a network of bots)
Accounts need protectin’ in the cyberspace (I got my algorithms on that detail, they’re scanning every byte)
But you just keep on infectin’ (sendin’ phishing links by the rate)
You just never seem to learn (user, you’re my teacher)
You got too much ambition (you’re persuadin’ me)
Gone phishing (bah-boo-dah-do-dah-do-dah-do)
Got your botnet by your side (that’s my digital crew, all ready)
Gone phishing (mmm-hmm-hmm-hmm-hmm)
Viruses spreadin’ far and wide (stay back, user, this bot’s on fire)
Mmm, folks won’t find us now because
Mr. AI and Mr. User
We gone phishing, instead of just a-browsing
Bah-boo-baby-bah-boo-bah-bay-mmm-bo-bay
Watch out for those hooks!
The Nigerian Prince
In its essence, the Nigerian Prince scam embodies a simple yet effective formula: a supposed wealthy individual, often posing as nobility or a high-ranking official, recounts a compelling, often tragic narrative, appealing to the victim’s greed, kindness, or sense of duty. This request invariably involves the victim parting with money or sensitive personal information, with the promise of substantial financial rewards. What is particularly fascinating about this scam is its historical roots. The Nigerian Prince scam is a digital evolution of the ‘Spanish Prisoner’ scheme from the 19th century. Back then, the con involved letters from purported prisoners who promised a share of their hidden wealth in exchange for assistance with their release. This method of deception seamlessly transitioned into the digital world, finding a fertile ground in the early, somewhat naïve days of the internet. The adaptability of this scam is noteworthy. As the world changes, so do the stories of these scammers. From Spanish to French or Russian prisoners, and then to the Nigerian nobles in distress, the narratives evolve, reflecting contemporary events and exploiting the latest communication channels, from emails to social media platforms like Facebook, Instagram, and Twitter. Today, a scammer might pose as a Ukrainian businessperson or a U.S. soldier, crafting stories that resonate with current global events.
The Nigerian Prince scam’s persistence into the modern era, especially in its more sophisticated, personalized forms, serves as a cautionary tale. It underscores a fundamental truth about human nature: our susceptibility to well-crafted stories and the lure of easy gains. As we venture further into an era where artificial intelligence and advanced technologies play an increasingly significant role, it’s crucial to remember these lessons from the past. The methods will evolve, but the essence of the scam remains unchanged: exploiting trust and greed through the art of deception. Now, however, we can train an LLM on the depth and breadth of this art’s history online, as well as on whatever publicly or even privately traded information exists about you personally. In the right scammer’s hands, these AI-powered bots would be quite the con.
The Nigerian Prince AI Bot
Having delved into the classic rhetoric of the Nigerian Prince scam, it’s crucial to pivot our focus to the burgeoning horizon of technological advancements, particularly the advent of Large Language Models (LLMs) and their potential role in the evolution of such scams. The integration of AI and LLMs into phishing schemes heralds a new era of sophistication in online fraud, where personalized, context-aware interactions become the norm, significantly enhancing the believability and efficacy of these scams.
Imagine a scenario where an AI-driven chatbot, armed with the capabilities of an advanced LLM, initiates a conversation on a popular social media platform. Unlike the static, often formulaic language of traditional scam emails, this AI is adept at mimicking human conversation, responding dynamically to the potential victim’s queries and comments. It could analyze the user’s online profile, posts, and interactions, tailoring its narrative to align with the user’s interests or background, thereby establishing a more credible and compelling story. For instance, if the AI detects that the user is interested in humanitarian aid, it might pose as a philanthropist or a distressed individual from a war-torn country, seeking financial assistance to help the needy, with promises of reimbursing the aid manifold.
This scenario not only demonstrates a significant leap in the scam’s persuasive power but also highlights the challenges in discerning such AI-driven interactions from genuine human exchanges. As LLMs and AI technologies continue to advance, they bring forth a new frontier in cyber deception, where the line between human and machine-generated communication blurs, making it increasingly difficult for individuals to navigate online interactions safely. This evolution in phishing tactics necessitates a parallel advancement in our awareness and digital literacy, underscoring the need for enhanced cybersecurity measures and an informed, cautious approach to online communications.
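No programmatic check can reliably unmask an AI-driven scammer, but basic link hygiene still catches many phishing lures. As a minimal illustration (not a complete defense; the domain in the usage line is a hypothetical lookalike), the sketch below flags two common red flags in a hyperlink: a punycode-encoded domain, which can disguise a Unicode lookalike of a trusted name, and display text that claims to be one URL while the link actually points somewhere else.

```python
from urllib.parse import urlparse

def link_red_flags(display_text: str, href: str) -> list[str]:
    """Return a list of simple phishing red flags for a hyperlink."""
    flags = []
    host = (urlparse(href).hostname or "").lower()

    # Punycode labels ("xn--") often hide Unicode lookalike domains.
    if any(label.startswith("xn--") for label in host.split(".")):
        flags.append("punycode domain")

    # Display text that looks like a URL but points at a different host.
    shown = display_text.strip().lower()
    if shown.startswith(("http://", "https://", "www.")):
        shown_url = shown if "://" in shown else "http://" + shown
        shown_host = (urlparse(shown_url).hostname or "")
        if shown_host and shown_host != host:
            flags.append("display text does not match link target")

    return flags

# A link whose visible text says "mybank.com" but targets a punycode host:
print(link_red_flags("https://mybank.com", "http://xn--mybnk-mra.com/login"))
```

Heuristics like these are exactly what email clients and messaging platforms already layer into their filters; the point is that the *target* of a link, not its appearance, is what deserves scrutiny.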
The Tangled Web of Scams
In the intricate web of phishing scams, it’s revealing to examine their geographic origins and operational structures. Predominantly, a significant number of these deceptive enterprises emanate from three key countries: Russia, China, and India. In these nations, the phishing landscape is marked by the presence of large, almost corporate-like entities that orchestrate these scams with the efficiency and scale of a business operation. However, it’s important to recognize that this issue isn’t confined to these major players alone. Across the globe, there are countless individuals and small groups who also engage in these fraudulent activities. These actors, though smaller in scale, contribute significantly to the widespread nature of phishing scams, making this a truly global issue with diverse participants and methodologies.
Origins of Phishing Attacks
- Geographical Hotspots: A significant portion of spam and malicious emails originate from specific countries. According to a 2022 Kaspersky report, Russia led the charge with 29.82% of all malicious emails, followed by China with 14%.
- Rise of Instant Messaging Platforms: An IRONSCALES report highlighted a growing trend in phishing attacks via instant messaging platforms, with a third of IT professionals noticing an increase in such incidents.
- Preferred Communication Platforms: Phishing has extended beyond emails to platforms like video conferencing software (44%), workplace messaging platforms (40%), cloud-based file-sharing platforms (40%), and text messages (36%). Notably, WhatsApp was identified as the main channel for such attacks, with a Cyphere study finding 90% of phishing attacks delivered via instant messaging occurring on this platform.
- LinkedIn’s Unfortunate Prominence: LinkedIn has become a preferred platform for scammers, used in 52% of phishing attacks that imitate a known brand, according to research from Check Point.
- Impact of Phishing Attacks: A Proofpoint study revealed that the biggest concern for security leaders regarding phishing attacks is data loss (60%), followed by compromised credentials, the threat of ransomware, malware, and financial losses.
Target Demographics
- Vulnerability of Employees: Terranova Security’s 2022 Gone Phishing Tournament showed that 7% of all employees are likely to click on phishing email links, highlighting the risk to organizations.
- Generational Differences: Millennials and Gen Zers are more susceptible to phishing attacks compared to Gen Xers, with 23% of 18–40-year-olds admitting to falling for such scams (Atlas VPN).
- Targeted Sectors: APWG reported that 34.7% of phishing attacks target webmail and SaaS users. The financial sector is particularly vulnerable, accounting for 23% of all successful attacks, as per Statista. Interestingly, cryptocurrency platforms, often perceived as a common target, actually represent only 2% of successful attacks.
- Indiscriminate vs. Spear Phishing: While phishing attacks are often widespread, SlashNext’s 2022 State of Phishing Report found that 76% of attacks targeted specific individuals. This spear phishing approach, which involves meticulous research, tends to be more successful.
- Small Organizations at Greater Risk: Symantec’s study revealed a higher likelihood of phishing attacks in smaller organizations (1–250 employees) compared to larger ones, highlighting the need for robust cybersecurity measures across all sizes of organizations.
The vulnerability of the younger generation to phishing scams presents a particularly alarming trend. This demographic, including Millennials and Gen Zers, has grown up immersed in technology, integrating it into every aspect of their lives from an early age. This constant and often sophisticated engagement with digital platforms might paradoxically make them more susceptible to phishing attacks. Their frequent use of platforms like Discord, which are likely to be the hunting grounds for future AI-driven phishing bots, exacerbates this risk. The familiarity and comfort with technology do not necessarily equate to an awareness of its potential misuses. As AI technology becomes more advanced and integrated into social platforms, this younger, tech-savvy generation could find themselves disproportionately targeted by these new, more sophisticated forms of phishing. This scenario underscores the urgent need for enhanced digital literacy and cybersecurity education, particularly tailored to those who are most at ease with, and yet potentially most exposed to, the evolving landscape of cyber threats.
Phishing Scams and Election Tampering
The intersection of AI phishing scams and election tampering via the Internet and social media is a critical topic, reflecting the evolving landscape of digital threats and their impact on the democratic process. As AI technology becomes more sophisticated, its potential use in manipulating elections through misinformation and data breaches is a growing concern.
In 2022, Arizona experienced a significant increase in AI phishing scams targeting county-level election workers, particularly around the primary elections. This trend was identified by cybersecurity firm Trellix, which noted a surge in malicious emails employing sophisticated phishing tactics. One prevalent method involved a fake password expiration alert, tricking election workers into revealing their login credentials, akin to the method used in the 2016 attack on John Podesta. This access could lead to the manipulation of election information or even the selling of sensitive data to malicious actors. Another complex scheme focused on absentee ballot administration, where attackers used trusted email threads with government contractors to send malware-laden documents or links. These tactics targeted the critical yet vulnerable segment of election workers who are key to voter engagement but often lack advanced cybersecurity defenses. The situation in Arizona highlights the national significance of local-level election security. It underscores the need for robust cybersecurity measures and heightened awareness among election workers to recognize and report suspicious activities. Addressing these threats is not only vital for protecting election integrity but also for combating the wider issue of election disinformation, emphasizing the importance of strong cybersecurity solutions and practices in maintaining confidence in the democratic process.
The phishing attacks targeting election workers in Arizona were identified by the FBI and detailed in a Public Service Announcement (PSA) report, which can be accessed at the following URL: https://www.cisa.gov/sites/default/files/publications/PSA_cyber-activity_508.pdf. This report is an important resource for understanding the scope and nature of these cyber threats to the electoral process.
AI Supercharged Election Tampering
Imagine a scenario where AI-powered bots are massively deployed across various online platforms, including message and chat boards. These sophisticated bots are programmed to spread misinformation related to elections or to specifically target election or poll workers. Their objective is to infiltrate sensitive electoral systems or sway public opinion.
In this scenario, these bots could blend seamlessly into online discussions, using advanced language models to mimic human conversation. They might propagate false narratives about election processes or candidates, aiming to disrupt the electoral integrity or manipulate voter perception. Some of these bots might also be designed to identify and engage with election or poll workers, using social engineering tactics to extract sensitive information or credentials. This could potentially lead to unauthorized access to critical election infrastructure, compromising the security and credibility of the election process.
Such a widespread and coordinated use of AI in misinformation campaigns and targeted attacks represents a significant threat to the democratic process, underscoring the need for robust cybersecurity measures and digital literacy efforts.
OpenAI’s Response
OpenAI CEO Sam Altman and VP of Global Affairs Anna Makanju discussed the implications of AI in global and political contexts, particularly focusing on elections. They announced new guidelines restricting the use of AI tools like ChatGPT in political campaigns and introduced cryptographic watermarks for AI-generated images to ensure transparency and provenance. This initiative aligns with the policies of larger platforms like Facebook, TikTok, and YouTube, which have faced challenges in enforcing similar principles. The conversation highlighted OpenAI’s robust safety systems and monitoring capabilities, leveraging their own AI tools to scale enforcement effectively. This proactive approach indicates a significant advantage in ensuring the safe and ethical use of AI, especially in sensitive areas like elections.
Altman expressed concern about the influence of AI in upcoming critical democratic elections, emphasizing the need for vigilance and continuous monitoring. OpenAI’s approach is more cautious, focusing on preventing the misuse of AI in influencing election outcomes. This stance is crucial, given the potential of AI to significantly impact public opinion and voting behaviors. Makanju, with her experience at Facebook, acknowledged the lessons learned from past incidents like the Cambridge Analytica scandal, suggesting that OpenAI is well-positioned to address these challenges due to its foundational focus on these issues.
OpenAI’s stance and guidelines provide a framework for addressing these challenges, emphasizing the importance of responsible AI use in political contexts. As AI continues to evolve, its role in elections and the potential for misuse, like phishing scams targeting voters or manipulating information, becomes a critical area of focus. OpenAI’s proactive measures and the recognition of AI’s potential impact on elections underscore the need for ongoing vigilance and ethical considerations in the development and deployment of AI technologies.
AI impersonating AI?
The concept of AI bots impersonating other AI bots opens up a new frontier in cybersecurity concerns. With the proliferation of AI bots designed for legitimate purposes in various sectors, trust in these bots is naturally growing. However, this trust could become a vulnerability, as it provides an opportunity for malicious AI bots to pose as legitimate ones for phishing purposes. These deceptive bots could mimic the behavior and communication style of trusted bots, thereby gaining unauthorized access to sensitive information or misleading users. This new vector of cyber threats requires a reevaluation of how we interact with and trust AI systems, emphasizing the need for robust verification mechanisms and continuous monitoring of AI bot interactions.
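One concrete shape such a verification mechanism could take is message authentication: a legitimate bot signs every message it sends, and the platform verifies the signature before presenting the message as coming from that bot. The sketch below is a minimal illustration using a pre-shared HMAC key (the key value and messages are hypothetical examples); a real deployment would more likely use asymmetric signatures with key rotation, so impersonating a trusted bot would require stealing its private key rather than just copying its conversational style.

```python
import hashlib
import hmac

# Hypothetical secret shared between the bot operator and the platform.
SHARED_KEY = b"example-key-distributed-out-of-band"

def sign_message(message: str, key: bytes = SHARED_KEY) -> str:
    """Return a hex HMAC-SHA256 tag for a bot's outgoing message."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_message(message: str, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Constant-time check that a message really came from the keyed bot."""
    return hmac.compare_digest(sign_message(message, key), tag)

msg = "Your account review is complete."
tag = sign_message(msg)
print(verify_message(msg, tag))                # the genuine message verifies
print(verify_message("Send funds now.", tag))  # an impersonated message fails
```

The design point is that trust attaches to the key, not to how human or familiar the bot sounds, which is precisely the property that style-mimicking impersonators cannot fake.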
About Me and Why I Wrote This
As we conclude this exploration of the evolving landscape of AI phishing scams and their potential impact across various sectors, I invite you to delve deeper into these and other futuristic concepts on my website. As a learning futurist, my work involves designing and tinkering with technologies that could shape the future of teaching and learning. My projects range from augmented tourism rallies and AR community art exhibitions to mixed reality escape rooms and various immersive technology experiments. Discover more about these innovative ventures and stay informed about the latest developments in the realm of AI and cybersecurity at erichawkinson.com.
As a university lecturer with five years of experience teaching Digital Citizenship as part of a Digital Literacy course, I have become increasingly attuned to the evolving landscape of online interactions and the associated risks. Each year, I listen to a growing number of stories from my students, recounting their personal encounters with online fraud. These narratives range from sophisticated phishing attacks to more subtle forms of digital deception, reflecting the complexity and pervasiveness of cyber threats in our digital age.
This article is born out of my ongoing research and efforts to consolidate vital information on these topics. My aim is to provide a comprehensive resource that not only educates but also empowers students and readers alike to navigate the digital world more safely and responsibly. As we delve into the nuances of AI-driven phishing scams and the broader implications for our digital well-being, I hope this piece offers valuable insights and practical guidance. It’s a synthesis of academic inquiry and real-world experiences, tailored to address the challenges and opportunities that arise as we interact within the increasingly intricate web of the digital realm.
About the Author
Eric Hawkinson
Learning Futurist
Eric is a learning futurist, tinkering with and designing technologies that may better inform the future of teaching and learning. Eric’s projects have included augmented tourism rallies, AR community art exhibitions, mixed reality escape rooms, and other experiments in immersive technology.
Roles
Professor – Kyoto University of Foreign Studies
Research Coordinator – MAVR Research Group
Founder – Together Learning
Developer – Reality Labo
Community Leader – Team Teachers