
Why ChatGPT’s impacts will be social, not technical


Since the release of ChatGPT, OpenAI’s latest artificial intelligence demonstration, the technology world has been on fire.

It is truly a remarkable achievement to be able to converse with artificial intelligence (AI) and ask it to do anything from writing essays to coding computer programs.

As a computer security expert, I immediately did what people like me do: I attempted to hack it. Could I manipulate it to do something nefarious? Could criminals or spies exploit this to enable new types of cybercrime?

Of course, as with most tools, the answer is yes. Someone with malicious intent can exploit these miraculous scientific achievements to cause harm. The surprising part is that the danger lies in the social arena rather than the technical one.

While ChatGPT can be tricked into writing malicious computer code, this isn’t particularly frightening. Computer security products can analyze computer code in milliseconds and determine whether it is malicious or safe with a high degree of certainty.
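To illustrate why machine-written malware is not the scary part, here is a minimal sketch of the kind of signature matching a scanner can apply; the patterns and sample input are hypothetical stand-ins, and real products layer thousands of signatures with behavioral and machine-learning analysis.

```python
import re

# Toy signature scanner: flags code containing patterns commonly seen in
# malicious scripts. The patterns below are illustrative assumptions, not
# a real product's ruleset.
SUSPICIOUS_PATTERNS = [
    re.compile(rb"powershell\s+-enc", re.IGNORECASE),    # encoded PowerShell
    re.compile(rb"eval\(base64_decode", re.IGNORECASE),  # obfuscated PHP
    re.compile(rb"CreateRemoteThread"),                  # process-injection API
]

def scan(code: bytes) -> bool:
    """Return True if any known-bad pattern appears in the code."""
    return any(p.search(code) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    sample = b"powershell -enc SQBFAFgA"  # hypothetical snippet
    print("malicious" if scan(sample) else "clean")
```

The point is that code, whether written by a human or by ChatGPT, is machine-readable and can be judged mechanically in milliseconds.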

Technology can always be used to counteract itself. The issues arise when we are attempting to detect words and meanings that will be interpreted by humans rather than machines.

This is dangerous for two reasons. The first is that it was previously impractical for a computer to create enticing lures that dupe victims into interacting with them; that technology is now not only available but inexpensive, or even free. The second is that the primary way users protect themselves today is by spotting attackers’ grammar and spelling mistakes to judge whether an email or other communication is from an intruder.

How will we defend ourselves if we remove the last remaining indication that a malicious email or chat message was crafted carelessly by someone who lacks a strong command of the language?

Here’s an example of a current spam lure. It is relatively simple and contains little explanatory text. I asked ChatGPT to write a more informative letter of the same type, and the result is shown in the second example.

Now, I didn’t format this to include an appropriate mail-service logo or make the button as polished as in the original, but these finishing touches are trivial compared with mastering the English language. Without any knowledge of email formatting or programming, you could simply ask ChatGPT to generate the required HTML.

In my opinion, this marks the end of most computer users’ ability to distinguish legitimate emails from fraudulent ones. These tools currently work well only with English-language text, but that is simply a matter of training: writing fluently in any language in the world (including computer programming languages) is within reach. We must rethink our approaches to user education and put technical safeguards in place so that these messages never reach users’ inboxes.

The good news is that computers are quite capable of detecting and potentially blocking the vast majority of this content. A spam campaign must always include a call to action, such as asking you to call a number, reply, click a link, or open an attachment; these are difficult to remove and aid detection. We can also train AI models to detect when text was generated by ChatGPT and either display a warning banner or block the message.
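As a rough sketch of the call-to-action heuristic, the snippet below scores a message on whether it combines urgency language, an explicit call to action, and a link. The phrase lists and threshold are hypothetical; a real mail gateway would learn such signals from training data and combine them with sender reputation.

```python
import re

# Toy call-to-action filter. The regexes and threshold are illustrative
# assumptions, not a production ruleset.
URGENCY = re.compile(r"\b(verify|suspended|act now|immediately|confirm)\b", re.I)
ACTION = re.compile(r"(click (the )?link|open the attachment|call us|reply)", re.I)
LINK = re.compile(r"https?://", re.I)

def score_message(body: str) -> int:
    """Crude risk score: one point per suspicious signal present."""
    return sum(bool(p.search(body)) for p in (URGENCY, ACTION, LINK))

def should_quarantine(body: str, threshold: int = 2) -> bool:
    return score_message(body) >= threshold

if __name__ == "__main__":
    msg = ("Your account has been suspended. "
           "Click the link http://example.test to verify.")
    print(should_quarantine(msg))  # True: urgency + call to action + link
```

However fluent the prose becomes, the scam still has to ask the victim to do something, and that ask remains a detectable signal.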

The issue arises when we fail to block these messages and they end up in someone’s inbox. It is a small percentage, but it is not zero, so we must prepare a defense. Layered defenses are critical, and with humans less able to spot a scam, it is even more important that users connect through firewalls and web protection that can detect and block these threats.

User training will need to shift away from “watch for spelling mistakes” messaging and toward risk-based approaches to verifying whom you are communicating with: if you are asked to move money, share a password, or hand over sensitive data, pick up the phone and confirm before proceeding.

As machine intelligence advances, the task of distinguishing fact from fiction will become increasingly difficult. We must ensure that we build systems that are adaptable enough to combat these messages, while also educating our employees on the importance of taking extra precautions when receiving sensitive requests via email.

 

Written by Chester Wisniewski, Field CTO, Applied Research at Sophos.
