Photo by Andrew Neel on Pexels.com

The world’s most popular chatbot, ChatGPT, is being harnessed by threat actors to create new strains of malware.

Cybersecurity firm WithSecure has confirmed that it found examples of malware created by the notorious AI writer in the wild. What makes ChatGPT particularly dangerous is that it can generate countless variations of malware, which makes them difficult to detect. 

Bad actors can simply give ChatGPT examples of existing malware code and instruct it to create new strains based on them, making it possible to produce malware without nearly the same level of time, effort and expertise as before. 

For good and for evil

The news comes as talk of regulating AI abounds, to prevent it from being used for malicious purposes. There was essentially no regulation governing ChatGPT’s use when it launched to a frenzy in November last year, and within a month it was already being hijacked to write malicious emails and files.

There are certain safeguards built into the model that are meant to stop nefarious prompts from being carried out, but threat actors have found ways to bypass them.

Juhani Hintikka, CEO at WithSecure, told Infosecurity that AI has usually been used by cybersecurity defenders to find and weed out malware created manually by threat actors. 

It seems that now, however, with the free availability of powerful AI tools like ChatGPT, the tables are turning. Remote access tools have long been repurposed for illicit ends, and now AI is being used the same way. 

Tim West, head of threat intelligence at WithSecure, added that “ChatGPT will support software engineering for good and bad and it is an enabler and lowers the barrier for entry for the threat actors to develop malware.”

While the phishing emails ChatGPT can pen are usually spotted by humans, as LLMs become more advanced it may become more difficult to avoid falling for such scams in the near future, according to Hintikka.

What’s more, with the success of ransomware attacks increasing at a worrying rate, threat actors are reinvesting and becoming more organized, expanding operations by outsourcing and further developing their understanding of AI to launch more successful attacks.

Hintikka concluded that, looking at the cybersecurity landscape ahead, “This will be a game of good AI versus bad AI.”

