Workers are more likely to divulge company secrets to a workplace AI tool than to their friends, a new report has claimed.

In a study of over 1,000 office employees from the US and UK, data analytics firm CybSafe found many are positive about generative AI tools, so much so that a third of both US and UK workers admitted that they would probably continue using them even if their company banned them. 

69% of all respondents also said that the benefits of such tools outweigh their security risks. US workers were the most sanguine, as 74% of them agreed with this statement.

AI dangers

Half of all respondents reported using AI at work, with a third using it weekly and 12% daily. Among US workers, the most common use cases are research (44%), copywriting (40%) and data analysis (38%). AI tools were also employed for other tasks, such as helping with customer service (24%) and code writing (15%).

CybSafe believes this is a cause for concern, as it claims that businesses are not properly alerting their employees to the dangers posed by using such tools.

In its report, CybSafe comments that, “as AI cyber threats rise, businesses are in danger. From phishing scams to accidental data leaks, employees need to be informed, guided, and supported.”

A worrying 64% of US workers have entered information pertaining to their work into generative AI tools, and a further 28% weren’t sure if they had. CybSafe further claims that a massive 93% of workers are potentially sharing confidential information with AI. And the icing on the cake is that 38% of US workers admit to sharing data with AI that they wouldn’t “in a bar to a friend.”

“The emerging changes in employee behavior also need to be considered,” says Dr Jason Nurse, CybSafe’s director of science and research and current associate professor at the University of Kent. 

“If employees are entering sensitive data sometimes on a daily basis, this can lead to data leaks. Our behavior at work is shifting, and we are increasingly relying on Generative AI tools. Understanding and managing this change is crucial.”

Another issue from a cybersecurity perspective is the inability of workers to distinguish between content created by a human and content created by an AI. 60% of all those surveyed said they were confident they could do so accurately.

“We’re seeing cybercrime barriers crumble, as AI crafts ever more convincing phishing lures,” added Nurse. “The line between real and fake is blurring, and without immediate action, companies will face unprecedented cybersecurity risks.”

These concerns are amplified by the fact that the uptake of AI at work is increasing at a rapid pace. A new report by management consulting firm McKinsey has labelled 2023 the breakout year for AI, with nearly 80% of its survey respondents claiming to have had at least some exposure to the technology at home or at work.
