The healthcare sector is not prepared for the “dark side” of AI

AI has a “dark side,” and deep fakes and cyber attacks are coming into play. CFOs are being contacted via Microsoft Teams by the “CEO” requesting reports on financial transactions. Anahi Santiago, CISO at ChristianaCare, tells the story on Healthcare IT News.

The healthcare sector is a prime target of cyber attacks, and its employees are the first line of defense. But they can only be its greatest ally when they are informed and prepared. All it takes is one absent-minded click on a malicious email link and the door is wide open for a ransomware attack; withhold that click, and the attack is avoided.

The industry still has a long way to go to prepare for security risks and to maintain cyber vigilance. Informing and training professionals is essential to meet the challenges of emerging threats.

But AI’s capabilities depend on the instructions it is fed. As a result, a new risk profile is emerging for healthcare systems large and small, with new attack techniques appearing every day.

“Trying to understand what’s coming next is always harder than fighting the last battle,” said Eric Liederman, CEO of CyberSolutionsMD.

“The problem most organizations face is that they take a top-down approach to the how,” Liederman said. While organizations use a variety of approaches to help train the workforce on threats like phishing emails, “there is no science behind it,” he added.

For Santiago, “It’s about education, but it’s also about helping them to connect.” The security officer listed three keys to cybersecurity training:

1. Know your audience.

2. Learn how to engage your audience.

3. Leave the door open to “report, report, report.”

The healthcare sector encompasses a diversity of professionals, and from a security perspective, what’s relevant to a doctor will likely be different from what’s relevant to someone in the finance department, Santiago said.

“It’s not treating everybody the same and assuming that everybody’s going to process the information in the same way … and tailoring the message so that it’s relevant to what they’re doing.” Given constantly evolving risks, her security team has adopted an accessible and open stance: “It’s OK if it’s not a reportable concern – report it anyway.”

However, Santiago considers that it is not enough to accept every report; teams must also inform and train. “One of the things that we also do, which I think has been really helpful, is this concept of a security roadshow,” she said. IT teams meet with departments to convey: “We’re not just these cybersecurity professionals that work on what you think are really scary things, and you don’t know what we do,” she explained.

It’s not just a matter of distracted clicking. “We’re all known as the ‘don’t click on that link people,’ and a lot of people think that’s the only thing that they need to worry about,” but there is so much more that the healthcare workforce needs to be informed about. “Emergent threats are always an area where we need to sort of shift and think about – what are the risks that are coming down the pike?”

This is where cybersecurity professionals come into the picture and need to think of new ways to prepare. Deep fakes are a great example.

Corporate email is under attack, and Liederman says these types of cyberattacks have “been really turbocharged this year.” IT teams have long warned users not to call phone numbers found in emails and to not “open any attachments from anything that you weren’t expecting.”

But the evolution of cyber attacks has forced that advice to adapt. The old guidance was: “If you have any doubts at all, contact the person who sent it. Well, now if you do that, how do you know you’re talking to the real person?” asks Liederman.

AI has brought a level of voice and video sophistication to deep fakes that vastly increases the security risks healthcare organizations face.

Attackers go as far as scheduling Teams calls, “and they’re on video, and they look exactly like the person that you would normally engage with on video,” Santiago said.

To illustrate the depth of the threat for ChristianaCare’s board of directors, she asked her team to create a deepfake video of her discussing emerging cyber threats from generative artificial intelligence, which she said cost about $0.09.

After playing the fake two-and-a-half-minute video, “I said to them, ‘I had absolutely nothing to do with that video,’ and the board looked bewildered.” 
