
New cyberthreats for 2021


by Adam Levin, Chairman and Founder, Cyberscout (USA)

7 January 2021

This guest blog was written by Adam Levin, Chairman and Founder of ICMIF’s Supporting Member Cyberscout (USA). We would like to thank Adam for his permission to share his article with ICMIF members as we move into 2021 and an ever-changing landscape of cybersecurity risks.

The threat landscape of cybersecurity changes daily, with hackers and cybersecurity professionals in a perpetual cat-and-mouse chase: hackers discover new ways to infiltrate and exploit their targets, while the cybersecurity industry looks for vulnerabilities, tries to anticipate new threats, and responds when issues arise.

The cybersecurity industry faced a combination of new and familiar challenges in 2020. The massive shift to working from home in response to the Covid-19 pandemic meant a rush to secure a wider range of home devices and networks, and an instant spike in demand for training and services that help employees identify attempted cyberattacks and scams.

In 2020, hackers actively exploited the Covid-19 pandemic as well as the resulting unemployment. Economic stimulus checks were targeted. Approximately 30% of phishing web pages were related to Covid-19. In April 2020, Google reported 18 million instances per day of malware and phishing emails sent via its Gmail service using Covid-related topics as a lure.

While Covid-19 was unheard of prior to 2020, most of the methods of attack used to target people this past year were all too familiar, either recycled or repurposed to monetise the fear of the pandemic. Phishing emails were a prevalent mode of attack, and they have been in circulation since at least the mid-1990s. Many of the contact-tracing scams of 2020 likewise followed social-engineering scripts that have been used in taxpayer identity theft schemes since the 1990s.

Ransomware was a relatively obscure form of malware until the early 2010s, but it has increased in scope and the amount of damage it has caused year after year, aided by a proliferation of botnets, cryptocurrencies, and sophisticated criminal enterprises. 2020 saw a record number of ransomware attacks, and we can expect more of the same in 2021.

While it is crucial to protect against well-established hacking techniques, to invest in security training, and to practise good data hygiene, it is also necessary to look ahead to possible forms of cyberattack from newer, still-developing vectors. And while 2021 is not likely to feature a host of entirely new threats, there are trends to monitor.

Deepfakes

Deepfakes have received a lot of attention in recent years, but their deployment by cybercriminals or hackers has still been relatively limited. We can all help keep it that way by familiarising ourselves with threats before they become realities. As with any potential cybercrime, deterrence here will be aided by an awareness of what deepfakes are, how they work, and what they can and cannot do. 

A deepfake is a combination of Artificial Intelligence “deep learning” and that watchword of the 2010s: “fake.”

A deepfake can be a digital image, video, or audio file. Any digital media asset created with the assistance of Artificial Intelligence qualifies.

A few examples of deepfakes:  

Facebook CEO Mark Zuckerberg admitting to various misdeeds, including enabling genocide; an audio clip of popular podcast host Joe Rogan; and, perhaps most startling, software that enables real-time deepfakes of well-known people, including Steve Jobs, Eminem, Albert Einstein, and the Mona Lisa, on video conferencing platforms.

While doctored videos or photos are sometimes labelled deepfakes, true deepfaked files are typically created using algorithms that build composites of existing footage, effectively “learning” to identify faces and voices and combining them to create new content. A website called “This Person Does Not Exist” demonstrates the potential of this technology by presenting eerily lifelike photos of fictional people, assembled in real time from the patterns found in thousands of photos of real faces.
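The technique behind sites like This Person Does Not Exist is a generative adversarial network (GAN): one model generates candidate fakes while a second model learns to tell them apart from real samples, and each improves by competing against the other. Below is a minimal, toy sketch of that adversarial loop, assuming Python with the PyTorch library; it learns to imitate a simple distribution of numbers rather than faces, and the network sizes and settings are arbitrary choices for illustration.

```python
# Toy sketch of a generative adversarial network (GAN) training loop.
# Illustrative only: it learns to mimic a 1-D Gaussian, not faces.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake "samples".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: draws from the distribution the generator must imitate.
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = G(torch.randn(64, 8))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Samples from the trained generator should cluster near 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```

The same competitive loop, scaled up to millions of images and far larger networks, is what lets a deepfake generator produce output that its own discriminator, and often a human viewer, struggles to flag as fake.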

How big of a cybersecurity threat are deepfakes?

Deepfakes have the ability to deceive, which makes them a threat. A 2018 deepfaked video of Barack Obama, synced to an audio track created by comedian Jordan Peele, sparked popular demand that technology companies more actively filter out such content, with many citing concerns about potential election interference.

“There is a broad attack surface here — not just military and political but also insurance, law enforcement and commerce,” Matt Turek, a programme manager at the Defense Advanced Research Projects Agency (DARPA), told the Financial Times.

These concerns were apparently validated by a 2019 incident where deepfaked audio technology was used to scam a CEO out of USD 243,000. The unnamed UK executive was fooled into wiring money to someone claiming to be the chief executive of his parent company. According to the victim, the caller had convincingly replicated his employer’s German accent and “melody.”  

Despite the above examples, the widespread threat posed by deepfakes has yet to materialise. The technology is still primarily used for viral videos and adult content, not the sort of high-tech cyberespionage that has worried computer scientists, security experts, and politicians alike.

One of the reasons deepfakes haven’t been deployed at their full threat potential has to do with the way they are generated: at this point in the technology’s evolution, the deep learning and AI algorithms required to generate a convincing deepfake need huge amounts of sample content.

Barack Obama and Mark Zuckerberg are among the most famous people on the planet, meaning there are hundreds, if not thousands, of hours of video available to train a deepfake. Politicians and entertainers constantly find themselves in front of cameras or being recorded. The average target of a scam or a cyberattack has no such catalogue of images and sounds, placing limits on how much an AI program can “learn.”

Another factor limiting the spread of deepfakes: scammers don’t need them. There are plenty of low-tech ways to fool people. A “fake” deepfake of Nancy Pelosi from 2019 was viewed by millions and retweeted by President Trump; it was simply a speech the teetotal Speaker of the House had given earlier, played back at a slower speed. Likewise, the audio track in a widely distributed deepfake of then-President Obama wasn’t compiled by AI, but rather recorded by a skilled impersonator.

Scammers will often cold-call targets pretending to be relatives, supervisors, co-workers, or tech support, with no need for high-tech solutions. A sense of urgency combined with a convincing story is all a scammer needs to get someone to install malware, assist in the commission of wire fraud, or surrender sensitive information.

That doesn’t mean deepfakes are harmless. The barriers for scammers looking to create convincing digital frauds can and will inevitably diminish. As deepfakes grow in popularity, we can expect to see new apps create faster, more convincing, and cheaper digital fakes. 

The best defence against scams or cyberattacks that exploit deepfake technology is knowledge. It is harder to dupe informed people.  

IoT Hacks

Internet of Things (IoT) devices already represent a mature technological industry. These connected devices are commonplace in homes and offices alike. In a perfect world, they make life easier and the products they support more useful.

Unfortunately, IoT devices can be vulnerable to data leaks, cyberattacks, and hackers. Connected devices will be an area of concern in 2021, and the threat they pose is almost guaranteed to get worse.

The security issues common to IoT devices stem from rapidly growing demand for smart devices. The number of products connected to the internet surpassed the number of people on the planet somewhere between 2008 and 2010, and is expected to exceed 75 billion by 2025.

The hacking risk isn’t just a question of having more devices so much as having a higher concentration of devices. At the beginning of 2020, the average US household had an estimated 11 internet-connected devices. That number is expected to explode with the rollout of higher-speed wireless technologies like 5G, on top of the upward trend fuelled by people working and attending school remotely during the Covid-19 pandemic.

More internet-connected devices mean a bigger attackable surface, and having a higher concentration of IoT devices in a household or office means more points of entry for hackers.  

Case in point: an unsecured internet-connected coffee machine was successfully infected by ransomware in September 2020. While security vulnerabilities in IoT devices are commonplace, infection by relatively sophisticated malware represents a potentially massive evolutionary leap and could very well become the norm in 2021. 

As IoT devices become more numerous and more interoperable, large-scale cyberattacks of the kind aimed at businesses, local governments, and agencies could migrate to home networks, effectively locking residents out of their homes and appliances and exfiltrating their data.

The issue is not limited to home and office environments, either. IoT is expanding in the medical, industrial, and military fields, with expected annual growth at 21%, 21.3%, and 6% respectively. The possible benefits from internet-enabled applications to any of these fields are enormous, but the increased risk cannot be overlooked.

Medical and industrial infrastructure was a constant target of ransomware and cyberespionage in 2020. As more and more connected devices come online, bringing with them more points of entry for hackers, an upward curve in IoT-related hacks seems likely.

AI and Machine Learning Hacking

The good news first: 

In the face of overwhelming hacking attempts, phishing emails, a skills shortage, and an ever-increasing attackable surface, cybersecurity companies and experts are increasingly turning to AI solutions to defend networks and devices.  

Advanced machine learning algorithms are deployed to identify phishing emails, malware attacks, and the kind of suspicious or out-of-the-ordinary network behaviour that could suggest a cyberattack.
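As a rough illustration of the kind of pattern-matching involved, the toy sketch below (with invented example messages, and assuming the scikit-learn library) trains a simple text classifier to estimate how likely a message is to be phishing; production systems layer sender reputation, link analysis, and behavioural signals on top of this idea.

```python
# Toy sketch of ML-based phishing detection: a bag-of-words classifier
# trained on a handful of invented example messages. Real systems use far
# larger datasets plus sender, link, and behavioural features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password immediately",
    "Urgent: claim your Covid-19 relief payment now, click here",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Convert words to weighted features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

test = "Please verify your password urgently to avoid suspension"
print(model.predict_proba([test])[0][1])  # estimated probability of phishing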

The bad news is that hackers have access to the same technology. 

Just as cybersecurity AI and machine learning can be leveraged to scan and analyse massive amounts of data to identify a phishing attack, hackers and other threat actors can turn the same techniques on the enormous amount of data at their disposal, especially data from previous breaches and leaks.

While deepfakes tend to garner more attention as the preferred AI-fuelled method for hackers and threat actors, the potential for AI-assisted attacks has a much broader reach.

As a proof of concept in 2017, researchers at the Stevens Institute of Technology used data from two large-scale breaches in which millions of passwords had been compromised. By analysing tens of millions of passwords from a compromised gaming site, their AI-enabled network was able to generate hundreds of millions of candidate passwords based on the patterns it identified. When applied to a set of 43 million compromised LinkedIn passwords, it was able to crack them with 27 percent accuracy.

Although this was only an experiment, more powerful programmes exist. One programme discovered in February 2020 reportedly had the capacity to analyse more than a billion compromised login and password credentials and generate new variations. This represents an evolutionary step beyond credential stuffing (where credentials stolen from one breach are tried against a target’s other accounts): AI makes it possible to identify patterns and guess passwords outright.
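The research above relied on neural networks trained on tens of millions of real leaked passwords; the toy sketch below illustrates the same underlying idea with something far simpler. It trains a character-level Markov model on a tiny invented list of “leaked” passwords and generates new candidates from the patterns it finds, which is enough to show why predictable habits (a dictionary word plus a digit or two) make passwords guessable.

```python
# Toy sketch: learn character-transition patterns from a tiny invented list
# of "leaked" passwords and generate new candidate guesses from them.
# Real research used neural networks and tens of millions of passwords;
# this only illustrates the idea of pattern learning.
import random
from collections import defaultdict

leaked = ["password1", "sunshine99", "dragon123", "letmein1", "monkey99"]

# Count which character tends to follow which ("^" marks start, "$" end).
transitions = defaultdict(list)
for pw in leaked:
    chars = ["^"] + list(pw) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def generate_candidate(rng):
    """Walk the transition table from start to end to build one guess."""
    out, cur = [], "^"
    while True:
        cur = rng.choice(transitions[cur])
        if cur == "$" or len(out) > 16:
            break
        out.append(cur)
    return "".join(out)

rng = random.Random(42)
print([generate_candidate(rng) for _ in range(5)])
```

Even this crude model reproduces the word-plus-digits shape of its training data; a neural network trained on billions of real credentials captures far subtler regularities, which is what gives the attacks described above their reach.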

IBM ran a similar experiment with DeepLocker, creating a form of WannaCry ransomware that could evade detection by sophisticated antivirus and malware detection tools and ultimately deliver its payload upon recognising the face of a laptop user (by hijacking their camera). This combined two AI-enabled methods of deploying malware: one to recognise the patterns of security software and avoid detection, the other to scan available images of faces online in order to recognise a specific target.

“These AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals. In fact, we would not be surprised if this type of attack were already being deployed,” said DeepLocker’s creators. 

While it’s uncertain exactly how many threat actors or hackers are actively utilising AI and machine learning, governments and technology firms alike are taking the threat seriously and actively building defences. The likely result is an ongoing arms race between cybersecurity firms and hacking groups, one that will continue to escalate for years to come.

For member-only strategic content on the cooperative/mutual insurance sector, ICMIF members have exclusive access to a range of online resources through the ICMIF Knowledge Hub.
