Six common misconceptions about cybersecurity

Dealing with cybersecurity is a rather unloved task in some companies. In many cases, IT administrators already have a pretty good idea of what is wrong in their company and how their own IT could be put to the test in order to identify and mitigate, or even eliminate, security gaps. However, this does not mean that administrators will get through to management with their proposals. Cybersecurity costs money. As long as IT systems and infrastructure are functioning, it is often difficult to invest the resources that would be needed to reduce risks and ensure smooth operation in the future, that is, to create cyber resilience. If companies systematically underestimate their cyber risk, this has to do with various misconceptions. The following are six of the most common.

Assumption 1: It only affects the others anyway.

“After all, our company is not interesting enough for a cyberattack.” This assessment is anything but rare. Unfortunately, the reality is completely different. Statistics suggest that as many as 99 percent of all cyber damage cases are the result of attacks that were not targeted at all. In other words, the vast majority of attacks follow a scattergun approach: cybercriminals launch a general attack attempt without a specific target and then simply wait to see which companies or organizations respond to their phishing attack. Unfortunately, for many companies, the hurdle for an initial compromise of their IT is not high enough to withstand these attacks in the long run. This plays into the attackers’ hands, especially if they have primarily financial interests and want to blackmail the company, for example by encrypting its data with a crypto-Trojan, i.e. ransomware. In such cases, the scattergun approach is the most effective for cybercriminals. This in turn means that every company is a potential victim.

Politically motivated attacks are clearly distinguishable from this: here, success is ultimately only a question of manpower, because in an ideologically driven attack, monetary cost-benefit considerations play a completely subordinate role. In such cases, zero-day attacks are also used more frequently; these exploit vulnerabilities in software that are not yet publicly known. With a zero-day exploit, attackers play a wild card, so to speak: once the new attack method becomes public through its use, the attack vector is burned, because software vendors then roll out appropriate security updates.

Assumption 2: Attacks from the supply chain do not play a major role.

In fact, supply chain attacks are on the rise. In this class of cyberattacks, software solutions, devices, or machines that are supplied to a company and that it uses to conduct its business act as the attack vectors. The Log4j vulnerability disclosed in December 2021, for example, was a zero-day vulnerability in a Java logging library. Log4j is used to collect and save logging information from software, applications, and hardware appliances. However, because Log4j is sometimes deeply embedded in many different solutions, in thousands of instances, a simple vulnerability scan is hardly sufficient to identify all vulnerable instances.
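One common first step in finding such embedded instances is a filesystem sweep for vulnerable library artifacts. The following is a minimal, illustrative sketch: it only parses version numbers out of jar filenames such as `log4j-core-2.14.1.jar` and treats everything below 2.17.1 as affected, which is a simplification (real scanners also open nested archives and distinguish the individual advisories and release branches):

```python
import re

# Simplified threshold: treat log4j-core below 2.17.1 as affected.
# Real advisories differentiate 2.x branches and the separate 1.x line.
FIXED = (2, 17, 1)

def parse_version(jar_name: str):
    """Extract (major, minor, patch) from names like 'log4j-core-2.14.1.jar'."""
    m = re.match(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", jar_name)
    if not m:
        return None
    return tuple(int(x) for x in m.groups())

def is_vulnerable(jar_name: str) -> bool:
    """Flag a jar file whose parsed version is below the fixed release."""
    v = parse_version(jar_name)
    return v is not None and v < FIXED
```

In practice such a check would be run over `os.walk` output across all servers; precisely because Log4j ships inside other products, filename matching alone misses bundled copies, which is the point the text makes.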

In general, even open source software is not immune to security vulnerabilities. For example, a professor at the University of Minnesota managed to introduce vulnerabilities into the Linux kernel in the context of a study. For this scenario, he and one of his students pretended to provide bug fixes for the Linux community. The aim of the controversial action was to demonstrate how vulnerable even open source projects can be. A security vulnerability in the Linux kernel is potentially so serious because Linux is very widely used. Today, it can be found in servers and smartphones and also in a wide variety of embedded devices – from cars to smart homes to machines.

With the increasing digitalization of our economy and our lives, networked devices can also become a gateway for cybercriminals. For example, a supermarket chain was hacked when the attackers chose the smart refrigerated shelves in the stores as an attack vector. The same risk exists for networked devices in the smart home sector. They also represent potential points of attack, and thus a serious reputational risk for the device manufacturer or distributor. In both private and commercial environments, a much more conscious approach to installed software and purchased equipment is therefore required. In the manufacturing industry, for example, where a machine can have a life cycle of several decades, sooner or later there are usually only mitigating measures available to reduce security risks, because the manufacturer no longer exists or stopped supplying security patches after a few years. Sometimes the only option is to isolate the machine from the rest of the network and accept the residual risk. As a general rule, it would be negligent for a company to shift all responsibility for its cybersecurity to its suppliers. Threats from within the supply chain are real and commonplace today. Companies therefore need not only appropriate risk awareness but also experts to help them establish effective cyber resilience.

Assumption 3: Our employees already have sufficient security awareness.

All too often, employees’ careless behavior still provides cybercriminals with a convenient gateway into the company. Creating and maintaining appropriate risk awareness among employees is a building block of cybersecurity whose importance a company should never underestimate. Only if they are aware of the danger will employees consistently avoid, for example, passing on passwords over the phone or carelessly clicking on a dubious link in an e-mail. Sometimes the potential for danger is a direct consequence of daily work. Employees in the human resources department, for example, open applications almost daily without knowing whether or not a digital resume contains malicious code. It is the same with invoice PDFs in the accounting mail inbox. That is why a company of course also needs technical measures against such attacks.

But it is equally important to reduce the likelihood of successful phishing attempts by raising awareness of the dangers of social engineering attacks more generally. Social engineering means that attackers use deception to gain unauthorized data or access. Methods from human psychology are misused to manipulate employees and persuade them to transmit information or perform certain actions, such as the fatal click on the link in the phishing e-mail or the disclosure of a password to a supposed support agent on the phone.
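A few of the technical red flags that awareness trainings teach for phishing links can be expressed as simple heuristics. The sketch below is purely illustrative (the heuristics and the `trusted_domains` parameter are assumptions for this example; real phishing detection relies on reputation feeds and far richer signals):

```python
from urllib.parse import urlparse

def suspicious_url(url: str, trusted_domains: set) -> list:
    """Return a list of reasons a URL looks like a phishing link.

    Illustrative heuristics only: punycode hosts, raw IP addresses,
    and trusted brand names embedded in unrelated hostnames.
    """
    reasons = []
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("xn--") or ".xn--" in host:
        reasons.append("punycode host (possible homograph attack)")
    if host.replace(".", "").isdigit():
        reasons.append("raw IP address instead of a domain name")
    for d in trusted_domains:
        # e.g. 'example.com.evil.net' embeds a trusted name up front
        if d in host and host != d and not host.endswith("." + d):
            reasons.append("trusted name '%s' embedded in unrelated host" % d)
    return reasons
```

An empty result does not mean a link is safe; the point of such examples in training is to show employees what to look at, not to replace technical filtering.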

Assumption 4: The scope of this security assessment will surely be sufficient.

Putting enterprise cybersecurity to the test through penetration testing is an important building block in building cyber resilience. However, if the scope of the pentest is chosen too narrowly, little is gained, because this creates a false feeling of security. A typical example is the exclusion of certain systems, such as those at the end of their life cycle that will soon be shut down or replaced anyway. However, as long as they have not yet been shut down, it is precisely these legacy systems that often offer the most tempting attack vector. Another example: the server that runs the web application to be tested also hosts an FTP service whose compromise could lead to a complete takeover of the system, but all services except the web application are excluded from testing. Similarly, it happens that a financial institution, for example, chooses the scope of its audit only as large as regulators officially require. Again, the result is a deceptive, false sense of security.

If pentests are to be truly meaningful, they must not be directed at just one section of the company’s IT. Rather, they must be holistic in nature. That’s because the goal of a penetration test isn’t to merely make management feel positive about cybersecurity – it’s to identify real vulnerabilities and potential attack vectors so they can be fixed before they are exploited by criminal attackers.

Assumption 5: Penetration testing can be done by the IT department on the side.

Pentests cannot be an in-house task in most companies. After all, IT administrators have one job above all else: they have to ensure that the systems in the company run reliably. As a rule, the administration team is already working at 100, if not 120, percent capacity with its operational tasks. In addition, penetration tests require highly specialized and cutting-edge expertise that the company’s own IT department usually does not have at its disposal. It is important that management understands that a pentest is not something that can simply be done on the side. At the same time, internal IT staff need to be clear that a security audit is never about discrediting their own work on cybersecurity, but rather about strengthening it. A meaningful penetration test would not even be feasible with in-house resources, because the know-how and time are lacking. This is only different if the company is large enough to have its own dedicated red team (the attackers) performing more or less continuous pentests. This red team then faces a dedicated blue team of defenders. But even an in-house red team can sometimes benefit greatly from external support by ethical hackers.

Assumption 6: Our backups save us in an emergency.

Just over five years ago, this statement may have been true. Today it no longer is, at least not in every case. It is important to keep in mind that the quality of malware has increased significantly. Crypto-Trojans that encrypt corporate data for extortion purposes no longer necessarily do so immediately. There is now ransomware that first nests in a company’s backups and gradually destroys them. Only months later, once the backups have become unusable, does the crypto-Trojan begin encrypting the company’s data, and the actual blackmail begins.

That is why today it is important, firstly, to secure backups against malware with suitable protection concepts and, secondly, to check them regularly. Only a backup that can actually be restored can be relied on in an emergency. Companies should therefore regularly test and rehearse their disaster recovery. And if a company encrypts its backups for security reasons, the backup key itself is also a potential point of attack, because cybercriminals can of course encrypt the company’s backup key as well. The backup would then again be unusable, and the blackmail attempt by encrypting the company’s data could begin. That is why it is important for organizations to keep their backup crypto keys offline and also to document their disaster recovery procedures offline.
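The regular check the text calls for can be as simple as comparing each backup file against a hash manifest written at backup time, so that silent corruption or gradual tampering is noticed long before a restore is needed. The following is a minimal sketch (the manifest format and file layout are assumptions for illustration; it complements, but does not replace, actually rehearsing a full restore):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir: Path, manifest: Path) -> None:
    """Record 'digest  relative-path' for every file in the backup."""
    lines = ["%s  %s" % (sha256_of(p), p.relative_to(backup_dir))
             for p in sorted(backup_dir.rglob("*")) if p.is_file()]
    manifest.write_text("\n".join(lines))

def verify_manifest(backup_dir: Path, manifest: Path) -> list:
    """Return the names of files that are missing or have been altered."""
    bad = []
    for line in manifest.read_text().splitlines():
        digest, name = line.split("  ", 1)
        f = backup_dir / name
        if not f.is_file() or sha256_of(f) != digest:
            bad.append(name)
    return bad
```

Crucially, in line with the advice above, the manifest itself should be stored offline or on write-once media; a manifest that lives next to the backup can be rewritten by the same ransomware it is meant to catch.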


The threat of cyberattacks has not diminished; on the contrary. If a company were to conclude from a past that went smoothly that it will continue to be safe from cybercrime in the future, this would be perhaps the most serious misconception of all. Operational reliability in IT can only be achieved if a company creates, maintains, and continuously develops its cyber resilience with suitable, holistic concepts and measures. In any case, the effort is worthwhile, as the financial damage in an emergency outweighs the foresighted investment in cybersecurity many times over. As in medicine, the same applies in matters of cybersecurity: prevention is better than cure.


Michael Niewöhner

Manager at Ventum Consulting and expert in Cybersecurity
