The last year has been an interesting one for information security, with a number of different studies and media reports on (web) application security. So it’s worth looking a bit closer at that data, and I will try to put these statistics into perspective. There are some Gartner quotes on application-layer attacks that are quite old but have nevertheless been used frequently even last year, especially in presentations. I also often found presentations full of statistics with inconsistent or completely missing sources. I will therefore focus only on recent data from the last two years and on sources I have good reason to rely on.
First let’s have a look at the attack side: According to the McAfee Threat Report from September 2017, only 4% of all network attacks are related to web (applications). The vast majority of attacks did not target webapps but much more often web browsers (40%), or were related to brute force (20%), Denial of Service (15%) or worms (13%). These are of course overall numbers covering both end users and enterprises. I’m not sure this data helps us a lot, especially since the categories could overlap (e.g. brute forcing against web applications). Also, we find other surveys claiming that application-layer attacks are much more frequent than those on the network layer.
According to Akamai, though, the overall number of web application attacks in Q3 2017 increased by 69% compared to Q3 2016. SQL injection (SQLi) and Local File Inclusion (LFI) attacks accounted for 85% of attacks, XSS for only 9%.
Other statistics from 2017 show, however, a different picture, with XSS at 56% instead of SQLi on top. It gets even more diverse when we look at DDoS attacks conducted via HTTP: while Akamai states that only 0.59% of DDoS attacks happened on the application layer, the Kaspersky Q2/2017 Security Report puts this vector at 11.2% of all cases (roughly 20 times as much).
So what do we learn from this? Mostly that we shouldn’t put too much weight on attack statistics – at least not with respect to the percentages of specific attacks. From a risk perspective, such data is not that important anyway. It may be useful to know that certain vulnerabilities are frequently targeted by attackers, but does it really matter whether this is the case in 40% or 15% of attacks? So far, we were basically only looking at network traffic that had been analyzed for web application attacks, nothing more.
It gets much more interesting when we instead look at actual security incidents that resulted from successful attacks. Unlike attack statistics, this data has been derived from actually reported events. One of the most important sources here is the Verizon 2017 Data Breach Investigations Report. According to this study, only 15.4% of reported incidents were related to web application attacks, almost half the number of DDoS attacks in that year. This percentage, however, increases a lot when we look at the vectors behind confirmed breaches.
According to this Verizon study, 29.5% of breaches were caused by web application attacks (by far the most common vector), 77% of which were caused by automated attacks and botnet activity, respectively. The number of confirmed breaches via web applications differs a lot from industry to industry. While manufacturing or “accommodation and food service” were only affected to a small degree, industries like information (10% of attacks and 53% of breaches) and “financial and insurance” (37% of attacks and 76% of breaches) were affected much more. Many breaches covered by the media in recent years were in fact in one of these two industries and also involved web applications. Good examples are the breach of Equifax in 2017 or the breach at the SEC one year before.
However, it would be too easy to say that an organization’s risk of being breached via web applications is determined only by the industry it is in. Instead, one reason for these large differences between industries is probably simply the fact that the most affected industries rely on public-facing web sites as an important sales channel more than those that are less affected by such attacks. Another reason could be that less affected industries often do not have instruments like application-layer IDS capabilities in place – attacks on the application layer are simply much harder to detect than those on the network layer.
This was confirmed by a majority (57%) of respondents in a survey by the Ponemon Institute. And one in five respondents in a survey by the SANS Institute answered that they had no clue whether they had experienced a breach with applications as the source or not. So I would assume that the risk of being affected by a breach through an insecure web application depends less on the industry and more on whether the web applications provide an interesting target for an attacker or not.
Simply put: Web application attacks remain the most frequent cause of confirmed breaches. Organizations that connect a lot of business-critical applications to the Internet are clearly more at risk of being breached this way. And from what we know, most confirmed breaches from last year were not caused by complex vulnerabilities but mostly by rather simple ones like SQL injection, which almost any developer should be aware of by now. This is of course a sign of the lack of web security know-how and controls that still exists in many organizations.
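To illustrate how simple this class of vulnerability really is, here is a minimal sketch of the difference between string-built SQL and a parameterized query (Python’s sqlite3 module is used here purely as an example; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the SQL string.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Safe: the driver passes the value separately from the SQL text,
    # so the payload is treated as a literal string, not as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeded
print(find_user_safe(payload))    # returns []: no user with that literal name
```

The fix costs nothing – parameterized queries are supported by practically every database driver – which is exactly why breaches caused by SQL injection point to missing know-how or controls rather than to hard technical problems.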
However, this does not always mean that internal dev teams do not know how to write secure code. There can be a number of reasons for insecure applications. For instance, many applications are not developed internally but by external software companies – and if not the whole application, then at least parts of it. According to the Sonatype 2017 State of the Software Supply Chain Report, 80–90% of an application is built from third-party components, which often contain critical vulnerabilities as well. Dev teams often use outdated versions with known security defects. But even if they are up to date, 84% of open source projects (probably mostly smaller ones) do not fix known security defects at all.
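Dealing with this starts with knowing which component versions you actually ship and comparing them against vulnerability advisories. As a toy illustration (the package name and the advisory data below are invented – real tools consult live vulnerability databases), a pinned dependency list can be checked against known-bad versions:

```python
# Hypothetical advisory data: package -> versions with known security defects.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_requirements(lines):
    """Parse simple 'name==version' pins, ignoring comments and blank lines."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def vulnerable_pins(lines):
    """Return the subset of pinned dependencies with a known-bad version."""
    pins = parse_requirements(lines)
    return {name: v for name, v in pins.items()
            if v in KNOWN_VULNERABLE.get(name, set())}

reqs = ["examplelib==1.0.0", "otherlib==2.3.4", "# a comment"]
print(vulnerable_pins(reqs))  # {'examplelib': '1.0.0'}
```

The point is not the tooling itself but the process: without an inventory of third-party components, the 80–90% of the application that was not written in-house stays invisible to any security review.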
Also, many business-critical applications are not even visible to the IT function of an organization. In the mentioned survey by the Ponemon Institute, 68% of respondents claimed that their IT function doesn’t have visibility into many applications deployed in their organizations. One enabler of this so-called shadow IT is likely the increased use of cloud technology like Amazon AWS, which allows non-IT departments to easily bypass their local IT and deploy their own applications (e.g. as part of a marketing campaign). Deploying applications in a public cloud like AWS can actually be helpful with respect to security. In practice, however, it often leads to an increased attack surface and an increased risk of being breached (e.g. via a misconfigured system where sensitive data is stored).
Not to mention the heavy use of insecure WCMS platforms like WordPress, which are often not only easy for attackers to identify but also to exploit.
One common problem here is most likely a lack of budget. On average, organizations still dedicate only 18 percent of the IT security budget to application security. However, it’s not only about money, but also about what it is spent on. Organizations often focus too much on specific security solutions instead of on a risk-centric approach. As we have seen, there are many ways that applications can be exploited in an organization – focusing only on securing internal development is simply not enough. From an attacker’s perspective, it does not matter whether the exploited vulnerability is in custom code, in a library or in some WCMS plugin.
Organizations should therefore take the threat of insecure web applications seriously and focus on protecting all applications (not just the internally developed ones) appropriately based on their specific security risk!