Agile Threat Modeling

Combining threat modeling with an agile development methodology such as Scrum is quite challenging: creating a threat model usually requires an experienced security expert and considerable effort. But how does this work when a model can quickly become outdated as new threats are introduced with every new User Story and Sprint?

When a team only works on applications that are non-critical, internal, or not yet live, it may be acceptable for a security expert to perform an initial threat assessment and then update it periodically or when certain events occur.

But especially when a team actively works on a critical application that is already live and perhaps even exposed to the Internet, it is not enough to perform an external threat assessment only periodically (e.g. once a year), since critical security weaknesses and vulnerabilities may be introduced in the meantime.

These teams need to be enabled to identify potential threats and security concerns in their own User Stories independently. For this, they do not need to become security experts, but they do need to develop a certain security mindset.

For this, we have two problems to solve: (1) Most teams will tell you that they do not have the time or expertise to do this and (2) most existing methodologies (such as STRIDE) are not created with agility in mind.

Over the last few years, I’ve worked with a lot of dev teams and other security experts on threat modeling techniques that are suitable for agile teams. I decided to write this blog post to share some approaches that I find helpful – most of them should be applicable to non-agile teams too.


It is helpful to first clarify the requirements for agile threat modeling to work. From my point of view, the following are the most important ones:

  1. Must be acceptable by a team (= creates value with as little impediment as possible)
  2. Must be able to be integrated into Sprints and Scrum Events
  3. Must be applicable to backlog items (e.g. User Stories)
  4. Must only require limited security know-how of the dev team

So basically, we primarily need to address the integration into agile development (mostly Scrum) and the acceptance of the approach by dev teams. Especially the latter aspect is one I would not underestimate! Forcing a dev team to apply a technique in which it doesn’t see any benefit will hardly work in practice.

Evil User Stories & Card Games

One quite interesting technique that I’d like to start with is working with Evil User Stories. These are basically attack scenarios or negative requirements, usually not part of the Product Backlog, that the team brainstorms about.

Here are some examples:

“As a hacker, I want to manipulate object IDs in order to access sensitive data of other users.”

“As a hacker, I want to steal and use another user’s access token in order to gain access to his/her data.”

“As a hacker, I want to automatically test passwords of a known user in order to get access to his/her account.”

“As a hacker, I want to execute SQL exploits in order to gain access to the database.”

“As a hacker, I want to trigger a large number of resource-intensive requests/transactions in order to have the application consume excessive resources.”

The team can now discuss and document the applicability of such a scenario and, if needed, create a research story or spike to investigate it (e.g. if they find that verifying an Evil User Story requires some code review). Teams could take such scenarios from a library maintained by the security team or have the security team select some for them.

If a team already works with Personas, it may find it helpful to work with Evil Personas too. This is an interesting approach to constantly raise awareness of the capabilities and motivations of potential adversaries.

Finally, security card games like Microsoft’s EoP (Elevation of Privilege) or OWASP Cornucopia are based on a similar idea as Evil Stories and should not be left unmentioned here.

Evil Stories and security card games are a nice, easy, and often fun way to raise awareness of common attacks and to establish the required security mindset in your dev teams. They are perhaps a good starting point for working with threats in a dev team, but clearly a rather limited one that does not replace a comprehensive threat modeling approach, since they are generic and do not relate to specific User Stories, data flows, and so on.

Threat Modeling of User Stories

Before we look at a full agile threat modeling approach, it is helpful to first understand how isolated threat modeling of User Stories (and perhaps other backlog items as well) could work. I’ve seen this applied as the primary threat modeling approach by several teams.

Step 1: Identify Security-Relevant User Stories

The basic idea here is to analyze a particular User Story for potential threats. A User Story may look like the following:

As a mobile app, I want to use an API that I can hit to verify if a user exists in the system.

From my personal experience, almost every team works with Technical (User) Stories that are basically technical specifications. Perhaps not fancied by many Scrum Masters, but very helpful to analyze from a security point of view.

The first aspect we need to evaluate is whether a User Story may be security-relevant (i.e. may introduce a security threat) or not. A security expert will usually be able to do this subconsciously. To enable developers to do it, it may be helpful, at least in the beginning, to provide some sort of criteria (a security indicator) like the following:

  1. Architectural changes regarding interfaces (e.g. new REST endpoint)
  2. Changes to processing of sensitive data (e.g. new validation logic)
  3. Changes to security controls (e.g. authentication or access controls)
  4. Any sort of other security concerns by the team

So in the case of our example User Story from above, we can clearly say that it qualifies as security-relevant based on the first criterion. This quick evaluation is really helpful because the vast majority of User Stories are usually not security-relevant (the percentage depends on the component a team is building, of course) and therefore do not need to be investigated further.

Teams may change this indicator if they want, and they should integrate a respective condition into their Definition of Ready (DoR) so that each backlog item is evaluated against these criteria.

Usually, it is really helpful to have a security expert support the team by defining an indicator and assessing the first User Stories against it together in a couple of Backlog Refinement meetings. After a short while, most teams will apply these criteria subconsciously as well.
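To make the indicator idea more concrete, it can be captured as a small checklist helper that a team adapts to its own DoR. The field names below are purely hypothetical, just to illustrate the mechanics:

```python
# Hypothetical sketch of the security-relevance indicator described above.
# The criterion names are illustrative; a team would adapt them to its own DoR.

SECURITY_INDICATOR = (
    "interface_changes",         # e.g. a new REST endpoint
    "sensitive_data_changes",    # e.g. new validation logic
    "security_control_changes",  # e.g. authentication or access controls
    "team_has_concerns",         # any other security concern raised by the team
)

def is_security_relevant(story: dict) -> bool:
    """A story is security-relevant if any indicator criterion applies."""
    return any(story.get(criterion, False) for criterion in SECURITY_INDICATOR)

story = {
    "title": "As a mobile app, I want an API to verify if a user exists.",
    "interface_changes": True,  # new endpoint -> qualifies as security-relevant
}
print(is_security_relevant(story))  # True
```

Stories for which this returns False would simply skip the deeper threat discussion in Step 2.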

Step 2: Teams Discuss Potential Security Impact of Relevant Stories

User Stories that have been identified as possibly security-relevant should now be discussed/brainstormed internally by the team for potential security impact/threats and for ways to circumvent security assumptions or controls, for instance during their Backlog Refinements:

  • How could an attacker abuse a new function; what should not go wrong from a security perspective?
  • Are there any security concerns (e.g. confidentiality of sensitive data or integrity of data that should not be changed) and how are they ensured?
  • What are the required/existing security mitigations (accessibility, etc.) and security controls (authentication, authorization, encryption, validation)?
  • Could a newly exposed API, function or user-input parameter be defined more restrictively (e.g. limited to certain users or not exposed to the Internet, only numbers allowed as parameter values, lower and upper bounds defined)?
  • Are common secure design principles violated (e.g. minimization of the attack surface, least privilege)?

Teams may use checklists here to discuss technical security threats for certain components, e.g. common threats for REST endpoints, databases, data flows, etc. STRIDE-per-DFD-element mapping or modeling of Abuse Cases for particular User Stories (as some recommend) may also be an option, but more for advanced teams or in collaboration with a security expert.
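As an illustration of the STRIDE-per-element idea just mentioned, the classic mapping of DFD element types to applicable STRIDE categories can be expressed as a simple lookup from which a discussion checklist could be generated:

```python
# Classic STRIDE-per-element mapping (S=Spoofing, T=Tampering, R=Repudiation,
# I=Information disclosure, D=Denial of service, E=Elevation of privilege).

STRIDE_PER_ELEMENT = {
    "external_entity": {"S", "R"},
    "process":         {"S", "T", "R", "I", "D", "E"},
    "data_store":      {"T", "R", "I", "D"},
    "data_flow":       {"T", "I", "D"},
}

def applicable_threats(element_type: str) -> set:
    """Return the STRIDE categories a team should discuss for a DFD element."""
    return STRIDE_PER_ELEMENT[element_type]

# A new REST endpoint is a process; its request/response is a data flow:
print(sorted(applicable_threats("process")))    # ['D', 'E', 'I', 'R', 'S', 'T']
print(sorted(applicable_threats("data_flow")))  # ['D', 'I', 'T']
```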

If the threats for a User Story cannot be sufficiently assessed, it may be helpful not to implement the Story in the current Sprint and to first create a Research Story to assess its impact, and/or to reach out to a security expert and domain experts to discuss the Story. In the case of our example, the impact could be that an anonymous attacker may disclose sensitive user data, so we need to prevent this from happening.

The last step is to define required security controls (e.g. specific authentication and access controls for the API endpoint) and (other) acceptance criteria (e.g. code review of the implementation by the security champion) for the User Story. Sometimes it is very useful to rephrase a User Story to address relevant security concerns as well (e.g. “As a user, I want all my profile data stored securely and only be accessible by me.”).

Again, it is usually a very good idea to have a security expert perform this evaluation the first couple of times and support the team after that if it requires help, or in the case of security controls or Stories of a certain criticality. How much support a team requires depends, of course, on the team.

In general, every team should have at least some security mindset (e.g. awareness of common attack patterns like OWASP Top Ten and the attacker perspective). It could be taught within such sessions, basic awareness training, etc. In addition, you may provide additional threat modeling training (or coaching) to lead developers and security champions which preferably already gained some basic security mindset.

This approach is quite lightweight and easy to adopt. However, it works on isolated User Stories without the larger context of the entire application. This is what we look at next.

Full Agile Threat Modeling

Let us now take the last approach and extend it with some context, which brings us to a full threat model and perhaps sounds easier than it actually is. The most important questions here are who creates the model and how a complex model can be kept up to date within agile development.

One important lesson that I’ve learned was that you cannot expect developers to fully comprehend and apply complex threat modeling techniques and attack knowledge just by providing some isolated training. Being able to create a full threat model requires a lot of practice.

So it is important to again have an experienced security expert (e.g. a security architect or a member of the security team) on board who creates the first version of the model and supports the team with his/her security know-how where needed afterward. Compared to the last approach, we have one additional first step:

Step 1: Perform Onboarding with the Team

Run a whiteboard threat modeling session with a security expert. Separate training is usually not required or useful.

The security expert will then analyze and document the model with a diagram in the team wiki (e.g. Confluence, which allows you to embed diagrams that the team can change later). The MS Threat Modeling Tool (which I wrote a previous post about) is one I would not recommend using anymore. This usually takes some time but would not be part of a Sprint anyway.

Here is a simplified example of what such a diagram may look like (I usually annotate some more security aspects, like security controls, etc.):

Simplified Example Threat Model Diagram (Source Secodis)

The security expert will then discuss the results together with the team and create relevant technical User Stories with them for identified security measures, or research Stories if something needs to be investigated further by the team.

Step 2: Define Criteria

Define criteria by which it needs to be investigated if the threat model requires updating (same as step 1 described above).

Since we want to be able to identify and assess these Stories later, it can be useful to additionally tag security-relevant Stories with a specific label (e.g. “security-relevant”) or with a custom field (e.g. a dropdown), which has the advantage that you can explicitly mark Stories as either security-relevant or not security-relevant.

Example Security Rating of User Story in JIRA (Source Secodis)

If a team has a security champion appointed (which should always be the case), his/her responsibility is to ensure that security-relevant Stories are identified and properly assessed for potential threats before they get implemented.

Step 3: Teams Discuss Potential Security Impact of Relevant Stories

Teams should internally discuss/brainstorm the potential security impact/threats of security-relevant User Stories during their Backlog Refinements. Again, the same as step 2 from above, but with the following aspect added:

  • Check if the threat model (of the relevant application or application component) needs to be updated (e.g. in the case of additional endpoints, changes to the types of data transmitted, or changes to security controls).

As already mentioned above, teams should always be able to reach out to a security expert to discuss a relevant User Story (e.g. by inviting him/her to the Backlog Refinement). It may be helpful to establish a periodic Security Refinement in which the security champion goes through the security backlog (the Stories of the Product Backlog tagged as security-relevant) with the security expert. More complex Stories may then later be discussed with the whole team.

If a User Story affects a threat model, the model should be updated by a security expert, ideally the one who created it. Mature teams may update the model independently, at least for threats with limited criticality or threats that have already been evaluated in the past.

This approach extends the previous one with the required context and is therefore much more comprehensive, while limiting the required effort and expertise of the teams as much as possible. I would therefore always recommend aiming to apply this one if you are able to provide the necessary support from security experts.

Threat Intelligence & Community

All successful security organizations that I know of have some sort of active security community (or at least a security jour fixe) within their dev organization, in which the security champions of the dev teams should be mandatory participants.

This is a great place to discuss experiences and problems with threat modeling exercises and to learn from other teams. It also motivates teams to work more with threat modeling techniques.

Organizations may set up some sort of threat intelligence that supports the creation of threat models (e.g. templates, questionnaires, common threats and related countermeasures for specific technology stacks, etc.).


One important aspect of agile development is, of course, automation. Automating aspects of the threat modeling process is worth thinking about too, at least for larger organizations, since it can reduce the need for security experts and ensure standards and continuous improvement. So it is no wonder that automation of threat modeling is considered the highest maturity level for threat modeling in the new 2.0 release of OWASP SAMM.

Tools such as IriusRisk may allow you to define expert systems and to scale threat modeling in your organization. More mature teams may also use automation tools like pytm. Although using such tools can be beneficial in some organizations, I always see the risk of teams getting distracted by the tool instead of focusing on the method.
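To illustrate the expert-system idea behind such tools (without reproducing any specific product’s API), here is a minimal, purely illustrative rule engine: element properties are matched against a rule base to derive candidate threats. All property and threat names below are invented for the example:

```python
# Illustrative sketch of the expert-system idea behind threat modeling
# automation tools. The rules and properties are invented examples,
# not the rule set of IriusRisk, pytm, or any other real product.

RULES = [
    ("internet_facing", "Brute-force attacks against exposed authentication"),
    ("internet_facing", "Denial of service via resource-intensive requests"),
    ("handles_user_input", "Injection attacks (e.g. SQLi) via untrusted input"),
    ("stores_sensitive_data", "Information disclosure of stored sensitive data"),
]

def derive_threats(element: dict) -> list:
    """Match element properties against the rule base and collect threats."""
    return [threat for prop, threat in RULES if element.get(prop)]

api = {
    "name": "user-verification endpoint",
    "internet_facing": True,
    "handles_user_input": True,
}
for threat in derive_threats(api):
    print(f"{api['name']}: {threat}")
```

Real tools add much richer models (trust boundaries, data classifications, countermeasure catalogs), but the principle of deriving threats from declared properties is the same.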

Managing Risks

Sometimes the costs of fully mitigating a critical threat are too high, or mitigation is not possible for some reason. In these cases, the threat should be discussed with the security team and the business (Product Owner) to assess its potential risk in terms of likelihood and impact (e.g. worst-case scenarios).

If it is decided not to mitigate the threat, the residual risk should be described and accepted by the business. Such decisions must be documented (e.g. in a risk register as shown below), but this is beyond the scope of this post.

Example of a Risk Register in JIRA (Source Secodis)


When you want to integrate threat modeling activities into dev teams, two aspects are important to consider:

(1) How much threat modeling is required? Although not every developer needs to become a security expert, all should develop some security mindset, especially when they work on critical applications. Not all teams require the same level of threat modeling / security expertise and effort, though, since they work on applications with different business risks. Understanding the requirements and a team’s limitations is therefore important.

(2) How much threat modeling is possible? Even if a team works on a critical application, it may not be able to reach the required maturity overnight. Here it is important to improve step by step to keep motivation high. An important factor here are security experts (e.g. security architects) who support teams with onboarding, coaching, and security expertise when needed.

Team acceptance is a critical factor here. Even a team that only periodically brainstorms about potential threats and security concerns of User Stories has made a great start toward establishing a security mindset, which is much better than doing nothing. Finding a good approach for a team usually requires you to work closely with them and let them adapt it to their own way of working.

There is no “right way” to do threat modeling, only more and less effective ways for different situations, conditions, etc.

Lastly, the best way to handle threats is to prevent them from occurring in the first place. You achieve this with a strong foundation that addresses them through secure standards and a restrictive architecture.

Posted in Threat Modeling

AST Tool Evaluation – Key Findings and Limitations of OWASP Benchmark Project

Tools that test code for common vulnerabilities such as the OWASP Top Ten fall into three categories of AST (Application Security Testing) tools today: SAST (static code scanning), DAST (dynamic application scanning) and IAST (interactive application security testing, i.e. instrumented code scanning). Consequently, there are not just a few but a great many tools, especially in the SAST and DAST areas, both commercial and open-source.

There are a lot of functional aspects (e.g. coverage of certain programming languages or frameworks, deployment (SaaS vs. on-premise), integration aspects and so on) that you may have to take into consideration when you evaluate whether a tool is suitable for your company or not.

It gets a bit more complicated when we want to measure non-functional aspects, especially scanning quality.

General Scanning Metrics

First of all, we need to distinguish the following four test outcomes, on which we can build respective metrics:

  • False Positive (FP): Tool finding that is not a real vulnerability (bad)
  • True Positive (TP): Tool finding that is a valid vulnerability (good)
  • False Negative (FN): Valid vulnerability not found by the tool (bad)
  • True Negative (TN): Non-vulnerability correctly not reported by the tool (good)

In short: “True” is generally a good outcome, and “positives” (FP and TP) are the ones that we can measure via the false positive rate (FPR) and the true positive rate (TPR). What is missing is good test data that we can use to measure the scan quality of these tools.
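These two rates are straightforward to compute, and the OWASP Benchmark’s accuracy score discussed below is simply TPR minus FPR (Youden’s index):

```python
def tpr(tp: int, fn: int) -> float:
    """True positive rate: share of real vulnerabilities the tool found."""
    return tp / (tp + fn)

def fpr(fp: int, tn: int) -> float:
    """False positive rate: share of non-vulnerabilities flagged anyway."""
    return fp / (fp + tn)

def benchmark_score(tp: int, fp: int, tn: int, fn: int) -> float:
    """OWASP Benchmark accuracy score: TPR minus FPR (Youden's index)."""
    return tpr(tp, fn) - fpr(fp, tn)

# A tool that finds 800 of 1,000 real vulnerabilities but also flags
# 300 of 1,000 safe test cases:
print(round(benchmark_score(tp=800, fp=300, tn=700, fn=200), 2))  # 0.5
```

Note how a tool can inflate its TPR simply by reporting everything; the score penalizes this by subtracting the FPR.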

The OWASP Benchmark

The OWASP Benchmark Project started in 2015 to provide exactly this. The first major version (v1.1) consisted of more than 21,000 test cases, which were reduced to about 2,000 one year later (v1.2).

The Benchmark project then scanned these tests with a number of SAST, DAST and IAST tools. The names of these tools are known, but specific scores (I suppose for legal reasons) are only published for the OSS tools that were tested. The following graph shows the results as a combination of their TPR and FPR scores, which yields the so-called Benchmark Accuracy Score (I will just call it the “score” here):

OWASP Benchmark v1.2 Result Comparison (Source OWASP)

Basically, the best outcome for a tool would be to end up in the upper left corner (no false positives and 100% true positives), which makes a lot of sense, since false positives cost a lot of time to validate and reduce the acceptance of a tool.

Here are some interesting findings that we get from this data:

  1. DAST tools are only able to find a small percentage of vulnerabilities (e.g. missing security headers) but do quite a good job there.
  2. SAST tools are generally able to find significantly more vulnerabilities than DAST tools.
  3. The more vulnerabilities a SAST tool finds, the more false positives it generally reports as well.
  4. There are a lot of static code analysis tools (like PMD, or FindBugs without the FindSecBugs plugin) that are useless for security.

Although the latest release is now more than three years old, the Benchmark is still used, especially by many tool vendors to advertise their products. There are a few aspects of the Benchmark that should be taken into account here:

Problem 1: Different Benchmark Versions

This is clearly not the biggest problem, but worth mentioning: The tools shown on the Benchmark results above are scored against different versions of the Benchmark, some with v1.1 and others (especially the commercial ones) with v1.2.

This makes comparisons of results of course problematic, especially since the Benchmark releases do differ a lot, at least in terms of their test cases (21,000 vs. 2,000).

Problem 2: Limited Test Coverage

The Benchmark has certain limitations that we should be aware of:

  1. The test cases are written entirely as Java servlets. No other languages are covered, which is especially problematic for SAST and IAST.
  2. The test cases are based on plain Java – no APIs or frameworks (e.g. JSF, Spring or Hibernate) are covered.
  3. Only a number of implementation vulnerabilities are covered (mostly command injection, weak crypto, XSS, SQLi, LDAPi, path traversal, and XPath injection). A lot of problems like insecure or missing validation, insecure configuration, XXE, insecure deserialization, etc. are not covered.
  4. There are a number of potential vulnerability classes, such as insecure or missing authentication, access controls, or business logic flaws, that we will hardly be able to cover with generic test cases like these, or with AST tools in general.

In other words: even with a score of 100%, a tool would more or less only cover a baseline for DAST and Java-based SAST. This does not only mean that the Benchmark is limited; I’m quite sure that AST tools will not be able to detect all (or most) vulnerabilities, at least not in the near future. This makes such benchmarks limited by nature.

Problem 3: Tools adapt to the Benchmark

Since all test cases of the Benchmark have been made public, tool authors/vendors can use them to tune their tools.

We can see this when we look at the results of the FindBugs Security Plugin, which started with a score of just 11.65% in version 1.4.0 and reached an impressive score of 39.1% (the commercial average is 23%) in version 1.4.6:

Findbug Sec Plugin Score (source: OWASP)

So it’s pretty clear that the authors tuned the FindBugs Security Plugin against the Benchmark. The same goes, of course, for commercial tool vendors. We do in fact see the Benchmark used for marketing purposes a lot. Being good at the Benchmark does not necessarily mean that vulnerability detection in actual web apps and services has improved, though.


Does this mean that the Benchmark is bad? Of course not! The OWASP Benchmark is, in fact, a great project that helps tool authors to improve their tools and that has helped us a lot to better understand the limitations of AST tools in general and the differences between tool categories (SAST, DAST, IAST) with respect to detection capabilities.

The Benchmark is, however, limited with respect to test coverage and can be misleading, especially since vendors/authors can tune their tools to get a better score.

Therefore, be skeptical when a vendor claims to achieve a fantastic Benchmark score with their latest release!

Here are some tips that may help you in case you want (or have) to evaluate tools for your company, project or team:

  • Use SATEC by WASC as a basis for functional requirements (old, but still with a lot of good points in it). Here is another useful document. Although written by a vendor, it contains a lot of valid aspects that may help you.
  • Use your own test code for measuring scan quality:
    • After shortlisting a few tools, run a PoC or pilot and test them against test applications that are based on your technology stacks, into whose code and configuration you have integrated common vulnerabilities that you would expect such a tool to find. That may be some work, but it will be worth it.
    • In addition, test them against releases of your applications with known vulnerabilities (e.g. pentest findings from the past) in it.
  • Consider the general limitations of these tools. Do not rely on them but use them only as a safety net, enhance them with your own rules (if the tool supports this) and combine them with your own dynamic and static tests (e.g. Git hooks that blacklist insecure functions or configurations).
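As a sketch of the last point, a pre-commit-style check that flags insecure functions can be as simple as a pattern scan over changed code. The blacklist below is just an example and would be tailored to your stack:

```python
import re

# Example blacklist; a real hook would tailor this to the team's stack
# and typically scan only the files staged for commit.
INSECURE_PATTERNS = {
    r"\beval\s*\(": "eval() on dynamic input is dangerous",
    r"\bpickle\.loads?\s*\(": "unpickling untrusted data allows code execution",
    r"\bmd5\s*\(": "MD5 is cryptographically broken",
}

def scan(source: str) -> list:
    """Return (line_number, message) for each blacklisted pattern found."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for pattern, message in INSECURE_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((number, message))
    return findings

code = 'user = pickle.loads(request.body)\nresult = eval(user_input)\n'
for number, message in scan(code):
    print(f"line {number}: {message}")
```

Wired into a Git pre-commit hook, a non-empty result would block the commit; this complements, rather than replaces, the SAST tooling discussed above.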

Posted in DAST, IAST, SAST

Impressions of OWASP SAMM 2 Beta

Over the last ten years, I have been working with different maturity models for software security, including OWASP SAMM of course. However, I haven’t used OWASP SAMM 1.x (or OpenSAMM, as it was called before it became an OWASP project) much recently – mostly only when a customer requested such an assessment, and very rarely as an instrument for planning security initiatives.

One of the reasons for this was that the current model does not cover important security aspects of modern software development (e.g. DevOps or agile development). Another was that I personally find the existing model quite inconsistent in terms of the selection and hierarchy of many of its requirements (e.g. the improvement from one particular maturity level to the next). For this reason, I preferred to work with my own maturity models, customized for clients, instead.

I was therefore quite thrilled when I studied the current beta of OWASP SAMM 2 and found that it does address these issues. For this reason, I decided to work with it within a customer project and to write this blog post to share my impressions. An overview of the changes can be found here.

Revised Structure and Requirements

The general structure of OWASP SAMM has not changed much: it still consists of business functions with three security practices each, for which two security requirements (A and B) are defined. Here is what the current beta looks like:


Probably the most obvious change here is the new business function “Implementation”, which covers aspects of modern software development like secure build (e.g. security in continuous integration), secure deployment (e.g. security in continuous deployment), and defect management. All of these were missing before, and this now allows us to also assess the maturity of DevSecOps.

Another important change relates to security testing: first of all, OWASP SAMM 2.0 introduces a new practice called “Requirements-driven Testing” (aka RDT), a common term in software testing, which basically covers functional security requirements testing. The practice “Security Testing” still exists but is now much more focused on non-functional scanning such as SAST. I find this separation really great and useful.

The new practice “Incident Management” (which replaces “Issue Management”) is subdivided into requirements for incident detection and incident response – two crucial aspects of security operations that are now explicitly addressed by the model.

The practice “Environment Hardening” was renamed to “Environment Management” and is now not only concerned with hardening aspects but also addresses patch management. Especially due to insecure third-party components (e.g. insecure dependencies), this is a major concern of application security management that has received more attention recently.

Besides these changes to the SAMM structure, far more happened regarding the actual requirements. A lot of great improvements were made here, for instance in “Security Architecture”, “Strategy and Metrics” (concerning the AppSec program), or “Education and Guidance”, which now officially mentions the Security Champion role – unfortunately without greater differentiation. I don’t want to go into too much detail here though.

There are, however, a few vital aspects that I do miss. For instance about your internal security organization, security culture or adaption of agile security practices within your teams.

More Consistent Requirement Hierarchy

One improvement I do want to point out is the revised requirement structure, which I find much more logical and clean.

As mentioned, each security practice defines two security requirements (A and B) for each of the three maturity levels: 1 (“initial implementation”), 2 (“structured realization”) and 3 (“optimized operation”). The problem with previous versions of OWASP SAMM was that these requirements often did not relate much to each other, or at least not consistently.

OWASP SAMM 2 now defines two requirement topics for each practice and three maturity levels for each of them. This not only makes a lot of sense but also helps to understand and teach the model much better. Let’s see how this works using the example of the Threat Assessment practice:

B: Threat Modeling
Maturity 1 – Best-effort identification of high-level threats to the organization and individual projects (best-effort, ad-hoc threat modeling).
Maturity 2 – Standardization and enterprise-wide analysis of software-related threats within the organization (standardized threat modeling).
Maturity 3 – Proactive improvement of threat coverage throughout the organization (improve quality through automated analysis).

As we can see, the threat modeling requirement (or “B” requirement) is now consistently improved with every maturity level, starting with ad-hoc assessments and ending with some sort of automated analysis.

This illustrates the structure quite well, I think, although I personally would perhaps suggest other requirements, especially for maturity 3, and would include aspects like the use of a threat library, threat assessment techniques, or frequency here.


All in all, my overall impression of the current OWASP SAMM 2.0 beta is really positive: The model made a huge step forward in terms of quality and, yes, maturity. It is obvious that the authors integrated a lot of experiences from applying the previous version(s).

OWASP SAMM 2.0 has not only become much more suitable for conducting meaningful assessments of the current state of security within a software development organization. With its improved and more logical structure, it is also far better suited for planning actual AppSec initiatives than the previous version.

You will probably still have to integrate your own practices or change existing ones when you plan to use it for this, though. Examples could be the adoption of cloud security aspects, the mentioned agile security practices, or security culture aspects. OWASP SAMM 2 also does not replace security belt programs, but it could be used to integrate them.

Since the model is still beta you should be aware that there might be changes before it is finally released. At the moment, the assessment sheet, perhaps the most important tool, has not been updated yet. So it will be rather complicated to perform actual assessments right now. But I’m very confident that we will see this updated very soon.

There also seems to be an initiative working on providing data to benchmark assessment results against, which would be really great, although I have been hearing about such initiatives for a while now. But let’s see.

It will also be interesting to see if the new tool will allow us to perform assessments of particular teams as well instead of only organization-wide.


State of Application Security

The last year has been an interesting one for information security, with a number of different studies and media coverage on (web) application security. So it's worth taking a closer look at that data, and I will try to put these statistics a little bit into perspective. There are some Gartner quotes related to attacks on the application layer that are quite old but have nevertheless been used quite often even last year, especially in presentations. I also often found presentations with a lot of statistics but with inconsistent or completely missing sources. I will therefore only focus on data from the last two years and on sources I see good reasons to rely on.

First, let's have a look at the attack side: According to the McAfee Threat Report from September 2017, only 4% of all network attacks are related to web (applications). The vast majority of attacks did not target web apps but much more often web browsers (40%), or were related to brute force (20%), denial of service (15%) or worms (13%). This is of course in total and covers end users as well as enterprises. I'm not sure if this data helps us a lot, especially since the categories could overlap (e.g. brute forcing against web applications). Also, we find other surveys claiming that application layer attacks are much more frequent than those on the network layer.

According to Akamai, the overall number of web application attacks increased by 69% in Q3 2017 compared to Q3 2016, though. SQL injection (SQLi) and Local File Inclusion (LFI) attacks accounted for 85% of attacks, XSS for only 9%.

Other statistics from 2017 show, however, a different picture, with XSS at 56% instead of SQLi on top. It gets even more diverse when we look at DDoS attacks that are conducted via HTTP. On the one side, Akamai states that only 0.59% of DDoS attacks happened on the application layer; according to the Kaspersky Q2/2017 Security Report, on the other hand, this vector was used in 11.2% of all cases (roughly 20 times as much).

So what do we learn from this? Mostly that we shouldn't put too much weight on attack statistics, at least not with respect to percentages of specific attacks. From a risk perspective, such data is not that important anyway. The fact that certain vulnerabilities are frequently targeted by attackers is perhaps relevant, but why would we need to know whether this is the case in 40% or in 15% of the cases? So far, we were basically only looking at network traffic that had been analyzed for web application attacks, nothing more.

It gets much more interesting when we instead look at actual security incidents that resulted from successful attacks. Unlike attack statistics, this data has been derived from actually reported events. One of the most important sources for this is the Verizon 2017 Data Breach Report. According to this study, only 15.4% of reported incidents were related to web application attacks, almost half the number of DDoS attacks in that year. This percentage, however, increases a lot when we look at the vectors behind confirmed breaches.

Percentage and count of breaches per pattern (Verizon 2017 Data Breach Investigations Report)

According to this Verizon study, 29.5% of breaches were caused by web application attacks (by far the most common vector), 77% of which were caused by automated attacks and botnet activity. The number of confirmed breaches through web applications differs a lot from industry to industry.

While manufacturing or “accommodation and food service” were only affected by a small percentage here, industries like information (10% of attacks and 53% of breaches) and “financial and insurance” (37% of attacks and 76% of breaches) were affected much more. Many breaches covered by the media in the last years were in fact in one of these two industries and also affected web applications. Good examples are the breach of Equifax in 2017 or the breach at the SEC the year before.

However, it would be too easy to say that an organization's risk of being breached through web applications depends only on the industry it is in. One reason for these large differences between industries is probably simply the fact that the most affected industries rely on public-facing web sites as an important sales channel more than those that are relatively little affected by such attacks. Another reason could be that less affected industries often do not have instruments like application layer IDS capabilities in place; attacks on the application layer are simply much harder to detect than those on the network layer.

This was confirmed by a majority (57%) of respondents in a survey by the Ponemon Institute. And one in five respondents in the already mentioned survey by the SANS Institute answered that they have no clue whether they experienced a breach with applications as a source or not. So I would make the assumption that the risk of being affected by a breach through an insecure web application does not depend so much on the industry, but more on whether web applications provide an interesting target for an attacker or not.

Simply put: Web application attacks remain the most frequent cause of confirmed breaches. Organizations that connect a lot of business-critical applications to the Internet are clearly more at risk of being breached that way. And from what we know, most confirmed breaches from last year were not caused by complex vulnerabilities, but mostly by rather simple ones like SQL injection that almost any developer should be aware of by now. This is of course a sign of a lack of web security know-how and controls that still exists in many organizations.

However, this does not always mean that internal dev teams do not know how to write secure code. There can be a number of reasons for insecure applications. For instance, many applications are not developed internally but by external software companies, and if not the whole application, then at least parts of it. According to the Sonatype 2017 State of the Software Supply Chain Report, 80-90% of applications are built from 3rd party components that often contain critical vulnerabilities as well. Dev teams often use outdated versions with known security defects. But even if they are up to date, 84% of open source projects (probably mostly smaller ones) do not fix known security defects at all.

Also, many business-critical applications are not even visible to the IT function of organizations. In the mentioned survey by the Ponemon Institute, 68% of respondents claimed that their IT function doesn't have visibility into many applications deployed in their organizations. One enabler of this so-called shadow IT is likely the increased use of cloud technology like Amazon AWS, which allows non-IT departments to easily bypass their local IT and deploy their own applications (e.g. as part of a marketing campaign). Deploying applications in a public cloud like AWS can actually be helpful with respect to their security. In reality, however, it often leads to an increased attack surface and an increased risk of being breached (e.g. through a misconfigured system where sensitive data is stored).

Not to mention the heavy use of insecure WCMS systems like WordPress, which are often not only easy for attackers to identify but also to exploit.

One common problem here is most likely a lack of budget. On average, organizations still only dedicate 18 percent of the IT security budget to application security. However, it's not only about money, but also about what it is spent on. Organizations often focus too much on specific security solutions instead of a risk-centric approach. As we have seen, there are many ways that applications can be exploited in an organization; focusing only on securing internal development is simply not enough. From an attacker's perspective, it does not matter whether the exploited vulnerability is in custom code, in a library or in some WCMS plugin.

Organizations should therefore take the threat of insecure web applications seriously and focus on protecting all applications (not just the internally developed ones) appropriately, based on their specific security risk.


Agile Security & SecDevOps Touch Points

Agile software development has received more and more attention in the last couple of years. Not only internet startups and media agencies but also large companies from conservative business lines like automotive, banking, insurance and the public sector are increasingly adjusting to the agile world. Since those companies are often already very security aware, at least from a governance perspective, the question of how to ensure the security of applications developed this way has been asked more and more frequently lately.

First of all, agile is not bad for security. It is challenging, but it can in fact be quite positive for security. This, however, often requires nothing less than a change of mindset, not only in development but in security as well. The latter often just doesn't understand how agile development and DevOps actually work, which is of course essential when you want to secure them.

So let's have a quick look at what agile development means with respect to security: Agile development means that you have product iterations instead of a linear process. These iterations are often two or four weeks long and end in some sort of testable artifact. It does, however, not mean that you have a release in production every two or four weeks. In fact, you can have an agile development project that works in two-week sprints but only pushes two releases into production each year. On the other hand, we have DevOps, which is based on methods like Continuous Delivery and Continuous Deployment that can lead to numerous changes in production each day.

This is an important aspect from a security point of view: If a team is working agile but only releases, let's say, twice a year, we can of course easily implement a security sign-off (aka final security review) in the form of a pentest before each release. In a DevOps world, however, this will clearly not be an option.

Secure (Test) Automation

The more continuous or DevOps your way of working is (= the more frequently you push releases into production), the more you need to automate security. This does not necessarily mean test automation, although that is of course an important aspect.

Security test automation often means that we run certain code scanning tools (SAST) or web site scanning tools (DAST) within the build chain to ensure security. Nowadays, a large number of commercial and open source tools exist that we can execute automatically as part of a build job from within a continuous integration server such as Jenkins. Since at least the open source tools are often very much focused on specific languages or problems, we usually have to combine a number of them to test an enterprise application. This is what we call AppSec pipelines.

This sounds great but is often quite difficult to implement, especially if you want to apply it to complex applications and/or a couple of agile teams at once. Also, if you have DevOps teams, they may not be delighted to have a special pipeline that runs for 30+ minutes only for security scans when the usual requirement for a complete build chain is 5-10 minutes.

At least this point can be improved by setting up a dedicated security pipeline that runs once a day, while small and smart security tests are defined that are allowed to be executed within the regular build chain. The below screenshot shows the implementation of such a pipeline with Jenkins:
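As an illustration, such a split could be sketched in a shell-driven build like this (the stage names and the listed scan types are placeholders, not a recommendation for specific tools):

```shell
# Sketch: route security tests by pipeline stage. STAGE would be set by the
# CI job; "commit" runs in the regular build chain, "nightly" in the
# dedicated security pipeline. Tool invocations are placeholders.
STAGE="${STAGE:-commit}"

case "$STAGE" in
  commit)
    # fast, low-noise checks only, to keep the build chain within minutes
    echo "running fast security checks (e.g. targeted tests, dependency audit)"
    ;;
  nightly)
    # the slow, full scans (SAST/DAST) that may run for 30+ minutes
    echo "running full security scans (SAST, DAST)"
    ;;
  *)
    echo "unknown stage: $STAGE" >&2
    exit 1
    ;;
esac
```

The same idea works in any CI: the regular job exports one stage value, the scheduled nightly job the other.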

IAST solutions can help here, since they are not executed within the build itself but scan the application passively while it is exercised by regular integration tests. Such solutions are, however, not cheap and therefore not an option for everyone.

Secure Foundation

When you work a lot with security test automation like I do, you realize that this cannot be the whole solution for security, especially not for agile teams, with or without DevOps. It's an important pillar, nothing more. Tools need a lot of configuration, they need to be operated by someone, and they will throw false positives as well as a lot of false negatives (vulnerabilities that have not been identified).

If you want to solve this problem, you need to think about how you can prevent vulnerabilities from being introduced into the application code in the first place. This can be accomplished with smart technology choices (e.g. secure frameworks), strict coding principles, secure defaults and a security architecture that enforces as much security as possible. This is what we actually need to spend more time thinking about. Agile security will not primarily be solved by testing but by engineering!

You will realize that with a solid secure foundation in place, you no longer have to test for everything. Instead, you can focus on smart tests that cover those spots not covered by your secure foundation, for example insufficiently implemented access controls. Such security tests are usually fast and can normally be executed with every build, with very few or no false positives at all.
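A hypothetical example of such a smart, fast test: verifying that an admin endpoint rejects unauthenticated requests. The URL and the expected status code are assumptions for illustration; in a real build chain the mismatch branch would fail the build.

```shell
# Hypothetical fast access control test: an unauthenticated request to an
# admin endpoint must be rejected. URL and expected status are assumptions.
ADMIN_URL="http://localhost:8080/app/admin/users"
EXPECTED_STATUS="403"

if command -v curl >/dev/null 2>&1; then
  # curl prints '000' as the status code when no connection could be made
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' "$ADMIN_URL" || true)
  if [ "$STATUS" = "$EXPECTED_STATUS" ]; then
    echo "access control test passed"
  else
    echo "unexpected status '$STATUS' for unauthenticated request"
    # in a real post-build step: exit 1 here to fail the build
  fi
fi
```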

Team Responsibilities & Agile Security Practices

Last, but not least important, agile security is not a problem that we can solve with technology alone. It needs to be understood as a responsibility of the team itself. Agile security is a lot about shifting responsibilities into the development team (e.g. tests, operation, ...); security needs to be one of them!

Furthermore, security needs to be “agilized”, meaning that security activities need to be planned by the project manager and must be part of each sprint planning and retrospective. Instead of executing a full-fledged pentest, we can specify that a new functionality needs to be pentested and create a subtask for this. Such security-relevant stories can then be collected and handled within one dedicated “security” sprint, so that we do not need to on-board a pentester for each sprint.

What has to be done within a sprint from a security perspective (e.g. executing a SAST scan and assessing the results) can be defined in the Definition of Done (DoD). Security user stories can be created and given story points, and so on.


As mentioned above, agile and security are not mutually exclusive. In fact, it's quite the opposite: Agile practices can influence product security very positively. This, however, requires a lot of work and technology, and often nothing less than a change of mindset, not only within development. Agile development needs to be understood by the security function as well. And, most importantly, security needs to be accepted by the agile dev teams as their responsibility.


Create your own Web Security Standard in 60 Minutes

Security requirements for web applications are vital because they specify what a team (e.g. a development team) actually has to do and what not. Many companies, however, struggle with implementing such requirements for web-based applications, at least consistent ones on an organizational level. There are many reasons for that: complexity, lack of know-how, a fast-changing threat landscape, and so on.

As a result, we very often find inconsistent, outdated or completely useless requirements in companies, requirements that cannot be implemented or (even worse) that lead to insecure implementations. In practice, I find that existing security requirements are very often simply ignored by the development teams and replaced with their own. This can become really problematic from a security perspective, not least because it relies on the experience of individual developers.

I therefore had the idea to create a template that companies can use to implement their own web security standard. I finished the first version of it in May 2014, but it took me another 2.5 years until I felt it had reached a level of maturity that justified translating the original German version into English. Version 1.3 of the template for a Technical Security Standard for WEB-based applications and services (TSS-WEB) is now available (in English at the moment only as a first draft) for both Word and PDF.

The requirements in this document mostly relate to common best practices that define a baseline level of security for web-based applications and services. You can adapt it to your needs and your environment by removing requirements, adding new ones or changing existing ones, e.g. with respect to the rigor that is specified for each requirement based on RFC 2119 terminology such as MUST or SHOULD:

snippet from TSS-WEB
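The snippet itself is not reproduced here, but to illustrate the style (this example is my own wording, not a verbatim TSS-WEB requirement), such a requirement could read:

```
Passwords MUST be stored using a dedicated password hashing scheme
(e.g. bcrypt). General-purpose hash functions such as MD5 or SHA-1
MUST NOT be used for this purpose. Applications SHOULD enforce a
minimum password length.
```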

This allows you to be very specific about what is actually mandatory and what is just a recommendation for a project. In addition, you will find protection classes defined and used in the document, which allow you to vary the rigor of requirements based on an application's risk profile (e.g. whether it is Internet-facing or just internal).

You may use this as a template for your own company-wide or team-specific standard, or just pick those requirements that you need and change them if they do not suit your environment.

Note that this is just a high-level, technology-specific standard that is focused on web-based applications and services in general. It is not mixed up with Java or PHP code on purpose, because these implementation-specific topics change a lot in practice (e.g. when you introduce a new framework). It therefore makes more sense to maintain such implementations of your standard for Java, .NET, PHP etc., with code snippets and detailed programmer-centric explanations, in separate secure coding guidelines. Wikis such as Confluence work really well for this because they are less static, can be referenced in ticket systems and fit the way many developers are used to working.

In the end, you get a security requirement pyramid like the following:

requirements pyramid

The further you go down the pyramid, the more specific the requirements become (e.g. high-level, related to web applications, related to Java-based web applications). With a standard in the middle of this pyramid like TSS-WEB, you can ensure a certain level of security but still give your developers the flexibility they need.

The document owner of the standard should be the security department, which updates it at least once a year, whereas ownership of the secure coding guidelines can be transferred to the development teams. Because in this scenario security is not defined within the guidelines, they are “just” an implementation of the standard.


An Organizational View on Application Security

When it comes to integrating application security into an (especially large) organization, we often experience a bunch of practical problems and frustration. In the end, a lot of money may have been spent, but little or no improvement to the security of the developed applications has been accomplished.

The main mistake that organizations make is that they take an isolated view on security activities. For instance, they conduct security training but don't have related requirements for the developers in place, the training is focused on an unrelated technology stack, or responsibilities for security have not been defined by management and communicated to the development teams.

After struggling for a while with such problems, I came up with the following quadrant:


The basic message visualized here is that whenever we want to integrate security into an organization, we need to consider all four dimensions: organization, guidance & requirements, training and technologies.

Some examples:

  • You plan to improve the security know-how of your developers? Identify roles that will be responsible for security, plan the training based on the technologies the teams actually work with, and combine it with (secure coding) guidelines that the developers will later be able to use to look up what they heard.
  • You plan to buy a new code scanning technology? First identify roles that will operate it (ownership) and that receive the qualification to be able to do so, establish processes that make sure it is actually used, and define the requirements that it will test.

When you think about this quadrant, you will find that almost any activity for improving application security can be mapped to it. Always considering all four dimensions will often lead to more effort and planning, but clearly also to a much higher chance of success and less frustration.


Microsoft's New Threat Modeling Tool

A week ago I had the pleasure of giving a speech at OWASP AppSec EU in Rome on the new Microsoft Threat Modeling Tool 2016, which came out last November and is still available for free.

The Threat Modeling Tool implements one way to derive threats (potential security problems) from a system specification: via data flow diagrams (DFDs). As shown in the screenshot below, we can specify our system using DFD logic within the tool; when we are ready, we switch into analysis mode and see a couple of identified threats based on our DFD diagram.

Microsoft Threat Modeling Tool 2016

New Functionality

The functionality described above is basically how all versions of this tool have worked for the 10 years of its existence. The 2016 version, published last November, has one great new feature that distinguishes it from all the others though: It now allows you to completely change the XML-based templates and thereby implement your own stencils, properties and, most importantly, threat logic. That actually works really well, since Microsoft also included a quite usable threat template editor in the tool.

Customizing Threat Logic

Before we start implementing our own threat logic, we must understand how DFD-based threat logic is expressed. In general, rules can be formulated as follows:

dfd threat logic
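To illustrate the structure (this is pseudocode, not the exact expression syntax of the template editor), a rule for a sniffing threat could look roughly like this:

```
threat "Sniffing of Sensitive Data":
  include: flow crosses 'Internet Boundary'
           and source is 'Web Application'
  exclude: flow provides 'Transport Encryption'
```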

Basically everything that you can put into this logic can be checked by the Threat Modeling Tool 2016, both as include and as exclude statements. Especially the use of custom attributes works really well for putting all kinds of logic into the tool (e.g. “Uses PHP” for a stencil “Web Application”). As you can see from the logic above, stencils always have a parent.

This logic can be used, for example, to identify the threat of data sniffing.

Template Editor of Microsoft Threat Modeling Tool

In the case of the stencil “Web Application”, the parent is “Generic Process”. All rules that match the parent automatically match child stencils such as the web application. This allows you to define your own custom stencils that automatically inherit all threat logic matching their parent. Unfortunately, only one level is available, so a child stencil cannot have another child that restricts the threat logic a bit further.


The tool itself can be downloaded here. All you need to be able to work with it is a Windows system.

In addition, I've created a couple of sample models and a reduced template for web applications that you can all download from my GitHub page.

Please be aware that if you want to replace an existing template, you have to change the template ID within the model file (both are XML). Unfortunately, the tool does not allow this within the GUI. I've described the detailed steps for this on the GitHub page referenced above.


Although it still has some limitations, Microsoft's new Threat Modeling Tool is a good and free tool for creating simple DFD-based security diagrams and threat models. It becomes a great tool when you use its new customization capability, which allows you to create your own custom threat templates and include all kinds of stencils and threat logic that are specific to your organization. I highly recommend making this effort because the existing logic is rather limited.

If you feel that some threats identified by this tool make no sense, just look at the threat logic within the template and change it if it is not suitable for your organization.

Besides automatically identifying threats from a DFD diagram, this tool has one great additional implicit use: Talking with developers and architects about the interactions and data flows of a system often results in a lot of “aha” moments and the identification of security problems that no one was aware of.


Automating DAST Scans with Jenkins, Arachni & ThreadFix

I'm often asked how security tests can be automated with non-commercial tools, e.g. triggered by a Jenkins build. I therefore decided to write this post to give you an understanding of which tools you can use and what you have to do in order to accomplish this goal.

To not overcomplicate this, I will only focus on tools that find vulnerabilities in custom code and application config, such as SQL injection or cross-site scripting (XSS).

As with commercial tools, we basically have to distinguish three types of security test tools here: static code scanners (SAST), dynamic code scanners (IAST) and dynamic web scanners (DAST). Especially for the latter, a couple of good and free tools exist that we can use. The most popular ones at the moment are most likely OWASP ZAP and Arachni. I have worked with both tools and personally find Arachni to be the more suitable one, especially for automated scans, so I will focus only on Arachni here. Although my examples are based on integrating Arachni into Jenkins, I tried to only use functionality that should be available in any other CI as well.


The following diagram visualizes the components and their interactions described in this post. We have a Jenkins CI, a Git repository (could be SVN or any other code repository as well), a Tomcat, as well as the two tools this post is about: Arachni for scanning and ThreadFix as a database where the results are stored and analyzed.

arachni arch

Of course, you may also implement Arachni differently or use other components.


First we need a vulnerable demo app that we can scan with Arachni to see whether it's working or not. I've created a rather simple Java-based web app that basically has one HTTP form with a reflected cross-site scripting (XSS) vulnerability in each form field, exploitable via the HTTP POST parameters “age” and “name”:


The corresponding HTTP request looks like this:
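The screenshot is not reproduced here, but the request has roughly the following shape (the context path and the script payload are illustrative; only the parameter names come from the demo app):

```
POST /insecure-webapp/form HTTP/1.1
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded

name=%3Cscript%3Ealert%281%29%3C%2Fscript%3E&age=42
```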

Then we of course need a Jenkins installation set up that builds our web app and deploys it to an app server. In this case I created a job called “insecure-webapp” for our demo app and used the Jenkins Tomcat plugin for its automatic deployment.

Installing Arachni

The installation of Arachni is pretty simple. You just need to pick the right version here, download it on the system where your Jenkins (or other CI) is running, and extract it there. That's it.

Integrating Arachni into Jenkins

Arachni provides a couple of different interfaces that we can use for automation. Besides a web GUI, there is also a command line interface (CLI) as well as a REST and an RPC service that we can trigger. Although one of the latter two may seem best suited for automation, I find the CLI to be the most comprehensive interface, and it is also very easy to integrate: it can simply be called from a shell post-build step (no Arachni Jenkins plugin exists anyway):
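Since the screenshot of the build step is not included here, the shell step could look roughly like this (the URL and the report file name are assumptions; the flags follow the Arachni 1.x CLI):

```shell
# Sketch of a Jenkins "Execute shell" post-build step: crawl the deployed
# demo app and run only the XSS checks. URL and report name are assumptions.
APP_URL="http://localhost:8080/insecure-webapp/"
REPORT="${BUILD_TAG:-local}.afr"

if command -v arachni >/dev/null 2>&1; then
  arachni "$APP_URL" --checks='xss*' --report-save-path="$REPORT"
else
  echo "arachni not found on PATH; skipping scan"
fi
```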

In this case I just told Arachni to crawl the provided URL but only scan for XSS vulnerabilities. This configuration is a good starting point for using Arachni. It is of course not a sufficient configuration for identifying all common web vulnerabilities, especially not for an enterprise app! The CLI provides a lot of options that you will likely need when scanning a larger application: selecting and configuring test cases, including/excluding certain URLs, and providing authentication credentials.

Running Jenkins with Arachni

The next time the build is executed, Jenkins automatically grabs the source code from the repository, builds it, deploys it on the Tomcat and scans it with Arachni, as we can see in the following console output (stripped for demonstration purposes):

As we can see above, Arachni actually finds the XSS vulnerabilities in both vulnerable HTTP parameters (“name” and “age”), and does this within every build! However, the build still succeeds, since we do not yet do anything with the Arachni results.

Breaking the Build

If we want to flag a build as unstable when Arachni finds a security problem, we need to do a little bit of extra work. As we can see in the console output, in case Arachni didn't find anything, it outputs “0 issues were detected”. We can now easily parse the output for this string with the Jenkins Text Finder plugin, executed as another post-build action.

If this string is not present, we assume that Arachni found something and tell the Text Finder plugin to mark the build as unstable. The result is the following output for a positive security finding:

...resulting in an unstable build:
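If you would rather not depend on the Text Finder plugin, the same string check can be scripted directly in a post-build shell step. A minimal sketch (the sample log lines are made up for illustration):

```shell
# Sketch: parse the saved Arachni console output ourselves and flag the
# build when the "clean scan" marker is missing. The sample log lines
# written here are made up for illustration.
scan_log="arachni-console.log"
printf '%s\n' "[~] Audited 12 pages." "[+] 0 issues were detected." > "$scan_log"

if grep -q "0 issues were detected" "$scan_log"; then
  SCAN_RESULT="clean"
else
  SCAN_RESULT="issues-found"
  # in a real post-build step: exit 1 (fail) or mark the build unstable
fi
echo "scan result: $SCAN_RESULT"
```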


Sending Findings to ThreadFix

Regardless of whether you want your builds to fail automatically when certain vulnerabilities have been found, or you just want to monitor existing findings in your applications, ThreadFix is a great tool for that.

ThreadFix is a web-based tool for collecting findings from different tools such as Arachni. There is a Jenkins plugin available that can very easily be integrated via an additional post-build action step, so that findings are automatically sent to ThreadFix, where they can be monitored and assessed via a web interface.

threadfix screenshot

To be able to parse Arachni scan output, you must use the Arachni reporter command to convert the .afr files to .xml files via an additional build step though. I use an additional conditional post-build step for this that checks whether an Arachni report file exists and runs the shell command

arachni_reporter ${BUILD_TAG}.afr --reporter=xml:outfile=${BUILD_TAG}.xml
to get a file format that we can upload via the ThreadFix Jenkins plugin into our ThreadFix vulnerability database, as shown in the screenshot above.
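A sketch of such a conditional step (BUILD_TAG is normally set by Jenkins; the default value here is only for running the snippet standalone):

```shell
# Sketch of the conditional post-build step: convert the Arachni report to
# XML only if a report was actually produced in this build.
BUILD_TAG="${BUILD_TAG:-jenkins-insecure-webapp-1}"  # normally set by Jenkins

if [ -f "${BUILD_TAG}.afr" ]; then
  arachni_reporter "${BUILD_TAG}.afr" --reporter="xml:outfile=${BUILD_TAG}.xml"
else
  echo "no Arachni report found for ${BUILD_TAG}; skipping conversion"
fi
```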

There is a community edition of ThreadFix that lacks some enterprise features (such as ACLs based on users/teams, SSO, etc.) but can be used free of charge, even in a commercial environment.

Advanced Configuration

We can of course scan for many more vulnerabilities besides just XSS. And we should. When you do not specify test cases, Arachni will automatically scan for everything, including platform fingerprinting and SSL checks. Be careful with that, because it will most likely produce a lot of false positives. Instead, try to find out which test cases are useful for the tested technology stack (e.g. no SQL injection tests when you are sure that you have a MongoDB). Start with a simple setup and include more checks step by step.

Also, Arachni provides a number of ways to log into applications to perform deep scans. Do not run authenticated scans in production though, since this can have a lot of problematic side effects.

Limitations & Other Tools

As mentioned, even with a highly customized Arachni configuration, the approach described here is only supposed to be a cheap and efficient way of identifying low-hanging fruit within the custom code and application config of a web application, nothing more. If you want to cover more vulnerabilities (or identify them earlier in your SDLC) with free tools, you should also consider using dependency checkers such as OWASP Dependency-Check (for Java only) as well as static code scanners such as the FindBugs security plugin or similar tools.

Especially in these tool categories, both the scan quality and the integration capabilities of free tools are still very limited at the moment and far behind commercial tools such as Contrast IAST or Checkmarx SAST.


IAST: A New Approach for Agile Security Testing

Static Application Security Testing (SAST) tools such as Fortify, Veracode, Checkmarx or IBM App Scan Source Edition have been available on the market for a while now. All of them have their specific pros and cons. But there are certain problems that all of these static scanning technologies share. Here are three important ones:

  • False Positives: No matter what vendors might say, static code scans will lead to a number of false positives, especially in the first scan that is performed on an application.
  • Ownership: Who should be in charge of performing tests? Static code scanning often results in a large number of findings (not all of them false positives, of course). Therefore, there needs to be at least one (internal) tool expert, whether he/she actively performs the tests or helps others with them.
  • Context: It is often hard to map a specific finding from a static code scan to the application context (e.g. a specific URL) where it could be exploited.

These might not be problems for all companies, and in fact SAST tools do a very good job in many organizations. Others struggle a lot with such technologies though. Especially when such a tool expert is missing (e.g. when a security scanning tool is supposed to be operated within QA by non-security personnel), implementing SAST technologies often does not lead to the expected (or promised) results.

What IAST is

Therefore, a while ago, a new and very promising type of technology emerged that could solve these problems: IAST. IAST stands for Interactive Application Security Testing and is another product-group term that was coined by Gartner.

IAST tools can be described as dynamic code scanning tools, whereas SAST tools are static code scanners that are run against source, byte or binary code. IAST usually works by instrumenting (weaving) the deployed bytecode (in the case of a Java application) or IL code (in the case of a .NET application) at runtime, on the application server. The advantage of this: it allows you to analyze applications during runtime. All code that is executed by the application server is analyzed and can be linked to its context (e.g. the URL as well as the relevant SQL statement).
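For a Java application, attaching such an instrumentation agent typically boils down to a single JVM flag (the agent and application paths here are purely illustrative):

```shell
# The IAST agent hooks into class loading via the JVM's instrumentation
# interface and weaves its sensors into the application's bytecode at runtime
java -javaagent:/opt/iast/iast-agent.jar -jar myapp.jar
```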

Especially when it comes to agile security testing where continuous security testing becomes more and more important, the IAST approach offers huge advantages.

The technology itself is much older and has been widely known in the QA market for a while now. The first product in the security market was, to my knowledge, Fortify PTA (Program Trace Analyzer), which was available at least as early as 2008. It was a very exciting technology, but perhaps a bit early back then, so it was taken off the market in 2012. What changed a lot in recent years is that the products have become much more mature and “enterprise ready”, if you want.

And so almost every SAST / DAST vendor is currently working on building or acquiring an IAST solution, or already has one in its portfolio. Since it is not clearly defined what functionality an IAST tool has to offer, the differences between those solutions can be huge. For instance, some DAST products may just be extended by an additional server-side agent that improves the results of a DAST (Dynamic Application Security Testing) scan. A couple of vendors now offer solutions with “IAST technology”. When we look a bit deeper into them, it becomes clear that we basically have to distinguish between two approaches:

IAST Light (Active)

The first approach basically consists of DAST solutions that have an additional agent installed on the application server to improve test results. The architecture looks more or less like this:

iast light architecture

Many vendors such as HP, IBM and Acunetix extended their tools with this functionality. The most sophisticated implementation of this approach is, to my knowledge, Seeker, which was recently acquired by Synopsys (Coverity) from Quotium. Seeker is basically an enterprise scanning solution that integrates both DAST and IAST capabilities. It actively runs continuous security tests (“attacks”) such as SQL injection against a Web application and (different from a classic DAST solution) identifies potential vulnerabilities with an agent that observes the application from within the application server.


Seeker is very easy to use (it even records videos of the vulnerabilities it identifies), can be integrated with automated functional testing tools such as Selenium or HP QTP, and offers management reporting and dashboard functionality as well as integration into existing systems such as Sonar.

Full IAST (Passive)

The only “full” IAST tool currently on the market is, to my knowledge, Contrast IAST from Contrast Security. The approach of Contrast IAST is a bit different from that of tools like Seeker. The main difference is that Contrast does not actively perform attacks against a web application but analyzes the instrumented code purely passively. This is in fact a huge advantage, since it does not affect other testing activities that run at the same time, and only business tests (manual or automated) are required to trigger the security analysis:

iast architecture

I therefore call this a full IAST approach.

The integration and execution of Contrast IAST is extremely simple: it just needs to be activated once on the application servers that are used for testing. After that, you assign a license to the application you want to have tested in the Contrast management console, and you are good to go. Whenever someone (or some tool) runs tests against this application, Contrast will analyze the data flows for potential security problems and report them on the central security console, to a central system such as Sonar, or via custom alerts directly to an assigned mail address.
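On a Tomcat-based test server, for instance, this activation is essentially one line in the startup configuration (the agent path is an assumption; the agent jar itself comes from your Contrast installation):

```shell
# Add the Contrast Java agent to the JVM options of the test application server
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/contrast/contrast.jar"
```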

The dashboard view looks like this:
Contrast WebGoat Vulnerabilities

As we can see, the reported results are pretty comprehensive and can easily be verified and linked to the vulnerable code.


First of all, we see that there are major differences between the IAST tools on the market. Some are more or less just an improvement of a DAST tool (“IAST Light”), whereas “Full IAST” tools not only provide an alternative to SAST testing but also a solution to major problems the industry currently struggles with a lot (e.g. false positives and the need for security experts), especially when it comes to testing security in an agile or even DevOps environment.

However, in my opinion, there is a market for both technologies: IAST tools are (at least at the moment) more expensive regarding licensing compared to SAST tools, they require the application to be executable as well as access to the runtime environment, and they provide fewer test cases than commercial SAST tools, to name just a few reasons for this.
