Agile Security & SecDevOps Touch Points

Agile software development has received more and more attention over the last couple of years. Not only internet startups and media agencies but also large companies from conservative industries like automotive, banking, insurance and the public sector are increasingly adjusting to the agile world. Since those companies are often already very security-aware, at least from a governance perspective, the question of how to ensure the security of applications developed this way is being asked more and more frequently.

First of all, agile is not bad for security, although it is certainly challenging. In fact, it can be quite positive for security. This, however, often requires nothing less than a change of mindset, not only in development but in security as well. The security function often simply doesn't understand how agile development and DevOps actually work, which is of course essential when you want to secure them.

So let's quickly look at what agile development means with respect to security: agile development means that you have product iterations instead of a linear process. These iterations are often two or four weeks long and end in some sort of testable artifact. It does not, however, mean that you push a release to production every two or four weeks. In fact, you can have an agile development project that works in two-week sprints but only pushes two releases to production each year. On the other hand, we have DevOps, which is based on practices like Continuous Delivery and Continuous Deployment and can lead to numerous changes in production each day.

This is an important aspect from a security point of view: if a team works agile but only releases, say, twice a year, we can of course easily implement a security sign-off (aka final security review) in the form of a pentest before each release. In a DevOps world, however, this is clearly not an option.

Secure (Test) Automation

The more continuous or DevOps-like your way of working (that is, the more frequently you push releases into production), the more you need to automate security. This does not necessarily mean test automation, although that is of course an important aspect.

Security test automation often means that we run certain code scanning tools (SAST) or web site scanning tools (DAST) within the build chain to ensure security. Nowadays a large number of commercial and open source tools exist that can be executed automatically as part of a build job from within a continuous integration server such as Jenkins. Since open source tools in particular are often focused on specific languages or problems, we usually have to combine a number of them to test an enterprise application. This is what we call AppSec pipelines.

This sounds great but is often quite difficult to implement, especially if you want to apply it to complex applications and/or several agile teams at once. Also, DevOps teams may not be delighted to have a special pipeline that runs for 30+ minutes just for security scans when the usual requirement for the complete build chain is 5-10 minutes.

This particular point can be addressed by setting up a dedicated security pipeline that runs once a day, while small and fast security tests are defined that may be executed within the regular build chain. The screenshot below shows the implementation of such a pipeline with Jenkins:
[Screenshot: dedicated security pipeline in Jenkins]
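
Expressed as a simplified, illustrative declarative Jenkinsfile, such a nightly security pipeline might look roughly like this (the stage contents and script names are placeholders for your actual build and scan steps):

    pipeline {
        agent any
        // Run the full security scan once a night instead of on every commit.
        triggers { cron('H 2 * * *') }
        stages {
            stage('Checkout & Build') {
                steps { sh 'mvn -B clean package' }
            }
            stage('SAST scan') {
                // Placeholder: invoke your static code scanner here.
                steps { sh './run-sast-scan.sh' }
            }
            stage('DAST scan') {
                // Placeholder: deploy to a test stage and run your web scanner.
                steps { sh './run-dast-scan.sh' }
            }
        }
    }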

IAST solutions can help here, since they are not executed within the build itself but scan the application passively while it is exercised by the regular integration tests. Such solutions are, however, not cheap and therefore not an option for everyone.

Secure Foundation

When you work a lot with security test automation, as I do, you realize that it cannot be the whole solution for security, especially not in agile teams, with or without DevOps. It is an important pillar, nothing more. Tools need a lot of configuration, they need to be operated by someone, and they will throw false positives as well as a lot of false negatives (vulnerabilities that are not identified).

If you want to solve this problem, you need to think about how to prevent vulnerabilities from being introduced into the application code in the first place. This can be accomplished with smart technology choices (e.g. secure frameworks), strict coding principles, secure defaults and a security architecture that enforces as much security as possible. This is what we need to spend more time thinking about. Agile security will not primarily be solved by testing but by engineering!

You will realize that with a solid secure foundation in place, you no longer have to test for everything. Instead, you can focus on smart tests that cover those spots your secure foundation does not, for example insufficiently implemented access controls. Such security tests are usually fast and can normally be executed with every build, with little or no false positive behavior at all.
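
To give an idea, such a "smart test" for access controls can be as small as a plain JUnit test that runs with every build; here is a minimal sketch (the endpoint URL and the expected status code are assumptions about your application and must be adapted):

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;

    public class AccessControlIT {

        // Hypothetical protected endpoint of the application under test.
        private static final String ADMIN_URL = "http://localhost:8080/app/admin/users";

        @Test
        public void adminAreaMustNotBeAccessibleWithoutAuthentication() throws Exception {
            HttpURLConnection con = (HttpURLConnection) new URL(ADMIN_URL).openConnection();
            con.setInstanceFollowRedirects(false);
            con.setRequestMethod("GET");
            // Without a session the server must deny access (adapt to 302 or 403
            // if your application redirects to a login page instead).
            assertEquals(401, con.getResponseCode());
        }
    }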

Team Responsibilities & Agile Security Practices

Last but not least, agile security is not a problem we can solve with technology alone. It needs to be understood as a responsibility of the team itself. Agile development is a lot about shifting responsibilities into the development team (e.g. testing, operations, …); security needs to be one of them!

Furthermore, security needs to be "agilized": security activities need to be planned by the project manager and must be part of each sprint planning and retrospective. Instead of executing a full-fledged pentest, we can specify that a new piece of functionality needs to be pentested and create a subtask for this. Such security-relevant stories can in turn be collected and handled within one dedicated "security" sprint, so that we do not need to onboard a pentester for every sprint.

What has to be done within a sprint from a security perspective (e.g. executing a SAST scan and assessing the results) can be defined in the Definition of Done (DoD). Security user stories can be created and estimated with story points, and so on.
[Diagram: security activities in the agile development flow]

Conclusion

As mentioned above, agile and security are not mutually exclusive. In fact it's quite the opposite: agile practices can influence product security very positively. This, however, requires a lot of work, technology and often nothing less than a change of mindset, and not only within development: agile development needs to be understood by the security function as well. And, most importantly, security needs to be accepted by the agile dev teams as their responsibility.


Create your own Web Security Standard in 60 Minutes

Security requirements for web applications are vital because they specify what a team (e.g. a development team) actually has to do and what not. Many companies, however, struggle to implement such requirements for web-based applications, at least consistent ones at an organizational level. There are many reasons for this: complexity, lack of know-how, a fast-changing threat landscape…

As a result, we very often find inconsistent, outdated or completely useless requirements in companies that cannot be implemented or (even worse) that lead to insecure implementations. In practice, I find that existing security requirements are very often simply ignored by development teams and replaced with their own. This can become really problematic from a security perspective, not least because it relies on the experience of individual developers.

I therefore had the idea of creating a template that companies can use to implement their own web security standard. I finished the first version in May 2014, but it took another 2.5 years until I felt it had reached a level of maturity that justified translating the original German version into English. Version 1.3 of the template for a Technical Security Standard for Web-based applications and services (TSS-WEB) is now available (in English currently only as a first draft) as both Word and PDF.

The requirements in this document mostly relate to common best practices that define a baseline level of security for web-based applications and services. You adapt it to your needs and your environment by removing requirements, adding new ones or changing existing ones, e.g. with respect to their rigor, which is specified for each requirement using RFC 2119 terminology such as MUST or SHOULD (for instance, a requirement might state that session IDs MUST be generated by a cryptographically secure random number generator):

[Excerpt from TSS-WEB: a requirement with RFC 2119 qualifiers]

This allows you to be very specific about what is actually mandatory and what is merely recommended for a project. In addition, the document defines and uses protection classes, which allow you to vary the rigor of requirements according to an application's risk profile (e.g. whether it is internet-facing or internal only).

You may use this as a template for your own company-wide or team-specific standard, or just take those requirements that you need and change them if they do not suit your environment.

Note that this is just a high-level, technology-specific standard focused on web-based applications and services in general. On purpose, it is not mixed up with Java or PHP code: implementation-specific topics change a lot in practice (e.g. when you introduce a new framework), so it makes more sense to maintain implementations of your standard for Java, .NET, PHP etc., with code snippets and detailed, programmer-centric explanations, in separate secure coding guidelines. Wikis such as Confluence work really well for this because they are less static, can be referenced from ticket systems and fit the way many developers are used to working.

In the end, you get a security requirements pyramid like the following:

[Diagram: security requirements pyramid]

The further you go down the pyramid, the more specific the requirements become (e.g. high-level, related to web applications, related to Java-based web applications). With a standard like TSS-WEB in the middle of this pyramid, you can ensure a certain level of security while giving your developers the flexibility they need.

The document owner of the standard should be the security department, which updates it at least once a year, whereas ownership of the secure coding guidelines can be transferred to the development teams. In this scenario, security is not defined within the guidelines; they are "just" an implementation of the standard.


An Organizational View on Application Security

When it comes to integrating application security into an (especially large) organization, we often experience a bunch of practical problems and frustration. In the end, a lot of money may have been spent, but little or no improvement to the security of the developed applications has been accomplished.

The main mistake organizations make is taking an isolated view of security activities. For instance, they conduct security training but have no related requirements for developers in place, the training is focused on an unrelated technology stack, or responsibilities for security have not been defined by management and communicated to the development teams.

After struggling with such problems for a while, I came up with the following quadrant:

[Diagram: the application security quadrant with its four dimensions]

The basic message visualized here is that whenever we want to integrate security into an organization, we need to consider all four dimensions: organization, guidance & requirements, training and technologies.

Some examples:

  • You plan to improve the security know-how of your developers? Identify the roles that will be responsible for security, plan the training based on the technologies the teams actually work with, and combine it with (secure coding) guidelines that developers can later use to look up what they heard.
  • You plan to buy a new code scanning tool? First identify the roles that will operate it (ownership) and give them the qualification to do so, establish processes that make sure the tool is actually used, and define the requirements it will test against.

When you think about this quadrant, you will find that almost any activity for improving application security can be mapped to it. Always considering all four dimensions will often mean more effort and planning, but it clearly leads to a much higher chance of success and less frustration.


Microsoft's New Threat Modeling Tool

A week ago I had the pleasure of giving a talk at OWASP AppSec EU in Rome on the new Microsoft Threat Modeling Tool 2016, which came out last November and is still available for free.

The Threat Modeling Tool implements one way to derive threats (potential security problems) from a system specification: the analysis of data flow diagrams (DFDs). As shown in the screenshot below, we specify our system as a DFD within the tool; when we are ready, we switch to analysis mode and see a number of identified threats based on our DFD.

[Screenshot: Microsoft Threat Modeling Tool 2016]

New Functionality

The functionality described above is basically how all versions of this tool have worked for the ten years it has existed. The 2016 version, published last November, has one great new feature that distinguishes it from all the others, though: it now allows you to completely change the XML-based templates and thereby implement your own stencils, properties and, most importantly, threat logic. This actually works really well, since Microsoft also included a quite usable threat template editor in the tool.

Customizing Threat Logic

Before we start implementing our own threat logic, we must understand how DFD-based threat logic is expressed. In general, a rule matches on a flow together with its source and target stencils and their properties, roughly along the lines of "source is X and target is Y and flow crosses Z, then threat T applies":

[Diagram: structure of DFD-based threat logic]

Basically, everything that you can express in this logic can be checked by the Threat Modeling Tool 2016, both as include and as exclude statements. The use of custom attributes in particular works really well for putting all kinds of logic into the tool (e.g. "Uses PHP" for the stencil "Web Application"). Such logic can be used, for instance, to identify the threat of data sniffing.

[Screenshot: the template editor of the Microsoft Threat Modeling Tool]

As you can see from the logic above, every stencil has a parent. In the case of the stencil "Web Application", this is "Generic Process". All rules that match the parent automatically match child stencils such as the web application. This allows you to define your own custom stencils that automatically inherit all threat logic matching their parent. Unfortunately, only one level is available, so a child stencil cannot have another child that restricts the threat logic a bit further.

Download

The tool itself can be downloaded here. All you need to be able to work with it is a Windows system.

In addition, I've created a couple of sample models and a reduced template for web applications, all of which you can download from my GitHub page.

Please be aware that if you want to replace an existing template, you have to change the template ID within the model file (both are XML). Unfortunately the tool does not allow this within the GUI. I've described the detailed steps for this on the GitHub page referenced above.

Conclusion

Although it still has some limitations, Microsoft's new Threat Modeling Tool is a good and free tool for creating simple DFD-based security diagrams and threat models. It becomes a great tool when you use its new customization capability to create your own threat templates and include all kinds of stencils and threat logic specific to your organization. I highly recommend making this effort, because the built-in logic is rather limited.

If you feel that some threats identified by this tool make no sense, just look at the threat logic within the template and change it if it is not suitable for your organization.

Besides automatically identifying threats from a DFD, this tool has one great additional implicit use: talking with developers and architects about a system's interactions and data flows often results in a lot of "aha" moments and the identification of security problems that no one had been aware of.


Automating DAST Scans with Jenkins, Arachni & ThreadFix

I'm often asked how security tests can be automated with non-commercial tools, e.g. triggered by a Jenkins build. I therefore decided to write this post to give you an understanding of which tools you can use and what you have to do to accomplish this.

To keep things simple, I will focus only on tools that find vulnerabilities in custom code and application configuration, such as SQL injection or cross-site scripting (XSS).

As with commercial tools, we basically need to distinguish three types of security test tools here: static code scanners (SAST), dynamic code scanners (IAST) and dynamic web scanners (DAST). Especially for the latter, a couple of good free tools exist that we can use. The most popular ones at the moment are most likely OWASP ZAP and Arachni. I have worked with both tools and personally find Arachni better suited, especially for automated scans, so I will focus only on Arachni here. Although my examples are based on integrating Arachni into Jenkins, I have tried to use only functionality that should be available in any other CI server as well.

Architecture

The following diagram visualizes the components and their interactions described in this post. We have a Jenkins CI server, a Git repository (it could just as well be SVN or any other code repository), a Tomcat, and the two tools this post is about: Arachni for scanning and ThreadFix as a database in which the results are stored and analyzed.

[Diagram: scan architecture with Jenkins, Git, Tomcat, Arachni and ThreadFix]

Of course, you can also integrate Arachni differently or use other components.

Preparations

First, we need a vulnerable demo app that we can scan with Arachni to see whether it is working or not. I've created a rather simple Java-based web app that basically has one HTML form with a reflected cross-site scripting (XSS) vulnerability in each form field, exploitable via the HTTP POST parameters "age" and "name":

[Screenshot: the vulnerable demo web app]

The corresponding HTTP request looks something like this (host and path are illustrative):
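
    POST /insecure-webapp/form HTTP/1.1
    Host: localhost:8080
    Content-Type: application/x-www-form-urlencoded

    name=Alice&age=42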

Then, of course, we need a Jenkins installation that builds our web app and deploys it to an app server. In this case I created a job called "insecure-webapp" for our demo app and used the Jenkins Tomcat plugin for automatic deployment.

Installing Arachni

The installation of Arachni is pretty simple. You just need to pick the right version, download it onto the system where your Jenkins (or other CI server) is running and extract it there. That's it.

Integrating Arachni into Jenkins

Arachni provides a couple of different interfaces we can use for automation. Besides a web GUI, there is also a command line interface (CLI) as well as a REST and an RPC service we can trigger. Although one of the latter two may seem best suited for automation, I find the CLI to be the most comprehensive and also very easy to integrate. It can simply be invoked from a shell build step (no Arachni Jenkins plugin exists anyway):

[Screenshot: Arachni invocation as a shell build step in Jenkins]
In this case I just told Arachni to crawl the provided URL but scan only for XSS vulnerabilities. This configuration is a good starting point for using Arachni. It is of course not sufficient for identifying all common web vulnerabilities, especially not in an enterprise app! The CLI provides a lot of options you may need, especially when scanning a large application: selecting and configuring test cases, including/excluding certain URLs and providing authentication credentials.
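
A minimal invocation along these lines might look as follows (the URL and file name are illustrative; consult arachni --help for the exact options of your Arachni version):

    ./arachni http://localhost:8080/insecure-webapp/ \
        --checks=xss* \
        --report-save-path=${BUILD_TAG}.afr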

Running Jenkins with Arachni

The next time the build is executed, Jenkins automatically grabs the source code from the repository, builds it, deploys it to the Tomcat and scans it with Arachni, as we can see in the following console output (stripped for demonstration purposes):

As we can see above, Arachni actually finds XSS vulnerabilities in both vulnerable HTTP parameters ("name" and "age"), and does so within every build! However, the build still succeeds, since we do not yet do anything with the Arachni results.

Breaking the Build

If we want to flag a build as unstable when Arachni finds a security problem, we need a little bit of extra work. As we can see in the console output, when Arachni doesn't find anything it prints "0 issues were detected". We can easily search the output for this string with the Jenkins Text Finder plugin, executed as another post-build action.

If this string is not present, we assume that Arachni found something and tell the Text Finder plugin to mark the build as unstable. For a positive security finding, the result is the following output:

…resulting in an unstable build:

[Screenshot: Jenkins marks the build as unstable]

Sending Findings to ThreadFix

Regardless of whether you want builds to fail automatically when certain vulnerabilities are found or you just want to monitor existing findings in your applications, ThreadFix is a great tool for this.

ThreadFix is a web-based tool for collecting findings from different tools such as Arachni. A Jenkins plugin is available that can be integrated very easily as an additional post-build action, so that findings are automatically sent to ThreadFix, where they can be monitored and assessed via a web interface.

[Screenshot: ThreadFix web interface]

To be able to parse Arachni scan output, though, you must use the Arachni reporter command in an additional build step to convert the .afr files to .xml files. I use a conditional post-build step for this that checks whether an Arachni report file exists and runs the shell command

arachni_reporter ${BUILD_TAG}.afr --reporter=xml:outfile=${BUILD_TAG}.xml

to get a file format that the ThreadFix Jenkins plugin can upload into our ThreadFix vulnerability database, as shown in the screenshot above.

There is a community edition of ThreadFix that lacks some enterprise features (such as user/team-based ACLs, SSO, etc.) but can be used free of charge, even in a commercial environment.

Advanced Configuration

We can of course scan for many more vulnerabilities besides just XSS. And we should. When you do not specify test cases, Arachni automatically scans for everything, including platform fingerprinting and SSL checks. Be careful with this, because it will most likely produce a lot of false positives. Instead, try to find out which test cases are useful for the technology stack under test (e.g. no SQL injection tests when you are sure you have a MongoDB). Start with a simple setup and add more step by step.
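
A somewhat broader, still illustrative invocation might then look like this (again, verify the check and option names against your Arachni version):

    ./arachni http://localhost:8080/insecure-webapp/ \
        --checks=xss*,sql_injection*,csrf \
        --scope-exclude-pattern=logout \
        --report-save-path=${BUILD_TAG}.afr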

Also, Arachni provides a number of ways to log in to applications to perform deep scans. Do not run authenticated scans against production, though, since this can have a lot of problematic side effects.

Limitations & Other Tools

As mentioned, even with a highly customized Arachni configuration, the approach described here is only meant to be a cheap and efficient way of identifying low-hanging fruit within the custom code and application configuration of a web application, nothing more. If you want to cover more vulnerabilities (or identify them earlier in your SDLC) with free tools, you should also consider dependency checkers such as OWASP Dependency-Check (for Java only) as well as static code scanners such as the FindBugs security plugin or similar tools.

Especially in these tool categories, both the scan quality and the integration capabilities of free tools are still very limited at the moment and far behind commercial tools such as Contrast (IAST) or Checkmarx (SAST).


IAST: A New Approach for Agile Security Testing

Static Application Security Testing (SAST) tools such as Fortify, Veracode, Checkmarx or IBM AppScan Source Edition have been available on the market for a while now. All of them have their specific pros and cons. But certain problems are common to all of these static scanning technologies. Here are three important ones:

  • False Positives: No matter what vendors might say, static code scans will produce a number of false positives, especially in the first scan performed on an application.
  • Ownership: Who should be in charge of performing the tests? Static code scanning often results in a large number of findings (not all of them false positives, of course). Therefore, there needs to be at least one (internal) tool expert, whether he/she actively performs the tests or helps others with them.
  • Context: It is often hard to map a specific finding from a static code scan to the application context (e.g. a specific URL) where it could be exploited.

These might not be problems for all companies, and in fact SAST tools do a very good job in many organizations. Others, though, struggle a lot with such technologies. Especially when such a tool expert is missing (e.g. when a security scanning tool is to be operated within QA by non-security personnel), implementing SAST technologies often doesn't lead to the expected (or promised) results.

What IAST is

Therefore, a while ago, a very promising new type of technology emerged that could solve these problems: IAST. IAST stands for Interactive Application Security Testing and is another product category term invented by Gartner.

IAST can best be described as dynamic code scanning, whereas SAST is always static code scanning performed against source, byte or binary code. IAST usually works by instrumenting (weaving) the deployed bytecode (in the case of a Java application) or IL code (in the case of a .NET application) at runtime, on the application server. The advantage: it allows you to analyze applications while they run. All code executed by the application server is analyzed and can be linked to its context (e.g. the URL as well as the relevant SQL statement).

Especially in agile development, where continuous security testing becomes more and more important, the IAST approach offers huge advantages.

The technology itself is much older and has been well known in the QA market for a while now. The first product in the security market was, to my knowledge, Fortify PTA (Program Trace Analyzer), which was available by at least 2008. It was a very exciting technology, but perhaps a bit early for its time, so it was taken off the market in 2012. What has changed a lot in recent years is that the products have become much more mature and, if you will, "enterprise-ready".

And so almost every SAST/DAST vendor is currently building or acquiring an IAST solution or already has one in its portfolio. Since it is not clearly defined what functionality an IAST tool has to offer, the differences between these solutions can be huge. For instance, some DAST products are merely extended by an additional server-side agent that improves the results of a DAST (Dynamic Application Security Testing) scan. A couple of vendors now offer solutions with "IAST technology". When we look a bit deeper into them, it becomes clear that we basically must distinguish two approaches:

IAST Light (Active)

The first approach basically comprises DAST solutions with an additional agent installed on the application server to improve test results. The architecture looks more or less like this:

[Diagram: "IAST light" architecture (DAST scanner plus server-side agent)]

Many vendors such as HP, IBM and Acunetix have extended their tools with this function. The most sophisticated implementation of this approach is, to my knowledge, Seeker, which Synopsys (Coverity) recently acquired from Quotium. Seeker is basically an enterprise scanning solution that integrates both DAST and IAST capabilities. It actively runs continuous security tests ("attacks") such as SQL injection against a web application and (unlike a classic DAST solution) identifies potential vulnerabilities with its agent, which observes the application from within the application server.

[Screenshot: Seeker]

Seeker is very easy to use (it even records videos of the vulnerabilities it identifies), can be integrated with automated functional testing tools such as Selenium or HP QTP, and offers management reporting and dashboard functionality as well as integration with existing systems such as Sonar.

Full IAST (Passive)

The only "full" IAST tool currently on the market is, to my knowledge, Contrast from Contrast Security. Contrast's approach differs a bit from tools like Seeker: the main difference is that Contrast does not actively perform attacks against a web application but analyzes the instrumented code purely passively. This is in fact a huge advantage, since it does not affect other testing activities running at the same time, and only functional tests (manual or automated) are required to trigger the security analysis:

[Diagram: passive ("full") IAST architecture]

I therefore call this a full IAST approach.

The integration and execution of Contrast is extremely simple: it just needs to be activated once on the application servers used for testing. After that, you assign a license to the application you want tested in the Contrast management console and you are good to go. Whenever someone (or some tool) runs tests against this application, Contrast analyzes the data flows for potential security problems and reports them on the central security console, to a central system such as Sonar, or via custom alerts directly to an assigned mail address.
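
For a Java application, this activation typically boils down to registering Contrast's agent as a JVM option on the test server, along these lines (the path is illustrative):

    JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/contrast/contrast.jar"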

The dashboard view looks like this:
[Screenshot: Contrast dashboard with WebGoat vulnerabilities]

As we can see, the reported results are pretty comprehensive and can easily be verified and linked to the vulnerable code.

Summary

First of all, we see that there are major differences between the IAST tools on the market. Some are more or less just an improvement of a DAST tool ("IAST light"), whereas "full IAST" tools provide not just an alternative to SAST testing but a solution to major problems the industry currently struggles with (e.g. false positives and the need for security experts), especially when it comes to security testing in an agile or even DevOps environment.

However, in my opinion, there is a market for both technologies: IAST tools are (at least at the moment) more expensive in terms of licensing than SAST tools, they require the application to be executable as well as access to the runtime environment, and they provide fewer test cases than commercial SAST tools, to name just a few reasons.


Additional Object Security with UUIDs

One of the most critical vulnerabilities a web application can have is an insecure direct object reference. Such a vulnerability normally exists when a (usually database) object ID can be directly accessed and manipulated by a user and (!) is not correctly authorized.

An example that we often see is something like this:

http(s)://www.example.com?id=123

or within a path parameter:

http(s)://www.example.com/123/

If such a direct object ID is not correctly authorized by the application, an attacker can very easily exploit it by incrementing/decrementing the ID provided to him. Tools such as Burp Proxy offer functionality to do this basically automatically. The result: confidential data can be accessed and stolen.

Besides performing programmatic authorization checks (within the code), there are a couple of very effective countermeasures to prevent this type of vulnerability by design:

  • Perform identity propagation and check permissions on the backend system
  • Implement an indirection layer that maps (database) object IDs to user-specific IDs (see the sketch below)

Both are very good primary measures that are, unfortunately, not very easy to implement, which is why so many applications get compromised by this kind of vulnerability.
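
To illustrate the second measure, the following is a minimal sketch of such an indirection layer (class and method names are illustrative; the map would typically live in the user's session):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    // Maps random indirect IDs (exposed to the user) to the real database IDs,
    // so that real IDs never leave the server.
    public class ObjectReferenceMap {

        private final Map<String, Long> indirectToDirect = new HashMap<>();

        // Called when rendering a link or form: returns the ID shown to the user.
        public String toIndirectId(long databaseId) {
            String indirectId = UUID.randomUUID().toString();
            indirectToDirect.put(indirectId, databaseId);
            return indirectId;
        }

        // Called when processing a request: resolves the user-supplied ID.
        // An unknown ID indicates tampering (or an expired session).
        public long toDatabaseId(String indirectId) {
            Long databaseId = indirectToDirect.get(indirectId);
            if (databaseId == null) {
                throw new SecurityException("Invalid object reference: " + indirectId);
            }
            return databaseId;
        }
    }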

One very nice additional measure that does not fix such a vulnerability but makes it much harder for an attacker to find is to not use incremented object IDs but a scheme that is harder to guess. One way to do this is to use Universally Unique Identifiers (UUIDs), which are (in version 4) essentially random. Instead of "123", the object would be addressed by something like "a07e9037-e80f-49e6-802c-fc20cc6afbe8".

Implementing UUIDs requires your persistence technology (ORM) to support them, though. Hibernate offers a special generator that is activated with an annotation on the ID field.
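
A minimal sketch of such an entity (using Hibernate's "uuid2" generator, which produces random version 4 UUIDs; the entity name is illustrative and the exact column definition may vary per database):

    import java.util.UUID;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.hibernate.annotations.GenericGenerator;

    @Entity
    public class Invoice {

        @Id
        @GeneratedValue(generator = "uuid2")
        @GenericGenerator(name = "uuid2", strategy = "uuid2")
        @Column(columnDefinition = "BINARY(16)")
        private UUID id;

        // ... other fields, getters and setters

        public UUID getId() {
            return id;
        }
    }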

That's basically all you need. The only additional change is the data type of the ID column in the database (instead of an int, use a binary data type with a length of 16 bytes). After this, newly created objects automatically receive UUIDs, and instead of the URLs shown above, users would access objects in the following way:

http(s)://www.example.com/a07e9037-e80f-49e6-802c-fc20cc6afbe8/

If this object identifier lacks proper authorization, an attacker can of course still access it. Identification is now much harder, though, since an attacker can no longer find objects by incrementing IDs.


Gartner’s Magic Quadrant for Application Security Testing 2014

[Image: Gartner Magic Quadrant for Application Security Testing]

One publication that usually receives a lot of attention in the application security market is, of course, Gartner's Magic Quadrant. A new one has been published for Application Security Testing (confusingly abbreviated "AST", a term that in software analysis usually stands for Abstract Syntax Tree).

A couple of years ago, Gartner created two new quadrants for application security, one for static tools (SAST) and one for dynamic tools (DAST), a well-thought-out idea as long as integration between the two technologies practically does not exist. For whatever reason, Gartner has now merged the two into one quadrant, the Magic Quadrant for Application Security Testing.

Since we still don't have such combined technologies in practice (SAST and DAST tools remain practically separate), this approach is unfortunately quite misleading. Gartner seems to focus primarily on the existence of product features such as WAF (Web Application Firewall) integration (who needs this, by the way?), IAST, RASP and other things I have never seen applied in a large company, pretty much like the "learning mode" in the context of WAFs.

As for SAST, there are a number of important aspects that should be taken into account, and that I do not see sufficiently considered (or considered at all):

  • Maturity of code scanning engine (false positives / negatives)!
  • Scanning model: Source scanning vs. byte code (3rd party lib scanning support)
  • Ease of use (do I need an expert or can my developers use it on their own?)
  • Language & technology support
  • Rule update cycles & custom rule support
  • Definition of custom scan policies & metrics
  • APIs and build integration
  • Bugtracking & IDE integration (are common IDEs supported? Is this for free or do I have to pay?)
  • Privacy controls of a cloud based solution
  • Integration with DAST scanning engine
  • Roles and privileges model (e.g. the possibility of implementing sign-offs)
  • License model, support and existence of professional services

I could name at least as many aspects relevant to a DAST solution. But as mentioned, we have to keep them separate. Also, some vendors have really good SAST but crappy DAST technology, and now we know why. In practice, companies rarely evaluate a SAST and a DAST solution at the same time, and even then: what is the benefit of buying both tools from the same vendor if they are not integrated well (or at all)?

My advice: use what the Gartner analysts have written as input, but do not focus solely on their criteria, and especially not on the quadrant a specific product has been placed in. Create a list of aspects that are important to you and your organization, make a shortlist and run vendor presentations against internal applications covering your most important technology stacks. The Static Analysis Tool Evaluation Criteria (SATEC) will help you here.


Automatic Testing for Security Headers

Today, unit testing has become standard in many development teams for automatically performing various tests on code (e.g. as a compulsory part of the build process). Especially in agile development, the existence, completeness and quality of such tests are critical to ensuring that the application still works after each committed change.

Although unit tests are very often created and used, security-related tests (aka security unit tests) are still performed very rarely. Such tests can ensure that code-based security controls (e.g. for authentication, access control, validation or crypto) work correctly. Especially for security-relevant APIs, the existence of proper security unit tests is critical. We can also use unit tests to integrate external security code scanners (e.g. Fortify, Veracode, etc.) and combine them with a scan policy appropriate for the environment or type of application.

This is not a new concept. Stephen de Vries outlined the need for security unit tests in an OWASP talk titled "Security Testing through Automated Software Tests" back in 2006.

But we can do even more. We can use the unit testing framework (e.g. JUnit for Java, NUnit for .NET or PHPUnit for PHP) to perform security integration tests as well. These are perhaps not executed with every build, but at least before the code is deployed to production. The following snippet shows an example of a JUnit test based on the Apache HTTP Client that checks the existence and correct value of the X-Frame-Options response header of a specific web server.
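
A minimal version of such a test, with JUnit 4 and Apache HttpClient 4.x, might look like this (checking www.example.com and expecting the value "DENY"):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    import org.apache.http.Header;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.HttpClientBuilder;
    import org.junit.Test;

    public class SecurityHeaderIT {

        @Test
        public void testXFrameOptionsHeader() throws Exception {
            HttpClient client = HttpClientBuilder.create().build();
            HttpResponse response = client.execute(new HttpGet("https://www.example.com"));

            // First assert: the header must be present at all.
            Header header = response.getFirstHeader("X-Frame-Options");
            assertNotNull("X-Frame-Options header is missing", header);

            // Second assert: the header must have the expected value.
            assertEquals("DENY", header.getValue());
        }
    }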

We can now execute this test case within our development IDE (e.g. Eclipse) or on the command line using Maven's Surefire plugin. Let's see how it works with www.google.com (instead of "www.example.com"). Google does set an X-Frame-Options header, but uses the value "SAMEORIGIN" instead of "DENY", so our second assert should fail:

Pretty much what we expected. Besides this use case, we could use such integration tests for a large number of additional security checks:

  • Verifying input validation controls
  • Verifying output validation controls (e.g. an "XSS detector")
  • Verifying password policy/complexity checks
  • Verifying session handling and session id lifecycle
  • Verifying access controls
  • Checking for insecure configuration (e.g. error handling)

In the next couple of weeks, I will give a few practical examples of such tests and show how we can integrate various testing frameworks such as Selenium or Watir for this.


Code Scanning Models: Factory vs. Self Service

A few months ago, Gary McGraw wrote an interesting article on SAST deployments in the field. In it, he basically differentiates two service models:

  1. Code Scanning Factory (actually he called it “centralized code review scanning factory for code review”)
  2. Self Service

The main idea behind both models is the most fundamental question when integrating a SAST solution into a company: who should take ownership?

The first model, the Code Scanning Factory, can be seen as the classic approach: a central department (often within IT security, less often quality assurance) installs the tool on a separate scanning workstation and performs spot checks or periodic assessments of internally or externally developed code.

Deployment for this service model can be really easy (e.g. a developer sends a bunch of code to the analyst; the analyst responds with the scan results in a generated PDF report attached to a mail). But the scanning factory can also be integrated with the development systems, as shown in the following diagram.

[Diagram: scanning factory SAST deployment]

The advantage of this service model is that assessments are performed by a specialist. Also, deployment costs (especially license costs) are kept to a minimum. The downsides are that scans normally cannot be executed very often, the specialist generally has only a very limited understanding of the code and the application itself, and development often only gets a PDF report or a ticket, not the full scanning results.

The second approach, the Self Service, transfers ownership of security code scanning to development. With the IDE plugins that most SAST vendors (such as Fortify, Veracode or Checkmarx) offer, scans can be triggered by the developers themselves (e.g. from within Eclipse or Visual Studio). This gives direct and detailed feedback, which leads to much greater value for development and for the resulting code quality. Here are two screenshots that show such integrations, for Veracode and for Fortify in Visual Studio:

[Screenshots: Veracode and Fortify IDE integrations in Visual Studio]

The second approach, however, requires a whole SAST infrastructure and is therefore generally much more expensive than the first one.

A complete transfer of ownership to development leads to a clear conflict of interest, since the developer is then both tester and testee in one person. Hence, this model should only be chosen with an additional central check by security or quality assurance (e.g. within a quality gate). In that case, development can check its code beforehand, learn from the tool's results and even build custom rules. Not only can the later quality gate then be passed easily, the code quality itself benefits massively.

In any case, a tool or security specialist should exist whom a developer can contact about a security finding he/she does not understand.

Modern tool suites such as Fortify, Veracode or Checkmarx also offer integration with build systems (including continuous integration servers) that allows us to set up continuous security scanning. This is perhaps not something to start with, but a good second or third step.
