Additional Object Security with UUIDs

One of the most critical vulnerabilities a Web application can have is an insecure direct object reference. Such a vulnerability typically exists when an object id (usually a database id) can be directly accessed and manipulated by a user and (!) is not correctly authorized.

An example that we often see is something like this:

http(s)://www.example.com?id=123

or within a path parameter:

http(s)://www.example.com/123/

If such a direct object id is not correctly authorized by the application, an attacker can identify valid objects very easily by incrementing or decrementing the id. Tools such as Burp Proxy can do this more or less automatically. The result: confidential data can be accessed and stolen.

Besides performing programmatic authorization checks (within the code), there are a couple of very effective countermeasures to prevent this type of vulnerability by design:

  • Perform identity propagation and check permissions on the backend system
  • Implement an indirection layer that maps (database) object ids to user-specific ids (a minimal sketch follows below).

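As a hypothetical illustration of the second measure, such an indirection layer can be a simple per-user (or per-session) map from random tokens to real database ids, so that the real ids never leave the server. The class and method names in the following sketch are made up for illustration (OWASP ESAPI ships a similar AccessReferenceMap):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical per-session indirection layer: only random tokens are
    // exposed to the user, the real database ids stay on the server.
    public class AccessReferenceMap {

        private final Map<String, Long> indirectToDirect = new ConcurrentHashMap<String, Long>();
        private final Map<Long, String> directToIndirect = new ConcurrentHashMap<Long, String>();

        // Returns the (existing or newly created) indirect reference for a database id.
        public synchronized String getIndirectReference(Long databaseId) {
            String token = directToIndirect.get(databaseId);
            if (token == null) {
                token = UUID.randomUUID().toString();
                directToIndirect.put(databaseId, token);
                indirectToDirect.put(token, databaseId);
            }
            return token;
        }

        // Resolves an indirect reference back to the database id (null if unknown).
        public Long getDirectReference(String indirectReference) {
            return indirectToDirect.get(indirectReference);
        }
    }
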
Both are very good primary measures but, unfortunately, not very easy to implement, which is why so many applications get compromised by this kind of vulnerability.

One very nice additional measure, which does not fix such a vulnerability but makes it much harder for an attacker to identify, is to avoid incremented object ids and instead use an id scheme that is hard to guess. One way to do this is using Universally Unique Identifiers (UUIDs), which are, depending on the version, based on random numbers or hashes. Instead of “123”, the object would be addressed by something like “a07e9037-e80f-49e6-802c-fc20cc6afbe8”.

The implementation of UUIDs requires the persistence technology (ORM) to support them, though. Hibernate, for instance, offers a special generator for this purpose that just has to be activated with an annotation on the entity's id field.

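A minimal sketch of such a mapping (assuming Hibernate 4.x with its “uuid2” generator strategy and a java.util.UUID id field; the entity name and the BINARY(16) column definition are only illustrative) could look like this:

    import java.util.UUID;

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    import org.hibernate.annotations.GenericGenerator;

    @Entity
    public class Document {

        @Id
        @GeneratedValue(generator = "uuid2")
        @GenericGenerator(name = "uuid2", strategy = "uuid2") // Hibernate's RFC 4122 UUID generator
        @Column(columnDefinition = "BINARY(16)")              // binary column instead of an int
        private UUID id;

        public UUID getId() {
            return id;
        }

        // ... other fields, getters and setters ...
    }
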
That's basically all you need. The only additional change is the data type of the id column in the database (instead of an int, just use a binary data type with a length of 16 bytes). After this, newly created objects are automatically identified by UUIDs and, instead of the URLs shown above, users would access objects in the following way:

http(s)://www.example.com/a07e9037-e80f-49e6-802c-fc20cc6afbe8/

If such an object identifier lacks proper authorization, an attacker can of course still access the object. Identifying valid objects is now much harder, though, since an attacker cannot simply find them by incrementing ids anymore.

Posted in Java, Secure Software Development | Leave a comment

Gartner’s Magic Quadrant for Application Security Testing 2014

One publication that usually gets a lot of attention in the application security market is of course Gartner's Magic Quadrant. A new one for Application Security Testing (confusingly abbreviated as “AST”, a term that in software analysis usually stands for Abstract Syntax Tree) has just been published.

A couple of years ago, Gartner created two new quadrants for application security, one for static analysis tools (SAST) and one for dynamic ones (DAST). As long as integration between both technologies practically does not exist, this was a well-thought-out idea. For whatever reason, Gartner has now merged the two into a single quadrant, the Magic Quadrant for Application Security Testing.

Since such combined technologies still barely exist in practice (SAST and DAST tools remain practically separate technologies), this approach is unfortunately quite misleading. It seems that Gartner primarily focuses on the existence of product features such as WAF (Web Application Firewall) integration (who needs this, by the way?), IAST, RASP and other things that I have never seen applied in a large company. Pretty much like the “Learning Mode” in the context of WAFs.

As for SAST, there are a number of important aspects that should be taken into account, and that I do not see sufficiently considered (or considered at all):

  • Maturity of code scanning engine (false positives / negatives)!
  • Scanning model: Source scanning vs. byte code (3rd party lib scanning support)
  • Ease of use (do I need an expert or can my developers use it on their own?)
  • Language & technology support
  • Rule update cycles & custom rule support
  • Definition of custom scan policies & metrics
  • APIs and build integration
  • Bugtracking & IDE integration (are common IDEs supported? Is this for free or do I have to pay?)
  • Privacy controls of a cloud-based solution
  • Integration with DAST scanning engine
  • Roles and privileges model (e.g. possibility for implementing sign-offs)
  • License model, support and existence of professional services

I could name at least the same number of aspects that are relevant for a DAST solution. But as mentioned, we have to evaluate them separately. Also, some vendors have a really good SAST but a crappy DAST technology; now we know why. In practice, companies do not evaluate a SAST and a DAST solution at the same time, and even if they did: what is the benefit of buying both tools from the same vendor if they are not integrated well (or at all)?

My advice: use what the Gartner analysts have written as input, but do not focus solely on their criteria and especially not on the quadrant in which a specific product has been placed. Create a list of aspects that are important to you and your organization, make a shortlist and run vendor presentations with internal applications covering your most important technology stacks. The Static Analysis Tool Evaluation Criteria (SATEC) will help you here.

Posted in DAST, SAST, Security Test Automation | Tagged , , , | 1 Comment

Automatic Testing for Security Headers

Today, unit tests have become a standard in many development teams for automatically testing code (e.g. as a compulsory part of the build process). Especially in agile development, the existence, completeness and quality of such tests is critical for ensuring that the application still works after each committed change.

Although unit tests are very often created and used, security-related tests (aka security unit tests) are still performed very rarely. Such tests can be used to ensure that code-based security controls (e.g. for authentication, access control, validation or crypto) are working correctly. Especially for security-relevant APIs, the existence of proper security unit tests is critical. We can also use unit tests to integrate external security code scanners (e.g. Fortify, Veracode, etc.) and combine them with a scan policy suited to the environment or type of application.

This is not a new concept. Stephen de Vries outlined the need for security unit tests in an OWASP talk titled “Security Testing through Automated Software Tests” back in 2006.

But we can do even more. We can use the unit testing framework (e.g. JUnit for Java, NUnit for .NET or PHPUnit for PHP) to perform security integration tests as well. These are perhaps not executed with every build, but at least before the code is deployed to production. The following snippet shows an example of a JUnit test, based on the Apache HttpClient, that checks the existence and correct value of the X-Frame-Options response header of a specific Web server:

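A minimal sketch of such a test (assuming JUnit 4 and Apache HttpClient 4.3+; the class name and target URL are only illustrative) could look like this:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    import org.apache.http.Header;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.HttpClientBuilder;
    import org.junit.Test;

    public class SecurityHeaderIT {

        @Test
        public void xFrameOptionsHeaderIsSetToDeny() throws Exception {
            HttpClient client = HttpClientBuilder.create().build();
            HttpResponse response = client.execute(new HttpGet("https://www.example.com/"));

            // First assert: the header must be present at all ...
            Header header = response.getFirstHeader("X-Frame-Options");
            assertNotNull("X-Frame-Options header is missing", header);

            // ... second assert: and it must have the value DENY
            assertEquals("Unexpected X-Frame-Options value", "DENY", header.getValue());
        }
    }
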
We can now execute this test case within our IDE (e.g. Eclipse) or on the command line using Maven's Surefire plugin. Let's see how it works with www.google.com (instead of “www.example.com”). Google does set an X-Frame-Options header, but uses the value “SAMEORIGIN” instead of “DENY”, so our second assert should fail:

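Run from the command line, this could look roughly as follows (the output is abbreviated and paraphrased from a typical Surefire/JUnit 4 run, not an actual transcript):

    $ mvn -Dtest=SecurityHeaderIT test
    ...
    Failed tests:
      SecurityHeaderIT.xFrameOptionsHeaderIsSetToDeny:
        Unexpected X-Frame-Options value expected:<[DENY]> but was:<[SAMEORIGIN]>

    Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
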
Pretty much what we expected. Besides this use case, we could use such integration tests for a large number of additional security checks:

  • Verifying input validation controls
  • Verifying output validation controls (e.g. an “XSS detector”; see the sketch after this list)
  • Verifying password policy/complexity checks
  • Verifying session handling and session id lifecycle
  • Verifying access controls
  • Checking for insecure configuration (e.g. error handling)

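As a hypothetical example of the second item, an “XSS detector” can be as simple as an integration test that sends a script payload and asserts that it is not reflected unencoded (the target URL and parameter are, again, only illustrative):

    import static org.junit.Assert.assertFalse;

    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.HttpClientBuilder;
    import org.apache.http.util.EntityUtils;
    import org.junit.Test;

    public class ReflectedXssIT {

        // URL-encoded <script>alert(1)</script> in a hypothetical search parameter
        private static final String TARGET =
                "https://www.example.com/search?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E";

        @Test
        public void searchParameterIsNotReflectedUnencoded() throws Exception {
            HttpClient client = HttpClientBuilder.create().build();
            HttpResponse response = client.execute(new HttpGet(TARGET));
            String body = EntityUtils.toString(response.getEntity());

            // If the payload comes back unencoded, output encoding is most likely missing.
            assertFalse("Response reflects an unencoded script tag",
                    body.contains("<script>alert(1)</script>"));
        }
    }
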
In the next couple of weeks I will give a few practical examples of such tests and will show how we can integrate various testing frameworks such as Selenium or Watir for that.

Posted in Java, Security Test Automation | Tagged , | 1 Comment

Code Scanning Models: Factory vs. Self Service

A few months ago, Gary McGraw wrote an interesting article on SAST deployments in the field. In it, he basically differentiates two service models:

  1. Code Scanning Factory (actually he called it “centralized code review scanning factory for code review”)
  2. Self Service

Behind both models stands the most fundamental question when it comes to integrating a SAST solution into a company: who should take ownership?

The first model, the Code Scanning Factory, can be seen as the classic approach: a central department (often within IT security, less often within quality assurance) installs the tool on a separate scanning workstation and performs spot checks or periodic assessments of internally or externally developed code.

The deployment for this service model can be really simple (e.g. the developer sends a bunch of code to the analyst; the analyst returns the scan results as a generated PDF report attached to a mail). But the scanning factory can also be integrated into the development systems, as shown in the following diagram.

[Diagram: scanning factory SAST deployment]

The advantage of this service model is that the assessments can be performed by a specialist. Also, deployment costs (especially license costs) are kept to a minimum. The downsides are that scans normally cannot be executed very often, that the specialist generally has only a very limited understanding of the code and the application itself, and that the development team often only gets a PDF report or a ticket but not the complete scanning results.

The second approach, the Self Service, transfers ownership of security code scanning to the development team instead. With the IDE plugins that most SAST vendors (such as Fortify, Veracode or Checkmarx) offer, scans can be triggered by the developers themselves (e.g. from within Eclipse or Visual Studio). That gives direct and detailed feedback, which leads to much greater value for the development team and for the resulting code quality. Here are two screenshots that show such integrations, for Veracode as well as for Fortify's Visual Studio plugin:

[Screenshots: SAST IDE integration (Veracode and Fortify in Visual Studio)]

The second approach requires a whole SAST infrastructure, however, and is therefore in general much more expensive than the first one.

Completely transferring ownership to the development team also leads to a clear conflict of interests, since the developer is in this case both tester and testee in one person. Hence this model should only be chosen in combination with an additional central check by security or quality assurance (e.g. within a quality gate). In that case, development has the possibility to check its code beforehand, learn from the results of the tool and even build custom rules. Not only can the later quality gate then be passed easily, the code quality itself will benefit massively.

However, tool or security specialists should exist whom a developer can contact in case of a security finding he or she does not understand.

Modern tool suites such as Fortify, Veracode or Checkmarx also offer integration into build systems (including continuous integration systems), which allows us to set up a continuous security scanning framework. This is perhaps nothing to start with, but a good second or third step.

Posted in SAST | Tagged , | Leave a comment

10 Reasons why we need Application Security Testing Tools

Despite the fact that there are quite a few reservations concerning the use of application security scanning technologies (e.g. false positives, false negatives, usability and of course the price), there are also a couple of good reasons for using such tools:

1. Applications are becoming bigger and bigger

Enterprise applications can be quite big: 100,000 lines of code (100 KLOC) is some sort of lower boundary, and larger applications can easily have millions of lines of code. The same goes for the number of unique sites and business functions such an application can have. At the same time, even a good code reviewer will only manage to analyze about 1,000 lines of code (1 KLOC) a day.

2. Applications change often

Agile-developed (Web) applications in particular are changed periodically with every sprint (e.g. every two weeks). But changes can also occur in the environment or integration of a production application; for instance, many Web applications are dynamically linked with other sites. While some changes are trivial and have no security impact, others make a security review of the whole application necessary (e.g. due to architectural changes). Manual pentesting or code review alone cannot solve that problem. According to WhiteHat, the average number of serious vulnerabilities introduced per website was 56 in 2012 (and even much higher in the years before).

3. Tools can be executed by non-experts

Despite the well-known phrase “a fool with a tool is still a fool”, the expertise required for performing a security assessment with a tool is much lower than without one. Especially now that more and more application-based threats are emerging, a pentester or code reviewer has to know a lot more than a few years ago. It still takes some training to perform a decent tool-based assessment, though.

4. Tools can be executed continuously

A tool-based assessment is not comparable with a manual one (e.g. a pentest or code review) with respect to the level of assurance it can reach. This is very well stated by the OWASP ASVS standard and other publications. But although the reachable assurance is lower, it can be maintained through periodic (even daily) scans.

5. Tools can make manual analysis much more efficient

Manual assessments (code reviews or pentests) that build on tool-based scans can be much more efficient; especially large code bases can be analyzed quite well with such an approach. Many low-hanging fruits, which are often the point of attack for criminals, can be identified very efficiently with a tool scan before the manual assessment has even started. Also, tools will often give us indicators of possible vulnerabilities or weaknesses that can then be analyzed manually.

6. Tools can check specific policies and other requirements

Many tools allow the definition of very specific policies and can therefore be used to ensure that an application is compliant with a secure coding guideline.

7. Tools allow standardized, objective and repeatable analysis

As long as the rule set does not change, a tool scan will always execute the same comprehensible test cases on all applications. No test is left out by mistake or for any other reason.

8. Tools provide us with metrics

Tools can provide us with metrics – such as the security-to-quality defect ratio or the cyclomatic complexity – that we can use to measure security and observe trends. In addition, the results of scanning tools such as HP Fortify can be displayed in quality dashboards such as Sonar, making (in)security much more visible:
[Screenshot: Fortify plugin for Sonar]

9. Tools are the foundation of sustainable and continuously improved security

Tool scans are based on policies and rules. Both can be customized and tuned: the more we work with such a tool, the more we can improve its results.

10. Tools can be integrated into existing tool suites (e.g. in quality assurance)

We can integrate a security scanning tool into tool suites that are already in place and periodically executed by quality assurance. This integration capability is still in its early days but will surely improve in the future.

All in all, we should be aware of tool limitations in general (and tool-specific ones, too), but we shouldn't deny their capabilities either, and we should benefit from using them.

Posted in SAST | Tagged , , | 2 Comments