Code Scanning Models: Factory vs. Self Service

A few months ago, Gary McGraw wrote an interesting article on SAST deployments in the field. In it, he essentially distinguishes two service models:

  1. Code Scanning Factory (he actually called it a “centralized code review scanning factory for code review”)
  2. Self Service

The main idea behind both models addresses the most fundamental question when it comes to integrating a SAST solution into a company: “Who should take ownership?”

The first model, the Code Scanning Factory, can be seen as the classic approach: a central department (often within IT security, less often within quality assurance) installs the tool on a separate scanning workstation and performs spot checks or periodic assessments on internally or externally developed code.

The deployment for this service model can be really easy (e.g. a developer sends a bunch of code to the analyst; the analyst responds with the scan results as a generated PDF report attached to a mail). But the scanning factory can also be integrated into the development systems, as shown in the following diagram.

scanning factory sast deployment

The advantage of this service model is that the assessments can be performed by a specialist. Also, deployment costs (especially license costs) are at a minimum. The downsides are that scans normally cannot be executed very often, the specialist generally has only a very limited understanding of the code and the application itself, and the development team often receives only a PDF report or a ticket, not the complete scan results.

The second approach, the Self Service, therefore transfers ownership of security code scanning to the development team. Thanks to the IDE plugins that most SAST vendors (such as Fortify, Veracode or Checkmarx) offer, scans can be triggered by the developer himself (e.g. from within Eclipse or Visual Studio). This gives direct and detailed feedback, which leads to a much greater value for the development team and the achieved code quality. Here are two screenshots that show such integrations, for both Veracode’s and Fortify’s Visual Studio plugins:

sast ide integration

However, the second approach requires a complete SAST infrastructure and is therefore in general much more expensive than the first one.

A complete transfer of ownership to the development team leads to a clear conflict of interests, since the developer is in this case both tester and tested in one person. Hence, this model should only be chosen with an additional central check by security or quality assurance (e.g. within a quality gate). In that case, the development team has the possibility to check its code beforehand, learn from the results of the tool and even build custom rules. This way, not only can the later quality gate be passed easily, but the code quality itself will benefit massively.
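As a minimal sketch of what such a central quality gate could check, consider the following; the JSON findings format and the threshold values are purely illustrative assumptions, not any vendor's actual export format:

```python
import json

def passes_quality_gate(findings_json, max_high=0, max_medium=5):
    """Check scan findings against simple security gate thresholds.

    findings_json: JSON string holding a list of findings, each with a
    'severity' field (an assumed, illustrative format).
    """
    findings = json.loads(findings_json)
    high = sum(1 for f in findings if f["severity"] == "high")
    medium = sum(1 for f in findings if f["severity"] == "medium")
    return high <= max_high and medium <= max_medium

# A single high-severity finding fails the gate ...
print(passes_quality_gate('[{"severity": "high"}, {"severity": "low"}]'))  # False
# ... while low-severity findings alone pass it
print(passes_quality_gate('[{"severity": "low"}, {"severity": "low"}]'))   # True
```

Because the development team can run the very same check beforehand, there are no surprises at the gate itself.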

In any case, a tool or security specialist should exist whom a developer can contact in case of a security finding he/she does not understand.

Modern tool suites such as Fortify, Veracode or Checkmarx also offer integration into build systems (including continuous integration systems), which allows us to set up a continuous security scanning framework. This is perhaps not something to start with, but rather a second or third step.


10 Reasons why we need Application Security Testing Tools

Despite the fact that there are quite a few reservations concerning the use of application security scanning technologies (e.g. false positives, false negatives, usability and of course the price), there are also a couple of good reasons for using such tools:

1. Applications are becoming bigger and bigger

Enterprise applications can be quite big: 100,000 lines of code (100 KLOC) is some sort of lower boundary, and larger applications can easily have millions of lines of code. The same goes for the number of unique sites and business functions such an application can have. On the other hand, even an experienced code reviewer will only manage to analyze about 1,000 lines of code (1 KLOC) a day.
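The arithmetic behind this point is simple; at the review rate stated above, even a mid-sized code base ties up a reviewer for months:

```python
def review_days(kloc, kloc_per_day=1.0):
    """Working days a purely manual code review would take
    at a given review rate in KLOC per day."""
    return kloc / kloc_per_day

print(review_days(100))    # 100.0 -> about five working months for 100 KLOC
print(review_days(2000))   # 2000.0 -> roughly eight person-years for 2 MLOC
```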

2. Applications change often

Agile-developed (Web) applications in particular change periodically with every sprint (e.g. every two weeks). But changes can also occur in the environment or the integration of a productive application. For instance, many Web applications are dynamically linked with other sites. While some changes are trivial with no security impact, others make a security review of the whole application necessary (e.g. due to architectural changes). Manual pentesting or code review cannot solve that problem. According to WhiteHat, the average number of serious vulnerabilities introduced per website was 56 in 2012 (in the years before, it was even much higher).

3. Tools can be executed by non-experts

Despite the well-known phrase “a fool with a tool is still a fool”, the skill level required for performing a security assessment with a tool is much lower than without one. Especially since more and more application-based threats are emerging, a pentester or code reviewer has to know a lot more than a few years ago. It still takes some training to perform a decent tool-based assessment, though.

4. Tools can be executed continuously

A tool-based assessment is not comparable with a manual one (e.g. a pentest or code review) with respect to the level of assurance it can reach. This is very well stated by the OWASP ASVS standard and other publications. Although the reached assurance is lower, it can be sustained through periodic (even daily) scans.

5. Tools can make manual analysis much more efficient

Manual assessments (code reviews or pentests) that build on tool-based scans can be much more efficient. Especially large code bases can be analyzed pretty well using such an approach. Many low-hanging fruits, which are often the point of attack for criminals, can be identified very efficiently with a tool scan before a manual assessment has even started. Also, tools will often give us indicators of possible vulnerabilities or weaknesses that can then be analyzed manually.
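One way to put those indicators to work is to order the tool's findings before the manual review starts; this sketch assumes findings carry numeric confidence and severity scores, which is an illustrative assumption rather than any vendor's actual schema:

```python
def triage_order(findings):
    """Sort tool findings so the most promising candidates for manual
    review come first: highest confidence, then highest severity."""
    return sorted(findings, key=lambda f: (-f["confidence"], -f["severity"]))

findings = [
    {"id": "sqli-1",  "confidence": 3, "severity": 5},
    {"id": "xss-2",   "confidence": 5, "severity": 4},
    {"id": "paths-3", "confidence": 5, "severity": 2},
]
print([f["id"] for f in triage_order(findings)])  # ['xss-2', 'paths-3', 'sqli-1']
```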

6. Tools can check specific policies and other requirements

Many tools allow the definition of very specific policies and can therefore be used to ensure that an application is compliant with a secure coding guideline.
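To illustrate the idea (this is not any vendor's rule format), a custom policy can be as simple as flagging calls that an assumed secure coding guideline bans:

```python
import re

# Hypothetical policy: C functions banned by an (assumed) coding guideline
BANNED_CALLS = {
    "strcpy": "use strncpy or strlcpy instead",
    "gets": "use fgets instead",
}

def check_policy(source):
    """Return (line number, function, advice) for each banned call found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func, advice in BANNED_CALLS.items():
            if re.search(r"\b" + func + r"\s*\(", line):
                violations.append((lineno, func, advice))
    return violations

code = "int main() {\n  char buf[8];\n  gets(buf);\n}"
print(check_policy(code))  # [(3, 'gets', 'use fgets instead')]
```

Real SAST policy engines work on the parsed code rather than on text, but the principle of encoding a guideline as machine-checkable rules is the same.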

7. Tools allow standardized, objective and repeatable analyses

As long as the rule set does not change, a tool scan will always execute the same comprehensible test cases on all applications. No test is left out by mistake or for any other reason.

8. Tools provide us with metrics

Tools can provide us with metrics, such as the security-to-quality defect ratio or the cyclomatic complexity, that we can use to measure security and observe trends. In addition, the results of scanning tools such as HP Fortify can be displayed in quality dashboards such as Sonar, making (in)security much more visible:
Fortify Plugin for Sonar
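A simple example is the security-to-quality defect ratio mentioned above; the exact definition varies, so this sketch uses one plausible reading (share of security defects among all reported defects) with made-up sample numbers:

```python
def security_defect_ratio(security_defects, quality_defects):
    """Share of security defects among all reported defects."""
    total = security_defects + quality_defects
    return security_defects / total if total else 0.0

# Tracking the ratio over several scans makes the trend visible
history = [(12, 188), (9, 191), (5, 195)]  # (security, quality) per scan
for sec, qual in history:
    print(f"{security_defect_ratio(sec, qual):.1%}")  # 6.0%, 4.5%, 2.5%
```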

9. Tools are the foundation of sustainable and continuously improved security

Tool scans are based on policies and rules, both of which can be customized and tuned. The more we work with such a tool, the more we can improve its results.

10. Tools can be integrated into existing tool suites (e.g. in quality assurance)

We can integrate a security scanning tool into tool suites that are already in place and periodically executed by quality assurance. This integration capability is still at its beginning but will surely be improved in the future.

All in all, we should be aware of tool limitations in general (and of tool-specific ones too), but we shouldn’t deny their capabilities either, and we should benefit from using them.
