Secure Coding Best Practices

In this two-part series, I cover some of my best practices for developing secure systems from design to testing. In part one, I’ll cover the basics of the threat landscape and development best practices. This material comes from my new book due out later this year: Exploiting Modern C++: Writing Secure Code For An Insecure World.

Note: I originally wrote this series for the WhiteSource Virtual Summit.


There are three lies we tell ourselves when it comes to enterprise security:

#1 We have perimeter security.

While this is true, every company that’s been hacked in the past 20 years also had perimeter security. Equifax had perimeter security. Home Depot had perimeter security. That company-you’ve-never-heard-of had perimeter security. The problem with perimeter security is that the other side has the best perimeter security other people’s money can buy, and they use it to figure out how to penetrate yours. And then there’s forgetting to patch your systems…

The truth is that we’ve lost the battle on keeping the enemy out. Now the fight is about stopping exfiltration.


#2 It’s been code reviewed and tested.

This is also likely to be true. It’s also irrelevant. Most engineers are trained to develop working software, not secure working software. They understand algorithms, data structures, the finer points of Agile and the language of their choice. But ask any developer if they know what to look for during a security code review and you’re likely to get a blank look. Ask any quality engineer if they know how to find and execute a SQL injection attack on a running system and you’ll likely hear, “a SQL what?”

These are all highly trained, seasoned, dedicated professionals. But we send them into battle unarmed against a heavily armed enemy that knows how to steal, kill and destroy.


#3 We’re too big, too small, too something to be a target.

No, not really. Large companies have valuable technology, which makes them a target. Small companies have weak security, which makes them a target. Everyone has cash, which makes them a target, and software attacks scale really well.

If your company is in any way connected to the outside world, you’re a target tonight.


When we talk about penetrations into a system, we have to define three terms. The first two are attack vectors and attack surfaces. An attack vector is the means by which a system is attacked. A virus is an attack vector. Injecting malicious data into an interface is an attack vector. An attack surface is the part of a system that is exposed to exploitation. Inter-Process Communication (IPC) interfaces are often unprotected and are prime targets for data injection attacks. Websites that accept data and use SQL are often vulnerable to SQL injection attacks, which expose sensitive information.

The third term is: Critical System. A critical system is the system that has the job of protecting whatever we’re protecting. This can be ICBMs, the national power grid, Personally Identifying Information (PII), cash, intellectual property or the plans to the Death Star. But a critical system is also any other system that is capable of interacting with that system. This can be unrelated processes in the OS, hardware such as printers or external, unrelated systems capable of touching that system. When we look at the security of a system, we have to consider everything that touches the system no matter how trivial or seemingly low risk.

And when we think about security we usually stop at the perimeter. Everything inside the perimeter is considered safe, everything outside is considered dangerous. But we have to assume that the perimeter will be breached – because it always is – which means that there need to be layers of security behind the perimeter to slow the attackers down and give us time to react. This is why we practice Defense in Depth. Each layer that’s breached leads to another layer of security. It’s turtles all the way down.

Security is built in layers and the last layer is the code itself, which is the focus of this article.


So, what are some of the best practices for secure software development?

Maintain Situational Awareness

Maintaining situational awareness is about validation and verification. We validate the data we’re operating on before we operate on it. And we verify the identity of who is sending us that data. All Denial-of-Service (DoS) attacks are a failure to validate the data we’re operating on and all penetrations are a failure to verify who we’re doing business with.

A buffer overflow is a common exploit that takes advantage of a loss of situational awareness. This exploit works when we are given more data than can be stored in a fixed-size buffer. If the data isn’t checked against the buffer’s capacity, it overflows and overwrites adjacent stack memory, including the saved return address, allowing the execution of arbitrary code. Most operating systems use Address Space Layout Randomization (ASLR) to relocate vital libraries in memory, stack canaries to protect stack frames from tampering and non-executable memory to keep injected data from being run as code. These safeguards are not perfect, though, and there are ways to work around them.

This is why maintaining situational awareness in your software is your best defense.


Study the Standard

Every programming language from C++ to Rust to JavaScript to C# has a standard that defines the language. Writing secure code begins with understanding the language, and this becomes more important as the language increases in complexity over time. For example, the C++ standard is fifteen hundred pages long and has almost three hundred instances of what is known as undefined behavior. Undefined behavior is where the standard imposes no requirements on what a program does, leaving the compiler free to do anything, from crashing to silently producing wrong results. Those instances are little land mines for the uninitiated.

You may be working with a language that is straightforward today. C++ in 1990 was a very straightforward language that has dramatically increased its surface area complexity in the intervening years. This makes it one of the most challenging languages to master, even for engineers with decades of experience, and code that looks correct can behave in unexpected ways once compiled, a significant source of security vulnerabilities.

It is in the nature of all programming languages to begin simply and then grow rapidly in surface area complexity as designers seek to satisfy everyone’s favorite feature. This creates opportunities for developers to create unexpected security vulnerabilities. Your language of choice is no different in this respect and knowing its standard will help you to avoid creating security vulnerabilities.


Warnings Are Errors

In the same way that pain in our bodies tells us something is broken, warnings in our code tell us something is inconsistent. Warnings are the compiler’s way of telling you that, while it can compile your code, it won’t work in the way you expect. Warnings are future vulnerabilities written today.

For most mature systems, simply failing the build by turning all warnings into errors would be impractical. But there are ways to deal with your warning backlog. Enforcing discipline in the development team so that commits cannot increase the warning count, and adding stories to your technical-debt backlog to eliminate a specific percentage of warnings with each release, are two ways of getting control of this hidden threat.


Complexity Is the Enemy

As engineers we love complexity. Complexity makes us feel powerful, it makes us feel like we’ve accomplished something, conquered a hard problem. And yet complexity is one of the greatest sources of security vulnerabilities and architectural failures. In aviation, the skin of the aircraft is constantly expanding and contracting. As this happens the metal begins to fatigue. If you were to look at the energy patterns along the skin, you would see that the energy concentrates at the areas of greatest stress, the place where the fatigue is greatest. This only makes the problem worse.

As with metal fatigue, security vulnerabilities also concentrate in areas of greatest complexity. It’s not that they move. It’s that security vulnerabilities are easier to spot and eradicate in the areas of your design that are the simplest. What is left is in the areas of greatest complexity.

Consider, for example, Dirty COW (CVE-2016-5195), a vulnerability in the Linux kernel introduced by Linus Torvalds while he was trying to fix another bug. Dirty COW was a copy-on-write vulnerability that allowed unprivileged attackers to write to protected files as root, a classic privilege escalation attack. It lay undiscovered for nine years and was actively exploited before it was found and fixed.

So how did Dirty COW go undetected? The copy-on-write function is hundreds of lines long and a highly complicated feature. Complex designs are hard to reason about and a fix for one defect can lead to others. In this case, the engineer knew the code well but failed to understand the implications of the fix due to the code’s complexity. The reviewers missed the bug for the same reason and none of the testing caught the problem because of the nature of the vulnerability and the complexity of the feature.

Occam’s Razor says that, “All other things being equal, the simplest solution is usually the right one.” Practicing simplicity in your designs, architectures and code will go a long way to helping you build secure systems and eliminate vulnerabilities.


Grow Bug Bounty Hunters

Few engineers know how to test their code for security. They’re just not trained that way. But they have an intimate knowledge of their systems and how they’re put together. Pen testers, on the other hand, know how to test systems for security but they lack the intimate knowledge of the systems and their construction. Training developers to be internal pen testers, or bug bounty hunters, is an invaluable tool in dealing with security vulnerabilities before they are released.

Once engineers are trained in what to look for, they now have the combined knowledge of testing for security and an insider’s knowledge of the code. Adding a financial incentive to find security vulnerabilities gives them the incentive to increase their skills which, in turn, teaches them how to make safe design and coding choices.

In the end, the money spent in rewarding engineers for finding exploitable vulnerabilities is far outweighed by the savings from having found and fixed vulnerabilities that never made it into the wild.


In the next installment, I’ll cover architectural design choices and testing strategies for writing secure code.