I recently went through an exercise on the use of Modern C++'s Alternative Operator Representations (AORs) with a software company based in Silicon Valley. I proposed this change to the company's C++ Coding Standards:
The Alternative Operator Representations have been part of C++ since C++98. They give us a way to avoid developer errors where &&, || and ! are coded incorrectly. &&, for example, gets mis-typed as &, which turns a logical and operation into a bitwise and operation. This happened last year to Google:
Google pushed a one-character typo to production, bricking Chrome OS devices
This line bricked thousands of devices because of an escaped defect that slipped through code review, tooling and testing:
if (key_data_.has_value() & !key_data_->label().empty())
The compiler will not complain about this line. Instead, it promotes both boolean results to integers and performs a bitwise and before evaluating the if(). To the compiler, this is not an error but a routine conversion. Worse, unlike &&, the bitwise & does not short-circuit: key_data_->label() runs even when key_data_ holds no value, and dereferencing the empty optional is what bricked the devices.
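Here's a minimal sketch of that difference, using simplified stand-ins for Google's names:

```cpp
#include <iostream>
#include <optional>
#include <string>

int main() {
    std::optional<std::string> key_data;  // empty, like the failing case

    // && short-circuits: the right side never runs when the left is false.
    if (key_data.has_value() && !key_data->empty()) {
        std::cout << "unreachable, and safely so\n";
    }

    // & evaluates BOTH sides: key_data->empty() dereferences the empty
    // optional -- undefined behavior, and the heart of the Chrome OS bug.
    if (key_data.has_value() & !key_data->empty()) {
        std::cout << "unreachable, but the damage is already done\n";
    }
}
```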
The Alternative Operator Representations eliminate an entire class of vulnerabilities in code by eliminating this mistake.
The above line then becomes:
if (key_data_.has_value() and not key_data_->label().empty())
This is much more readable, and it eliminates the possibility of introducing this type of vulnerability. Most C++ IDEs, including Visual Studio, provide syntax highlighting for AORs.
We need to make the use of these mandatory in all new or revised code. Specifically, this is limited to the following operators (each shown in use in the sketch after this list):
&& (and)
& (bitand)
|| (or)
| (bitor)
! (not)
|= (or_eq)
!= (not_eq)
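For illustration, here's each of those tokens used as a drop-in replacement, with made-up names:

```cpp
// Purely illustrative flags and conditions.
constexpr unsigned FLAG_A = 0x1;
constexpr unsigned FLAG_B = 0x2;

bool demo(bool has_key, bool locked, unsigned flags) {
    bool ready  = has_key && !locked;      // symbolic
    bool ready2 = has_key and not locked;  // alternative tokens

    unsigned mask  = FLAG_A | FLAG_B;      // symbolic bitwise or
    unsigned mask2 = FLAG_A bitor FLAG_B;  // alternative token

    mask  |= FLAG_B;                       // symbolic
    mask2 or_eq FLAG_B;                    // alternative token

    return (ready or ready2)
       and (mask not_eq 0)
       and ((flags bitand mask) != 0);     // bitand, with symbolic != for contrast
}

int main() { return demo(true, false, FLAG_A) ? 0 : 1; }
```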
We can also discuss the following (although these don't have the same problems as the first group; there is no logical ^^ operator, for instance, for a one-character slip to collide with):
^ (xor)
~ (compl)
^= (xor_eq)
&= (and_eq)
The rest do not create security vulnerabilities.
The response from most engineers was mild interest. They had never heard of AORs, and the syntax looked weird, but it did solve a real problem that we've all hit from time to time. And we did get used to lambdas and rvalue references, so…
One small group of engineers, though, wasn't having it. They were violently opposed to the proposal. I got every argument against it, ranging from the ridiculous (this is a one-in-a-billion mistake) to the absurd (people will quit if they have to use these). They just hated AOR and weren't going to use them no matter what. This went on for weeks, and I heard every argument imaginable, except technical ones.
Because there aren’t any.
AOR is a simple way to eliminate an entire class of bugs that become security vulnerabilities the closer they get to an attack surface. They're so powerful, in fact, that C has its own version of AOR: the same spellings, defined as macros in the header iso646.h. In C++, they are built-in keywords, so no header is needed at all.
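A quick check of that difference; this compiles as C++ with no includes at all (in C, you'd need #include <iso646.h> first):

```cpp
// In C++, and / or / not are keywords -- no header required.
bool in_range(int x, int lo, int hi) {
    return x >= lo and x <= hi;
}

int main() {
    return in_range(3, 1, 5) and not in_range(9, 1, 5) ? 0 : 1;
}
```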
But what I didn't tell them was that my proposal was deliberately overbroad. Not all of these make sense to use. In fact, most don't.
What I was trying to do was to get them to think critically about the code they write. It makes no sense to go in with a bible full of “thou shalts” and “thou shalt nots”. That doesn’t teach an engineer anything and they’ll only follow the commandments at the point of a gun.
I’m looking for willing participants, not hostages.
The point was for them to analyze the list and find technical reasons to eliminate most of them. But in order to do that, they had to think about how they write code and about the nature of risk. Security is always about assessing risk and negotiating solutions. There are absolutes, like encryption, to be sure. But most security decisions involve trade-offs. And to work those trade-offs, they had to ask some fundamental questions:
- How do we get escaped defects?
- How could Google brick thousands of devices just by making this small error?
- Why didn’t testing catch this problem before it escaped?
- Why didn’t static analysis catch it?
- What about the ^@$^%(@$& compiler!?!?!?
But, they didn’t ask any of these questions. Instead, they sentenced themselves to The Prison of Two Ideas. Either we do all of AOR or we do none of them. And that’s the booby trap: either/or. But life isn’t that way and neither is systems security.
Securing anything is about assessing risk and making smart choices. It's about having just enough security to make you not worth the effort in the eyes of an opportunistic attacker. No decision in life comes down to a pure either/or. There is always a range of options. The C++ standard itself is a 2,000-page Swiss Army knife of options, and yet engineers get stuck in one paradigm or another and never ask if there is a better, safer way. I see this every day.
It finally took a principal engineer, who I have enormous respect for, to ask these questions and work the list down to: or, and, not and maybe not_eq. That's it. With these three (or four) you eliminate 90% of the risk that confusing symbols brings. And that's good enough to eliminate this class of defects.
The way we generally analyze vulnerabilities is to ask:
- How close is the vulnerability to an attack surface?
- How easy is it to exploit?
- How much damage can be done if the vulnerability is exploited?
From the list above, here is my analysis:
Use and and or, because && can be swapped with & (and || with |) without the compiler complaining. Don't use bitand or bitor, because genuine bitwise operations are unlikely to create a vulnerability. For example, Ranges uses | as its pipe operator, but writing bitor there doesn't help anything since || is illegal in that position and the compiler already catches the swap. Keeping the & and | symbols for bitwise work also helps distinguish that intent from the logical and and or.
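Here's a small C++20 Ranges pipeline to make that concrete; mistyping the pipe | as || simply won't compile, so bitor adds no safety:

```cpp
#include <ranges>
#include <vector>

int main() {
    std::vector<int> nums{1, 2, 3, 4, 5, 6};

    // The | here chains range adaptors -- idiomatic Ranges code.
    auto evens = nums | std::views::filter([](int n) { return n % 2 == 0; });

    // auto bad = nums || std::views::filter([](int n) { return n % 2 == 0; });
    // ^ error: no operator|| exists for these operands, so the compiler
    //   already blocks the dangerous swap in this context.

    int sum = 0;
    for (int n : evens) sum += n;  // 2 + 4 + 6
    return sum == 12 ? 0 : 1;
}
```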
Use not, because ! is hard to distinguish from | and is often just plain hard to see on screen, especially when it's pressed up against other text, like !label.my_function(). Writing not label.my_function() makes the negation unmistakable. And accidentally flipped logic can create vulnerabilities when it allows things the condition was supposed to prevent, like writing debug information to a log.
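A sketch of that log scenario, with hypothetical names of my own:

```cpp
#include <iostream>

// Hypothetical build check, purely illustrative.
bool is_release_build() { return false; }

void maybe_log_internals() {
    if (!is_release_build()) {      // the ! is easy to overlook in review
        std::cout << "internal state...\n";
    }

    if (not is_release_build()) {   // the negation is impossible to miss
        std::cout << "internal state...\n";
    }
}

int main() { maybe_log_internals(); }
```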
not_eq is borderline. If the left-hand side is a function call or a literal, != cannot be swapped with |= without the compiler complaining, because you can't assign to an rvalue. But if the left-hand side is a variable, the swap compiles silently. In the other direction, an intended |= can be mistyped as != in all cases, turning the assignment into a comparison whose result is thrown away. There is a low probability of either occurring, so it could go either way depending on your risk appetite.
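A minimal sketch of both directions (the variable names are mine):

```cpp
int main() {
    int flags = 0;
    int mask  = 4;

    if (flags != mask) { }  // intended: comparison
    if (flags |= mask) { }  // one-character slip: assigns, then tests the
                            // result -- compiles, usually without a warning

    // With an rvalue on the left, the compiler catches the slip:
    // if ((flags + 1) |= mask) { }  // error: lvalue required

    flags != mask;  // intended |=, typed !=: a discarded comparison,
                    // at most an unused-value warning
    return 0;
}
```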
The rest don't tilt the security table much, or at all. You can try these yourself using Compiler Explorer (godbolt.org).
And that's the problem with The Prison of Two Ideas. When the choice is framed as C++ or Rust (instead of using C++ safely), all of AOR or none of it (instead of some of it), asserts or no asserts (instead of using them in some contexts and something else in others), precondition checking everywhere or nowhere (instead of focusing on the interfaces closest to the attack surfaces), you necessarily eliminate entire ranges of options that could have made your system safer. And that makes your system much less safe.
I proposed using AOR this way because I wanted the engineers to think about their code differently. I wanted them to think about security and the choices they make. I know they’re highly intelligent engineers. And they’re very good at their jobs, they prove that every day. But, they don’t see the world the way I do and the way an attacker does – the way they need to. This is why I do threat modeling with engineering teams. It forces engineers to think about threats along with performance. Once they do this, they see their designs in a completely different light.
Security is about understanding the nature of risk – where it lurks, how it moves and how it changes with each decision we make. This means exploring all of your options whether it’s design options, language choices or coding standards.
It's not about our feelings. It's a cold-blooded analysis of the world in which we live and a ruthless approach to systems design. It's about making smart choices even when your emotions don't want to. It's about accepting the world as it is, not how we wish it would be. It's about understanding that attackers don't care if you had a bad day when you wrote that code, or that you have a blind spot. They'll exploit your mistakes and laugh at you when you make the evening news for all the wrong reasons. They're the ultimate cold warriors of our day, and they see us and our systems as prey. That's why emotion has no place in security: it's about cold, calculated decision making where our feelings get mugged by reality.
That's why doing hard time in The Prison of Two Ideas is ultimately self-defeating.