At the end of the month, I will be traveling to China to attend the C++ Development Technology Summit in Shanghai. The conference is the weekend of November 2-3 and I will be giving two talks:
My first talk is *If You Can’t Open It, You Don’t Own It*.
For the past 30 years, we have dealt with penetrations into secure systems almost exclusively at the software layer: applications and operating systems. With the advent of side-channel exploits like Spectre, Meltdown, and Foreshadow, hardware designs are now battlefields too.
In this talk, we’ll look at four real-world hardware attacks that changed the way we think about secure systems, see how hardware exploit strategies drive software exploit strategies, and consider what that means for the future of Modern C++.
We’ll explore four lines of attack:
- Roots of Trust,
- Side-channel exploits,
- How physical access creates opportunities, and
- How our supply chains often create our greatest vulnerabilities.
As the Standards Committee puts the final touches on C++20 this year, we’ll use these lines of attack as a framework to get an inside look at the committee’s efforts to build a safer, more resilient language. We’ll see:
- How new language features, like Concepts, Contracts and Ranges, help (or hurt) our ability to write secure software.
- Which proposals coming for C++23, like Zero-overhead deterministic exceptions and secure_clear, will help address some of the worst vulnerabilities in the language.
This talk is about how our language and design choices affect our systems’ ability to withstand attack. It’s also about how the evolution of the language is addressing the insecure world it operates in, and the places where it still falls short.
My second talk is *What Air Disasters Tell Us About Safety Critical Designs*.
If there is one industry that is obsessed with building safety-critical systems, it’s the airline industry. And yet, things don’t always go as planned, in spite of the best efforts of manufacturers, airlines, and crash investigators.
In this talk, we’ll look at three airline disasters:
- Air France 447,
- Japan Airlines 123, and
- the Boeing 737 MAX.
Then we’ll analyze how the safest mode of travel in modern history can still fail:
- How complex user interfaces affect safety outcomes,
- How error messages often make emergencies worse,
- How the rise of automation often works against safety,
- How complexity breeds emergent behavior, and
- How fail-safe designs are sometimes not fail-safe.
The lessons learned apply not just to airlines, but to the automotive and medical industries, nuclear power plants, evidence-collection devices, and any other system that humans use in safety-critical applications.
Finally, we’ll look at what that means for Modern C++ software designed for safety-critical applications, and how the Standards Committee is moving to make the language, and our systems, safer.
I hope to see you there!