Losing the Arms Race

Another month, another headline-grabbing bug:

  • "Patch ASAP: Tons of Linux apps can be hijacked by evil DNS servers, man-in-the-middle miscreants" - The Register
  • "Patch now! Unix bug puts Linux systems at risk" - InfoWorld
  • "Extremely severe bug leaves dizzying number of software and devices vulnerable" - Ars Technica

Dan Kaminsky has an in-depth take on it.

TL;DR: The glibc DNS bug (CVE-2015-7547) is unusually bad. Even Shellshock and Heartbleed tended to affect things we knew were on the network and knew we had to defend. This affects a universally used library (glibc) at a universally used protocol (DNS). Generic tools that we didn’t even know had network surface (sudo) are thus exposed, as is software written in programming languages designed explicitly to be safe.

This is scary. Almost all pieces of Linux software call glibc eventually - it's the main way to call the kernel. Practically any Linux software that talks over the network will link against glibc and use its DNS resolving functions. The attack surface for this bug is much wider than these other headline-making bugs we've seen recently.
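To make the code path concrete, here is a minimal sketch of how an ordinary program ends up in glibc's resolver. Any software that connects to a hostname typically calls getaddrinfo(), the function at the heart of CVE-2015-7547. The hostname "localhost" is used here only so the example runs without network access; a real hostname would walk the vulnerable DNS path.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;    /* ask for both A and AAAA records;
                                       the parallel dual query was central
                                       to CVE-2015-7547 */
    hints.ai_socktype = SOCK_STREAM;

    /* "localhost" resolves via /etc/hosts, but the same call on a real
       hostname sends DNS queries from inside glibc. */
    int err = getaddrinfo("localhost", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    puts("resolved");
    freeaddrinfo(res);
    return 0;
}
```

Nothing about this code looks network-facing, which is exactly the point: the DNS traffic, and the buffer handling around it, all happen behind the getaddrinfo() call.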

I know of one language that avoids glibc, issuing its own system calls instead: Go. Unfortunately, even Go will sometimes fall back to the glibc DNS resolver via cgo, depending on the system's resolver configuration!

Clearly frustrated, Dan rails against the incomplete attempts to make C safer (emphasis mine):

ASLR, NX, Control Flow Guard – all of these technologies are greatly impressive, at showing us who our greatly impressive hackers are. They’re not actually stopping code execution from being possible. They’re just not.

Somewhere between base arithmetic and x86 is a sandbox people can’t just walk in and out of. To put it bluntly, if this code had been written in JavaScript – yes, really – it wouldn’t have been vulnerable. Even if this network exposed code remained in C, and was just compiled to JavaScript via Emscripten, it still would not have been vulnerable. Efficiently microsandboxing individual codepaths is a thing we should start exploring. What can we do to the software we deploy, at what cost, to actually make exploitation of software flaws actually impossible, as opposed to merely difficult?

I'm also frustrated by the lack of progress. We've seen a lot of effort to make C safer, and it's not working. We deploy NX bits, attackers deploy ROP. We deploy ASLR, attackers counter with address leaks and NOP sleds. We're losing the arms race.

The industry must move away from memory-unsafe languages like C and C++. Computers aren't just academic toys like they were in the 1970s, when C was born. Now computers safeguard the global economy, people's information, people's lives.

We continue writing software using memory-unsafe languages, mostly because there aren't credible alternatives to C/C++.

  • Java, C#, and Go are memory-safe but unsuitable for low-latency systems due to their unpredictable garbage collection pause times.
  • D has been touted as the alternative for a while, but (alas!) it doesn't have much momentum.

I'm holding out hope that Rust will develop traction. Rust is a low-level, memory-safe language developed by Mozilla.

If Rust manages to unseat the dangerous languages and prevent a few of these superbugs in the future, it will have done us all a huge favour.

But what if Rust fails to take off? What if Mozilla decides they have other priorities, or the borrow-checker is so annoying that people give up? I'm genuinely curious: do we have any other alternatives on the way, or are all our eggs in this basket? Today, it seems that successful languages require big companies behind them, and I don't see any other credible efforts to fit this niche. Please comment or email if you know of any.

Will our core infrastructure still be written in unsandboxed C in another decade? Will we still be finding superbugs every few months?

Thanks to Vicki Lowe for comments and corrections.