In the debate over “selling exploits” people haven’t defined what, precisely, an “exploit” is. The only definition is that they “know it when they see it”. In this post I’m going to describe something that isn’t clearly an exploit.
Back in 1998, I created one of the first “personal firewalls”, known as “BlackICE Defender”. We designed it to run on both Windows 95 and Windows NT. Win95 was the dead-end 16-bit operating system that is no longer in use; WinNT is the progenitor of today’s Win7.
The problem was that neither Win95 nor WinNT had a method for intercepting packets and filtering them. We therefore had to “hack” the operating system in order to get our firewall functionality to work.
We did this by reverse engineering Microsoft’s network stack. We found some unused space in kernel memory to store our code (kernel code contains lots of “introns”: unused filler bytes, such as the padding between functions). We then found an appropriate place in the network stack and inserted a “jump” instruction. As packets arrived from the network, the kernel would jump to our code, where we’d process the packet, then jump back to the original code.
Most people would do this by modifying the files on the disk, telling the user to reboot after installation. We hated rebooting computers, so we decided to do this live, as the system was running, as network traffic was occurring.
This patching of live running code is hard and likely to crash the system. Our solution to avoid crashing was really clever; we even got a patent for it. I mention this only because a lot of rootkits out there are actually in violation of our patent. I’ve long thought it would be amusing to go sue malware authors for violating our patent.
The ability to patch the kernel and install/uninstall without a reboot was effectively a social-engineering hack. It made the product look like just-another-application rather than the fairly invasive product it really was. Even when you knew the evil things our product was doing (and you did, because we bragged about them), you still perceived it as just-another-application. This was actually dangerous. An early beta customer was eBay, the dominant dot-com of the time. Because it looked so easy, they just rolled it out to all their servers. This gave me a heart attack, because the product was still in beta, and if a bug took down eBay, the bad press could have killed our company before it got started.
This kernel patching worked well for several years, until Microsoft came out with WinXP SP2, their first major cybersecurity release. They changed their stack enough that our product no longer worked. Since we were on the list of “must work” products, they reverse engineered our product to figure out why it was breaking, only to discover the evil things we’d done by reverse engineering their product. This delayed WinXP SP2 by a month, and there are people still at Microsoft who complain to me about this.
This led to legal threats, because, in theory, we’d possibly violated some anti-reverse-engineering clause in a EULA. This is one of the things that made me STOP fearing Microsoft as an all-powerful corporation: even though some of their employees wanted to sue us, as a corporation, Microsoft was powerless to do it. It just wasn’t in their best interest. Indeed, everyone reverse engineers their products, and nobody gets sued. And, of course, there would be the humor of pointing out that they’d violated our EULA in order to discover this.
In the end, they had to add a little bit of nonsense assembly to their code that their stack would jump to, and jump right back. This allowed us to find the code and patch it to get our software to work. (Today, of course, Windows 7 has proper APIs so all this nonsense has gone away.)
The point of this story is to describe a “hacker exploit” as we exploited Microsoft to create essentially rootkit functionality. But of course, we did this in order to secure the system, not to cause a problem. The more general lesson here is that this is what the entire security industry is based on: we must “exploit” in order to “protect”. Arbitrary rules trying to regulate “bad code” will inevitably regulate “good code”.
Update: I forgot to mention. Vista/Win7 checksum the kernel code. If something changes it, like BlackICE or a rootkit, Windows refreshes the kernel code from signed files on the disk. Microsoft engineers have confirmed that part of the reason they did this was that they were so annoyed with BlackICE.
Update: I forget the exact techniques, but the basic problem was that jumping to our code required a 5-byte instruction, and instruction decoders behave unpredictably when code changes underneath them (which is why you need to avoid self-modifying code). The solution was to use a 2-byte jump instruction.
One way to do this was to write a “jump self” instruction, “eb fe”, into the code you want to patch. This creates a sort of involuntary spin lock: any thread that hits the patch site just loops in place instead of executing half-written bytes. Then we write the rest of the 5-byte jump sequence behind it, and finally replace the “eb fe” with the jump’s first two bytes.
Another way to do this is to find a nearby (within 128 bytes) intron area (such as the padding between functions used to align them on 16-byte boundaries) where we can write a 5-byte jump. Then, write the 2-byte jump instruction that jumps to the longer 5-byte jump, which finally jumps to our code.
Another possibility was to use an “int” instruction to generate a software interrupt. The breakpoint form (“int 3”, opcode 0xCC) is only one byte, so it can be written atomically, but it means we have to find a free interrupt handler.