So the DNS patch rolled out on 8 July. But you knew that already. Vendor after vendor started popping out of the woodwork, and Ubuntu mirrors slowly started updating worldwide to roll out updates to bind9, the glibc stub resolver, etc… The original CERT announcement had more than a few ‘no response’ entries from a wide variety of providers (some having been alerted only a few days earlier)… When asked how the issue was addressed in his application, dnsmasq’s author Simon Kelley, for example, had this reaction:
Good question. I wasn't contacted in advance about this, and no patch for dnsmasq has been released. Since the exact nature of the new vulnerability has not (as far as I know) been announced, I don't know if dnsmasq is vulnerable. My current plan is to implement query-port randomization, and I'm working on that right now. If all goes well, it will go into 2.43, and be released ASAP. To help with this, I'd like to gather as many testers as possible. The changes are quite intrusive, and to be confident about releasing them quickly, I'd like to get as many people as I can testing. Since query-port randomisation is potentially quite resource-heavy (it needs a socket per query), and will break many firewall configs, the current plan is to make it optional, and not the default behaviour. Cheers, Simon.
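Simon’s point about the resource cost is worth making concrete. Below is a minimal sketch, in Python, of what per-query source-port randomisation implies: one fresh UDP socket (and file descriptor) per outstanding query. The helper name, server address and port range are my own illustrative assumptions, not dnsmasq’s actual implementation; a real resolver would also randomise the 16-bit transaction ID and handle timeouts.

```python
import random
import socket

def query_with_random_port(payload: bytes, server: str = "127.0.0.1") -> socket.socket:
    """Send one DNS query from a freshly bound, randomly chosen source port.

    This illustrates the per-query cost Simon Kelley describes: every
    in-flight query holds its own socket until the reply arrives or the
    query times out. (Hypothetical sketch -- not dnsmasq's code.)
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        try:
            # Pick an unprivileged ephemeral port at random; retry on collision.
            s.bind(("0.0.0.0", random.randint(1024, 65535)))
            break
        except OSError:
            continue
    s.sendto(payload, (server, 53))
    return s  # caller must read the reply and close the socket
```

It is also easy to see from this sketch why such a change “will break many firewall configs”: stateless rules that only allow DNS traffic from a single fixed source port will drop these randomised queries.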
Microsoft also came to the party with MS08-037, their Windows patch for the problem. However, other vendors are going nuts now, having to issue repatched versions (think ZoneAlarm – OK, that is, if you’re using it) of their own proprietary software. Depending on a predictable port? Badly implemented patch(es) by the upstream provider?
The fix (not the attack vectors) is described as follows, e.g. in DSA-1603-1:
“Dan Kaminsky discovered that properties inherent to the DNS protocol lead to practical DNS cache poisoning attacks. Among other things, successful attacks can lead to misdirected web traffic and email rerouting.
This update changes Debian’s BIND 9 packages to implement the recommended countermeasure: UDP query source port randomization. This change increases the size of the space from which an attacker has to guess values in a backwards-compatible fashion and makes successful attacks significantly more difficult.” – DSA-1603-1
Dan Kaminsky went on at length about the need for, and success of, CERT in the roll-out of the patches across vendors. In South Africa, DNS slowed to a crawl as SAIX, Verizon, IS etc. patched their servers. (OK, Verizon also switched its ATM link to SAIX to Gigabit Ethernet, so after being veeeery slow, things started to fly…) SAIX’s DNS servers took two days after the release to go live, randomise their query ports and stop exposing a predictable port.
BIND 9, dnsmasq – all are “affected” by this…
The problem with all this is the lack of information and the uncertainty. OK, argue that this was in the interest of security – don’t make the weakness known, to protect the web. But *no-one* knew what the technical details were (nor could anyone verify the patch, peer-reviewing the process) until he finally disclosed them privately to Ptacek and Dai Zovi… who agreed that all was above board…
I guess that leaves two concurrent steps to take:
- Wait for BlackHat 2008
- Patch whatever servers are out there in anticipation
Kaminsky put it nicely in his blog:
So here’s the bottom line. I think people don’t have enough information right now, to determine whether there indeed exists any context in which a huge press rush should occur with so few deep technical details. When everything is on the table, I leave it to the community to judge whether we have gained or lost credibility through this effort.
But it’s clear that, in lieu of details, to not even have respected and completely independent members of the community vouching for your work cannot stand, no matter how respected you are in the community, no matter how many vendors are behind you, no matter what. OK. So that’s a fairly big lesson learned, in a process I’ve sort of been making up as I’ve gone along. Thanks to Dino and Thomas for setting me straight.
Let’s see where this takes us…