if you haven't watched it yet, i encourage you to check out the video of chris soghoian's talk at personal democracy forum 2012. the TL;DR version is that, because it compromised the microsoft update channel, the flame worm damaged our trust in automatic updates and that's a bad thing because automatic updates have done so much good for consumer security. mikko hypponen is even reported to be planning to write a letter to barack obama to ask him to stop the US government from doing this sort of thing again.
unfortunately, i think this is short-sighted, and not just because you can't put the genie back in the bottle.
inherent in the idea of automatic software updates is this little thing called automatic execution. i've written before about how problematic automatic execution can be. it all comes down to delegating a security decision (to execute or not to execute) to an automaton, and fooling an automaton has never been considered difficult. this particular example may well be one of the hardest cases there is of pulling the wool over a machine's eyes, and yet it was still done, and done in a big, headline-making way. an automaton may be more consistent about how it does things than ordinary people are, but that isn't necessarily a good thing for security. being consistent and predictable is, no doubt, part of what makes an automaton easier to fool than a person.
the trust we placed in automatic updates was, if not completely misplaced, then at least partially so. microsoft may have made it harder to fool their code again, but i doubt every other software vendor in the world has put an equivalent amount of time and engineering effort into securing their own update channels - some (many?) are probably still within the reach of more traditional cybercriminals.
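to make that gap concrete: the bare minimum an update mechanism should do is refuse to execute anything that doesn't match a digest published through a separate channel, using a hash that can't be collided. here's a minimal sketch in python - the function name and the idea of a pinned out-of-band digest are illustrative assumptions on my part, not any vendor's actual mechanism:

```python
import hashlib
import hmac

def verify_update(package: bytes, pinned_sha256_hex: str) -> bool:
    """accept an update only if it hashes to the digest pinned out-of-band.

    hypothetical check, not any real vendor's update code. note the use of
    sha-256: flame worked in part because an md5-based certificate could be
    collided, so the choice of hash matters as much as the check itself.
    """
    digest = hashlib.sha256(package).hexdigest()
    # constant-time comparison, mostly out of habit; the real point is
    # that execution is gated on the check passing at all
    return hmac.compare_digest(digest, pinned_sha256_hex)

genuine = b"totally legitimate update payload"
pinned = hashlib.sha256(genuine).hexdigest()

print(verify_update(genuine, pinned))          # True
print(verify_update(genuine + b"X", pinned))   # False
```

even a trivial gate like this beats executing whatever arrives, but it's only a sketch - a real update channel also needs signed metadata, key rotation and revocation, which is exactly the engineering effort i suspect many vendors haven't put in.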
we placed trust in microsoft's code, in the automaton they designed, not because it was trustworthy, but because it was more convenient than being forced to make the equivalent decisions ourselves. furthermore, we relied on it to protect consumers because that's easier than educating them (in fact, many still don't believe consumer education can or should be done). it can certainly be argued that we can't rely on consumers to make good security decisions all the time, but clearly we can't rely on automatons to do it either. a lot of effort has gone into developing controls to prevent bad things from getting through, but what has been done with regard to detecting when those preventative controls fail? not a heck of a lot, and i don't have much confidence in the idea of creating a second automaton to spot the failings of the first.
if the trust we had in automatic updates is fading then let it fade. we never should have been trusting it as much as we were in the first place. maybe, with more reasonable limits on that trust, we can begin to develop more meaningful countermeasures for attacks exploiting this particular brand of automatic execution (and it's important that we do so, because attacks only ever get better).