Saturday, August 03, 2019

the disclosure debate is the wrong argument

prompted by an exchange on twitter this morning i began spending mental resources on an old favourite that everyone loves (hates) to go over again and again - the disclosure debate.

the disclosure debate is a debate over what the right answer is to the question  "i found a problem, now what do i do with it?". it's a natural place to come from for many in the security community and the question surely needs an answer. but the disclosure debate tries to raise the answer into a principle and each side of the debate believes that their side is more principled, more virtuous than the other. you have to pick a side and in so doing you also pick a tribe and signal your allegiance.

the disclosure debate has been raging in the security community for a long time and the two major sides right now are full disclosure which is generally favoured by security researchers, and responsible disclosure which is generally favoured by vendors. when you pick a side you're choosing to align with one of these groups.

but is that really it? is it just a choice between researchers and vendors? are they the only ones with real skin in the game? hell no.

so then let's catalog the groups of people involved:
  1. vendors of vulnerable products/systems/services. these are the people who, when faced with discovery of a security vulnerability in their product, are expected to act quickly and create a fix for their product to eliminate the risk posed by that vulnerability
  2. security researchers. these are the people who find the vulnerabilities. often a great deal of work goes into finding vulnerabilities and consequently researchers often want and even deserve some kind of reward, whether simply adding to their reputation which they can parlay into a better paying job or a more direct financial reward like a bug bounty.
  3. attackers. these are the people who would actually exploit the vulnerability to victimize people, whether for money, ideology, or a laundry list of other reasons
  4. users of vulnerable products/systems/services, be they businesses, institutions, individuals, etc. these are the people who are actually affected by vulnerabilities and arguably should be the most important consideration
now attackers and users don't figure all that much in the disclosure debate. sure the ostensible reason we have the debate is because we say we want to help the users, and the reason for doing the vulnerability research and disclosing the results at all is supposed to be to thwart attackers, but when arguing over which side of the disclosure debate is best attackers and users are treated as constants, as monolithic homogeneous groups that always present the same issues in every case and that's just not an accurate reflection of reality. 

not all attackers would be able to find the vulnerabilities on their own. those that can are just as much a minority as the security researchers themselves are. not all attackers are interested in any given vulnerability. those that target foreign states may be more interested in PLC vulnerabilities than VLC vulnerabilities. some may not have the connections or resources to exploit a particular vulnerability. some may specialize in attacks against a completely different kind of technology. some are simply not motivated enough to add a new exploit to their arsenal.

meanwhile, not all users pay attention to the channels where disclosures are communicated, in large part because most users can't understand or meaningfully act on the information in a disclosure. filtering out attack traffic, using IOCs, deploying mitigations, etc. are all the domain of a very select group. additionally not all users are aware of all patches when they become available, or are willing or able to apply them.

these are only a fraction of the variations among the two groups that can affect if/how they'll use a vulnerability and if/how they might be affected by the vulnerability. the reason we keep having the disclosure debate is because that debate has never adequately addressed these variables, because they have the potential to make the answer change depending on the vulnerability, which would mean there is no single right answer or right side in the debate. the debate focuses on principles to the exclusion of nuance, only paying lip service to the people who would use the vulnerabilities and the people those vulnerabilities would be used against. 

those who find a problem in some kind of technology want an easy answer to the question of what they should do with it - that's why the disclosure debate exists, to try and arrive at an easy answer so people can just follow the one right principle for handling the situation and be done with it. it follows a trend that has become all too common in the technology sector of trying to abstract people out of the equation because people are messy and complicated. but people are part of the equation whether we like it or not so the argument shouldn't be about which principle is right, but rather whether our actions should be governed by principle or by considerations for people.

in the watchmen, the character rorschach was governed by principles to the exclusion of virtually everything else ("never compromise. not even in the face of armageddon"). defending truth and justice so stridently made him arguably the most heroic, in the comic book sense of the word. but the more realistic portrayal of individuals and circumstances and society as a whole in the watchmen highlighted how ill-fitting such unwavering adherence to principles can be in the real world.

we need to stop talking about the disclosure debate, not because it should be settled already, but because it can never be settled, because its parameters don't fit the real world. it's the wrong argument. if we really want to help protect one group of people from another group of people then we need to spend a lot more time thinking about those people each and every time the subject comes up. there are no shortcuts when it comes to doing the right thing.

Saturday, July 28, 2018

winblows update

so i'm using my computer thursday night when i get a notification that updates have been applied that require a reboot. ok, whatever, i've mostly come to terms with the fact that Windows 10 updates itself without asking me, at least it asks me when is a good time to reboot. that wasn't a good time so i said later.

well, later came in the wee hours of the morning when i was done working and ready to head to bed. i go to shut off the computer and the new options in the shutdown menu remind me that there was an update to take care of, so i choose the update and reboot option.

i chose that option rather than update and shut down because, from past experience, the update process has not actually completed by the time the system shuts down. there's a bunch of stuff it needs to do on the next restart and that takes up time i could be doing something else, so no shutdown just yet, just an update and reboot and i head off to take care of my evening oral hygiene routine thinking that the process would be done by the time i get back.

when i come back the computer is still in the process of rebooting? what the heck? oh no, i've seen this before (or at least i think i have). is the computer stuck in a reboot loop? powering off for a few moments usually breaks out of the loop, but this time i discover it wasn't a reboot loop at all. when i let it power up again i see a screen saying that it's applying updates and that several reboots will be required.

are you kidding me? this is not what i want to deal with in the wee hours of the morning when i still have to go to work the next day. this is not convenient, and frankly "several reboots" for an update is bullshit. i understand the need to perform a reboot during an update; files and other resources that need to be changed may be locked by running processes and rebooting eliminates that impediment, but several reboots? Microsoft has been at this update business for decades now, you'd think their little minions would have figured out how to coordinate their efforts so that each part of the update could make use of the same reboot, but no, apparently that kind of unified effort is beyond them and in fact they seem to be moving in the opposite direction where every bit and piece of their updates (and the operating system itself) is becoming more separate and isolated from the others.

so i did what i hate doing. i left the computer on completely unattended overnight so that hopefully by morning the update would be done. and it was, but that's not the end of the update related problems. you see, Microsoft's updates aren't just for security fixes. those are important, yes, and the fact that people were taking too long to apply them and leaving their systems to become part of massive botnets is part of the reason users' control over updates was taken away from them. however, Microsoft has re-imagined how versioning of their operating system will work so those updates now also come with feature changes, which (due to the increasingly isolated approach units within Microsoft are taking nowadays) means new binaries with new behaviours.

how the hell is anyone supposed to develop a behavioural baseline for their system with this never ending parade of new binaries and new behaviours? this morning's culprit? BackgroundTransferHost.exe. what does it do? who the hell knows? not only does Microsoft give us less agency now that we can't control if/when updates occur, but there's also less transparency now too because the number of separate/isolated binaries they're introducing to the system has far, far outpaced anyone's efforts to document them.

maybe BackgroundTransferHost.exe isn't even Microsoft's. maybe it's malware. if i were going to make a downloader trojan, that sounds like just the sort of name i'd use - but what do i know, i'm not a malware writer. i suppose they expect me to trust it because it's signed, but that's not how that works. being signed (and passing the signature validation procedure) just means it hasn't been modified after getting signed, not that it's legitimate, not that it's safe, not that it's trustworthy. signing certificates get stolen. there's plenty of signed malware out there.
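the point about signatures proving integrity rather than trustworthiness can be sketched in a few lines. this is a toy model, not Authenticode: hmac stands in for the real public-key signing operation, and the key names and file contents are invented for illustration.

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # stand-in for a real code-signing operation: the "signature"
    # binds this key to this exact sequence of bytes
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, signature: bytes) -> bool:
    # verification only proves the bytes are unchanged since signing
    return hmac.compare_digest(sign(key, data), signature)

vendor_key = b"totally-secret-vendor-key"

benign = b"legitimate update binary"
sig = sign(vendor_key, benign)
assert verify(vendor_key, benign, sig)           # unmodified: passes

tampered = b"legitimate update binary + implant"
assert not verify(vendor_key, tampered, sig)     # modified after signing: fails

# but if the signing key is stolen, malware signs just as cleanly:
malware = b"downloader trojan"
stolen_sig = sign(vendor_key, malware)
assert verify(vendor_key, malware, stolen_sig)   # valid signature, still malware
```

a real code-signing check additionally validates a certificate chain, but the conclusion is the same: a valid signature vouches for the bytes, not for the signer's intentions.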

oh, and the cherry on top is now VMware is non-functional.

what the actual f#$% Microsoft. stop making alternative security approaches so much harder than they have to be. i'm regretting moving on from Windows XP. at least there i could perform application and behavioural whitelisting with relative ease.

Sunday, February 04, 2018

thoughts on attack automation

axiom: there is no perfect security

because there is no perfect security we can say with certainty that systems will never be perfectly secure. if we close one vulnerability there will always be another to take its place. we can spend an unending amount of money/time/effort on closing vulnerabilities and still remain vulnerable. in the process of doing this we would go bankrupt because no one, not even the largest companies in the world, has unlimited resources to spend on security.

therefore, eliminating all the vulnerabilities is not a viable strategy. instead a promising alternative is to approach the problem of security from an economic standpoint. while we can't eliminate all the vulnerabilities, we can eliminate some, and if we eliminate the ones that are easiest to exploit then an attacker's job becomes harder and more expensive to carry out. if we make the attacker's job hard enough then the value/benefits they derive from succeeding in their attack (success will always be possible) would no longer cover the cost of launching that attack.

attack automation doesn't make an attacker's job harder. quite the opposite in fact, it makes it easier. attack automation is carried out by attack tools. as a general rule, attack tools reduce the complexity of performing an attack. tools automate the fiddly bits to save an attacker time and effort but in so doing also save the attacker from needing to know how to do the fiddly bits themselves. this means that a larger population of attackers will become capable of carrying out a particular attack because the technical complexity of performing the attack is reduced. it also means there is a larger pool of targets to victimize because the lower cost of performing the attack makes attacking lower value targets economically viable.
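the economics above can be sketched with a toy model (the dollar figures are invented purely for illustration): lowering the per-attack cost is exactly what turns previously uneconomical targets into viable ones.

```python
def attack_is_viable(payoff: float, cost: float) -> bool:
    # an attack is worth launching only when the expected payoff
    # exceeds the cost of carrying it out
    return payoff > cost

# hypothetical target values, purely for illustration
targets = {"bank": 100_000, "small business": 5_000, "home user": 50}

manual_cost = 10_000     # a hand-crafted attack takes skill and time
automated_cost = 10      # a point-and-click tool amortizes that effort away

viable_manual = [t for t, value in targets.items()
                 if attack_is_viable(value, manual_cost)]
viable_automated = [t for t, value in targets.items()
                    if attack_is_viable(value, automated_cost)]

print(viable_manual)     # only the high-value target justifies a manual attack
print(viable_automated)  # automation makes almost everyone a viable target
```

running this, only the bank is worth attacking by hand, but with the automated tool all three targets clear the cost bar, which is the "larger pool of targets" point in the paragraph above.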

additional automation to save the attacker time and effort when selecting targets and launching attacks is also possible, as the recent release of autosploit has highlighted. this lowers the cost of scaling up the attack so that a single attacker can attack a larger group of victims at a lower cost.

the argument can be made that attackers are entirely capable of making these automated attack tools themselves so it doesn't matter if security researchers do it as well. however, when researchers make the automated attack tools, not only do attackers enjoy cost savings with respect to launching attacks, they also enjoy cost savings with respect to developing the tools to launch attacks.

all of these cost savings for the attacker work against defenders. when the cost of performing an attack is reduced it means that attacks that didn't need to be defended against before (because they were too expensive to launch relative to their payoff) must now be defended against, and that increases the costs for defenders because it requires more to be done.

these cost savings are also permanent. attacks don't get harder, they only ever get easier. offensive security researchers are permanently changing the economics of mounting various attacks in the attackers' favour in an effort to incentivize defenders to do what the researchers think should be done to chase after the zero-vuln goal (a goal which we already know to be unattainable) without regard to the economic realities those defenders face.

enforcing their will, their vision of what security should be on others is misguided and damaging. there is no one-size-fits-all approach to security, and where the fit is bad there will be undue burdens with respect to cost and/or unfortunate breaches that might legitimately have been avoided if attackers had not been given a helping hand.

if attackers can build the tools that make their lives easier by themselves, then let them do it. make them pay the cost of doing it. stop subsidizing attackers in the name of security research. years ago the security research community embraced the idea that there should be no more free bugs - why then are the cybercriminals still getting bugs, and exploits, and frameworks, and more for free after all this time?

Friday, December 30, 2016

thoughts on "In Search of Evidence-Based IT-Security"

Christopher Soghoian brought to my attention a video of a talk by Hanno Böck at the 33rd Chaos Communication Congress. in it Hanno puts forward the claim that IT security is largely science-free, so let's follow a staple of the scientific process - peer review.

Hanno introduces himself as a journalist and hacker and says that he prefers to avoid the term "security researcher" and that he hopes the audience will see why.  for those who are relatively well versed in the field of anti-malware it should definitely become obvious why he prefers to avoid that term and i'll return to this near the end.

Hanno is a skeptic, and far from the only one, his talk ultimately expresses the same sentiments that are now common-place in the perennially misinformed information security community. the difference is that Hanno has found a novel way of expressing them, couched in scientific jargon and easily mistaken for insight. he spends altogether too long and dives too deeply into the medical analogy upon which computer viruses and by extension anti-virus software are named. the analogy has long been recognized as deeply imperfect and limited. that's why, in reality, there are relatively few references to this analogy in the anti-malware field other than "computer virus", "anti-virus", and "infection" (all three of which date back virtually to the beginning of the field). his call towards the end of his talk for blinded or even double blinded studies, aside from being prohibitively expensive to perform, seems to cling to this medical paradigm in spite of the fact that the subject of such experimentation (i.e. the computer, since we're interested in whether AV can prevent computers from becoming compromised) cannot be psychologically influenced by knowledge of which (if any) anti-virus is being used.

when he FINALLY leaves the topic of medical science to return to security products (about 14 minutes into his half hour talk) he harps on the absence of one very particular kind of experiment being performed on security products - what he calls a randomized controlled trial. it turns out this is a hold-over from his preoccupation with medical science. when Hanno says that IT security is largely science-free it is the absence of this particular kind of scientific experiment that he is referring to, but that doesn't actually make it science-free because science has a variety of different ways to study and experiment on things that aren't people.

there is in fact good scientific evidence for the efficacy of anti-virus software and it's provided by none other than Microsoft.

now it's true that this data is from an observational study and that it only shows correlation rather than causation, but that's not the end of the world. observational studies are still science. showing correlation may not be definitive evidence but it's still strong evidence, especially considering the scope of the study (hundreds of millions of computers around the world out of a total estimated population of 1.25 billion windows PCs). in this particular case A may not be causing B but B definitely can't cause A and if anyone can think of a confounding variable that might be present on hundreds of millions of systems then maybe let Microsoft know so that they can try to account for it in the future.

another source of scientific evidence (oft derided in information security circles because the results don't match experts' anecdata) is the independent testing labs like av-comparatives. they eliminate the influence of confounding variables and so are capable of showing causation rather than just correlation. unfortunately Hanno believes their methodology is "extremely flawed". let's look at his complaints:
  • "If a software detects a malware it does not mean it would've caused harm if undetected."
    • this is trivially false. anyone who actually reads the testing methodology at av-comparatives (for example) can find right at the beginning a statement about first testing the malware without the AV present and eliminating any that don't work in that scenario. therefore every sample that is detected by AV in their tests would have caused harm if it had gone undetected.
  • "Alternatives to Antivirus software are not considered." (the talk gives "regular updates" and "application whitelisting" as examples)
    • the example of "regular updates" is frankly a little bit bizarre given Hanno's earlier references to confounders. not controlling for this scenario would actually introduce a confounding variable and make it more difficult to show a causal relationship between the use of a particular AV and the prevention of malware incidents.
    • the example of "application whitelisting" underscores a serious problem in Hanno's understanding of what he's critiquing. application whitelisting isn't an alternative to AV, it's a part of AV. many products include this as a feature. Symantec's product, for example, has what they call a reputation engine which alerts when it encounters anything that doesn't have a known good reputation (which means new/unknown malware, traditionally the bane of known-malware scanning, will get alerted on because it hasn't been seen before and thus has no reputation, good or bad).
  • "Antivirus software as a security risk is not considered."
    • when malware exploiting vulnerabilities in anti-virus software is found in the wild then perhaps the test methodologies should be updated to include this possibility. until then, changing the methodology to account for malware that doesn't seem to exist outside a lab has no real benefit.
  • "None of these tests are with real users."
    • again, this would introduce a confounding variable. maybe the lack of malware incidents is because of something the user did rather than because of the AV. alternatively maybe the failure to stop malware incidents is because of something the user did rather than because of a failure of the AV. if you want to establish causation you have to control your variables (something our scientifically-minded speaker Hanno should know all too well). does the anti-virus prevent malware incidents? the tests say yes. can a user preempt or compromise that prevention? also yes. is there any prevention a user can't preempt or compromise? sadly (or perhaps thankfully) no. if you want a study that includes users and thus eliminates the ability to establish a causal link between AV use and prevention of malware incidents, see the study by Microsoft, but even with the inclusion of the users it still suggests AV prevents malware incidents.

when Hanno addressed the paucity of scientific papers dealing with security i found myself confused. using Google Scholar to find the most cited scientific papers? surely he doesn't think the realm of security is so narrowly focused that he'll find what he's looking for that way. security is in fact incredibly broad, covering many different quasi-related domains, and looking at a handful of the most popular scientific papers across all of security is in no way representative of the corpus of available works related to any one particular field (like security software). perhaps i'm biased, having previously (in the very distant past) maintained a reference library of papers related specifically to anti-virus, but it doesn't seem like Hanno showed much evidence that he knew how to find evidence-based security. is it really that hard to add the term "malware" to his search query? could he not find a few and then use them as a seed in an algorithm that crawls backwards and forwards through scientific papers by citation? did he even bother to look at Virus Bulletin? does he even know what that is?

security isn't the only thing that is incredibly broad - so too is the practice and discipline of science itself. there are many different fields and each one does things in their own particular way. we do not perform randomized controlled trials on the cosmos. as a general rule we do not intervene in volcano formation. the work being done at the large hadron collider does not follow exactly the same methodologies that are used in medical science. are we to judge cosmology, volcanology, or particle physics poorly because of this? no of course not. a question you might well ask is what kind of science should logically be used when it comes to studying computer security and, while i suspect multiple scientific disciplines could be useful, the one that springs immediately to mind is computer science. does computer science look anything like medical science? as someone with a degree in computer science i can tell you the answer is emphatically no. we do many things in computer science but randomized controlled trials are not among them (because computers are not people). while Hanno may style himself as "scientifically minded" he doesn't seem to demonstrate an appreciation for the breadth of valid scientific research methodologies and one is left to wonder if he's familiar with any kind of science outside of medicine.

when it comes right down to it, it's this apparent lack of familiarity with the subject matter he's talking about that i found most troubling about Hanno's talk. what is anti-virus software really? what is av testing methodology really? what does science really look like? where do you look for scientific research into malware and anti-malware? these all seem to be questions Hanno struggles with, which brings us back to the subject of why he likes to avoid the term "security researcher". if i had to venture a guess i'd say it's because he doesn't do research, even the basic research necessary to understand the subject matter. as such i would say avoiding the term "security researcher" is probably appropriate (for now).

i'm not sure what one can say in a talk about a subject one hasn't done one's homework on, but hopefully that can improve in the future. Hanno referenced Tavis Ormandy during his talk (as people who criticize AV like to do). Tavis' work on AV also suffered from a lack of understanding in the beginning, but he improved over time and, while he still has room for more improvement, now has arguably done some good work in finding vulnerabilities in AV and holding vendors accountable for the quality of their software. i'm certain Hanno can also improve. i know there are real criticisms to be made of AV software and the industry behind it, but they have to be informed, they have to come from a place of real knowledge and understanding. i look forward to Hanno reaching that place.

Friday, October 21, 2016

highlights from #sector2016

i haven't posted about the sector conference in a number of years, in spite of attending. let's break that trend. here's some highlights from this year's conference.

  • edward snowden - he was surprisingly well prepared to talk about canadian policy and events
  • marketing gimmicks - hockey pucks are one thing, but give me a key and tell me that there's a lock that it might fit and you're darn right i'm going to go find out if it fits. i gather there was a prize i could have won if it did fit but i didn't pay much attention to that. a friend at work said he'd have bought their product on the strength of that gag alone.
  • ransomware - ransomware seemed to be the theme this year. i lost track of how many talks were about or mentioned ransomware. 2016 really does seem like the year of ransomware. i caught the tail end of a talk from someone at sophos where they described a feature for rolling back encrypted files using a proprietary backup mechanism. if other vendors aren't doing something like this you're leaving money on the table. (that's right, i know how you vendors think)
  • mikko hypponen - great perspective on what protecting computers has evolved into: protecting society because it now runs on computers
  • the security problems of an 11 year old and how to solve them - this talk was given by an actual 11 year old who could probably put some professionals to shame. this is the talk i most look forward to sharing with people at work when the videos become available.
  • mikko's "virus" floppy disk that he left behind - this isn't a highlight because someone from the AV industry was careless with infectious materials but rather because when it was found people wanted to find a computer they could stick it into and see what was there. you'd think the difficulty in finding the hardware necessary to read a 5 1/4" floppy would make such a disk a relatively safe prop to use, even if there was a virus on it. leave it to infosec pros to try and find ways around such barriers. now you know where shadow IT comes from, folks. by the way, don't tell mikko.
there were other good talks and keynotes, of course, but i'm not going to detail every talk i attended and every person i met. these are the things that really stood out to me and if you want to know more you should have gone yourself.

Friday, September 02, 2016

the anti-virus harm balance

anti-virus software, like all software, has defects. sometimes those defects are functional and manifest in a failure to do something the software was supposed to do. some other times the defects manifest in the software doing something it was never supposed to do, which can have security implications so we classify them as software vulnerabilities. over the years the software vulnerabilities in anti-virus software have been gaining an increasing amount of attention from the security community and industry - so much so that these days there are people in those groups expressing the opinion that, due to the presence of those vulnerabilities, anti-virus software does more harm than good.

the reasoning behind that opinion goes something like this: if anti-virus software has vulnerabilities then it can be attacked, so having anti-virus software installed increases the attack surface of the system and makes it more vulnerable. worse still, anti-virus software is everywhere, in part because of well funded marketing campaigns but also because in some situations it's mandated by law. add to that the old but still very popular opinion that anti-virus software isn't effective anymore and it starts looking like a perfect storm of badness waiting to rain on everyone's parade.

there's a certain delicious irony in the idea that software intended to close avenues of attack actually opens them instead, but as appealing as that irony is, is it really true? certainly each vulnerability does open an avenue of attack, but is it really doing that instead of closing them or is it as well as closing them?

if an anti-virus program stops a particular piece of malware, it's hard to argue that it hasn't closed the avenue of attack that piece of malware represented. it's also hard to argue that anti-virus software doesn't stop any malware - i don't think anyone in the anti-AV camp would try to argue that because it's so demonstrably false (anyone with a malware collection can demonstrate anti-virus software stopping at least one piece of malware). indeed, the people who criticize anti-virus software usually complain not about the set of malware stopped by AV being too small but rather that the set of malware stopped by AV doesn't include the malware that matters most (the new stuff).

so, since anti-virus does in fact close avenues of attack, that irony about opening avenues of attack instead of closing them isn't strictly true. but what about the idea that anti-virus software does more harm than good? well, for that to be true anti-virus software would have to open more avenues of attack than it closes. i don't know how many vulnerabilities any given anti-virus product has so i can't give an exact figure of how many avenues of attack are opened. i doubt anyone else can do so either (though i imagine there are some who could give statistical estimates based on the size of the code base). the other side of the coin, however, is one we have much better figures for. the number of pieces of malware that better known anti-virus programs stop (and therefore the number of avenues of attack closed) is in the millions if not tens of millions and that number increases by thousands each day. can the number of vulnerabilities in anti-virus software really compare with that?

it's said that windows has 50 million lines of code. if an anti-virus product were comparable (i suspect in reality it would have fewer lines of code) and if that anti-virus product only stops 5 million pieces of malware (i suspect the real number would be higher) then in order for that anti-virus product to do more harm than good it would need to have at least one vulnerability for every 10 lines of code. that would be ridiculously bad software considering such metrics are usually estimated per 1000 lines of code.
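the back-of-envelope arithmetic works out like this, using the assumed figures from the paragraph above (they're assumptions, not measurements):

```python
# the post's assumed figures, not measurements
lines_of_code = 50_000_000    # AV code base assumed comparable to Windows
malware_stopped = 5_000_000   # avenues of attack closed (a low estimate)

# for harm to outweigh good, the AV would need at least as many
# vulnerabilities (avenues opened) as pieces of malware stopped
vulns_needed = malware_stopped
density_per_kloc = vulns_needed / lines_of_code * 1000

print(density_per_kloc)  # defects per 1000 lines of code
```

that comes out to 100 defects per 1000 lines, i.e. one vulnerability for every 10 lines of code, which is the ridiculously bad figure the paragraph above refers to.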

now one might argue (in fact i'm sure many will) that those millions of pieces of malware that anti-virus software stops don't really represent actual avenues of attack because for the most part they aren't actually being used anymore. they've been abandoned. counting them as closed avenues of attack isn't realistic. the counter-argument to that, however, is to examine why they were abandoned in the first place. the reason is obvious, they were abandoned because anti-virus software was updated to stop them. the only reason why malware writers continue making new malware instead of resting on their laurels and using existing malware in perpetuity is because once anti-virus software can detect that existing malware it generally stops being a viable avenue of attack. so rather than the abandonment of that malware counting against anti-virus software's record of closing avenues of attack it's actually closer to being AV's figurative body count.

there is still malware out there that anti-virus software hasn't yet stopped, and as that set is continually replenished it's unlikely that anti-virus software will ever stop all the malware. it has stopped an awful lot so far, however, so the next time someone says anti-virus software does more harm than good (due to its vulnerabilities) ask them for their figures on the number of vulnerabilities in anti-virus products and see how it compares with the number of things anti-virus software stops. i have a feeling you'll find those people are full of it.

Tuesday, September 01, 2015

there's a quality problem in the anti-malware industry

if you follow infosec news sources at all, by now you've probably heard about the claim made by an anonymous pair of ex-kaspersky employees that kaspersky labs weaponized false positives.

more specifically, the claim is that engineers at kaspersky labs were directed to reverse engineer competing products and use that knowledge to alter legitimate system files by inserting malicious-looking code into them. the altered files would both seem like files that should be detected and remain similar enough to the originals that the competing product would also act on the legitimate file, and in so doing cause problems for users of those competing products.

i've heard this described as fake malware, but for the life of me i can't see why it should be called fake. the altered files may not do anything malicious when executed, but they're clearly designed to exploit those competing products. furthermore, there is clearly a damaging payload. this isn't fake malware, it's real malware. it may launch its malicious payload in an unorthodox and admittedly indirect manner, but this is essentially an exploit.

some consider the detection of these altered files to be false positives because the files don't actually do anything themselves, but since they have malicious intent and indirectly harmful consequences, i think the only real false positives in play here are the original system files that are being mistaken for these modified files.

by all accounts, this type of attack on anti-malware products actually happened. what's new here is the claim that kaspersky labs was responsible at the direction of eugene kaspersky himself. there's a lot of room for doubt. the only data we have to go by so far, besides the historical fact of the attack's existence, is the word of anonymous sources (who potentially have an ax to grind) and some emails that, quite frankly, are easily forged. circumstantially there's also an experiment kaspersky participated in around the same time frame that has similar earmarks to what is being claimed except for the part about tricking competing products into detecting legitimate files as malware.

i don't expect we'll ever know for sure if kaspersky was behind the attacks. doubts have been expressed by members of the industry, but frankly i've seen too many things whitewashed or completely ignored (like partnerships with government malware writers) to take their publicly expressed statements at face value. there are certainly vendors i'd have a harder time believing capable of this but there just doesn't seem to be sufficient evidence that the claims are true. the problem is that i can't imagine any kind of evidence the anonymous sources are likely to have that isn't easy to repudiate. had they taken a stand at the time (like someone with actual scruples would have done) they would have been able to put their names behind their claims - they may have lost their jobs but they surely would have been able to find employment with a different vendor because hiring a whistle-blower would have been good PR.

however, as it stands now, the anonymous sources have to remain anonymous. if they're telling the truth then they are complicit in kaspersky's wrongdoing, and if they're lying they are throwing the entire industry under the bus for no good reason (because this claim fans the fires of that old conspiracy theory about AV vendors being the ones who write the viruses). either way, to have this claim linked to their real identities now would make them radioactive in the industry. no one would touch them, and for good reason.

long ago it used to be that the industry only employed the highest calibre of researchers, people who were beyond reproach. naturally, in order to grow, the industry has had to hire ever-increasing numbers of people, and old safeguards against undesirable individuals joining the ranks don't scale very well. increasingly, people who aren't beyond reproach are being found amongst the industry's ranks, and there appears to be no scenario where these two anonymous sources don't fall into that category. the inclusivity that the general security community embraces (and that the anti-malware industry is increasingly mimicking) has the consequence that blackhats are included. the anti-malware industry is going to have to either figure out if they're ok with that or figure out a way to avoid what the general security community could not.