FUD is an acronym that stands for Fear, Uncertainty, and Doubt...
FUD is a form of deception/manipulation that is closely related to snake oil... in fact, they're really a matched pair - where snake oil peddling is supposed to fraudulently make people believe something is good (or at least better than it really is), spreading FUD is supposed to fraudulently make people believe something else is bad (or at least worse than it really is)...
FUD is often used by a representative of a company to disparage a competing product/technology/idea but it can also be used by people who are simply proponents of a particular product/technology/idea and want to hurt the competing product/technology/idea...
back to index
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Saturday, April 29, 2006
Friday, April 28, 2006
vulnerability escrow
while engaged in a discussion about a vulnerability disclosure over at spire security an idea came to me...
one of the reasons people disclose vulnerabilities without making any effort to first work with the affected vendor(s) to correct the problem is that they want to be known as the person who discovered the vulnerability - they want to be able to prove they were the first to discover it, and the way to do that is to blurt it out in a public forum so that it becomes trivial to verify who was first to post the vulnerability...
ignoring the fact that putting the advancement of your reputation ahead of the security of others is an entirely self-serving thing to do, i think there's a relatively easy solution to this... put the details of the vulnerability into escrow with a trusted 3rd party such that the 3rd party timestamps the details you give them and keeps them secret until the researcher who submitted the vulnerability unlocks them for public scrutiny... that way the researcher can prove when s/he discovered the vulnerability and get the credit s/he's due and still work with the vendor to correct the problem...
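as an aside, the timestamp-and-keep-secret property described above can even be had without trusting the 3rd party with the secret - it's essentially what cryptographers call a commitment scheme... a minimal sketch (the function names and the sample report are hypothetical, and this only shows the prove-it-later property - the actual escrow proposal would still want a 3rd party to timestamp the published digest):

```python
import hashlib
import os

def commit(vulnerability_details: bytes) -> tuple[str, bytes]:
    """Commit to the details without revealing them.
    A random nonce stops anyone from brute-force guessing short reports."""
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + vulnerability_details).hexdigest()
    # publish (and timestamp) the digest now; keep nonce + details secret
    return digest, nonce

def verify(digest: str, nonce: bytes, revealed_details: bytes) -> bool:
    """Once the researcher reveals the report and nonce, anyone can check
    that they match the earlier published commitment."""
    return hashlib.sha256(nonce + revealed_details).hexdigest() == digest

report = b"buffer overflow in frobnicate() v1.2"
d, n = commit(report)
assert verify(d, n, report)          # the real report matches
assert not verify(d, n, b"forgery")  # a different report does not
```

the researcher publishes only the digest on day one, works with the vendor in private, and reveals the report plus nonce whenever the fix ships - the old digest proves who knew what, and when...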
ideally we'd want to fix the vulnerability prior to public disclosure so that the window of exposure is more or less closed when information that might have otherwise led to exploitation in the wild is released... of course things don't always go as we'd hope and sometimes vendors dick researchers around instead of taking them seriously so full public disclosure before the vulnerability gets corrected may still be necessary some of the time, but hopefully only as a last resort...
of course this idea of having a vulnerability escrow system only addresses the problem of letting people motivated by personal advancement follow responsible disclosure and still get the credit they were seeking... it doesn't address other motivations like spite, where the researcher tries to punish the vendor by releasing vulnerability information - that hurts the vendor by hurting their customers and that kind of collateral damage clearly doesn't work towards the greater good...
now that i've written this, it occurs to me that vulnerability escrow can't be a new idea... and google says i'm right, it's not... so then this is my voice supporting the idea...
Thursday, April 27, 2006
what's a stealthkit?
a stealthkit is a collection of one or more programs (a software toolkit -> a toolkit -> a kit) that hides the processes, data, and activity of itself and/or some other application(s) it may be packaged with...
a careful reader will notice that this is basically the same definition that much of the IT community is currently using for 'rootkits'... this is to distinguish this new type of malware from the classical form of rootkits... the nouveau 'rootkits' have a fundamentally different function and focus from the classical rootkits (even though classical rootkits were often also stealthkits) so creating a new term for them is reasonable... and there really isn't any reason to recycle an old term (rootkit) instead of coming up with a new term for something that is legitimately new - it's not like our word bag is empty...
normally i would never add my own terminology to a glossary, not even one on my own blog, but i have become so frustrated trying to keep track of whether i mean the new 'rootkit' or the classical rootkit when i say "rootkit" that i've decided i have to do this...
back to index
Tags:
definition,
malware,
rootkit,
stealthkit
Wednesday, April 26, 2006
ethical conflict in the anti-'rootkit' domain
not long ago i blogged about how anti-virus companies don't hire virus writers... now of course there was a somewhat high profile exception involving a low profile company (ie. a small start-up so far removed from the av industry that they didn't know how badly they'd be shooting themselves in the foot) but this example is the exception that proves the rule...
but now i'd like to shed some light on a similar example in the so-called 'rootkit' domain... in a securityfocus.com article from last year we have the following:
Jamie Butler: I am a kernel developer and contributor at rootkit.com, where I go by the name Fuzen. I really enjoyed working with Greg on the book Rootkits: Subverting the Windows Kernel. Most of my time is spent at the bit and byte level.
and
Jamie Butler: I am not sure of the exact time frame that I learned about rootkits. I guess I knew of their existence since the early UNIX Trojan system file replacements. However, I did not become active in the research until 2001 when I was working on my Master's degree. At that time, I was looking for modifications to the operating system in memory used to hide things. After that research, I realized that you could alter data structures directly in kernel memory to hide without modifying any operating system code. That is what the FU rootkit, which I wrote, is intended to demonstrate. It is not malicious but more proof of a premise.
and finally
Greg Hoglund: Not really - I don't spend much time trying to detect rootkits. But, I do know that FU is one of the most widely deployed rootkits in the world. [It] seems to be the rootkit of choice for spyware and bot networks right now, and I've heard that they don't even bother recompiling the source - that the DLL's found in spyware match the checksum of the precompiled stuff available for download from rootkit.com. I think that is kinda lame, these spyware guys don't even bother to use the source.
did you catch that? jamie butler (aka fuzen) created what became one of the most widely deployed 'rootkits' in the world (not the only 'rootkit' he's authored, by the way)... and then he co-authored what some have described as the only book on 'rootkits'...
so? how does that relate to av companies who hire or don't hire virus writers? well, how about this - the same jamie butler is the CTO of komoku, a company that has received $2.4 million from various branches of the US government to develop a solution to the 'rootkit' problem in the form of CoPilot and Gamma... Gamma, being a software product, is intended to be priced similarly to anti-virus products but won't be as successful against 'rootkits' because it's software only... CoPilot, on the other hand, is a hardware product that will probably be going for about $1000 a pop so they'll be pulling down some major cash when they finally get to market... oh, and did i mention they've partnered with symantec?
but wait, there's more - while working for hoglund's security company (hbgary) butler developed a 'rootkit' detection technology called VICE and now he and peter silberman (who has also worked at hbgary and authored the FUTo 'rootkit' which is also freely available on hoglund's site) have developed yet another 'rootkit' detection technology called RAIDE - it's still too new for a business to be made out of it yet, we'll have to wait and see...
let me boil that down for you - jamie butler created the FU 'rootkit'... jamie butler and greg hoglund made FU available for free download on greg hoglund's 'rootkit' site... jamie butler and greg hoglund wrote the only book on this new threat, marking them both as experts in the field... FU, a 'rootkit' written by one of the foremost experts in the 'rootkit' field, became one of the most popular 'rootkits' amongst those who deploy them - in fact the exact binaries that jamie butler and greg hoglund make available are the same ones getting deployed, and greg hoglund admits it... jamie butler and greg hoglund get money for a book about a problem they actively contribute to... jamie butler, while working for greg hoglund, develops anti-'rootkit' technology - free to use but still it draws people to hoglund's other products and so they both get money as a result of technology meant to thwart a problem they actively contribute to... jamie butler, being an expert in the field, is hired and made CTO by an anti-'rootkit' company called komoku which in turn gets millions from the government and so jamie butler gets still more money for working on technology meant to thwart a problem he actively contributes to... symantec, who correctly refuses to hire virus writers, partners with a company that has a 'rootkit' writer and distributor as their CTO... jamie butler and peter silberman, both 'rootkit' authors whose works are freely downloadable from hoglund's site, develop yet another technology meant to thwart a problem they both actively contribute to...
so there you have it - people creating, distributing, advertising, and evangelizing 'rootkits' - basically contributing to the problem by creating the tools, popularizing the threat, and arming the bad guys... and then some write a book about the problem or actually create products to solve the problem and in so doing rake in lots of dough... if this were going on in the anti-virus industry people would be raising a big stink, and it's not like anyone would trust an anti-spam company that hired spammers, so what the heck is going on here? how are these people getting away with this?
VICE? RAIDE? CoPilot or Gamma? i certainly won't be touching any of those with a 10' barge pole - give me sysinternals rootkit revealer or f-secure's blacklight, thanks... i'd rather not support people who are actively part of the problem... it's a shame the US government doesn't feel the same way... also a shame symantec doesn't feel that way - not sure why they seem to have fewer problems with 'rootkit' creators than with virus writers...
Tags:
copilot,
ethics,
gamma,
greg hoglund,
hbgary,
jamie butler,
komoku,
malware,
peter silberman,
raide,
rootkit,
rootkitdotcom,
symantec,
vice
Sunday, April 23, 2006
linus torvalds is not a villain
linux creator linus torvalds recently fixed a bug in the linux kernel that was revealed by the failure of the new cross-platform virus to operate properly on newer versions of the linux kernel...
but some folks, like the normally bright eddy willems, take offense to this... they seem to be thinking that mr. torvalds made the patch so that the virus would operate - that's a ridiculous notion...
there was a BUG in the OS... linux was not operating the way it was supposed to... i don't care if it was a virus or satan himself who revealed the fact, the kernel needed to be fixed... sure fixing the OS so that it behaves as intended helps the virus operate - making sure the OS behaves as intended has the potential to help ALL software for that platform...
unbelievable and unforgivable? hardly... i'd reserve those terms for someone who'd keep their OS broken just to spite a particular proof of concept virus... talk about cutting off your nose to spite your face... i think some folks are missing some perspective here...
i'll grant you that mr. torvalds is a little off on his understanding of what viruses are if he thinks this thing isn't a virus, but fixing the OS was still the right thing to do...
Tags:
cross platform,
eddy willems,
linus torvalds,
linux,
malware,
patch,
virus
what is snake oil?
snake oil is a derogatory term used to imply that something is fraudulently ineffective... in the malware domain, any product, feature, service, or advice that gives the user a false sense of security is snake oil...
sometimes snake oil is easy to spot, for example when an anti-virus product claims to detect all known and unknown viruses past, present, and future - or when a product claims 100% protection... sometimes it's a little less clear, like notices that "this message is certified virus free"... and sometimes it's downright murky, heaped with misconducted tests, FUD, and hyperbole...
whatever the form, the peddling of snake oil is the lowest and most self-serving form of so-called aid that a person or organization can give, and it should never be tolerated under any circumstances (even when it's free)...
back to index
Tags:
definition,
snake oil
Saturday, April 22, 2006
mcafee's rootkit report: an alternative interpretation
the internet has been relatively a-buzz this week over a report (well, part of a report, actually - don't ask me why they couldn't release the whole thing) out of mcafee on the growth of the 'rootkit' threat... a lot of attention has been paid to mcafee's rather poor choice of words when the key findings blamed the "open source environment" for the increased proliferation and complexity of rootkits...
first, yes it really has nothing to do with "open source"... the concept that the author was probably struggling for when he settled on "open source" was public or full disclosure... a software licensing paradigm has nothing to do with 'rootkits' getting more complex or being deployed more frequently, but public disclosure is an enabling practice... coming out against full disclosure or anything like it isn't a popular thing to do in security circles these days, though, which may be why the author tried to find some other term...
also, mcafee named and shamed rootkitDOTcom as a possible leading cause of the worsening 'rootkit' problem... a lot of people took issue with this, mostly they're the folks who feel rootkitDOTcom is doing nothing more than practicing full disclosure and that full disclosure can do no wrong... some even try to quote bruce schneier with the old "security by obscurity is no security at all" line and to those people i'll direct you to what bruce schneier has actually written about full disclosure, particularly about irresponsible (and perhaps even criminal) disclosure...
now mcafee has it partially right, the public disclosure taking place on rootkitDOTcom and various collaboration sites and blogs IS at least partially responsible for the increase in complexity of 'rootkits'... by sharing malware source code and even compiled binaries with literally everyone (as rootkitDOTcom does) the supposed good guys are adding their voices, their knowledge, and their skills to the collaborative efforts of the bad guys...
that said, mcafee's figures show something in the growth patterns that availability of information cannot explain... collaboration sites in general and rootkitDOTcom in particular have been around for years so why then does the growth rate change so dramatically when it gets to 2005? availability of source and binaries are enabling factors, not causative ones - they don't drive the innovation or deployment of this type of malware...
something changed, something happened in 2005 that made 'rootkits' a whole lot more popular - i'll give you 3 guesses as to what that was and the first 2 don't count... that's right, sony bmg / first4internet / xcp... the media circus surrounding that debacle kept 'rootkits' in the public eye for a long time... it made them mainstream, not only in the eyes of the general internet public but in the malware creators' eyes as well... the attackers' interest was piqued, their imagination was sparked, the tools they needed were easy to find and so now we are witnessing something not unlike the slashdot effect except in malware development... throngs of people who weren't making 'rootkits' before, who probably weren't even aware of the kind of stealth technology that was available or how it could be used to their advantage, now are and that's what mcafee's numbers represent...
does that mean the media is to blame for building all this interest in 'rootkits'? maybe - i'd certainly like to blame some of the ills of the world on their sensationalism... the bad guys are still the ones responsible for doing bad things, but the media pointed a whole bunch of them in a new direction... that doesn't mean that greg hoglund and co. of rootkitDOTcom are off the hook, though... for all their protestations of helping people learn about threats, learning for the sake of learning doesn't improve security - they aren't helping to close the window of exposure for this class of threat but they are helping to arm the bad guys... that's called being part of the problem rather than part of the solution...
Tags:
first4internet,
full disclosure,
greg hoglund,
malware,
mcafee,
rootkit,
rootkitdotcom,
sony bmg,
stealth,
xcp
Friday, April 21, 2006
what is a password stealer?
a password stealer is a program that collects chunks of data that are likely to be account names and their associated passwords so that an attacker can use those credentials to pose as the person they were stolen from...
password stealers can be implemented in a number of different ways, but most of them involve running on a machine where the owner/user of the machine is unaware of the password stealer's presence/nature (thus making it a trojan horse program)...
some password stealing trojans can monitor keystrokes, like a specialized form of keylogger... others might collect data from files or registry keys that are known to contain passwords... another type can pose as a window where the user would normally enter his/her password and record what the user enters... and yet another type can monitor network traffic (a network sniffer) looking for passwords...
back to index
Tags:
definition,
malware,
password stealer,
trojan
Thursday, April 20, 2006
what is a black hat/white hat?
a black hat hacker is a hacker who is interested in doing or actually does bad things... black hats are the bad guys, the attackers, the ones who make things worse for everyone else by breaching the security or corrupting the integrity and reliability of systems... crackers are a type of black hat...
a white hat hacker is the opposite of a black hat... it's a hacker who tries to prevent bad things from happening or from being done... white hats are the good guys, the ones who try to help vendors make their products more secure and users/administrators make their systems more secure... most of the security researchers on the various security lists are white hats...
the two terms come from the old movie stereotype where the bad guy always wears a black hat and twirls his mustache and the good guy always wears a white hat... the colours denote a moral polarity and so as one might expect the concept of shades of grey was introduced with the term grey hat, which is someone who sometimes acts as a black hat and sometimes as a white hat and basically shows no significant alignment with either side...
more recently, microsoft introduced the term blue hat... blue hat doesn't fit into the existing moral colour code framework of good/bad/indifferent, and will probably lead to the dilution of the concept because after asking what a blue hat is a person would most likely ask what a red hat is...
back to index
Tags:
blackhat,
definition,
hacker,
whitehat
Wednesday, April 19, 2006
what is a keylogger?
a keylogger is a program or piece of hardware that records keystrokes, often so that they can be sent out to or collected by a 3rd party but sometimes for use by the computer's owner for system monitoring... the fundamental functionality of keylogging is also present in keyboard/mouse playback utilities...
in the malware domain keyloggers are generally installed by the user without the user knowing what it will do (therefore qualifying it as a trojan horse program)... this can occur by the user running the keylogger directly, or by running a dropper or downloader trojan that installs the keylogger...
keylogger trojans are often used to steal sensitive information like credit card numbers, banking information, even passwords (those designed specifically for passwords are password stealers)... they can also be used for more general electronic surveillance, monitoring email and/or instant message composition, monitoring search terms typed into a search engine (potentially useful for adware), or even monitoring ordinary web-form input for the purpose of identity theft...
hardware keyloggers require the attacker to have physical access to your computer or at least some part of your computer involved in keyboard input (like the keyboard itself)... hardware keyloggers obviously don't qualify as trojan horse programs (since they aren't programs) but they are often disguised or otherwise obscured to prevent the victim from becoming aware of their presence...
back to index
Tags:
definition,
keylogger,
malware,
trojan
Tuesday, April 18, 2006
full disclosure
considering the tone of some of my past articles, some people could get the impression that i'm against full disclosure... nothing could be further from the truth...
i am in fact a proponent of full disclosure, but i don't approach the issue on simple blind faith as most people do... i've examined the arguments for and against full disclosure, i've looked at the underlying assumptions of those arguments, and i've made up my own mind about when full disclosure is appropriate and when it is not...
which is to say i support selective full disclosure... the fact is that those arguments against full disclosure have one thing right - making information on how to exploit vulnerabilities available to the public puts tools into the hands of bad guys... the reason full disclosure works is not because that doesn't happen, but because (theoretically at least) putting that information out there also hastens the correction of whatever mistake(s) led to the vulnerability and therefore closes the window of exposure for the vulnerability... it's a security trade-off, and being able to close the hole once and for all tips the balance in favour of this practice because it winds up doing more good than harm...
if the vulnerability is not correctable, however, and some genuinely aren't, then the good that full disclosure is supposed to do can't be done...
take for example the following vulnerability... if an attacker knows your name, address, and credit card number, your credit card is vulnerable to unauthorized purchasing by the attacker... this is obvious, and i hope it's also obvious that the credit card purchasing system can't really be corrected to prevent this - this is one of those cases where full disclosure of the information needed to perform the attack (your credit card info) cannot result in the attacked system being fixed to prevent that type of attack... the harm part of the equation is still present, of course...
many people don't consider the possibility that some vulnerabilities can't be fixed, however... they follow full disclosure blindly, with unwavering faith that it's going to improve the situation... they don't do that in my credit card example above, of course, because in reality everyone follows selective full disclosure without even realizing it... it seems to just be computer related vulnerabilities that people assume can always be corrected, which shows a regrettable but understandable ignorance of the finer points of computer science... serious computer security researchers should know better though...
full disclosure is a good thing, when applied appropriately - but just as free speech has limits on when and where it's appropriate (ex. don't yell fire in a crowded theatre), full disclosure has limits too... that's why i'm selective in my practice and support of full disclosure...
Tags:
disclosure,
full disclosure
malware and disclosure
some time ago i made the argument that it was wrong and irresponsible to publicly disclose viral materials or to share them in any lesser capacity with those you don't know and trust... the argument was essentially that making those materials more widely available increases everyone's risk of being exposed to viral materials while failing to hasten the closure of the window of exposure because the window of exposure for something inherent to the general purpose computing platform cannot be closed... basically, since the problem cannot be fixed, telling people how to exploit it and giving them the tools to do so has very significant dangers and no real benefits....
the question then arises - do my arguments against virus disclosure apply to other forms of malware?
while we don't (as far as i know) have the benefit of mathematical proofs that other forms of malware are just as inherent to general purpose computers, there is strong reason to believe they are and that the arguments against virus disclosure are just as applicable to other forms of malware...
take, for example, trojan horse programs... fundamentally they just do undesirable things that violate user expectations... since it is not possible for a computer to examine any arbitrary program and determine all its functions (thanks to the halting problem) it is therefore not possible for a computer to guarantee that any arbitrary program functions in a way the user would expect or that the user is adequately informed of the program's actual function... further, since undesirable actions are only undesirable in certain contexts, and desire is beyond the computer's ability to accurately measure (we ourselves often don't know that something is undesirable until after we're exposed to it), a computer cannot prevent programs from taking undesirable actions... therefore the ability to support trojan horse programs is inherent to general purpose computers and is not 'fixable'... therefore the window of exposure for trojans, in general, cannot be closed...
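the flavour of that impossibility argument can be sketched as a toy program... this is a hypothetical illustration only (the is_malicious oracle and the contrarian program are made-up names, not anyone's actual detector) - it shows the classic diagonalization move: if a perfect detector existed, a program could consult it about itself and do the opposite...

```python
# toy illustration: if a perfect detector existed, a program could ask
# it about itself and do the opposite of whatever it predicts...
# (is_malicious and contrarian are hypothetical names for illustration)

def is_malicious(program) -> bool:
    """stand-in for a hypothetical perfect malware oracle."""
    return getattr(program, "flagged", False)

def contrarian():
    # act maliciously exactly when the oracle calls us benign
    if is_malicious(contrarian):
        return "benign behaviour"    # oracle said malicious -> acts benign
    return "malicious behaviour"     # oracle said benign -> acts malicious

# whichever verdict the oracle gives, contrarian makes it wrong:
contrarian.flagged = True
print(contrarian())   # prints "benign behaviour"
contrarian.flagged = False
print(contrarian())   # prints "malicious behaviour"
```

no matter what the oracle answers about contrarian, the answer is wrong - which is the shape of the proof that a perfect trojan detector can't exist...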
now the observant reader might have noticed that i classify most malware under the trojan category, so this result is fairly broad in its scope, but maybe individual subcategories of trojans can be 'fixed'...
let's see - any system that accepts input (which is basically any computer worth considering and certainly all general purpose computers) can support rogue programs accepting that input... throw in the ability to store data and you have instant support for some kinds of spyware... throw in the ability to communicate with other computers over any kind of network and you cover all the rest of spyware as well as all remote control software like RATs and botnets... if the computer can produce some kind of human usable output then you have inherent support for adware...
the recent results from microsoft concerning virtual machine 'rootkits' are particularly damning... if the machine's hardware layer is capable of running virtualization software (and i see no reason why any general purpose computer wouldn't be capable of this) then any protection mechanism built into an operating system can be subverted, any function (high or low level) that the operating system can perform can also be used for ill gain, and the operating system itself can be forced to lie about all of it to the user...
and none of these things represent mistakes... they are all parts of fundamental limitations and intractable problems, things that cannot be overcome under our current models of computation... the conventional wisdom is that creating and publishing these types of malware helps to fix the problems that allow these types of malware to exist but it is done without trying to identify what that underlying problem is or trying to figure out whether that underlying problem is one that can be solved... many people in the computer industry are apparently ignorant of the fact that some problems are theoretically unsolvable, and while many instances of malware do indeed utilize genuinely fixable and avoidable mistakes those mistakes can be publicly disclosed by themselves without making malware out of them...
so, since the problems that allow malware to exist cannot be fixed, and the problems that can be fixed can be publicized without making malware out of them, the sharing of malware materials with those you don't know you can trust cannot possibly help to advance the security goals of the population and quite obviously (by putting attack tools where criminals can get them) increases the risk of exposure for everyone and is therefore wrong-headed/unethical/irresponsible...
not all vulnerabilities were created equal - people shouldn't be surprised that some aren't as susceptible as others to the security process known as full disclosure...
Tags:
disclosure,
full disclosure,
malware
Sunday, April 16, 2006
all virus prevention methods fail
i blogged some time ago about how all anti-virus products fail, noting that no anti-virus product can ever have perfect detection and that even if one did part of the reason for the failure is often that the user has been careless and not kept the product up to date or used the product incorrectly or simply exposed themselves to too much risk...
the implication was that through judicious computing practices (often referred to as safe hex) one could drastically reduce the risk of computer virus infection... while this is true, there is a dangerous oversimplification that can be made - safe hex does not (can not) reduce the risk to zero... it's something i tried to touch on before but i'll be more straightforward about it now...
no virus prevention method is 100% effective... no combination of virus prevention methods is 100% effective... they all fail... you can follow safe hex to the letter, you can do all the right things and still get a computer virus... the only sure way to avoid a computer virus is to not have a computer, or if you get one don't turn it on...
this may sound like doom and gloom but it's a reality of security in general - preventing bad things from happening is an ideal we strive for but we can't achieve it all the time, that's just not a realistic expectation... even if we were perfect (which we aren't) and followed the best prevention strategies out there (which we often don't) some of us would still get hit...
what that means is that prevention is only part of how one must address the problem... the other part is planning for your preventative measures to fail - figuring out how to detect when a failure has happened (ideally using some kind of known-clean environment) and how to recover from that failure (everything from removing the offending item to restoring from backups)... any anti-virus (or anti-malware for that matter) strategy is incomplete if it doesn't include these elements...
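as a sketch of what 'detecting a preventative failure' can look like in practice, here's a minimal file-integrity check in python - record hashes of your files from a known-clean state, re-scan later, and flag anything that changed or appeared... the function names and approach are illustrative, not any particular product's design...

```python
# minimal sketch of detecting a preventative failure: hash every file
# from a known-clean state, then re-scan and flag changes...
# (function names and layout are illustrative, not a real product's design)
import hashlib
import os

def sha256_of(path):
    """hash one file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """build the known-clean baseline: path -> hash for every file."""
    return {os.path.join(dp, name): sha256_of(os.path.join(dp, name))
            for dp, _, names in os.walk(root) for name in names}

def compare(baseline, root):
    """return files that changed or appeared since the baseline."""
    current = snapshot(root)
    return sorted(p for p, h in current.items() if baseline.get(p) != h)
```

note that for this to mean anything the re-scan should ideally be run from a known-clean environment (like a boot disc), otherwise active malware can lie to the checker...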
safe hex, for all its benefits, does not deal with these things... safe hex is a set of best practices for prevention... detection of preventative failures is a much trickier proposition and recovery is as big (if not bigger) a topic on its own as safe hex... the oversimplification of 'just follow safe hex and you'll be fine' or 'you got hit so you need to learn safe hex' is dangerous because it gives the impression that there's no need to plan for failure and so people don't... people are lazy, they won't plan for failure if they don't think they need to and the oversimplification makes people believe they don't need to... in this way, sometimes safe hex advice can create a false sense of security and thus can be a kind of snake oil...
safe hex is still a good thing, though... it's just not the whole answer... prevention, detection of preventative failures, and recovery from failure - you need to keep all three in mind when developing your anti-virus strategy...
Tags:
anti-virus,
safe hex,
snake oil
Friday, April 14, 2006
anti-malware linking policy
linking policy? what's a linking policy?... well, it's basically the rules i follow (or should be following) when including links in my blog posts (or elsewhere)...
it occurred to me that i need to take a look at this question because i've approached the issue in a very ad hoc sort of way in the past and on occasion i've had to go back and remove links when i realized what i'd done...
why does that even matter? well it's like this - i'm an anti-malware sort of guy and so links that lead people to malware or otherwise make it easier for people to find malware go against my principles... i'm really not big on helping people find malware online, i'm more interested in helping people avoid malware, so links to malware are kinda counter-productive... also such links raise the search profile of the malware and make it easier to find with search engines like google... avoiding this kind of link really is not a problem, in fact it's quite easy to not link directly to the malware...
on the other hand, sometimes a site has information on it that i really want to reference/cite in one of my posts, but the site also hosts malware... this is a much harder case to figure out the right solution... i could just be hardcore about my principles and say 'nothing doing' and maybe copy the information verbatim with author attribution...
how about if a site with interesting info only links directly to malware that's hosted elsewhere?.. you see where this is going, the shades of gray are a big headache... search engines' caches of sites hosting malware would fall under this category, am i going to avoid using google's cache entirely? or simply not use google's cache as a way to get around my principles when they're inconvenient...
how about if a site with interesting info only links to other sites that have malware but not to the malware directly? even if the site is also anti-malware and just doesn't follow as strict a linking policy as me? that would be verging on the absurd...
after much deliberation i've come up with the following:
- i will not link directly to malware
- i will not link to pages (or caches of those pages) with links to malware
- i will not link to domains (or caches of those domains) whose sole purpose is to distribute malware (like commercial malware vendors) or which make malware distribution a major part of their purpose (so vx sites are out, but most sans.org pages are ok)
- unless the page i'm interested in linking to has no links that could lead back to malware (and just to be on the safe side sticking with no links at all is generally best for this exception)
- i will correct any non-compliant linking when i find out about it
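a policy like this can even be checked mechanically before publishing... the sketch below scans a draft post's links against a personal blocklist of malware-hosting domains - the blocklist entries and the draft html are made-up examples, not real sites...

```python
# sketch of mechanically checking a draft post against a linking policy:
# flag any href whose domain is on a personal blocklist...
# (the blocklist entries and draft html below are made-up examples)
import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"evil-vx-archive.example", "malware-vendor.example"}

def find_policy_violations(html):
    """return hrefs in the html whose domain is on the blocklist."""
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', html)
    return [h for h in hrefs
            if urlparse(h).netloc.lower() in BLOCKED_DOMAINS]

draft = ('<a href="http://evil-vx-archive.example/x.zip">sample</a> '
         '<a href="http://sans.org/diary">diary</a>')
print(find_policy_violations(draft))  # prints ['http://evil-vx-archive.example/x.zip']
```

a real check would also want to follow the pages i link to and inspect their outgoing links (for the second rule above), but the idea is the same...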
Tags:
administrivia,
anti-malware,
ethics
Wednesday, April 12, 2006
what is a popup?
a popup is a (usually small) window that opens (pops up) overtop of other windows...
popups are generally associated with advertising however windows that open up overtop of other windows are commonplace in other contexts such as error messages, notification messages, etc...
there are a number of different popup types:
windows messaging service popups are popups that come in through the windows messaging service (not to be confused with the msn messenger client)... these are simple looking, text-only dialogs... these can be completely disabled by disabling the windows messaging service or blocking (or simply not forwarding) ports 135-139, and 445 at the firewall (or not allowing the windows messaging service to act as a server if you're using a software firewall)...
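whether those ports are really blocked can be spot-checked from another machine with a simple tcp connection test... this is a rough check only - ports 137 and 138 are normally udp, which this sketch doesn't cover, and the target address below is a documentation placeholder you'd substitute with the machine you want to test...

```python
# rough spot-check of whether the messenger-related ports are blocked:
# try a tcp connection to each one from another machine...
# (tcp only - 137/138 are normally udp - and the host is a placeholder)
import socket

def is_open(host, port, timeout=1.0):
    """return True if a tcp connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.1 is a documentation address - substitute the machine to test
for port in (135, 136, 137, 138, 139, 445):
    print(port, "reachable" if is_open("192.0.2.1", port) else "blocked/closed")
```

if any of these come back reachable from outside your network, the firewall rules aren't doing what you think they are...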
web browser content popups are additional (and sometimes frameless) browser windows that are launched by your browser running scripts or other active content like flash or activex on web pages that you visit... modern browsers like firefox have built-in popup blocking capability meant to deal with these types of popups but depending on the sophistication of the popup technique the built-in measures may not work against all of them...
3rd party software popups are popups generated by some software installed on your system (often some form of adware)... often the popups will be browser windows in order to show web-based ads with all the graphics and multimedia we've seen time and again on the web... these can be avoided by not installing the 3rd party software in the first place, but if you do install then to stop the popups you'll have to remove the 3rd party software and in the case of 3rd party advertising software that's not always an easy task and often requires specialized removal software...
notification message popups are small, often text-only windows that pop up from the lower right hand corner (just above the system tray)... these are often to notify the user of some event, like a new email arriving or an instant messaging contact coming online... they can also be used to notify the user that there's a new advertisement for him/her to look at... this is a second, somewhat less obtrusive form of 3rd party software popup and must be dealt with in the same way...
the problem with popups is that they take up part of the screen, covering part or all of the work area of other windows/applications (even the notification message popups cover part of the screen when they appear)... this has the potential to interfere with the work the user is doing by covering an area s/he was using... when the popup is about something important, some option the user has to choose to continue a process they're involved in, some error condition, some event the user is actually interested in, etc. then the fact that it's covering up part of the user's work area is accepted - however when it's for advertising, the covering up of (even a small amount of) work area is almost universally not accepted because advertisements are just not important enough to justify interrupting what the user would have otherwise been doing...
back to index
popups are generally associated with advertising, however windows that open up on top of other windows are commonplace in other contexts such as error messages, notification messages, etc...
there are a number of different popup types:
windows messaging service popups are popups that come in through the windows messaging service (not to be confused with the msn messenger client)... these are simple-looking, text-only dialogs... these can be completely disabled by disabling the windows messaging service, or by blocking (or simply not forwarding) ports 135-139 and 445 at the firewall (or by not allowing the windows messaging service to act as a server if you're using a software firewall)...
web browser content popups are additional (and sometimes frameless) browser windows that are launched by your browser running scripts or other active content like flash or activex on web pages that you visit... modern browsers like firefox have built-in popup blocking capability meant to deal with these types of popups, but depending on their sophistication the built-in measures may not work against all of them...
3rd party software popups are popups generated by some software installed on your system (often some form of adware)... often the popups will be browser windows in order to show web-based ads with all the graphics and multimedia we've seen time and again on the web... these can be avoided by not installing the 3rd party software in the first place, but if you do install it then to stop the popups you'll have to remove the 3rd party software, and in the case of 3rd party advertising software that's not always an easy task and often requires specialized removal software...
notification message popups are small, often text-only windows that pop up from the lower right hand corner (just above the system tray)... these are often to notify the user of some event, like a new email arriving or an instant messaging contact coming online... they can also be used to notify the user that there's a new advertisement for him/her to look at... this is a second, somewhat less obtrusive form of 3rd party software popup and must be dealt with in the same way...
the problem with popups is that they take up part of the screen, covering part or all of the work area of other windows/applications (even the notification message popups cover part of the screen when they appear)... this has the potential to interfere with the work the user is doing by covering an area s/he was using... when the popup is about something important - some option the user has to choose to continue a process they're involved in, some error condition, some event the user is actually interested in, etc. - then the fact that it's covering up part of the user's work area is accepted; however, when it's for advertising, the covering up of (even a small amount of) work area is almost universally not accepted because advertisements are just not important enough to justify interrupting what the user would have otherwise been doing...
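going back to the windows messaging service popups above: if you want a rough sense of whether ports 135-139 and 445 are actually reachable on a machine, a simple TCP probe can be sketched like this... this is purely illustrative (the function names are mine), and note that the real messaging service also uses UDP on 137/138, which a TCP-only check won't see:

```python
import socket

# ports associated with the windows messaging service (TCP side only -
# a simplification; the service also listens on UDP 137/138)
MESSENGER_PORTS = [135, 136, 137, 138, 139, 445]

def port_is_open(host, port, timeout=0.5):
    """return True if a TCP connection to host:port succeeds within the timeout"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

def exposed_messenger_ports(host):
    """list which messaging-service-related ports accept TCP connections"""
    return [p for p in MESSENGER_PORTS if port_is_open(host, p)]
```

if the firewall is doing its job, probing from outside the network should come back with an empty list...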
back to index
Tags:
adware,
definition,
malware,
popup
what is false authority syndrome
false authority syndrome is where someone thinks they know more about a subject than they actually do, or where people think the person knows more about a subject than that person actually does, simply because the person has expertise in a related field...
the most common example of this that i know of is where a person with some computer expertise (say a tech support specialist, or a computer scientist, or software developer) believes they have advanced knowledge of the computer virus field in spite of being largely ignorant of it...
back to index
Tuesday, April 11, 2006
the case of the non-expert expert: a rebuttal to "the case of the non-viral virus"
how many people read the newsforge article titled "the case of the non-viral virus" by joe barr? if you did then you've just witnessed false authority syndrome in action...
let's deconstruct it, shall we?
"Thus, once and for all, there is an end to the notion that Linux is somehow immune to the viral infections that plague the Windows world."
it is not the first linux virus, not by a long shot... linux viruses have been around for years now and it's really quite absurd that anyone would still be holding on to the notion that linux is immune...
"One minor thing is that the alleged virus -- called Virus.Linux.Bi.a -- being trumpeted far and wide by Kaspersky Lab is not really a virus, but rather 'proof of concept' code, designed to show that such a virus could be written."
there is nothing that says something cannot be both a virus and a proof of concept... the fact is that the first virus for any platform is a proof of concept by default... the first virus that performs function X is a proof of concept... it's amazing that someone could pass themselves off as an authority on a subject and be so clueless about the terminology...
for the record - Virus.Linux.Bi.a IS a real virus...
"A second caveat is that for it to work on Linux, a user has to download the program and then execute it, and even then, it can only 'infect' files in the same directory the program is in."
the same limitations are true for quite a few DOS and windows viruses... these limitations do not stop it from being a virus...
"Exactly how the program gets write permissions even in that directory is not explained."
unless linux has changed considerably since i last used it, individual programs do not require their own permissions... some special cases have special user accounts created just for running them, but by far most programs do not... even so, it is never the program that has permissions, but rather it is the user (whether the user is a real person or not), and in this case the viral program runs in the context of (and has the permissions of) the user who executes it...
"And finally, it's not a virus at all. It can't replicate itself, which is one thing that makes a piece of malware a virus."
the author seems to be talking out of his ass here... while it's true that self-replication is a requirement, the program in question DOES self-replicate... it creates copies of itself that it inserts into other programs - i'm not sure what the author of that article thinks self-replication is, but making copies of itself is the very definition of self-replication...
for completeness' sake, here is the original weblog entry from the kaspersky folks about this cross-platform virus: http://www.viruslist.com/en/weblog?weblogid=183651915
you can plainly see from the description that the program in question both self-replicates and infects other programs - a virus by any reasonable definition and even by the wikipedia one the author chose to cite...
Tags:
cross platform,
false authority syndrome,
joe barr,
linux,
malware,
virus,
windows
Sunday, April 09, 2006
the difference between security and risk
the concepts of security and risk are often used interchangeably, in part because risk is often thought of as a security-related concept, but they are in fact very different things and using one to mean the other often leads to confusion and ultimately dilution of the terms themselves...
to start off with, security is a property of a system (and i mean system in the most general sense, not just a computer system or operating system)... some people will tell you security is a process and i myself have used that phrase, but that phrase is actually meant to convey the fact that achieving and maintaining a reasonable level of security is a (never ending) process...
security can be thought of as the relative absence of serious vulnerabilities... the fewer vulnerabilities there are and the less serious they are, the more secure the system is... now it's important to realize that we don't and can't know all the vulnerabilities of any system, so security is fundamentally unmeasurable and unquantifiable... instead we have to estimate, we have to make our best guess based on the number and severity of known vulnerabilities of the system, and by severity i mean the extent to which the value of the system is lost if the vulnerability in question is exploited... the more insecurities there are and/or the more severe they are, the less secure the system is believed to be... of course, as it is ultimately just a guess, arguments over which browser or which operating system is more secure are about as meaningful as arguing over which flavour of ice cream is best...
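to make that estimation process concrete, here's a toy sketch of my own devising (not a real metric - as noted above, security is unmeasurable) that folds a list of known-vulnerability severities, each in [0, 1], into a single relative score:

```python
def security_estimate(severities):
    """Toy relative security score in (0, 1].

    severities: known-vulnerability severities, each in [0, 1], where
    severity is read as the fraction of the system's value lost if that
    vulnerability is exploited. More vulnerabilities and/or more severe
    ones push the score down; no known vulnerabilities gives 1.0.
    The 0.5 discount factor is arbitrary - this only formalizes a guess.
    """
    score = 1.0
    for s in severities:
        score *= (1.0 - 0.5 * s)  # each known finding discounts the estimate
    return score
```

the only property that matters here is the ordering: adding a vulnerability, or making one more severe, always lowers the estimate...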
risk, on the other hand, is the chance of a bad thing (or any one of a class of bad things) happening - in this context, the chance of a vulnerability getting exploited... unlike security, which depends solely on the system, risk depends on the time and effort available to mount an attack... time comes from the window of exposure - the longer that window is open, the more time the attacker community has to find a workable exploit and to find YOU, and so the greater the chance they'll do both... effort comes from the attackers themselves... the risk of a security breach is directly affected by the value of that breach, not to the victim, but to the attacker community... even a low impact vulnerability can have a high risk of being found and exploited if it's a desirable target... the more desirable the breach, the more people there will be looking at it, the more man-hours worth of effort will go into trying to find a successful attack, the higher the chance of finding a successful attack while the target is still interesting, and the wider the attack is likely to be deployed... what does the attacker community value? they want the biggest bang (figuratively) for the buck they can get - the more popular the system, the more overall impact they can have with a given unit of effort...
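the time-and-effort view of risk can also be sketched as a toy probability model (again, my own made-up illustration with arbitrary parameters, not anything rigorous):

```python
def breach_risk(window_days, attacker_interest, base_daily_rate=0.001):
    """Toy model of the chance of at least one successful exploit.

    window_days: how long the window of exposure stays open.
    attacker_interest: multiplier for how desirable the target is
    (more interest = more man-hours of attack effort per day).
    base_daily_rate is an arbitrary made-up constant.
    Risk rises with both factors but can never exceed 1.
    """
    per_day = min(1.0, base_daily_rate * attacker_interest)
    return 1.0 - (1.0 - per_day) ** window_days
```

the point of the sketch is just the shape: risk climbs both with how long the window of exposure stays open and with how desirable the target is to the attacker community...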
now, while security and risk are quite different, they do tend to interact with one another... low risk, for example, can obscure the presence of insecurities from the public... the lower the risk, the less likely a real exploit will ever see the light of day, regardless of the severity of any vulnerabilities that may be present... sometimes obscurity is used to try to effect lower risk - this is where the myth of security through obscurity comes from... clearly obscurity can't do anything about the presence of vulnerabilities except make them less likely to be found... that's not to say it can actually prevent the attacker community from discovering and exploiting the vulnerabilities, because it can't... obscurity is just a way of managing risk; however, correcting the vulnerability (when possible) is by far a more effective means of managing that risk and should be the preferred method... unfortunately some vulnerabilities are not correctable, either by virtue of being inherent in the fundamental building blocks of the system (such as virus infectability being inherent to the general purpose computing platform) or the result of some social rather than technical problem, so sometimes alternative risk management techniques like obscurity are reasonable... i know some people don't want to hear that, some people worship at the altar of full-disclosure, but to them all i can say is that a foolish consistency is the hobgoblin of little minds... if the risk can't be managed by eliminating the vulnerability then another method has to be used...
high risk obviously can be very good at revealing the presence of insecurities (usually to the detriment of the victim) - as more new vulnerabilities are found in the system, the total number of known vulnerabilities goes up and the security estimate for that system goes down... the security estimate can go back up when those vulnerabilities are corrected, but it doesn't go all the way back, since each new vulnerability reflects more and more poorly on the underlying quality of the system... it isn't until a highly attractive target endures long periods without a breach that significant improvements in the security estimate are made... high risk can also magnify the public perception of insecurities, even when they're low severity ones... because of the confusion between risk and security, a higher rate and wider deployment of attacks makes the severity of the vulnerability seem worse, even though the number of attacks isn't really related to the number of vulnerabilities or their severity... this translates into lower confidence in the security of the system; however, our security estimates are not based on public opinion and therefore may not be significantly changed... take, for example, the LSASS vulnerability in windows - with each new piece of malware written to exploit it the security of windows remains the same and the severity of the vulnerability remains unchanged, but the chance of a windows system that exposes that vulnerability to the internet getting breached goes up...
low security can increase the risk of a breach dramatically when combined with popularity... the attacker community is made up of humans after all, and humans are essentially lazy creatures, so a widely deployed (popular) highly insecure system represents exactly the kind of low-hanging fruit that is most attractive to them... the fact that a system is highly insecure often means there are a wide variety of vulnerabilities for the attacker to choose from, so it's easier to find one that suits his/her needs, and the fact that the system is popular makes it more worth the effort to look for one... when not combined with popularity, low security can languish in obscurity never knowing a successful exploit... low security always increases overall risk to some extent, but how much is determined by other things...
as you'd expect, high security lowers the overall risk of a breach... as vulnerabilities are eliminated, the total number of remaining vulnerabilities goes down and so do the remaining avenues of attack... the fewer of those there are, the harder it is to find one and thus the lower the chances that one will be found... the risk of finding any particular remaining vulnerability doesn't necessarily change that much, of course, but the chance of finding one that is no longer there obviously drops to zero... it's important to note that this is only true for instances of the system that are up-to-date - after a vulnerability has been found and publicized through the availability of a security update, it becomes quite easy for the attacker community to use it against those instances of a system which haven't yet eliminated it through updating/upgrading...
while it may seem at first glance that there's a relationship between a high number of attacks and low security, that's a correlation rather than a causal relationship... the fact that low security but unpopular (and therefore low risk) systems generally receive few attacks indicates that attacks are related more closely to high risk than to low security...
so now that you know all this, what can you do with it? well for one you can better evaluate claims of high/low security (we'll never really know which operating system or which browser is more secure)... you should also be able to recognize which issues actually pertain to risk and why/how they're important to you and/or your organization... clearly you want high security, you want there to be as few potential avenues for attack as possible, but you also want low risk - there can be no perfect security, there will always be a few avenues for attack and having a low number of them doesn't help much if those avenues are being used frequently...
Saturday, April 08, 2006
attention visitors from sunbeltblog - please read this
i'm getting an influx of traffic from sunbeltblog readers over the adware comment thread to end all adware comment threads that i wrote about earlier...
i would like to ask you all, before you send me hate-mail for being an 'adware apologist' (which i'm not), please take some courses in abstract reasoning and critical analysis - because once you actually have a handle on those skills you will probably understand what i was talking about in that comment thread...
the synopsis goes like so:
an anti-spyware website criticized an anti-spyware application (labelled it rogue, actually) in part because it was bundled with adware...
the question is, does installing program B make program A bad? the answer is yes if:
1. program A (an anti-spyware app) was supposed to stop program B (an adware client)
or
2. program B can be shown to be bad itself
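the two-branch rule above is trivial to write down explicitly (the names here are mine, purely illustrative):

```python
def bundling_makes_a_bad(a_was_supposed_to_stop_b, b_is_bad_itself):
    """Installing program B makes program A bad only if A (an anti-spyware
    app) was supposed to stop B (an adware client), or B can be shown to
    be bad in its own right. Neither branch was established in this case."""
    return a_was_supposed_to_stop_b or b_is_bad_itself
```

since the anti-spyware website established neither condition, neither argument against program A goes through on the bundling alone...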
while many adware clients are also spyware of one form or another, the anti-spyware website in question never established that the adware in question was also spyware so number 1 above is out... that just leaves us with number 2, but again the anti-spyware website in question failed to show that program B was bad, opting instead to simply indicate who created it...
being created by bad people (in this case very bad people) doesn't make software bad (john mcafee became a pariah in the av industry for his practices, but his software was still good)... furthermore, just because most of the software made by those bad people is bad doesn't mean this one in particular is bad... you can't prove they've never made good software (because you can't prove a negative), but you can prove that this one was bad if it really was - either indicate how the program was bad or indicate that it matches a program that was previously analyzed and determined to be bad...
does just being adware make it bad? no it does not... not all adware installs in secret (and in this case the adware's presence was fully disclosed), not all adware has pop-ups, not all adware is also spyware... none of those really bad things most people associate with adware are actually inherent to adware, they're simply present in the vast majority of cases...
was this particular instance of adware bad? probably... most (possibly all) instances of adware produced by its creators were spyware, and those creators are now paying the price for it...
was the anti-spyware app it was bundled with really rogue? definitely... aside from bundling probable spyware with an anti-spyware app, it used deceptive practices to get users to install it and there is serious question as to whether it actually detects and/or removes any spyware...
unfortunately for me, i use a functional definition for adware that is unencumbered by emotional baggage related to its association with other types of malware, and so i create controversy and get hate-mail... but using an adware definition that is abstracted from other forms of malware doesn't make me an adware apologist, it makes me a computer scientist, and anyone who doesn't like it can stick it where the sun don't shine...
Tags:
adware,
malware,
spyware,
sunbeltblog
cellphone spyware part 2 1/2
ok so there's apparently some confusion over my previous post about neo-call... i'm going to try and make this as simple as possible, both for the benefit of the confused and as an object lesson for everyone else...
softWARE that can SPY on someone is SPYWARE... this shouldn't be rocket science to anyone... flexispy and neo-call's spyphone both enable spying and so are both spyware...
software that does something bad (like, for example, spying on people) without disclosing that fact is a trojan horse program... neither flexispy nor neo-call's spyphone lets the person being spied on know that it's spying on them; they both do something bad without disclosing the fact and thus are both trojans...
the fact that neo-call's spyphone is several times more expensive than flexispy doesn't stop it from being spyware or a trojan, it just makes it an expensive spyware trojan...
the fact that the installer generated by neo-call, when you give them the code for the target phone, can only be used on that phone doesn't stop it from being a spyware trojan, it just makes it a more limited spyware trojan...
the fact that flexispy can be installed on many phones does not make it a virus... many programs can be installed on many phones, that has nothing to do with being a virus and being a virus has nothing to do with why f-secure added detection for flexispy...
the fact that flexispy has a hidden menu is not what makes it bad and the absence of a similar menu doesn't make neo-call's spyphone good... flexispy is bad because it hides its menu and every other indication that it's installed (in other words, it doesn't disclose its spying nature) - neo-call's spyphone has no menu to hide but still hides all the other indications that it's installed and therefore is equally bad...
in short - neo-call's spyphone is just as bad as flexispy, it is just as much a spyware trojan as flexispy... its limitations (the high cost per installation) make it a much smaller risk to the public, however, and that is why anti-virus vendors don't take it as seriously as they do flexispy...
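the argument above boils down to two independent behavioural tests, and neither price nor per-phone installers enter into it... here's a minimal sketch of that framework in python - the boolean flags are hypothetical stand-ins for behaviour observed in the software, not anything taken from either vendor:

```python
# 'spyware' and 'trojan' are independent behavioural classifications:
# spyware = software that enables spying; trojan = software that does
# something bad without disclosing that fact.  a program can be both.

def classify(enables_spying, does_something_bad, discloses):
    """Return the set of malware classes the observed behaviour implies."""
    classes = set()
    if enables_spying:
        classes.add("spyware")
    if does_something_bad and not discloses:
        classes.add("trojan")
    return classes

# both products spy (which is the bad thing they do) and neither
# discloses it to the person being spied on, so both earn both labels
flexispy = classify(enables_spying=True, does_something_bad=True, discloses=False)
spyphone = classify(enables_spying=True, does_something_bad=True, discloses=False)
print(flexispy == spyphone == {"spyware", "trojan"})  # prints True
```

note that cost, install limits, and hidden menus never appear as inputs - that's the whole point...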
Friday, April 07, 2006
rebuttal to 'how the anti-virus industry is turning a white hat black'
read the original article here... i'm amazed at the shoddy analysis performed by the author...
first of all, just because someone calls themselves a white hat doesn't make it so... the guy put a modified version of a so-called 'rootkit' (we won't go there this time) on the web where anyone can get it... modified in such a way that existing detection routines would fail to find it... how the hell does anyone mistake that for something a white hat would do?...
second, just because he put it on the web doesn't mean it's going to find its way to the anti-virus developers or to anyone else for that matter... the idea that important people will sit up and take notice because you put something on your website is the height of egocentricity (i'm lucky if i get 10 hits a day here, i know i'm not reaching many people)...
third, it was a 'rootkit' not a virus... why do people insist on calling everything bad a virus?
fourth, the reality is that at most the anti-virus companies were given 1 month and 5 days notice (march 1 to april 5)... now that's still a pretty long time for so many companies to fail to add detection but it's not the 3 months the author was claiming...
should anti-virus companies add detection for this non-virus? probably, since it's now more a matter of defending against malware in general than viruses in particular... is it a high priority? probably not... at least not yet... non-replicating malware rarely becomes as big a threat as the self-replicating variety... i'm sure it'll get added eventually but i'm not too concerned if it has to prove itself as a threat first...
Tags:
anti-virus,
blackhat,
malware,
rootkit,
whitehat
Wednesday, April 05, 2006
rob slade on rootkits
read it here... the one and only rob slade, doting grandpa of a gradually increasing number of people... wow, that takes me back... i remember when -
mr. peabody: please step out of the wayback machine
ok, ok... well, it's nice to see i'm not the only one who thinks the current redefinition of 'rootkit' is a step in the wrong direction... see, all that ranting i've been doing about it isn't just my own personal imbalance...
anyways, that there is a blog that includes contributions from the rob slade so it's definitely going on my blogroll...
Tags:
malware,
rob slade,
rootkit,
terminology misuse
Tuesday, April 04, 2006
why microsoft hates 3rd party patches
it's not exactly news anymore, but it's happened again - a bug has cropped up in microsoft's software that is so serious that outside parties have been forced to create patches for it due to microsoft's glacial pace...
microsoft doesn't like this, it doesn't like this one bit... it makes them look like chumps... here they are with full access to the source code, tens of thousands of developers, and billions of dollars to throw around and some guy at some security firm somewhere makes a patch faster than they can? they do not want these kinds of comparisons to happen, they do not want people to see what kind of performance the microsoft security machine is really giving them...
one of the things they say is that only microsoft patches are guaranteed to work with everything... they have all these tests they need to perform to make sure they don't break anything, and comparing their process to what 3rd party patchers do just isn't fair... problem is, not only are microsoft's patches NOT guaranteed (there have been multiple instances where their patches inadvertently broke things), the excuse also underlines the fact that compatibility rather than security is their first priority... and this comes from a program manager at microsoft's security response center...
microsoft's security efforts are a failure, a farce, and the public has lost confidence in them... and not just a little bit of confidence either; the WMF incident back in december and now the new IE vulnerability just a few months later, both with wide adoption of 3rd party patches, indicate that this is a watershed event... confidence in microsoft's security efforts has crossed a threshold...
the worst thing, for microsoft at least, is that they're losing control over a captive market... 3rd party patches open the door for competition for their paid support program(s), as well as eat away at the leverage they have for getting people to upgrade by ending support for older versions of their products... ending support has a lot less meaning when other people are willing to pick up the slack...
microsoft talks a good game about security (i saw one of them talk about it at RSA2002, and they do go to great lengths) but they seriously underestimate the significance of risk when they leave the window of exposure open as long as they do while trying to make the perfect patch... when there's a serious exploit in the wild there should be no question of whether or not to release a patch out of cycle - it should be released as soon as it's made and then refined and updated after the fact... so long as a seasoned expert at writing secure code is making the patch (rather than some green code monkey they're trying to train), any problems that might arise from a rushed patch are almost certainly going to be less severe than the one it's supposed to fix... further, any new vulnerabilities introduced by a rushed patch would require additional time and effort from the attacker community to exploit, so as long as development of the patch continues after the initial release, updates to the patch can be pushed out if and when they're needed...
Tags:
3rd party patch,
microsoft,
patch,
risk,
security
microsoft says escape from wet paper bag becoming impossible
holy crap, have you heard the news? microsoft has proclaimed that it's becoming impossible to recover from malware...
yeah, of course you've heard the news... everyone and their grandmother seems to think it's a big deal that microsoft is saying security is too hard...
microsoft is basically citing windows rootkits, advanced spyware, and anything else that might hook the kernel as the reason why recovery from malware is going to supposedly become impossible... in the malware world those things all boil down to stealth techniques, and as i've already said - we solved the stealth problem over a decade ago... microsoft, being deaf, blind, and monumentally stupid, made that solution basically unusable by foisting NTFS on us without giving us a way to boot from a known clean removable medium and parse NTFS partitions (before NTFS we could just boot from a write protected, bootable floppy disk and access the drive from DOS without triggering any malware self-defense mechanisms and without allowing the malware's stealth capabilities to be activated)...
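the clean-boot idea amounts to a cross-view comparison: take a listing of the disk from the live (possibly lying) system, take another from a known clean boot where the stealth code never runs, and anything that shows up only in the offline view was being hidden... here's a minimal sketch of that comparison in python - the listings are made-up stand-ins for real filesystem walks, and the file names are purely hypothetical:

```python
# cross-view stealth detection: stealth malware can only lie to you
# while it's running, so a listing taken from a clean boot (stealth
# inactive) will include anything the live system's listing omits.

def cross_view_diff(live_view, offline_view):
    """Return paths visible from the clean boot but hidden on the live system."""
    return sorted(set(offline_view) - set(live_view))

# hypothetical listings for illustration only - on a real machine these
# would come from walking the same volume in each environment
live = ["C:/windows/explorer.exe", "C:/windows/system32/kernel32.dll"]
offline = live + ["C:/windows/system32/drivers/stealth.sys"]

print(cross_view_diff(live, offline))
# prints ['C:/windows/system32/drivers/stealth.sys']
```

the technique itself is old news - the hard part, and the part microsoft made difficult with NTFS, is getting that trustworthy offline view in the first place...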
the really weird thing is that microsoft does have the technology... it's called a PE (Preinstallation Environment) disk and not only does microsoft not want to give it away for free or bundle it with the operating system to aid in maintenance and disaster recovery, they actually got on the case of the maker(s) of the BartPE disk (a free alternative to microsoft's own PE disk) a couple years ago, forcing the product temporarily offline...
lots of folks are taking microsoft's proclamation seriously - don't buy into their cop-out... the handful of years they've spent trying to catch up in the security field are apparently just not enough for them to realize they have the solution in their own grubby little hands... the malware problem is not as bad as those morons in redmond make it out to be...
Tags:
bartpe,
clean boot,
malware,
microsoft,
rootkit,
stealth,
stealthkit,
winpe
cellphone spyware part deux
so my last posting about cellphone spyware (flexispy) apparently was interesting enough to someone for them to send me my very first non-spam feedback... specifically, someone claiming to be affiliated with neo-call sent me a nice little email with their views on the flexispy story...
this is actually pretty amazing, because apparently the neo-call folks (being industry leaders) developed superior cellphone spying technology months ago and nobody paid any attention to them or talked about them (awwwww)... the email goes on to say that it's very interesting that f-secure picked up on the flexispy story a day after the software was released - and i agree... if it really was only a day after the software was released it would appear that vervata decided that getting examined by anti-virus vendors would make for a good publicity stunt, and i suppose they may be right... on the other hand, f-secure now detects their product as a spyware trojan so i guess that kinda backfired on them since that detection is going to limit the marketability of their product (who wants to pay for spyware that an anti-virus product can already detect?)...
finally, the email finished off with a lament about how neo-call is an industry leader (in the field of cellphone spyware, apparently) and how it's a shame that nobody is paying any attention to them or talking about them at all... clearly they're jealous of all the special lovin' the boys and girls at f-secure have been giving flexispy and they want in on some of that action - and it appears they may be deserving... their product forwards sms messages, lo-jacks the phone through GSM localization, and i gather there's even a bonus add-on for listening in on calls - definitely sounds like spyware to me... oh, and get this, the FAQ clearly states that you can't tell the product is installed by examining the phone - yup, it fails to disclose its true nature just like flexispy, isn't that wonderfully up-front of them to admit to its trojan nature?
mikko and the rest of the gang at f-secure - these guys are obviously looking for some of your special attention, so go ahead and hook 'em up...
[edit april 7 2006: i don't know what i was thinking posting links to a malware distributor, i guess i must have been laughing too hard to realize what i was doing]