Thursday, May 24, 2007

protecting data in use?

although lonervamp's blog entry on protecting data in use is not my first exposure to this concept, it does seem to be turning into a nucleation point for a discussion on the topic as both mike rothman and anton chuvakin have posted reactions to it...

for context's sake, usually when we're talking about protecting data we talk about protecting data at rest (which is when the data is residing in a data store/data repository/database) and protecting data in motion (which is when the data is traveling between the data store and an agent that consumes the data, be it a client application or a user of that client application, or potentially data traveling between 2 data stores)... when we're talking about protecting data in use then, we must necessarily be talking about data that has already reached the data consumer...

also, when we're talking about protecting data, we are at the very least talking about maintaining those properties of the data that are described in the CIA triad... but what specifically do we mean in the case of data in use? are we concerned about the data's availability to the data consumer once it's reached the data consumer? i would say no; in fact i would say that the data's availability at that point is not just a certainty, it's a tautology...

how about integrity then? surely we're interested in maintaining the integrity of the data once it's reached the data consumer? again i would say no... it is expected that the data consumer will transform the data in arbitrary ways (making integrity impossible to enforce) as part of the synthesis of new knowledge... we're interested in preventing the data consumer from writing corrupted data back into the data store, of course, but that isn't protecting the integrity of data in use, it's protecting the integrity of data at rest...

that leaves confidentiality... how many of you reading this think you can safeguard the confidentiality of data once it gets into someone else's hands?... we usually say that the genie is out of the bottle at that point, with the understanding that you can't put the genie back into the bottle... the technical preventative paradigm for protecting the confidentiality of data is access restriction, however you cannot restrict access to data in use as it is data to which access has already been granted... the notion that you can strictly control how the data is used by exclusively tying it to proprietary software and/or hardware is the very conceit at the root of DRM...

technical preventative controls that protect the confidentiality of data in use are impossible, but let me soften that a bit... i've heard it said that you shouldn't say "no", rather you should say "yes, but . . . ", and by being specific about technical preventative controls i have left myself room for a "yes, but . . ."... it is possible to protect the confidentiality of data in use by way of technical detective controls (e.g. audit logging or perhaps some clever user-specific watermarking) and/or administrative preventative controls (e.g. confidentiality clauses, termination policies, non-disclosure agreements, etc)...
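to make the detective control idea a little more concrete, here's a minimal sketch in python (the function names, fields, and user id are made up for illustration - this isn't any particular product's api) of what per-release audit logging with a crude user-specific marker might look like... the point is not to prevent the release, only to leave a trail that ties each copy of the data back to the person it was handed to...

```python
import logging
from datetime import datetime, timezone

# minimal sketch of a detective control: every time a record leaves the
# data store for a consumer, log who got it and when, and stamp the copy
# with a per-user marker so a leaked copy can be traced back...
# (all names and fields here are purely illustrative)

audit_log = logging.getLogger("data_access_audit")
logging.basicConfig(level=logging.INFO)

def release_record(record: dict, user_id: str) -> dict:
    """hand a copy of the record to a data consumer, leaving an audit trail."""
    audit_log.info(
        "record %s released to %s at %s",
        record.get("id"), user_id, datetime.now(timezone.utc).isoformat(),
    )
    # a crude user-specific "watermark": tag the released copy so that if it
    # turns up somewhere it shouldn't, we know whose copy it was...
    tagged = dict(record)
    tagged["_released_to"] = user_id
    return tagged

# usage: the consumer still gets the data (prevention is not the goal here),
# but the release itself is now an auditable event
copy = release_record({"id": 42, "ccn": "4111-XXXX-XXXX-1111"}, user_id="kwismer")
```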

people are generally not satisfied with this, of course, because they naively expect technology to be able to solve their problems completely (despite the fact that technology is notorious for only ever providing parts of solutions)... it's also unsatisfying because detection takes work and administrative controls have less of an air of perfection about them (not that any control is perfect, of course)... regardless of their satisfaction, however, data in use is fundamentally and irrevocably data that can be misused because it is data that is in someone else's hands...

12 comments:

Unknown said...

Really enjoyed your post here. I am with you, I really want to say this is ultimately impossible, but there are still things that can be done to help...from technology to the softer controls like the agreements you mentioned to detection and auditing.

One of my sub-points is we talk about data at rest and in motion a lot, but despite all the cases lately of data disclosure by insiders (Coke, Sandia, others), none of these protections really address that part, especially if these people have legit access to data. Just me trying to pull the covers off a bit, even if I don't think it can be fixed. I'd rather have the risk accepted than pretend it's not there.

kurt wismer said...

when we're talking about the insider threat we really need to draw a distinction between the malicious insider and the unaware insider because they're remarkably different...

there is literally nothing technology can do to prevent a malicious insider from doing the wrong thing with the data s/he has legitimate access to, that's why detection is so important...

unaware insiders, on the other hand, would only be disclosing confidential data by accident so the obvious mitigation technique is to shrink the window of exposure for that to occur... data in use should be ephemeral, but all too often it isn't and because of that the more conventional spheres of data protection can become relevant...

consider how many data breaches come about by way of stolen laptops; why was the data stored on the laptop? when the data is being stored aren't we once again talking about data at rest? since the computer is a mobile device aren't we also talking about data in motion?
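to illustrate the point about data in use being ephemeral, here's a minimal sketch in python (the report format and field names are hypothetical) of consuming sensitive records straight from the source and keeping only the derived aggregate - as opposed to saving a local copy to the laptop "for convenience" and turning data in use back into data at rest...

```python
import csv
import io

# sketch: consume sensitive records straight from the source and keep the
# results in memory only - no local cache, no temp file, no spreadsheet
# saved to the laptop "for convenience"... (the csv layout is made up)

def summarize_report(csv_text: str) -> dict:
    """derive the aggregate we actually need, then let the raw data go."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] = totals.get(row["region"], 0) + float(row["amount"])
    return totals  # only the derived, non-sensitive aggregate survives

raw = "region,amount,ccn\neast,10.00,4111111111111111\nwest,5.50,5500005555555559\n"
print(summarize_report(raw))
# the raw csv (with card numbers) was never written to disk; once this scope
# ends, the data in use is gone - which is the "ephemeral" property in question
```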

Ahmed Masud said...

Definitely enjoyed the post... I am not quite certain that it's all that impossible to control insiders. What one needs is the right set of separation-enforcing technologies at the operating system level.

Multi-level Security is a step in that direction.

While traditional MLS has the limitation of being hard to implement, today there is at least one solution available that can be used to deliver MLS on COTS environments [possibilities of shameless plugs avoided here ;) ]

Digital separation (not encryption) of data is definitely possible. Caveat creation and need-to-know are definitely implementable. It is also possible to enforce rules on users, devices, and end-point access, and to ensure that data is only used for intended purposes. If there is interest, I can post a few links (if it is appropriate for this forum) to white papers from my company, which deals with this area directly.

kurt wismer said...

@ahmed:
"I am not quite certain that it's all that impossible to control insiders. What one needs is the right set of separation enforcing technologies at the operating system level."

ok, let's say i'm a malicious insider working for a credit card company... i have access to the credit card information of tens of thousands of individuals... i view a record (which i'm allowed to do because it's part of my job at the credit card company) and then pick up my phone, call my accomplice, and read out the information to him... there is nothing that any operating system can ever do to stop that...

Anonymous said...

How about call monitoring? No personal phones in restricted area? Video cameras?

If the data is of value, let's do more than pretend that we have actual security, shall we? Are the credit card companies outsourcing to home workers now? By the way, I think it IS alright for management to step on users' toes a bit to protect their data. If they don't like it, they should move on.

kurt wismer said...

@rob lewis:
"How about call monitoring? No personal phones in restricted area? Video cameras?"

how about i use a spycamera instead of a phone - or how about i use my memory to smuggle the data off the premises instead of a phone? bandwidth is probably an issue but i believe it's possible to improve memory recall with training... also, maybe i don't work at a credit card company, maybe i work somewhere where i can do a lot of damage by smuggling out a fairly small piece of data - a trade secret, for example, or an important password or encryption key...

"If the data is of value, lets do more than pretend that we have actual security shall we?"

if the data has value to an attacker, let's make the threat of getting caught outweigh the value of the data...

"By the way, I think it IS alright for management to step on user's toes a bit to protect their data. If they don't like it, they should move on."

i agree... i don't have a problem with an organization using draconian measures on their own people to protect the secrets they've been entrusted with...

however, the harder you squeeze people the more of them will slip through your fingers... you're going to do a better job of protecting data if your employees are cooperating with you than if they're being ruled over with an iron fist...

the insider threat is a people problem, not a technology problem... it can't be solved with technology, you need to address people, their awareness, their motivation, etc...

Anonymous said...

I have no problem with anything you have just said. The insider problem does involve the human element but again becomes a process involving risk management, a parallel but not exactly the same process as for external threats. Everything that you suggested can happen already does. The first step is to eliminate the low-hanging fruit inside the network, because there are virtually no internal controls there.

Trustifier technology can be used to deter all but the most dedicated attacker, and even then, reduce the probability of success. In a layered defense, it is simply the core layer that has been missing up to now.

You can still give empowered employees all of the tools and access they need to perform their duties, and they will cooperate when they understand the measures that are being taken on behalf of the company's welfare and, indirectly, their own. Those who are there to work will appreciate the security that comes with an organization proactive enough to protect everyone's interests.

Employee morale need not be centered on giving them carte blanche access to everything, including data and resources unrelated to their job duties or job performance.

kurt wismer said...

@rob lewis:
"The insider problem does involve the human element but again becomes a process involving risk management"

exactly, risk management, not risk elimination...

all too often people expect prevention to work perfectly so they don't bother with trying to detect preventative failures... this is exceptionally bad here because there are entire classes of data breaching insider attacks that technical preventative measures can't touch...

detection is a necessity, and so too are preventative measures that address the agent carrying out the breach (the person) rather than just the mechanism of the breach (which may or may not be technological in nature)...

"Trustifier techology can be used to deter all but the most dedicated attacker, and even then, reduce the probability of success."

with all due respect, simply remembering the confidential data and leaking it to a 3rd party doesn't seem like dedication to me - especially since it often happens accidentally and is the mechanism behind the adage "loose lips sink ships"... trustifier can't do anything about that or most other low-tech attacks...

Anonymous said...

True, but Trustifier can allow role separation and least privilege to be exercised more easily, so that access to sensitive data by those with loose lips is less likely.

kurt wismer said...

@rob lewis:
you can make access controls more and more granular and restrictive till the cows come home... the insider threat does not go away if we construct perfect access controls...

what makes the insider threat so pernicious is that often the data they're leaking is data they really are supposed to be able to see/access...

Anonymous said...

Once again, it is risk management. The idea is to limit access to any data set to those who absolutely need it for their job role, and not to allow access to data to just anyone inside the network.

The comments that you made could be applied to strictly manual process environments, with no computerization at all, so it would be impossible to reduce risk to zero unless you can remove the human element, as you say.

kurt wismer said...

@rob lewis:
fair enough, but ultimately you're talking about access restriction...

i maintain what i said before about access restriction not applying to data in use... access restriction is something you apply to data at rest and data in motion, but data in use is fundamentally data for which access has already been granted (otherwise it would never have been allowed to leave the data store and move to your client)...
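to put that in concrete terms, here's a minimal sketch in python (the roles, permissions, and records are made up for illustration) of where access restriction actually lives - at the point where the data leaves the store... once the check passes and the record is returned, nothing in the access control layer has any further say over what the caller does with it...

```python
# minimal sketch: access restriction happens at the data store boundary...
# (the roles, records, and permission table below are purely illustrative)

RECORDS = {42: {"id": 42, "ccn": "4111-XXXX-XXXX-1111"}}
PERMISSIONS = {"fraud_analyst": {"read_card_records"}}

class AccessDenied(Exception):
    pass

def fetch_record(record_id: int, role: str) -> dict:
    """enforce least privilege at fetch time - the last point of control."""
    if "read_card_records" not in PERMISSIONS.get(role, set()):
        raise AccessDenied(f"role {role!r} may not read card records")
    return dict(RECORDS[record_id])

# the check can be made as granular as you like, but once it passes...
record = fetch_record(42, role="fraud_analyst")
# ...the record is now data in use: the caller can print it, memorize it,
# or read it over the phone, and no access control applies any longer
print(record)
```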