framing the 3 preventative paradigms
in the ideal case:
- a blacklist blocks access to a protected resource for things/actions that are bad
- a whitelist blocks access to a protected resource for those things/actions that are not good
- a sandbox blocks access to a protected resource unconditionally, offering a comparable but low-value alternative resource as a surrogate (all three are sketched in code after this list)
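to make the contrast concrete, here's a minimal sketch in python of the three ideal-case gates. the is_bad/is_good judgement functions are hypothetical oracles supplied by the caller - this is just an illustration of the framing above, not any real product's logic:

```python
# a minimal sketch of the three ideal-case gates; is_bad and is_good are
# hypothetical, perfect judgement functions supplied by the caller
def blacklist_gate(is_bad, action, resource):
    # blacklist: deny only what is judged bad, allow everything else
    return None if is_bad(action) else resource

def whitelist_gate(is_good, action, resource):
    # whitelist: allow only what is judged good, deny everything else
    return resource if is_good(action) else None

def sandbox_gate(action, resource, surrogate):
    # sandbox: never hand out the real resource, always the low-value surrogate
    return surrogate
```

the interesting differences only show up once is_bad and is_good stop being perfect oracles - which is what comes next.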
balancing the 3 preventative paradigms
the world is not an ideal place, however, and those cases cannot be implemented perfectly:
- the halting problem prevents us from implementing a perfect blacklist and limits us to only blocking things/actions that are known to be bad, thus leading to the so-called reactive nature of blacklists
- while the halting problem does affect whitelists in the same way, whitelists actually benefit in a way from being limited to only known things/actions (see the sketch after this list). however, the generality of interpretation prevents us from implementing perfect whitelists (as the security world is bound to discover when whitelisting finally crosses the chasm): we will only ever be able to apply application whitelisting to known program types, and thus will have to react whenever we discover a new program type being abused.
- while the generality of interpretation may make it difficult to know when a sandbox needs to be used, what prevents sandboxes from being perfect when they are used (barring implementation failures that lead to unaided sandbox escape) is the need to share data across the barrier erected by the sandbox - a need that is inherent to the division of labour, not to mention a variety of other useful and interesting things we do with computers.
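as a rough illustration of that "known things/actions" limitation, here's a sketch of why a real blacklist is reactive while a real whitelist trades false negatives for false positives. the hash sets are hypothetical placeholders, not any real product's database:

```python
import hashlib

# hypothetical databases of hashes that have already been analysed and judged
KNOWN_BAD = set()   # placeholder: hashes of samples already known to be bad
KNOWN_GOOD = set()  # placeholder: hashes of programs already known to be good

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def blacklist_allows(program: bytes) -> bool:
    # anything not yet known to be bad slips through (false negatives),
    # so the blacklist can only react after something new has been analysed
    return sha256(program) not in KNOWN_BAD

def whitelist_allows(program: bytes) -> bool:
    # anything not yet known to be good gets blocked (false positives instead),
    # which is why being limited to known things is less of a handicap here
    return sha256(program) in KNOWN_GOOD
```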
3 comments:
Well at least someone is thinking about it in terms of the big picture.
The most overlooked aspect of the whitelisting discussion is that it is not granular enough in terms of access controls. Occasionally, someone in infosec hits on the idea that end behaviors count for something, and I hint that apps and devices don't steal data, people do. So relating to your very correct point:
"what prevents sandboxes from being perfect when they are used (baring implementation failures that lead to unaided sandbox escape) is the need to share data (across the barrier erected by the sandbox) that is inherent to the division of labour"
there must also be access controls that govern end behaviors of users, whether they be in the form of role-based access controls or rule-based access controls, neither of which tends to scale well across apps. So, for whitelisting to really fly, this level/layer of controls must be able to punch through the app layer.
Most often overlooked is that without a high-assurance foundation there will always be workarounds, so, as you state, all of this amounts to some speed bumps but no real defense.
PS Accidentally posted my comment before I could identify myself.
Rob Lewis
no worries about the anonymous comment, we know who you are now.
at the end of the post i mentioned that prevention is only the beginning of what a defender must consider - and i think you'll agree that access controls are meant to prevent things.
as for the granularity of whitelisting - that is only true of application whitelisting. behavioural whitelisting has the potential to be far more granular.
a pedant might point out that application whitelisting is behavioural whitelisting that focuses on just one sort of behaviour - program execution/application launching.
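to make that concrete, here's a rough sketch of behavioural whitelisting where program execution is just one behaviour type among several - the rule sets are hypothetical and purely illustrative, not drawn from any particular product:

```python
from fnmatch import fnmatch

# hypothetical allow-rules per behaviour type - illustrative only
BEHAVIOUR_RULES = {
    "execute":      {r"c:\windows\system32\*", r"c:\program files\*"},
    "write_file":   {r"c:\users\*\documents\*"},
    "connect_host": {"update.example.com"},
}

def behaviour_allowed(behaviour: str, target: str) -> bool:
    # allow only behaviours whose target matches an explicit rule;
    # everything else is denied by default (the whitelist posture)
    rules = BEHAVIOUR_RULES.get(behaviour, set())
    return any(fnmatch(target.lower(), pattern) for pattern in rules)

# application whitelisting is just the "execute" behaviour on its own
print(behaviour_allowed("execute", r"C:\Windows\System32\notepad.exe"))  # True
print(behaviour_allowed("connect_host", "exfil.example.net"))            # False
```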
greater granularity may get you closer to preventative perfection but only in the same way using a smaller interval for a riemann sum gets you closer to the actual area under a curve - and there's a point of diminishing returns with regards to the work involved in defining these increasingly granular behavioural whitelist rules.
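for a rough numeric feel of that analogy (a toy illustration only, nothing to do with any actual whitelisting product): approximating the area under x² with a left riemann sum, the error keeps shrinking as the number of intervals (read: rules) grows, but each tenfold increase in work buys a smaller and smaller absolute improvement.

```python
def left_riemann(f, a, b, n):
    # left riemann sum of f over [a, b] with n equal subintervals
    width = (b - a) / n
    return sum(f(a + i * width) for i in range(n)) * width

# the area under f(x) = x**2 on [0, 1] is exactly 1/3
for n in (10, 100, 1000, 10000):
    approx = left_riemann(lambda x: x * x, 0.0, 1.0, n)
    print(n, round(approx, 6), round(abs(approx - 1 / 3), 6))
```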
and since prevention is only the beginning, it might be good to save some effort for what comes after (since invariably what comes after will be needed some of the time).