We investigate the problem of optimally mitigating the progression of an adversary through a network in real time, decreasing the probability that it reaches its goal(s) while minimizing the negative impact on availability. Our model is based on a type of attack graph, termed a condition dependency graph, which models the dependencies between security conditions (attacker capabilities) and exploits. By embedding a state space on the graph, we are able to quantify the progression of the attacker over time. The defender can interfere with the attacker's progression by blocking some exploits from being carried out. The nature of the attacker's progression through the network is dictated by its private strategy, which depends on the defender's action. The defender's uncertainty about the attacker's true strategy is modeled by considering a finite collection of attacker types. Using noisy security alerts (exhibiting both missed detections and false alarms), the defender maintains a belief representing the joint distribution over the attacker's current capabilities and true strategy. The resulting problem of determining how to optimally interfere with the attacker's progression is cast as a partially observable Markov decision process. To deal with the large state space, we develop a scalable online defense algorithm for tracking beliefs and prescribing defense actions over time. Using the context provided by the state, we are able to efficiently process security alerts even in the presence of a high rate of false alarms. The behavior of the computed defense policy is demonstrated on an illustrative example.
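The joint belief update described above can be sketched as follows. This is a minimal illustrative toy, not the paper's algorithm: the two attacker types (`"aggressive"`, `"stealthy"`), the single binary capability, and all probabilities are invented for illustration; the abstract's actual state space, attacker strategies, and alert model are not specified here. The update is the standard Bayesian filter over the joint (state, type) pair, conditioning on the defender's blocking action and a noisy alert with both missed detections and false alarms.

```python
def belief_update(belief, action, alert, transition, alert_prob):
    """One Bayesian filtering step over the joint (state, attacker type).

    belief:      dict mapping (state, type) -> probability
    action:      defender action (here: a set of blocked exploits)
    transition:  f(state, type, action) -> dict of next_state -> probability
    alert_prob:  f(alert, next_state) -> likelihood of the observed alert
    """
    new_belief = {}
    for (s, theta), p in belief.items():
        for s_next, pt in transition(s, theta, action).items():
            w = p * pt * alert_prob(alert, s_next)  # predict, then weight by alert
            new_belief[(s_next, theta)] = new_belief.get((s_next, theta), 0.0) + w
    z = sum(new_belief.values())
    if z == 0.0:
        raise ValueError("alert has zero likelihood under the current belief")
    return {k: v / z for k, v in new_belief.items()}


# Toy instance: one capability (0 = not gained, 1 = gained), two attacker types.
def transition(s, theta, action):
    if s == 1:                       # capabilities are never lost in this toy
        return {1: 1.0}
    if "exploit" in action:          # blocking the exploit stops the transition
        return {0: 1.0}
    p = 0.8 if theta == "aggressive" else 0.3   # assumed attack rates per type
    return {1: p, 0: 1.0 - p}


def alert_prob(alert, s_next):
    # Assumed sensor model: 0.7 detection probability, 0.1 false-alarm rate.
    p_alert = 0.7 if s_next == 1 else 0.1
    return p_alert if alert else 1.0 - p_alert


prior = {(0, "aggressive"): 0.5, (0, "stealthy"): 0.5}
post = belief_update(prior, action=set(), alert=True,
                     transition=transition, alert_prob=alert_prob)
```

With these toy numbers, observing an alert while blocking nothing shifts belief toward the aggressive type, since that type is more likely to have triggered a true detection. An online defender would run this update per alert and choose the next blocking action against the resulting posterior.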