Security risk assessment is a key tool for identifying security threats and mitigations (a.k.a. requirements) early in the system design life cycle, and thus for avoiding costly re-design of the system at later stages.
A security risk assessment can be performed following one of the many methodologies available on the market, such as OCTAVE, MAGERIT, BSI IT-Grundschutz, NIST 800-30, and ISO 2700x, or proposed by academics, such as CORAS, Misuse Cases, and SI*. Each of them claims to offer well-defined methods, techniques, and guidelines for how to conduct a risk assessment in practice. But what if these methods do not work in practice? Critical vulnerabilities may be overlooked, again leading to costly re-design of the system. A key research question is therefore: "How do we evaluate whether these methods are successful in identifying security risks?"
In this talk I will report the results of our empirical studies: in particular, the experimental protocol that we designed to compare security risk assessment methods, its execution to compare academic and industrial methods for security risk assessment, and some preliminary results on which features are relevant to the success of a security risk assessment method. I will conclude with some lessons learned that may be of interest to those who want to conduct this kind of research.
In recent years, there has been great interest in Functional Encryption (FE), a generalization of traditional encryption where a token enables a user to learn a specific function of the encrypted data and nothing else. One of the main lines of research in the area has consisted in studying the security notions for FE and their achievability. This study was initiated by [Boneh et al. -- TCC'11, O'Neill -- ePrint'10], where it was first shown that for FE the indistinguishability-based (IND) security notion is not sufficient, in the sense that there are FE schemes that are provably IND-Secure but concretely insecure. For this reason, researchers investigated the achievability of Simulation-based (SIM) security, a stronger notion of security. Unfortunately, the above-mentioned works and others [e.g., Agrawal et al. -- CRYPTO'13] have shown strong impossibility results for SIM-Security. One way to overcome these impossibility results was first suggested in the work of Boneh et al., where it was shown how to construct, in the Random Oracle (RO) model, SIM-Secure FE for restricted functionalities, and the generalization to more complex functionalities was posed as a challenging problem in the area. Subsequently, [De Caro et al. -- CRYPTO'13] proposed a candidate construction of SIM-Secure FE for all circuits in the RO model, assuming the existence of an IND-Secure FE scheme for circuits with RO gates. This means that the functionality has to depend on the RO, so it is not fixed in advance as in the standard definitions of FE. Moreover, to our knowledge there are no proposed candidate IND-Secure FE schemes for circuits with RO gates, and they seem unlikely to exist. In this paper, we propose the first constructions of SIM-Secure FE schemes in the RO model that overcome the current impossibility results in different settings. We can do so because we resort to the following two models:
In the public-key setting we assume a bound q on the number of queries, but this bound only affects the running times of our encryption and decryption procedures. We stress that our FE schemes in this model are SIM-Secure and have ciphertexts and tokens of constant size, whereas in the standard model, the current SIM-Secure FE schemes for general functionalities [De Caro et al., Gorbunov et al. -- CRYPTO'12] have ciphertexts and tokens of size growing with the number of queries.
In the symmetric-key setting we assume a timestamp on both ciphertexts and tokens. This is reasonable because, in the symmetric-key setting, there is only one user that encrypts and generates tokens. In this model, we provide FE schemes with short ciphertexts and tokens that are SIM-Secure against adversaries asking an unbounded number of queries.
Both results also assume the RO model, but not functionalities with RO gates, and rely on extractability obfuscation w.r.t. distributional auxiliary input [Boyle et al. -- TCC'14] (and other standard primitives) secure only in the standard model.
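For readers less familiar with the primitive, the syntax assumed throughout this abstract is the standard FE interface (our summary, not the specific constructions above), where λ is the security parameter:

    Setup(1^λ)      -> (mpk, msk)   master public key and master secret key
    KeyGen(msk, f)  -> tok_f        token for a function f
    Enc(mpk, m)     -> ct           ciphertext encrypting m
    Dec(tok_f, ct)  -> f(m)         the token reveals f(m) and nothing else about m

Roughly, IND-Security asks that encryptions of m_0 and m_1 be indistinguishable as long as every queried token satisfies f(m_0) = f(m_1), while SIM-Security asks that the adversary's entire view be reproducible by a simulator given only the values f(m) for the queried functions.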
Given a set of users in a system, roles are defined recursively as either subsets of users or delegations between roles. The RX language, by using roles as security labels for sensitive objects, aims to guarantee both security (preventing unwanted information flows) and flexibility (allowing roles to be updated). Since role updates could themselves reveal how the system's security lattice is defined, RX provides constructs, called metapolicy labels, to label role updates as well. This way, we ensure that only authorized users have the right to observe mutations of the security lattice. In this talk, we will see how RX is designed, from syntactic rules to semantics.
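As a toy illustration of the first ingredient only (roles resolving to sets of users through delegation, with a flow allowed when it does not enlarge the set of readers), the following sketch uses made-up names and is not RX's actual type system or semantics:

    # Toy model of roles as security labels; illustrative only, not RX.
    def readers(role, env):
        """env maps a role name either to a set of users (base case) or to a
        list of role names it delegates to (recursive case)."""
        definition = env[role]
        if isinstance(definition, set):
            return definition
        return set().union(*(readers(r, env) for r in definition))

    def may_flow(src, dst, env):
        """Data labelled `src` may flow to an object labelled `dst` only if
        `dst` is at least as restrictive: every reader of `dst` already reads `src`."""
        return readers(dst, env) <= readers(src, env)

    env = {
        "doctors": {"alice", "bob"},
        "nurses": {"carol"},
        "clinicians": ["doctors", "nurses"],        # delegation to two roles
    }
    print(may_flow("clinicians", "doctors", env))   # True: the reader set shrinks
    print(may_flow("doctors", "clinicians", env))   # False: carol would gain access

Updating a role (say, adding a user to "doctors") changes the outcome of such checks, which is exactly why RX must also control who may observe and perform role updates.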
Anonymous Post-office Protocol (AnonPoP) is a messaging protocol that ensures strong anonymity for senders and recipients, even against powerful adversaries. AnonPoP is practical, scalable, and efficient, with reasonable overhead in latency and communication. Furthermore, it is appropriate even for use on mobile devices, with modest, reasonable energy consumption (validated experimentally), and it retains good security against MitM adversaries and disconnection-intersection attacks.
To provide anonymity and unobservability even against MitM attackers, AnonPoP uses Post-Office servers, which keep anonymous mailboxes, and mixes placed between the Post-Office and senders/recipients. The Post-Office is aware of the total amount of traffic in the system, but not of the traffic patterns of individual senders and recipients. AnonPoP uses efficient cryptographic mechanisms to ensure anonymity even against a malicious Post-Office that can also act as a MitM on all network traffic (and control some mixes).
AnonPoP supports many diverse scenarios and applications, with an expressive mailbox authorization policy, including defenses against spam and Denial-of-Service.
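The following is a deliberately simplified, hypothetical sketch of the message path described above; the real protocol adds end-to-end encryption, padding, batching and timing defenses, which are what actually provide the anonymity guarantees:

    # Hypothetical, heavily simplified sketch of the AnonPoP message path.
    class PostOffice:
        """Keeps anonymous mailboxes; it sees traffic volumes, not mailbox owners."""
        def __init__(self):
            self.mailboxes = {}                     # mailbox id -> list of opaque blobs
        def deposit(self, mailbox, blob):
            self.mailboxes.setdefault(mailbox, []).append(blob)
        def fetch(self, mailbox):
            return self.mailboxes.pop(mailbox, [])

    class Mix:
        """Relays client requests so the Post-Office cannot link a client to a
        mailbox; the shuffling/batching that provides unlinkability is elided."""
        def __init__(self, post_office):
            self.po = post_office
        def send(self, mailbox, blob):
            self.po.deposit(mailbox, blob)
        def poll(self, mailbox):
            return self.po.fetch(mailbox)

    po = PostOffice()
    sender_mix, recipient_mix = Mix(po), Mix(po)
    sender_mix.send("mbox-42", b"end-to-end encrypted blob")   # sender's side
    print(recipient_mix.poll("mbox-42"))                       # recipient's side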
Data publishing is an easy and economical means of data sharing, but privacy risk is a major concern. Privacy preservation is a vital task in data sharing for organizations such as hospitals. While a large number of data publishing models and methods have been proposed, their utility is of concern when a high privacy requirement is imposed. In this talk, I will present two probabilistic models for privacy-preserving data publication.
In the first model we cap an adversary's belief in inferring a sensitive value from a published data set at the level achievable by inference from public knowledge alone. The semantic meaning is that when an adversary sees a record in a published data set, s/he will have lower confidence that the record belongs to a victim than that it does not. The second model deals with the inference of confidential information from multiple published data sets, called a composition attack. The model is designed to mitigate the risk of a composition attack under independent publication (without coordination between publishers). Both models have been implemented and assessed in comparison with some benchmark models. Their strengths and weaknesses will be discussed.
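In informal notation (ours, not necessarily the paper's), the first model requires roughly that for every record t and sensitive value s,

    Pr[ t has sensitive value s | published table ]  <=  Pr[ t has sensitive value s | public knowledge alone ]

so, for example, if public statistics already give an adversary at most a 10% belief that a given individual has a particular diagnosis, seeing the published table should not push that belief noticeably above 10%.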
Individual privacy is a core human need, but society sometimes needs to conduct targeted, proportionate investigations in order to provide security. To reconcile individual privacy and societal security, we explore whether we can have surveillance in a form that is verifiably accountable to citizens. This means that citizens get verifiable proofs of how much surveillance actually takes place.
Joint work with Jia Liu and Liqun Chen.
During the last couple of decades, different approaches to threat analysis and modeling have been introduced. In 1999, Schneier introduced the concept of attack trees, which he defined as a formal, methodical way of describing the security of systems, based on varying attacks. Schneier argued that by using attack trees as a modeling technique, we would be able to understand all the different ways that can be exploited to attack our systems. But would we? According to his argument, we would then be able to design countermeasures to defend our systems. A few years later, the notion of attack graphs came into use, and it was not clear whether it was the same as attack trees or not. Some considered attack graphs a different way of threat modeling because they focus more on the sequence of events than on event abstraction. However, attack graphs can be seen as a possible extension of attack trees, and they can be modeled and formalized based on the attack-tree formalization.
Different techniques have been developed to construct both attack trees and attack graphs. Security experts started by writing them manually based on the system's configuration, but later realized that this can become very complex, ending with trees and graphs containing thousands of nodes. At this point, many attempts were made to automatically generate attack graphs with the least possible input from the user, to make them practical enough to be used in real, dynamic, large systems. However, the question of whether these threat modeling techniques are really useful in practice, or merely theoretical concepts, remains open. In this talk, we will discuss new models that have been developed to automatically generate attack graphs, focusing on an approach that generates approximate attack graphs based on traffic data and network flows. We will also try to find the links between those theoretical approaches, which are well modeled and formalized, and the new practical techniques.
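To make the underlying structure concrete, here is a minimal sketch of one common attack-tree formalization, with an attacker goal refined through AND/OR nodes; the scenario and the feasibility semantics are illustrative, not the specific models discussed in the talk:

    # Minimal AND/OR attack tree; illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        label: str
        gate: str = "LEAF"                  # "AND", "OR" or "LEAF"
        children: List["Node"] = field(default_factory=list)

    def feasible(node, attacker_capabilities):
        """A leaf is feasible if the attacker has that capability; an OR node
        needs at least one feasible child, an AND node needs all of them."""
        if node.gate == "LEAF":
            return node.label in attacker_capabilities
        results = [feasible(c, attacker_capabilities) for c in node.children]
        return all(results) if node.gate == "AND" else any(results)

    steal_data = Node("steal customer data", "OR", [
        Node("compromise web server", "AND", [
            Node("find SQL injection"),
            Node("exfiltrate the database"),
        ]),
        Node("bribe an insider"),
    ])
    print(feasible(steal_data, {"find SQL injection", "exfiltrate the database"}))  # True
    print(feasible(steal_data, {"find SQL injection"}))                             # False

An attack graph then mainly adds the ordering of the attacker's steps, which is the sense in which it can be seen as an extension of the attack-tree formalization above.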
A large networked system is typically vulnerable to a wide range of cyber-attacks. The impact of such attacks on the applications deployed in the network ranges from simple response time degradation to complete unavailability and loss of critical data.
Existing solutions mainly improve security either by designing the network using defense tools such as intrusion detection systems and firewalls, or by developing applications using security measures such as data obfuscation and memory management. However, they fail to take into account the complex interdependencies between the network infrastructure, application tasks, and residual vulnerabilities in the system.
This talk discusses an approach to improve the security of applications, given the vulnerability state of the network. First, we consider the vulnerability distribution within the network and deploy applications while minimizing their exposure to residual vulnerabilities. Then, we apply network hardening techniques using attack graphs to protect the deployed applications from possible cyber-attacks.
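As a rough, hypothetical sketch of the first step only (the names and the greedy rule below are ours; the actual approach and the subsequent attack-graph-based hardening are more involved):

    # Hypothetical greedy placement, illustrating the idea of deploying tasks
    # on the hosts with the least residual vulnerability exposure.
    def deploy(tasks, host_exposure, capacity):
        """tasks: task names; host_exposure: host -> residual-vulnerability score;
        capacity: host -> max number of tasks. Assumes total capacity suffices.
        Each task is greedily placed on the least-exposed host with room."""
        placement = {}
        load = {h: 0 for h in host_exposure}
        for task in tasks:
            candidates = [h for h in host_exposure if load[h] < capacity[h]]
            best = min(candidates, key=lambda h: host_exposure[h])
            placement[task] = best
            load[best] += 1
        return placement

    exposure = {"web-1": 0.7, "app-2": 0.2, "db-3": 0.4}    # e.g. CVSS-derived scores
    print(deploy(["frontend", "api", "database"], exposure,
                 {"web-1": 1, "app-2": 1, "db-3": 1}))
    # {'frontend': 'app-2', 'api': 'db-3', 'database': 'web-1'}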
Dr Richard Clayton has recently completed a major study of the `whois' contact details for domain names used in malicious or harmful Internet activities. ICANN wanted to know whether a significant percentage of these domain registrations used privacy or proxy services to obscure the perpetrator's identity. No surprises in the results: yes!
What was perhaps surprising was that quite a significant percentage of domains used for lawful and harmless activities ALSO used privacy and proxy services.
But the real distinction is that for maliciously registered domains, contact details are hidden in a range of different ways, so that 9 out of 10 of these registrants are a priori uncontactable, whereas the a priori uncontactable rate varies between a quarter and at most two-thirds for the non-malicious registrations.
This talk discusses how these results were obtained and what their implications are for the future of the whois system. It also reports on spin-off research inquiring into what happens to online banking domains when the bank is merged or shut down: although they remain owned by banks, a great many end up being used for somewhat dubious purposes.
We explore ICT security in a socio-technical world, focusing in particular on susceptibility to social engineering attacks. We pursue the question of whether and how personality traits influence this susceptibility. This allows us to research human factors and their potential impact on the physical and digital security domains. We show how Cialdini's principles of influence can be used to explain why most social engineering attacks succeed, and that these attacks mainly rely on peripheral-route persuasion.
A comprehensive literature review reveals that individual values of a victim's personality traits relate to social engineering susceptibility. Furthermore, we construct suggestions for plausible relations between personality traits of the Five-Factor Model (Big 5) and the principles of influence. Based on these arguments, we propose our "Social Engineering Personality Framework" (SEPF). It supports and guides security researchers in developing holistic detection, mitigation, and prevention strategies while dealing with human factors.
Despite Alice's best efforts, her long-term secret keys may be revealed to an adversary. Possible reasons include weakly generated keys, compromised key storage, subpoena, and coercion. However, Alice may still be able to communicate securely with other parties afterwards. Whether this is possible depends on the protocol used. We call the associated property resilience against Actor Key Compromise (AKC). We formalise this property in a symbolic model and identify conditions under which it can and cannot be achieved. In case studies that include TLS and SSH, we find that many protocols are not resilient against AKC. We implement a concrete AKC attack on the mutually authenticated TLS protocol.
This is joint work with Marko Horvat and David Basin.
We study the security of interaction protocols when incentives of participants are taken into account. We begin by formally defining correctness of a protocol, given a notion of rationality and utilities of participating agents. Based on that, we propose how to assess security when the precise incentives are unknown. Then, the security level can be defined in terms of defender sets, i.e., sets of participants who can effectively "defend" the security property as long as they are in favor of the property. In terms of technical results, we present a theoretical characterization of defendable protocols under Nash equilibrium, and study the computational complexity of related decision problems.
The talk will present joint work with Matthijs Melissen and Henning Schnoor.