In the Internet of Services (IoS), systems and applications are no longer the result of programming components in the traditional sense but are built by composing services that are distributed over the network and reconfigured and consumed dynamically in a demand-driven, flexible way. However, composing services leads to new, subtle, and dangerous vulnerabilities due to interference between component services and policies, the shared communication layer, and application functionality.
In this seminar, I will briefly present the AVANTSSAR Platform and the SPaCIoS Tool, two integrated toolsets for the formal specification and automated validation of trust and security of applications in the IoS at design time and run time, respectively. (Both have been developed in the context of FP7 projects that I have been coordinating.) To highlight some of my contributions to AVANTSSAR and SPaCIoS, I will focus on two particular results.
First, I will discuss a compositionality result that formalizes conditions for vertical composition of security protocols, i.e., when an application protocol (e.g., a banking service) runs over a channel established by another protocol (e.g., a secure channel provided by TLS). This is interesting and useful as protocol composition can lead to attacks even when the individual protocols are all secure in isolation.
Second, I will briefly discuss how, although computer security typically revolves around threats, attacks and defenses, the sub-field of security protocol analysis has so far focused almost exclusively on the notion of attack. I will argue that there is room for a fruitful notion of defense for vulnerable protocols and that the conceptual bridge lies in the notion of multiple non-collaborating attackers and interference between simultaneous attack procedures.
In this seminar, a novel security policy specification and verification framework named FPFM (Formal Policy Framework for Mobility) will be presented. The framework contributes novel results in the areas of location- and mobility-based security, multi-domain security, and formal verification of security models and policies, including information flow analysis, model checking of security policies based on process calculus, and inductive theorem proving of security policies. A formal role-based access control model named FPM-RBAC is proposed for the specification of security policies. The model is novel in its support for location and mobility constraints, mapping of role hierarchies, specification of multi-domain security policies, and separation-of-duty constraints. An XML-based security policy language named XFPM-RBAC implements the FPM-RBAC security policy model and supports automatic generation of formal security policy specifications.
Formal verification of security policies is supported through model checking and theorem proving. A spatio-temporal model checking algorithm based on Ambient Logic is proposed for the formal verification of Ambient Calculus specifications of mobile systems. The algorithm is capable of spatio-temporal model checking of mobile network configurations with respect to location and mobility constraints. Information flow analysis, based on the flows allowed by a security policy, is also possible using the same approach. Theorem prover specifications have been developed for the verification of role-based access control policies using the Coq theorem prover.
The presentation will mainly focus on two of the components of FPFM: the FPM-RBAC model and the spatio-temporal model checking algorithm. Possible extensions and improvements to the FPFM framework, such as partial order reduction, stochastic model checking, attack modeling and privacy enhanced access control will be discussed.
A sequence over a fixed finite set is said to be complete if it contains all permutations of the set as (not necessarily consecutive) subsequences. Determining the length of shortest complete sequences is an open problem. The problem arises naturally when considering fairness of optimistic multi-party protocols in asynchronous communication networks with an offline trusted third party. Examples of such protocols are reconstruction of a shared secret and contract signing.
I will present new upper bounds for the length of shortest complete sequences and discuss a variety of approaches to construct such sequences.
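To make the notion concrete, here is a small sketch (our own illustration, not from the talk) of a completeness checker together with the naive construction, which concatenates n copies of the alphabet and thus shows that complete sequences of length n² always exist:

```python
from itertools import permutations

def is_subsequence(pattern, seq):
    """Check whether pattern occurs in seq as a (not necessarily
    consecutive) subsequence, using a single left-to-right scan."""
    it = iter(seq)
    return all(symbol in it for symbol in pattern)

def is_complete(seq, alphabet):
    """A sequence is complete if every permutation of the alphabet
    occurs in it as a subsequence."""
    return all(is_subsequence(p, seq) for p in permutations(alphabet))

def naive_complete(alphabet):
    """n concatenated copies of the alphabet are always complete:
    the i-th symbol of any permutation can be picked from the i-th copy.
    Shortest complete sequences are strictly shorter; their exact
    length is the open problem discussed in the talk."""
    return list(alphabet) * len(alphabet)
```

For example, the sequence 1,2,1 is complete over {1,2} (and is in fact shortest for n = 2), while 1,2 alone misses the permutation 2,1.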
Online businesses increasingly rely on personal data in order to provide value for their users and to differentiate themselves from the competition. At the same time, users are becoming more and more concerned about privacy threats resulting from their online activities. Today, it is (still) difficult for users to control the collection and use of their personal data when interacting with online businesses. Therefore, privacy research has so far focused mainly on developing means to support users in protecting their privacy. However, from an online business perspective, existing or upcoming means of privacy protection (e.g. tracking protection tools) could also hamper current online business models. Consequently, privacy research should also consider and address the needs of online businesses for personal data in its research agenda.
The first half of the talk will outline a vision of how privacy can become a competitive edge for online businesses. This can provide an incentive for online businesses to excel at protecting the privacy of their users. At the same time, it lays the foundation for a fair trade-off between user privacy and the value provided by online businesses. As a starting point for this endeavor, we propose a paradigm shift in which online businesses offer users the ability to maintain their digital identity instead of solely enabling them to review their personal data and set up privacy policies. The second half of the talk will provide an overview of selected research projects at Goethe University in the areas of mobile marketing, mobile sensing and privacy aspects of online communities, which contribute to this vision.
A major problem in the global information society is the lack of a worldwide legal framework accepted by all countries. This raises the issue of how to protect the legitimate interests of the various players in information transactions. Protocols for information technology transactions should be designed so that the best interest of any protocol participant leads her to act in a way that serves the best interest of the other protocol participants. This would allow a fair operation of the information society to be achieved effortlessly, as the result of rational co-operation rather than as an expensive legal requirement.
We introduce the novel concept of co-utility or co-operative utility, which can be defined in game-theoretic terms: a protocol offers co-utility or is co-utile if the best option for a player to maximize her utility is to co-operate with one or more other players to maximize their utility. Reasonable components of utility in the information society are security, privacy and functionality. The privacy component of co-utility corresponds to the co-privacy property that we have recently introduced.
The co-utility property makes an individual's utility maximization a goal that rationally interests other individuals: it is a matter of helping oneself by helping someone else. Co-utility is a groundbreaking concept because it opens the way to a joint and coordinated treatment of privacy, security and functionality. We formally define co-utility in terms of game equilibria and we illustrate its use in several practical applications: i) controlled content distribution, data liberation and digital content oblivion; ii) anonymous keyword search; iii) communication in vehicular ad hoc networks (VANETs), and iv) social networks.
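The game-theoretic condition can be illustrated with a tiny two-player toy game (the payoffs below are our own hypothetical example, not from the talk): a protocol is co-utile when cooperating is each player's best response, so helping the other player coincides with the selfish optimum.

```python
# Hypothetical payoff table: (my action, other's action) -> (my utility, other's utility).
# Chosen so that mutual cooperation is both an equilibrium and Pareto-best.
PAYOFFS = {
    ("coop", "coop"):     (3, 3),
    ("coop", "defect"):   (0, 1),
    ("defect", "coop"):   (1, 0),
    ("defect", "defect"): (1, 1),
}

def best_response(other_action):
    """The action maximizing my utility, given the other player's action."""
    return max(["coop", "defect"],
               key=lambda a: PAYOFFS[(a, other_action)][0])

def is_co_utile():
    """Co-utility in this toy game: cooperation is a mutual best response
    and yields higher utility for both players than mutual defection."""
    return (best_response("coop") == "coop"
            and PAYOFFS[("coop", "coop")] >= PAYOFFS[("defect", "defect")])
```

In this example `best_response("coop")` is `"coop"`: maximizing my own utility rationally leads me to maximize the other player's as well, which is exactly the "helping oneself by helping someone else" property.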
We show that it is possible to achieve perfect forward secrecy in two-message or one-round key exchange (KE) protocols that satisfy even stronger security properties than provided by the extended Canetti-Krawczyk (eCK) security model. In particular, we consider perfect forward secrecy in the presence of adversaries that can reveal ephemeral secret keys and the long-term secret keys of the actor of a session.
We propose two new security models for KE protocols. First, we formalize a slightly stronger variant of the eCK security model that we call eCKw. Second, we integrate perfect forward secrecy into eCKw, which gives rise to the even stronger eCK-PFS model. We propose a security-strengthening transformation (i.e. a compiler) between our new models. Given a two-message Diffie-Hellman type protocol secure in eCKw, our transformation yields a two-message protocol that is secure in eCK-PFS. We demonstrate how to generalize this result to other KE security models and argue why the MAC transformation does not achieve eCK-PFS security.
The MarkPledge 3 (MP3) technique is a highly sound voter-verifiable encryption technique that follows the working principle of the MarkPledge technique (Neff, 2004). Due to its simple mathematical structure, MP3 offers significant theoretical and practical improvements over the previous two MarkPledge specifications (MP1 and MP2), and it is specially adapted to run on computationally constrained devices, e.g. smart cards.
EVIV is a highly sound End-to-end Verifiable Internet Voting system whose integrity is guaranteed by the participation of one honest trustee in a random number generation protocol at the beginning of election day. The EVIV vote protocol integrates the MarkPledge and code voting techniques to create a vote protocol which, in addition to strong integrity properties, offers full voter mobility and preserves the voter's privacy from the vote-casting PC, even if the voter votes from a public PC, such as one at a cybercafé or a public library.
MP3 is especially relevant to EVIV since, to provide full voter mobility and to protect the voter's privacy from the vote-casting PC, the vote encryption is created by a voter security token, e.g. a smart card, which is a computationally constrained device.
Multi-party contract signing (MPCS) protocols specify how a number of signers can cooperate in achieving a fully signed contract, even in the presence of dishonest signers. Here we consider optimistic MPCS protocols, where we assume the presence of a trusted third party which is contacted only in case of a conflict.
The presentation discusses a connection between optimistic MPCS protocols and the combinatorial problem of constructing sequences which contain all permutations of a set as subsequences. We provide an explicit and general construction for MPCS protocols which converts a sequence over a finite set of signers into a protocol specification for the signers. Furthermore, we give tight conditions under which the resulting protocols satisfy fairness and timeliness.
In Estonia, legally binding voting over the Internet has been possible since 2005. During the parliamentary elections of 2011, almost 25% of the votes were cast over the Internet. Unfortunately, this also means that Internet voting is an attractive target for variously motivated attackers. Hence, it is necessary to analyze the current risk situation and design appropriate measures against the emerging threats.
This talk consists of two parts. First, we will give an overview of the attack-tree based risk model of the Estonian (electronic) voting system, and in the second part we will propose basic principles of vote verifiability to be used during the next Estonian elections. This is joint research with Sven Heiberg.
Mix-nets are of interest for several applications where privacy plays an important role, such as eVoting and electronic mail services. For their use in applications with high integrity requirements, it is important that the whole process can be verified by any third party. Therefore, all recent variants of mix-nets publish additional information that allows any observer to verify their correct functioning. This information is encrypted using a public key algorithm (often ElGamal or Paillier) based on number-theoretic problems that are assumed to be computationally hard. However, attacks like brute force, which simply checks all possible solutions, would still succeed eventually (although a computationally limited attacker would need decades).
In this talk I present a novel, publicly verifiable mixing scheme which has everlasting privacy towards observers: all the information published on the bulletin board by the mixes (audit information etc.) reveals no information about the identity of any of the messages published. The correctness of the mixing process is statistical: even if all authorities conspire, they cannot change the contents of any message without being detected with overwhelming probability.
We want to study complex dynamical biological systems, but the knowledge we can have of these systems is weak and relies mainly upon observations of their behaviours. For this purpose, models are built according to various approaches, and they come with many parameters that have to be introduced to stand for a priori unknown values. We mostly follow the logical modelling approach (à la René Thomas), into which we want to bring the temporal dimension. This leads to a hybrid representation which is quite intricate. As a matter of fact, we have to face the problem of dealing with models with many parameters and frequently a rather huge, or sometimes even unmanageable, number of states. For this reason, we introduce a new approach that avoids computing the whole state space.
Our approach applies naturally to Gene Regulatory Networks (GRNs), one of the biological systems we want to study, so we first briefly introduce their dynamics. Then, we present a new stochastic π-calculus framework, the Process Hitting. Within this framework, we carry out a sequence of consistent analyses which deal with both stochastic and temporal aspects and reveal important features of the studied biological system.
The first result is that it becomes possible to check structural properties such as the set of stable states, discrete parameter identification, and the specification of cooperative actions of associated genes for their coordinated interaction. As a matter of fact, very large GRNs can now be studied, since such a method avoids building the actual state graph. We show that we are able to take into account a regulatory network of at least twenty genes, which corresponds to a graph of 2^66 states!
The second stage of our approach consists in exploiting the stochastic π-calculus to execute simulations with stochastic rates. We recall that these rates stand for the time delays for an interaction to take place. This allows us to tune the stochastic and temporal parameters so as to realize temporal properties. We perform these simulations using SPIM, an implementation of the stochastic π-calculus. Furthermore, since translating the aforementioned Process Hitting into the language of the probabilistic model checker PRISM is straightforward, we are able to formally verify the temporal properties that emerged from the temporal parameter tuning phase.
Along the talk, we sketch some elements of our method through a tiny toy example.
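The flavour of such stochastic simulations can be sketched with a toy two-gene example of our own (illustrative only; this is a generic Gillespie-style race between exponentially distributed action delays, not the Process Hitting framework itself):

```python
import random

def simulate(rate_ab=2.0, rate_ba=1.0, rng=random):
    """Toy simulation: two boolean genes a and b, where active a
    activates b, and then active b deactivates a. Each enabled action
    fires after an exponentially distributed delay; the smallest
    delay wins the race (Gillespie-style next-event simulation)."""
    a, b = 1, 0                       # initial state: a active, b inactive
    t, trace = 0.0, [(0.0, (1, 0))]
    while True:
        actions = []
        if a == 1 and b == 0:
            actions.append(("a_hits_b", rate_ab))   # a activates b
        if b == 1 and a == 1:
            actions.append(("b_hits_a", rate_ba))   # b deactivates a
        if not actions:
            return trace                            # absorbing state reached
        name, delay = min(((n, rng.expovariate(r)) for n, r in actions),
                          key=lambda x: x[1])
        t += delay
        if name == "a_hits_b":
            b = 1
        else:
            a = 0
        trace.append((t, (a, b)))
```

Running many such trajectories and inspecting the recorded firing times is, in miniature, how stochastic rates can be tuned against desired temporal properties before handing the model to a probabilistic model checker.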
Differential privacy is a notion that has emerged in the community of statistical databases, as a response to the problem of protecting the privacy of the database's participants when performing statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer for a database x is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e. databases which differ from x in only one individual.
Information flow is an area of security concerned with the problem of controlling the leakage of confidential information in programs and protocols. Nowadays, one of the most established approaches to quantify and reason about leakage is based on the Rényi min-entropy version of information theory.
In this talk, I will analyze the notion of differential privacy in light of the conceptual framework provided by Rényi min-entropy. I show that there is a close relation between differential privacy and leakage, due to the graph symmetries induced by the adjacency relation. Furthermore, I will discuss the utility of the randomized answer, which measures its expected degree of accuracy, and in particular certain kinds of utility functions called "binary", which have a close correspondence with the Rényi min mutual information. Again, it turns out that there can be a tight correspondence between differential privacy and utility, depending on the symmetries induced by the adjacency relation and by the query. I will discuss how, depending on these symmetries, we can build an optimal-utility randomization mechanism while preserving the required level of differential privacy.
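A standard concrete instance of such a randomization mechanism is the Laplace mechanism for counting queries, sketched below (this is the textbook mechanism, included as background; it is not the optimal mechanism constructed in the talk):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw a sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a counting-query answer with epsilon-differential privacy.

    Adding or removing one individual changes a count by at most 1
    (its sensitivity), so Laplace noise of scale sensitivity/epsilon
    makes the output distributions on adjacent databases differ by
    at most a factor exp(epsilon)."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

Smaller values of epsilon give stronger privacy but noisier answers, which is precisely the privacy/utility trade-off the talk analyzes.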
Nowadays more and more people use computers to manage their everyday life. There are several electronic systems that require people to fill out a simple, or sometimes rather complex, questionnaire, after which all the answers are evaluated. These assessment systems raise serious security issues, and the security requirements of different assessment systems are often similar. We propose a universal protocol that satisfies all necessary security requirements, such as eligibility, secrecy, anonymity, and individual and global verifiability, and can be applied to e-exams, e-polls, e-auctions, e-tenders and e-voting as well. Our solution employs blind signature and secret sharing schemes and the digital envelope technique, and provides receipts on a bulletin board.
In micropayment schemes, all costs that arise during operation should be minimized. This includes the cost of disputes and chargebacks, which result in penalties for the vendor. We extend the PayWord micropayment scheme with payment approval to minimize disputes and chargebacks, and to avoid attacks that ruin the vendor's reputation. We achieve payment approval by employing one MAC computation per purchase, which does not increase the time complexity significantly. We also give a formal evaluation in the applied pi calculus and prove that our scheme fulfills secure payment authorization, payment approval and secrecy of payment information.
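The core PayWord mechanism, and the idea of adding one MAC per purchase, can be sketched as follows (a minimal illustration under our own naming; the actual extension in the talk additionally covers the full protocol and its formal model):

```python
import hashlib
import hmac
import os

def H(x):
    """The hash used to build the payword chain."""
    return hashlib.sha256(x).digest()

def make_chain(n):
    """Generate a PayWord chain w_n, ..., w_0 with w_{i} = H(w_{i+1}).
    Returns a list with chain[i] == w_i; w_0 is the root the user
    commits to (in PayWord, via a signed commitment)."""
    w = os.urandom(32)           # w_n: random seed
    chain = [w]
    for _ in range(n):
        w = H(w)
        chain.append(w)
    chain.reverse()              # now chain[0] == w_0, chain[n] == w_n
    return chain

def verify_payword(w0, w_i, i):
    """Vendor check for the i-th payment: hashing the revealed
    payword w_i exactly i times must yield the committed root w_0."""
    for _ in range(i):
        w_i = H(w_i)
    return w_i == w0

def approve(key, w_i, i):
    """Payment approval: one MAC per purchase, binding the spent
    payword and its index under a key shared for approval."""
    return hmac.new(key, w_i + i.to_bytes(4, "big"), hashlib.sha256).digest()
```

Verification costs only i hash evaluations, and the single HMAC per purchase keeps the added cost of approval negligible, which matches the low-cost requirement of micropayments.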
Automation projects are not always successful. One particular reason for failure is that digital systems have serious security problems that were less apparent in the original physical systems. For example, electronic voting security has proven to be very challenging, and so has digital rights management.
In this thesis we examine the security-relevant differences between physical and digital systems by means of case studies on voting, rights management, access control and server virtualization. We investigate the strength of combinations of physical and digital security mechanisms and attempt to generalize our results.
Smart cards have become ubiquitous in our lives. However, most of them are not really smart and just store a unique identifier. Even the smarter ones still rely heavily on unique identifiers. This makes such cards easy to trace, which can lead to detailed profiles of a user's actions and behaviour, interfering with the user's privacy.
To prevent such privacy issues we propose to use attribute-based credentials instead of identity-based solutions. This is possible since most systems only rely on certain attributes rather than the user's identity. For example, when you want to board a train, the system only needs to verify whether or not you are allowed to, based on a ticket or a year pass.
In this talk we will look at the available technologies which implement an attribute-based credential system: Microsoft's U-Prove, IBM's Idemix, and our own self-blindable credentials. More specifically, we will discuss our results on the smart card implementations of these technologies. Finally, I will give a demonstration to show how such credentials can be used in practice.
Process algebras, on one hand, provide a convenient means for specifying protocols. Epistemic models, on the other hand, are appropriate for specifying knowledge-related properties of such protocols (e.g., anonymity or secrecy). These two approaches to specification and verification have so far developed in parallel and one has either to define ad-hoc correctness criteria for the process-algebraic model or use complicated epistemic models to specify the behavioral aspects. We work towards bridging this gap by proposing a combined framework which allows for modeling the behavior of a protocol in a process language with an operational semantics and supports reasoning about properties expressed in a rich logic which combines temporal and epistemic operators.
Further, we report on an extension of this framework with cryptographic constructs and its influence on the epistemic aspects, and conclude with a brief comparison of our semantic framework with the interpreted systems model of Fagin and Vardi.
This talk does not assume any prior knowledge of either of the two fields (and in particular of process algebras).