UML and SysML play a central role in modern software and systems engineering and are considered the de facto standard for modeling software and systems. Today's systems are built from a myriad of interacting parts that are combined to produce visible behaviors. The main difficulty arises from the different ways of modeling each component and the ways the components interact with each other. Moreover, secure software has nowadays become an essential part of industrial development. One challenge, in academia as well as in industry, is to produce a secure product. Another is to prove its correctness, especially when the software environment is imprecise and uncertain.
The aim of this talk is to provide a practical and formal framework that enables security risk assessment and security requirements verification on a system modeled as a composition of UML/SysML behavioral diagrams. Our main contribution is a novel approach to automatically verify the security of systems on their design models, based on security requirements and probabilistic adversarial interactions between potential attackers and the system's models. These structures are shaped to provide an elegant way to define the combination of different kinds of diagrams. We rely on stochastic security templates to specify security properties, and on a standard catalogue of attack patterns to build a library of attack design templates. The result of the interaction between selected attack scenarios and the composed diagrams with the instantiated security properties is used to quantify security risk by applying a probabilistic model checker. To make the verification process scale, our approach verifies large systems efficiently by avoiding the construction of the global model. To demonstrate the effectiveness of our approach, we apply our methodology to academic as well as industrial benchmarks.
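As a toy illustration of the final quantification step, the sketch below computes a reachability probability on a hand-written discrete-time Markov chain, which is what a probabilistic model checker such as PRISM does at scale. The states, probabilities and the "compromised" target below are invented for illustration and are not taken from the talk's models.

```python
# Minimal sketch: probability of eventually reaching a target state in a
# discrete-time Markov chain, computed by fixed-point iteration on the
# equations  x_s = sum_t P(s,t) * x_t  with  x_target = 1.

def reachability_probability(transitions, target, states):
    x = {s: (1.0 if s == target else 0.0) for s in states}
    for _ in range(10_000):            # iterate until (near) convergence
        for s in states:
            if s != target:
                x[s] = sum(p * x[t] for t, p in transitions.get(s, {}).items())
    return x

# Invented attacker/system interaction: a probe succeeds with probability
# 0.3, is detected (absorbing) with probability 0.2, and otherwise retries.
states = ["idle", "probe", "compromised", "detected"]
transitions = {
    "idle": {"probe": 1.0},
    "probe": {"compromised": 0.3, "detected": 0.2, "probe": 0.5},
    "detected": {"detected": 1.0},
}
risk = reachability_probability(transitions, "compromised", states)
print(round(risk["idle"], 4))  # 0.6 = eventual probability of compromise
```

Here the risk solves x = 0.3 + 0.5x, i.e. 0.6: the attacker either succeeds before being detected or not.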
Dining Cryptographers is a well-known cryptographic protocol for anonymous broadcast, proposed by Chaum. In the original model it is assumed that all participants are on-line, which allows correcting problems that may occur, such as collisions (two participants happening to choose the same slot) or disrupters (dishonest participants who deviate from the protocol).
In 2006, the author proposed an off-line variant in which, after a preliminary phase where random strings are exchanged, each participant publishes only one long message. In this talk we will discuss this protocol and present some new results to improve efficiency.
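For intuition, here is a sketch of a single round of a classic on-line DC-net in Chaum's setting (not the off-line variant of the talk): every pair of participants shares a random pad, each participant announces the XOR of its pads, and the sender additionally XORs in the message. Since each pad appears in exactly two announcements, the XOR of all announcements reveals the message without revealing who sent it.

```python
import secrets

def dc_net_round(n, msg_bits, sender, length):
    # Pairwise shared random pads (exchanged in the preliminary phase).
    pads = {}
    for i in range(n):
        for j in range(i + 1, n):
            pads[(i, j)] = [secrets.randbelow(2) for _ in range(length)]

    announcements = []
    for p in range(n):
        bits = [0] * length
        for (i, j), pad in pads.items():
            if p in (i, j):                       # XOR in every pad p shares
                bits = [b ^ k for b, k in zip(bits, pad)]
        if p == sender:                           # sender also XORs the message
            bits = [b ^ m for b, m in zip(bits, msg_bits)]
        announcements.append(bits)

    # Each pad occurs twice, so it cancels; only the message survives.
    out = [0] * length
    for a in announcements:
        out = [o ^ b for o, b in zip(out, a)]
    return out

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(dc_net_round(4, msg, sender=2, length=8) == msg)  # True
```

The collisions and disrupters mentioned above correspond to two senders XORing messages into the same round, or a participant announcing inconsistent bits.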
This is joint research with Pablo García and Germán Montejano.
Signaling pathways of the cell are causal cascades of protein-protein interactions (involving complex formation and post-translational modifications) that are triggered in response to the reception of some (protein) signal at the surface of the cell. Such pathways are generally classified according to their phenotypic outcome, such as cell growth, death or motion.
In this talk, which requires no deep knowledge of computer science or biology, I will introduce Kappa, a rule-based formalism designed to model protein-protein interaction networks in a scalable fashion. Kappa is equipped with a stochastic simulator (KaSim) that may output time trajectories of particular protein complexes. We will see that by tracking the causal interplay between Kappa rule applications, it becomes possible to "explain" the appearance of particular observables (for instance a gene activation) during a simulation. We will argue that such "explanations" are good candidates for being "formal signaling pathways". It is noteworthy that the standard causal semantics considered in computer science fail to give satisfactory "formal pathways". As a consequence, the "formal pathways" that I will introduce in this talk require a notion of "non-local causality". This new concept, albeit non-standard in computer science, seems to match the intuitive notion of causality in signaling pathways. We will conclude on the general interest of "non-local causality" for the computer science community.
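Kappa rules rewrite site-graphs, which the sketch below does not attempt to capture; it only illustrates, at the level of molecule counts, the kind of stochastic simulation KaSim performs, for an invented binding/unbinding rule pair A + B -> AB and AB -> A + B (rates and initial counts are assumptions for illustration).

```python
import random

def simulate(nA, nB, k_on, k_off, t_end, seed=1):
    """Tiny Gillespie-style simulation of A + B -> AB (rate k_on) and
    AB -> A + B (rate k_off); returns the time trajectory of the AB complex."""
    rng = random.Random(seed)
    A, B, AB = nA, nB, 0
    t, traj = 0.0, [(0.0, 0)]
    while t < t_end:
        a_bind, a_unbind = k_on * A * B, k_off * AB   # rule propensities
        total = a_bind + a_unbind
        if total == 0:
            break
        t += rng.expovariate(total)                   # time to next event
        if rng.random() * total < a_bind:             # pick a rule
            A, B, AB = A - 1, B - 1, AB + 1
        else:
            A, B, AB = A + 1, B + 1, AB - 1
        traj.append((t, AB))
    return traj

traj = simulate(nA=50, nB=50, k_on=0.01, k_off=0.1, t_end=10.0)
```

Each event in such a trace is one rule application; the causal analyses discussed in the talk operate on exactly these event sequences.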
Functional encryption is a generalization of public-key encryption that enables selective and fine-grained control over encrypted data. Though functional encryption is becoming very popular in real world applications and cloud computing, general and meaningful security definitions for this primitive are either insufficient or subject to impossibility results.
In this seminar I will survey the research in this area and present new perspectives. The first part of the talk will concern my paper published at CRYPTO 2013, and in the last part I will present my new results aimed at improving the state of the art.
Indistinguishability properties are essential in formal verification of cryptographic protocols. They are needed to model anonymity properties, strong versions of confidentiality and resistance against offline guessing attacks, which can be conveniently modeled using process equivalences. Currently, we are in the strange situation where nearly all security properties (with the notable exception of authentication) are modeled using such equivalences, while most tools are only able to verify trace properties.
We will discuss the few tools that have the ability to verify such equivalences, and their limitations. Then we will present our approach, which is based on a fully abstract modelling of the traces of a bounded number of sessions of the protocols into first-order Horn clauses, on which a dedicated resolution procedure is used to decide equivalence properties. The procedure has been implemented in a prototype tool A-KiSs (Active Knowledge in Security Protocols) and has been successfully tested on examples. Several of the examples were outside the scope of existing tools, including checking anonymity of an electronic voting protocol.
Digital signatures are schemes for demonstrating the authenticity of a digital message or document. A valid standard signature gives reason to believe that the message was created by a known sender, who cannot deny having sent it. Under stricter anonymity requirements, one can sign using group signatures, ring signatures or attribute-based signatures. The latter is the focus of our talk.
In Attribute-Based Signatures (ABS), messages are signed under an attribute predicate, and users are anonymous within the group of all possible users that possess enough attributes to satisfy that predicate. In other words, attribute-based signatures demonstrate that a message was created by someone who owns enough attributes.
In this talk we introduce the basic notions of digital signatures, and present the advantages and shortcomings of previous signing schemes. We will motivate and describe in detail the concept of attribute-based signatures.
Linguistic Steganography is concerned with hiding information in a natural language text, for the purposes of sending secret messages. A related area is natural language watermarking, in which information is added to a text in order to identify it, for example for the purposes of copyright. Linguistic Steganography algorithms hide information by manipulating properties of the text, for example by replacing some words with their synonyms. Unlike image-based steganography, linguistic steganography is in its infancy with little existing work. In this talk I will motivate the problem, in particular as an interesting application for natural language processing (NLP). Linguistic steganography is a difficult NLP problem because any change to the cover text must retain the meaning and style of the original, in order to prevent detection by an adversary.
I will describe a number of linguistic transformations that we have investigated, including synonym substitution and adjective deletion. For the adjective deletion I will describe a novel secret sharing scheme in which many people receive a copy of the original text, but with different adjectives deleted; only when the various texts are combined together can the secret message be revealed.
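A drastically simplified sketch of synonym substitution follows: one bit is hidden per synonym pair, by choosing which member of the pair appears in the text. The tiny synonym table is an invented assumption, and a real system must additionally verify that each substitution preserves the meaning and style of the cover text.

```python
# Each pair encodes bit 0 (first word) or bit 1 (second word).
PAIRS = [("big", "large"), ("quick", "fast"), ("start", "begin")]
ENCODE = {w: (i, b) for i, pair in enumerate(PAIRS) for b, w in enumerate(pair)}

def embed(cover_words, bits):
    """Replace each synonym-slot word with the member encoding the next bit.
    Sketch only: assumes the cover has enough slots and that the receiver
    knows the message length."""
    out, it = [], iter(bits)
    for w in cover_words:
        if w in ENCODE:
            bit = next(it, None)
            out.append(PAIRS[ENCODE[w][0]][bit] if bit is not None else w)
        else:
            out.append(w)
    return out

def extract(stego_words, n_bits):
    # Read the bit encoded by each synonym-slot word, in order.
    return [ENCODE[w][1] for w in stego_words if w in ENCODE][:n_bits]

cover = "the big dog made a quick start".split()
bits = [1, 0, 1]
stego = embed(cover, bits)
print(" ".join(stego))                  # the large dog made a quick begin
print(extract(stego, 3) == bits)        # True
```

An adversary who notices the unnatural word choices breaks the scheme, which is exactly why the NLP problem of assessing contextual fit is central here.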
Joint work with Ching-Yun (Frannie) Chang.
This talk describes how to apply the MarkPledge voter-verifiable (human-verifiable) vote encryption technique to complex ballots, e.g. ballots with a large number of candidates and preferential voting ballots.
The presentation will be divided into two parts. The first part will focus on a new partial vote receipt approach that aims to minimize the usability issues of a direct MarkPledge approach to complex ballots. The second part of the presentation will focus on a new way to implement MarkPledge that is more flexible and efficient, and thus better suited for complex ballots.
Nowadays web application and browser security are hot topics. Many believe a simple Cross-Site Scripting vulnerability might be more dangerous than a local privilege escalation. In this seminar we'll focus on XSS post-exploitation and, more generally, on hooking victim browsers via social engineering in order to carry out a variety of nasty attacks.
In order to achieve this we'll use BeEF, the Browser Exploitation Framework, covering various modules and extensions that help to achieve persistence, trick victims into performing unwanted actions, and launch exploits and additional attacks, sometimes against fully patched browsers.
Among others, we'll cover social engineering attacks, Man-in-the-Browser and tunneling proxies, together with various exploitation scenarios on the intranet of the hooked browser.
Electric vehicles are promising, futuristic automobiles which use electric batteries for clean energy. Because today's energy storage devices cannot meet the requirements of long-distance driving, electric vehicles need to visit charging stations frequently. As a result, the location privacy disclosed through the charging process has drawn particular attention.
In the talk, we will describe a state-of-the-art protocol for this purpose, and we will also present our ongoing work on its formal verification. The protocol has four sub-routines: registration, charging, top-up and statement. In each sub-routine, a separate protocol and zero-knowledge proof are used. Due to its high complexity, we decided to treat the zero-knowledge proofs as black boxes in the first place and to reason about each sub-routine symbolically. After that, we shall verify the zero-knowledge proofs at the computational level and give a complete proof of the whole system. So far, the symbolic models have been built and formally verified; the verification at the computational level is partially finished.
Probabilistic programs are pivotal for cryptography, privacy and quantum computing; they are typically only a few lines long, yet hard to understand and analyse. The main challenge for their automated analysis is the infinite domain of the program variables combined with the occurrence of parameters. Here, parameters can range over concrete probabilities, but can also capture various aspects of the program, such as the number of participants, the size of certain variables, etc.
McIver and Morgan have extended the seminal proof method of Floyd and Hoare for sequential programs to probabilistic programs by making the program annotations real- rather than Boolean-valued expressions in the program variables. As in traditional program analysis, the crucial annotations are the loop invariants. The problem of synthesizing loop invariants is very challenging for probabilistic programs, where real-valued, quantitative invariants are needed.
In this talk, we present a novel approach to synthesize loop invariants for the class of linear probabilistic programs. In order to provide an operational interpretation to the quantitative annotations, we give a connection between McIver and Morgan's wp-semantics of probabilistic programs and parameterized Markov decision processes.
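As a minimal illustration of such a quantitative annotation (not of the synthesis technique presented in the talk), consider the linear probabilistic program below. In the wp-style of McIver and Morgan, the expected final value of x, seen from any loop entry state, is the real-valued expression x + p/(1-p); the sketch checks this empirically at the initial state x = 0.

```python
import random

def run(p, rng):
    """x := 0; while (prob p) { x := x + 1 }  -- a linear probabilistic program."""
    x = 0
    while rng.random() < p:
        x += 1
    return x

# Candidate quantitative invariant: expected final x equals  x + p/(1-p).
# At the initial state x = 0 and p = 0.5, this predicts an expectation of 1.
p, rng, n = 0.5, random.Random(0), 200_000
estimate = sum(run(p, rng) for _ in range(n)) / n
print(abs(estimate - p / (1 - p)) < 0.05)  # True
```

The synthesis problem the talk addresses is to find such expressions automatically, rather than to check a guessed one by simulation.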
The success of a security attack crucially depends on time: the more time available to the attacker, the higher the probability of a successful attack; when given enough time, any system can be compromised. Insight in time-dependent behaviors of attacks and the evolution of the attacker's success as time progresses is therefore a key for effective countermeasures in securing systems.
In this talk we present an efficient technique to analyze attack times for an extension of the prominent formalism of attack trees. If each basic attack step, i.e., each leaf in an attack tree, is annotated with a probability distribution of the time needed for this step to be successful, we show how this information can be propagated to an analysis of the entire tree. In this way, we obtain the probability distribution for the entire system to be attacked successfully as time progresses. For our approach to be effective, we take great care to always work with the best possible compression of the representations of the probability distributions arising. This is achieved by an elegant calculus of acyclic phase type distributions, together with an effective compositional compression technique.
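The talk computes these distributions exactly via a calculus of acyclic phase-type distributions; the Monte Carlo sketch below only illustrates the intended semantics of the propagation, with exponentially distributed leaf durations and AND/OR combined as max/min of child completion times. The tree shape and rates are invented for illustration.

```python
import random

def sample_time(node, rng):
    """Sample one time-to-success for an attack tree node.
    Nodes: ("leaf", rate) | ("and", child, ...) | ("or", child, ...)."""
    kind = node[0]
    if kind == "leaf":
        return rng.expovariate(node[1])          # exponential step duration
    times = [sample_time(c, rng) for c in node[1:]]
    return max(times) if kind == "and" else min(times)

# AND(gain access, escalate), where access = OR(phish, exploit).
tree = ("and",
        ("or", ("leaf", 0.5), ("leaf", 0.2)),
        ("leaf", 1.0))

rng, n, horizon = random.Random(42), 100_000, 5.0
p = sum(sample_time(tree, rng) <= horizon for _ in range(n)) / n
print(round(p, 2))  # estimated probability the attack succeeds within 5 time units
```

For this tree the exact value is (1 - e^{-0.7*5})(1 - e^{-5}) ≈ 0.96, since the OR of two independent exponentials is exponential with the summed rate; keeping such closed forms compact is exactly what the compression technique in the talk achieves.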
Security protocols use cryptographic techniques to achieve some important goals in computer networks. These goals include, for example, authentication (confirming the identity of the communicating parties), the distribution of new session keys, and ensuring the confidentiality of messages. Verification methods, including the verification of security properties of protocols, have been intensively studied for many years.
In this talk, we will present a new concept for the mathematical modeling and formal verification of security protocols. We will start with the basic mathematical definitions of our model of protocol executions. Next, we will define correct sequences of protocol executions. Then, using the simple idea of chains of states, we will express the basic concepts, including the knowledge of users and the dependencies between steps of the protocol and other simultaneous executions. In particular, the feasible executions of the protocol will be modeled as a subtree of the tree of all executions of the protocol. Finally, we will show how to search the tree to find a possible attack or to prove the correctness of the protocol. Suitable restrictions on the structure of the tree ensure very short verification times in all the examples considered and verified so far.
We will use the NSPK Protocol as a working example on which we explain the model and the algorithm. We will also give a glimpse of our experimental results for several protocols, and compare them with results obtained by the leading verification tools.
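A minimal sketch of this kind of tree search follows, with an invented three-rule "protocol" standing in for NSPK (the rules, names and knowledge representation are illustrative assumptions only): nodes of the execution tree are attacker-knowledge states, branches are enabled protocol steps, and the search looks for a node where the attacker learns the secret.

```python
def search(knowledge, steps_left, rules):
    """Depth-first search of the bounded tree of executions.
    Each rule is (precondition_set, derived_fact)."""
    if "secret" in knowledge:
        return True                       # attack state found at this node
    if steps_left == 0:
        return False                      # depth bound reached
    for pre, post in rules:               # branch over all enabled steps
        if pre <= knowledge:
            if search(knowledge | {post}, steps_left - 1, rules):
                return True
    return False

rules = [
    (set(), "ciphertext"),                # attacker observes the ciphertext
    ({"ciphertext"}, "key"),              # a flawed step reveals the key
    ({"ciphertext", "key"}, "secret"),    # decrypt
]
print(search(set(), 3, rules))  # True: an attack exists within 3 steps
print(search(set(), 2, rules))  # False: no attack in 2 steps
```

Restricting the tree's structure, as described above, corresponds here to pruning branches that cannot extend the attacker's knowledge.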
Our energy infrastructure is changing. The amount of green energy is increasing, and consumers are becoming prosumers. The classic infrastructure, with one power plant that provides power to all consumers, is slowly transforming into a grid of consumers, prosumers, distributed energy suppliers, and power plants. The original hierarchical structure is being changed into a distributed one. All these changes have an impact on the infrastructure: the traditional grid is turning into a so-called 'smart grid', which means that more ICT is used to measure and control the energy infrastructure.
These changes to the energy infrastructure have the potential to increase the efficiency of the infrastructure, and enable the use of more green energy. However, they also introduce new risks. In my presentation, I'll clarify the changes made in the infrastructure, and I'll discuss some of the arising security questions.
An electronic auction protocol will only be used by those who trust that it operates correctly. Therefore, e-auction protocols must be verifiable: seller, buyer and losing bidders must all be able to determine that the result was correct. We posit that the importance of verifiability for e-auctions necessitates a formal analysis. Consequently, in the first part of the talk, we identify notions of verifiability for each stakeholder. We formalize these and then use the developed framework to study the verifiability of several examples. We provide an analysis of the protocol by Sako in the applied pi-calculus with the help of ProVerif, finding it to be correct. Additionally, we identify issues with the protocols due to Curtis et al. and Brandt.
In the second part, we will analyze the protocol by Brandt in more detail. We first show that this protocol - when using malleable interactive zero-knowledge proofs - is vulnerable to attacks by dishonest bidders. Such bidders can manipulate the publicly available data in a way that allows the seller to deduce all participants' bids. Additionally, we discuss attacks on non-repudiation, fairness and the privacy of individual bidders that exploit authentication problems.
Discrete linear dynamical systems permeate a vast number of areas of mathematics and computer science, and also have many applications in other fields such as economics and theoretical biology. In the context of computer science, discrete linear dynamical systems arise among others in connection with automated verification (termination of linear programs, probabilistic systems, stochastic logics), quantum computing, formal language theory, etc.
In this talk I will introduce discrete linear dynamical systems (absolutely no prior knowledge will be assumed) and present some fundamental decision problems, such as reachability and positivity.
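As a minimal, self-contained illustration (emphatically not the decision procedures of the talk, which are nontrivial), the sketch below iterates a discrete linear dynamical system x_{k+1} = A x_k and inspects a bounded prefix of the orbit of one coordinate; positivity asks whether that sequence stays positive forever, reachability whether it ever hits a target value. The rotation matrix is an invented example.

```python
def orbit(A, x0, steps):
    """Return the first `steps` values of the first coordinate of A^k x0."""
    x, seq = list(x0), []
    for _ in range(steps):
        seq.append(x[0])                  # observe the first coordinate
        x = [sum(a * v for a, v in zip(row, x)) for row in A]
    return seq

A = [[0.0, -1.0],                         # 90-degree rotation: eigenvalues +-i
     [1.0, 0.0]]
seq = orbit(A, [1.0, 0.0], 8)
print(seq)           # [1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0]
print(min(seq) < 0)  # True: this orbit is not positive
```

Bounded prefixes are easy to check, as here; the open problems concern deciding such properties over the infinite orbit, where the difficulty is tied to deep questions in number theory.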
This is joint work with James Worrell and Matt Daws.
The recent advent of ubiquitous technologies has raised an important concern for citizens: the need to protect their privacy. So far, this wish has been heard not by industry but by national and international regulatory authorities; for instance, the European Commission recently published guidelines to enforce customers' privacy in RFID systems, and "privacy by design" is the way to be followed as stated in the EC Recommendation. Research on privacy is an active domain, but there is still a wide gap between theory and everyday applications. Filling this gap will require academia to design protocols and algorithms that fit real-life constraints. These requirements are 'sine qua non' conditions for deploying privacy-friendly authentication protocols in real-life applications. It is a popular challenge to design authentication protocols that are both privacy-friendly and usable in real life, and a significant portion of the RFID literature is dedicated to that goal, with many inventive mechanisms proposed to achieve it.
In this presentation, I will first reintroduce the security and privacy aspects of RFID applications and the trade-off between security and scalability. I will present our implementation of a scalable and forward-private authentication protocol, using a variant by Avoine et al. that adapts it to time-memory trade-offs. Then I will summarize some of our publications on RFID protocols and distance bounding protocols, with their application scenarios.
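For intuition, here is a sketch of the kind of hash-chain identification scheme such forward-private protocols build on (in the style of the classic OSK construction): the tag answers G(s) and immediately overwrites its state with H(s), so compromising the tag later reveals nothing about past sessions. The names and the naive reader-side lookup are illustrative; the point of the time-memory trade-off variant is precisely to replace this exhaustive search with something scalable.

```python
import hashlib

H = lambda s: hashlib.sha256(b"update" + s).digest()   # state-update hash
G = lambda s: hashlib.sha256(b"answer" + s).digest()   # answer hash

class Tag:
    def __init__(self, secret):
        self.s = secret
    def respond(self):
        out = G(self.s)
        self.s = H(self.s)       # old state destroyed: forward privacy
        return out

def identify(answer, db, max_steps=100):
    """Naive O(n * max_steps) lookup over all tags' hash chains."""
    for tag_id, s in db.items():
        for _ in range(max_steps):
            if G(s) == answer:
                return tag_id
            s = H(s)
    return None

db = {"tag-1": b"k1", "tag-2": b"k2"}    # initial tag secrets (illustrative)
t = Tag(b"k2")
for _ in range(3):                       # tag queried a few times
    last = t.respond()
print(identify(last, db))                # tag-2
```

The scalability problem is visible in `identify`: the reader's work grows with both the number of tags and the chain length, which is what the time-memory trade-off attacks head-on.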