Publications
2024
Event
Abstract
Nix and similar systems are based on hashing their inputs. I took a really close look at how this works, and I’d like to help you understand it better as well.
It’s quite difficult to really understand how the hashing of build inputs works in Nix. I think one of the reasons for this is that the way people talk about it, and the terminology they use (e.g. content-addressing vs input-addressing), is not about how build inputs are hashed, but about how contents of the Nix store are addressed. The way Nix is designed, those things are related, but how they are related is complicated, and I don’t think we have good terminology for the bits and pieces involved in the hashing part.
So, … I took some definitions from the Build Systems à la Carte paper, which explain this, added a few definitions of my own, so we can name and talk about the bits and pieces involved, and I’m going to walk you through that. If that does not sound like fun to you, think of it more as us delving into the abyss of terminology and pedantry together. I am pretty sure there are quite a few people at NixCon who find that appealing.
If you get stuck in said abyss, come talk to me. I am happy to spend NixCon just explaining this over and over again.
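As a toy illustration of the distinction the talk circles around (this is not Nix’s actual store-path algorithm, and the function names are made up): content-addressing hashes the produced output, while input-addressing hashes the build recipe before anything is built.

```python
import hashlib

def content_address(output_bytes: bytes) -> str:
    # Content-addressing: the address is derived from the *output* itself,
    # so it can only be known after the build has run.
    return hashlib.sha256(output_bytes).hexdigest()[:16]

def input_address(builder: str, args: list[str], input_hashes: list[str]) -> str:
    # Input-addressing: the address is derived from the build *recipe*
    # (builder, arguments, and the hashes of all inputs), before building.
    recipe = "|".join([builder, *args, *sorted(input_hashes)])
    return hashlib.sha256(recipe.encode()).hexdigest()[:16]

# Two recipes that happen to produce byte-identical outputs get the same
# content address but different input addresses.
lib = input_address("gcc", ["-O2"], [])
app_a = input_address("make", ["app"], [lib])
app_b = input_address("make", ["app", "-j4"], [lib])
assert app_a != app_b
assert content_address(b"identical binary") == content_address(b"identical binary")
```

Note how the input addresses of `app_a` and `app_b` transitively depend on the hash of `lib`: that recursive dependency is what makes the relationship between hashing and addressing complicated.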
Event
Abstract
The key principles Nix is built on are great for supply chain security. Those principles could take us much further if we extended or replaced the signatures that provide transport security for binary caches today with a more powerful mechanism. A mechanism that works end to end from builder to user, includes provenance data about the builder, and ideally makes that provenance data verifiable.
Adopting Trustix, or extending the existing signing scheme are both possible ways to add builder provenance data, but comparing those options is not the focus of my talk. Instead I would like to focus on the kind of data that we might want to add, and the benefit we would obtain. This starts simple with a boolean flag, which lets signers claim to have built a derivation themselves, all the way up to a source link and remote attestation, which make it possible to verify which software is running on the builder.
Event
Abstract
Trusting the output of a build process requires trusting the build process itself, and the build process of all inputs to that process, and so on. Cloud build systems, like Nix or Bazel, allow their users to precisely specify the build steps making up the intended software supply chain, build the desired outputs as specified, and on this basis delegate build steps to other builders or fill shared caches with their outputs. Delegating build steps or consuming artifacts from shared caches, however, requires trusting the executing builders, which makes cloud build systems better suited for centrally managed deployments than for use across distributed ecosystems. We propose two key extensions to make cloud build systems better suited for use in distributed ecosystems. Our approach attaches metadata to the existing cryptographically secured data structures and protocols, which already link build inputs and outputs for the purpose of caching. Firstly, we include builder provenance data, recording which builder executed the build, its software stack, and a remote attestation, making this information verifiable. Secondly, we include a record of how the builder resolved each dependency. Together, these two measures eliminate transitive trust in software dependencies, by enabling users to perform verification of transitive dependencies independently, and against their own criteria, at time of use. Finally, we explain how our proposed extensions could theoretically be implemented in Nix in the future.
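To make the two proposed extensions concrete, here is a hypothetical sketch of a provenance record attached to a cached build output. All field names are illustrative assumptions, not an actual Nix or Bazel format.

```python
import hashlib
import json

# Hypothetical shape of builder-provenance metadata attached to a cached
# build output. Field names are illustrative, not a real cache format.
def provenance_record(output_hash: str, builder_id: str,
                      builder_stack_hash: str, attestation: str,
                      resolved_deps: dict) -> dict:
    record = {
        "output": output_hash,                # links the record to the cache entry
        "built_by": builder_id,               # claim: "I built this myself"
        "builder_stack": builder_stack_hash,  # hash of the builder's software stack
        "attestation": attestation,           # remote-attestation quote (opaque here)
        "resolved": resolved_deps,            # how each dependency was resolved
    }
    # A deterministic identifier over the record, so a user verifying a
    # transitive dependency can reference it unambiguously.
    record["id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

rec = provenance_record("out123", "builder.example", "stackabc",
                        "quote...", {"openssl": "out456"})
```

A user could then walk such records transitively and check each one against their own criteria (acceptable builders, required attestations) at time of use.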
Event
Abstract
Android’s fast-paced development cycles and the large number of devices from different manufacturers do not allow for an easy comparison between different devices’ security and privacy postures. Manufacturers each adapt and update their respective firmware images. Furthermore, images published on OEM websites do not necessarily match those installed in the field. Relevant software security and privacy aspects do not remain static after initial device release, but need to be measured on live devices that receive these updates. There are various potential sources for collecting such attributes, including web scraping, crowdsourcing, and dedicated device farms. However, raw data alone is not helpful in making meaningful decisions on device security and privacy. We make available a website for accessing the collected data. Our implementation focuses on reproducible requests and supports filtering by OEMs, devices, device models, and displayed attributes. To improve usability, we further propose a security score based on the list of attributes. Based on input from Android experts, including a focus group and eight individuals, we have created a method that derives attribute weights from the importance of attributes for mitigating threats on the Android platform. We derive weightings for general use cases and suggest possible examples for more specialist weightings for groups of confidentiality/privacy-sensitive users and integrity-sensitive users. Since there is no one-size-fits-all setting for Android devices, our website provides the possibility to adapt all parameters of the calculated security score to individual needs.
Abstract
This thesis explores the potential of decentralized technologies for enhancing privacy and operational efficiency within biometric authentication systems. The widespread use of centralized biometric systems is associated with significant risks, such as data breaches and privacy violations, highlighted by vulnerabilities in systems like India’s Aadhaar. This thesis promotes a shift towards decentralized frameworks that allow users to control where their personal data is stored, aiming to reduce the risk of large-scale unauthorized access.
This research aims to enhance biometric systems for embedded devices through a holistic approach that progresses systematically from individual data elements, specifically embeddings, to complete application scenarios utilizing state-of-the-art technologies. The study begins by reducing the embedding size by 96 %, substantially increasing the processing efficiency of personal identifiers. Subsequently, the focus shifts to optimizing the most time-intensive component of the sensor by incorporating multiple face detection models that enhance specific operational efficiencies. Furthermore, developing a domain-specific sensor language allows for a precise definition of performance standards across various applications, facilitating a tailored and fully realized implementation that meets real-world requirements.
Testing a real-world prototype with cameras that incorporate the suggested improvements validates the effectiveness of decentralized biometric systems. This research demonstrates practical, efficient, and decentralized methods for authentication, making a significant contribution to the field and setting the stage for future studies in secure digital solutions focused on privacy.
Event
Abstract
With biometric identification systems becoming increasingly ubiquitous, their complexity is escalating due to the integration of diverse sensors and modalities, aimed at minimizing error rates. The current paradigm for these systems involves hard-coded aggregation instructions, presenting challenges in system maintenance, scalability, and adaptability. These challenges become particularly prominent when deploying new sensors or adjusting security levels to respond to evolving threat models.
To address these concerns, this research introduces BioDSSL, a Domain Specific Sensor Language to simplify the integration and dynamic adjustment of security levels in biometric identification systems. Designed to address the increasing complexity due to diverse sensors and modalities, BioDSSL promotes system maintainability and resilience while ensuring a balance between usability and security for specific scenarios.
Furthermore, it facilitates decentralization of biometric identification systems, by improving interoperability and abstraction. Decentralization inherently disperses the concentration of sensitive biometric data across various nodes, which could indirectly enhance privacy protection and limit the potential damage from localized security breaches. Therefore, BioDSSL is not just a technical improvement, but a step towards decentralized, resilient, and more secure biometric identification systems. This approach holds the promise of indirectly improving privacy while enhancing the reliability and adaptability of these systems amidst evolving threat landscapes and technological advancements.
Abstract
Trustworthiness assessment is an essential step to assure that interdependent systems perform critical functions as anticipated, even under adverse conditions. In this paper, a holistic trustworthiness assessment framework for ultra-wideband self-localization is proposed, including the attributes of reliability, security, privacy, and resilience. Our goal is to provide guidance for evaluating a system’s trustworthiness based on objective evidence, i.e., so-called trustworthiness indicators. These indicators are carefully selected through the threat analysis of the particular system under evaluation. Our approach guarantees that the resulting trustworthiness indicators correspond to chosen real-world threats. Moreover, experimental evaluations are conducted to demonstrate the effectiveness of the proposed method. While the framework is tailored for this specific use case, the process itself serves as a versatile template, which can be used in other applications in the domains of the Internet of Things or cyber–physical systems.
Abstract
This master thesis explores the feasibility and security aspects of implementing a digital identity wallet on Android smartphones. With the increasing prominence of digital wallets in various domains, the security of these wallets, which store and manage sensitive data, is of paramount importance. The research project Digidow aims to develop decentralized digital identity systems for the physical world, with the digital wallet being a crucial component of this system. This thesis assesses the current state of protection capabilities on Android smartphones and aims to define a pathway for implementing a secure digital identity wallet. The research involves redefining the requirements for and threats to a digital identity wallet, analyzing best practice advice and theoretical capabilities, dissecting actual wallets to understand their implementation, and refining the list of theoretical capabilities based on the evaluation. The findings of this research could potentially contribute to the development of more secure digital identity wallets and enhance the overall security of digital identification systems.
Event
Abstract
Being the victim of DDoS attacks is an experience shared by many Tor relay operators. Despite the prevalence of this type of attack, the experiences and lessons learned after such attacks are rarely discussed publicly. This work provides a detailed description of a DDoS attack against two Tor relays operated by the authors. By sharing experiences on how an attack was analyzed after it happened and what mitigation mechanisms would have been capable of stopping it, this work tries to support a discussion on guidelines for relay operators on how to properly and securely run their relays. In addition to that, the included attack analysis tries to understand why the attack took place in the first place, what the attackers were trying to achieve, the amount of resources they had to expend, and how the attack actually worked. Hopefully, this information will be useful in future discussions on how to make the Tor network as a whole more resilient against this kind of attack.
Event
Abstract
Conventional embeddings employed in facial verification systems typically consist of hundreds of floating-point numbers, a widely accepted design paradigm that primarily stems from the swift computation of vector distance metrics for identification and authentication, such as the L2 norm. However, the utility of such high-dimensional embeddings can become a potential concern when they are integrated into complex comparative strategies, for example multi-party computations. In this study, we challenge the presumption that larger embedding sizes are always superior and provide a comprehensive analysis of the effects and implications of substantially reducing the dimensions of these embeddings (by a factor of 29). We demonstrate that this dramatic size reduction incurs only a minimal compromise in the quality-performance trade-off. This discovery could lead to enhancements in computation efficiency without sacrificing system performance, potentially opening avenues for more sophisticated and decentralized uses of facial verification technology. To enable other researchers to validate and build upon our findings, the Rust code used in this paper has been made publicly accessible and can be found at https://github.com/mobilesec/reduced-embeddings-analysis-icprs.
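The cost being reduced is the per-comparison L2 distance named in the abstract. A minimal sketch (the dimensions are illustrative stand-ins, and the slicing is a placeholder for a real learned reduction, not the paper’s method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a conventional 512-float embedding vs. one
# reduced by roughly a factor of 29 (512 / 18 ~= 28.4).
full = rng.normal(size=(2, 512))
reduced = full[:, :18]  # stand-in for an actual dimensionality reduction

def l2(a: np.ndarray, b: np.ndarray) -> float:
    # L2 norm of the difference: the distance metric used for
    # identification and authentication.
    return float(np.linalg.norm(a - b))

d_full = l2(full[0], full[1])
d_reduced = l2(reduced[0], reduced[1])
# Each reduced comparison touches ~29x fewer floats, which matters when a
# distance is evaluated inside an expensive multi-party computation.
```

The point of the paper is that, with a properly learned reduction, the verification quality lost by the smaller vectors is minimal.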
Abstract
Anonymous credential systems allow users to obtain a credential on multiple attributes from an organization and then present it to verifiers in a way that no information beyond the attributes required to be shown is revealed. Moreover, multiple uses of the credential cannot be linked. Thus, they represent an attractive tool for realizing fine-grained privacy-friendly authentication and access control. In order to avoid a single point of trust and failure, decentralized AC systems have been proposed. They eliminate the need for a trusted credential issuer, e.g., by relying on a set of credential issuers that issue credentials in a threshold manner (e.g., t out of n). In this paper, we present a novel AC system with such a threshold issuance that additionally provides credential delegation. It represents the first decentralized and delegatable AC system. We provide a rigorous formal framework for such threshold delegatable anonymous credentials (TDACs). Our concrete approach departs from previous delegatable ACs and is inspired by the concept of functional credentials. More precisely, we propose a threshold delegatable subset predicate encryption (TDSPE) scheme, use TDSPE to construct a TDAC scheme, and present a comparison with previous work and performance benchmarks based on a prototype implementation.
Abstract
The Digidow project aims to research solutions for privacy-preserving decentralized digital identity authentication in the real world. The project uses a Personal Identity Agent (PIA) to manage the user’s identity and credentials, and sensors to determine the user’s movement and intentions. These sensors register real-world events and send the data to the PIA, which then processes the data and sends it to a verifier. There was an existing implementation of a facial recognition sensor, a generic sensor library that can be used to implement other sensors, and somewhat working UWB anchors. This report describes the implementation of a UWB sensor that detects the door a user is standing in front of, and which integrates seamlessly with the existing components.
Abstract
An emerging supply-chain attack based on a backdoor in XZ Utils has been identified. The backdoor allows an attacker to run commands remotely on vulnerable servers utilizing SSH, without prior authentication. We have started to collect available information regarding this attack to discuss current mitigation strategies for such kinds of supply-chain attacks. This paper introduces the critical attack path of the XZ backdoor and provides an overview of potential mitigation techniques related to the relevant stages of the attack path.
Abstract
Identifying people digitally and securely is already important today and will become increasingly important in the future. For this reason, the JKU launched the Digidow project. Digidow tries to use a distributed system to link digital identities and the people associated with them. To this end, everyone is expected to manage their own identity on their own devices and interact with sensors, which are also distributed. However, such a highly distributed system requires participants to know or discover each other. This motivates the idea of a sensor directory which is used to find and identify sensors. In this thesis, some core requirements are established, which are then extended with additional requirements after the threats to the system are analyzed. Once the requirements are clear, several technologies and their components that could solve parts of the sensor directory are analyzed, and it is shown how those technologies might be used to implement it. Finally, those technologies are compared to each other, and a group of technologies that could be used to implement the sensor directory is presented.
Abstract
Android is the most widely deployed end-user focused operating system. With its growing set of use cases encompassing communication, navigation, media consumption, entertainment, finance, health, and access to sensors, actuators, cameras, or microphones, its underlying security model needs to address a host of practical threats in a wide variety of scenarios while being useful to non-security experts. To support this flexibility, Android’s security model must strike a difficult balance between security, privacy, and usability for end users; provide assurances for app developers; and maintain system performance under tight hardware constraints. This paper aims to both document the assumed threat model and discuss its implications, with a focus on the ecosystem context in which Android exists. We analyze how different security measures in past and current Android implementations work together to mitigate these threats, and, where there are special cases in applying the security model in practice, we discuss these deliberate deviations and examine their impact.
Previous version
This work presents a major revision of the article “The Android Platform Security Model” originally published in ACM Transactions on Privacy and Security, Volume 24, Issue 3, Article No. 19, 2021, pp. 1-35, https://doi.org/10.1145/3448609.
2023
Event
Abstract
Sometimes entities have to prove to others that they are still alive at a certain point in time, but with the added requirements of anonymity and plausible deniability; examples for this are whistleblowers or persons in dangerous situations. We propose a system to achieve this via hash chains and publishing liveness signals on Tor onion services. Even if one participant is discovered and (made to) cooperate, others still enjoy plausible deniability. To support arbitrary numbers of provers on a potentially limited list of online storage services, an additional “key” distinguishes multiple provers. This key should neither be static nor predictable to third parties, and should provide forward secrecy. We propose both a derivation from user-memorable passwords and an initial pairing step to transfer unique key material between prover and verifier. In addition to describing the protocol, we provide an open-source app implementation and evaluate its performance.
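The hash-chain idea from the abstract can be sketched in a few lines. This is a simplified illustration, not the actual protocol: it omits the per-prover key, the Tor publication step, and the pairing mechanism.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Prover: build a hash chain from a secret seed. The *last* element is
# handed to the verifier as an anchor; earlier elements are revealed one
# per time period, newest-first, as liveness signals.
def make_chain(seed: bytes, length: int) -> list:
    chain = [h(seed)]
    for _ in range(length - 1):
        chain.append(h(chain[-1]))
    return chain

chain = make_chain(b"user-memorable password", 10)
anchor = chain[-1]

# Verifier: a signal claimed to be i periods before the anchor checks out
# if hashing it i times reproduces the anchor. Nobody who only holds the
# anchor can forge a *future* (earlier-in-chain) signal: that would
# require inverting the hash.
def verify(signal: bytes, periods_back: int, anchor: bytes) -> bool:
    x = signal
    for _ in range(periods_back):
        x = h(x)
    return x == anchor

assert verify(chain[-3], 2, anchor)   # a valid signal from two periods ago
assert not verify(b"forged", 2, anchor)
```

Each revealed element proves the prover still holds the seed (or an earlier chain element), which is exactly the liveness claim being published.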
Event
Abstract
While real-time face recognition has become increasingly popular, its use in decentralized systems and on embedded hardware presents numerous challenges. One challenge is the trade-off between accuracy and inference-time on constrained hardware resources. While achieving higher accuracy is desirable, it comes at the cost of longer inference-time. We first conduct a comparative study on the effect of using different face recognition distance functions and introduce a novel inference-time/accuracy plot to facilitate the comparison of different face recognition models. Every application must strike a balance between inference-time and accuracy, depending on its focus. To achieve optimal performance across the spectrum, we propose a combination of multiple models with distinct characteristics. This allows the system to address the weaknesses of individual models and to optimize performance based on the specific needs of the application.
We demonstrate the practicality of our proposed approach by utilizing two face detection models positioned at either end of the inference-time/accuracy spectrum to develop a multimodel face recognition pipeline. By integrating these models on an embedded device, we are able to achieve superior overall accuracy, reliability, and speed; improving the trade-off between inference-time and accuracy by striking an optimal balance between the performance of the two models, with the more accurate model being utilized when necessary and the faster model being employed for generating fast proposals. The proposed pipeline can be used as a guideline for developing real-time face recognition systems on embedded devices.
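The control flow of such a multimodel pipeline can be sketched as a confidence-gated cascade. The model stubs and the threshold value below are assumptions for illustration, not the models or parameters from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Detection:
    box: tuple
    confidence: float

def cascade(frame,
            fast_model: Callable,
            accurate_model: Callable,
            confidence_threshold: float = 0.8) -> Optional[Detection]:
    """Run the fast model first; fall back to the slower, more accurate
    model only when the fast proposal is missing or not confident enough.
    (Illustrative control flow only; threshold is a made-up value.)"""
    proposal = fast_model(frame)
    if proposal is not None and proposal.confidence >= confidence_threshold:
        return proposal  # fast path: good enough, skip the expensive model
    return accurate_model(frame)

# Stubs standing in for real face detectors on an embedded device.
fast = lambda f: Detection(box=(0, 0, 10, 10), confidence=0.6)
accurate = lambda f: Detection(box=(1, 1, 9, 9), confidence=0.95)

result = cascade(frame=None, fast_model=fast, accurate_model=accurate)
```

Because most frames are resolved by the fast model, average inference time stays low, while the accurate model catches the hard cases; that is the improved trade-off the abstract describes.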
Event
Abstract
Anonymous credentials (AC) offer privacy in user-centric identity management. They enable users to authenticate anonymously, revealing only necessary attributes. With the rise of decentralized systems like self-sovereign identity, the demand for efficient AC systems in a decentralized setting has grown. Conventional AC systems, however, require users to present independent credentials when obtaining them from different issuers, leading to increased complexity. AC systems should ideally be multi-authority, allowing the efficient presentation of multiple credentials from various issuers. Another vital property is issuer hiding, ensuring that the issuer’s identity remains concealed, revealing only compliance with the verifier’s policy. This prevents unique identification based on the sole combination of credential issuers. To date, there exists no AC scheme satisfying both properties simultaneously.
This paper introduces Issuer-Hiding Multi-Authority Anonymous Credentials (IhMA), utilizing two novel signature primitives: Aggregate Signatures with Randomizable Tags and Public Keys and Aggregate Mercurial Signatures. We provide two constructions of IhMA with different trade-offs based on these primitives and believe that they will have applications beyond IhMA. Besides defining the notations and rigorous security definitions for our primitives, we provide provably secure and efficient constructions, and present benchmarks to showcase practical efficiency.
Event
Abstract
Current mobile app distribution systems use (asymmetric) digital signatures to ensure integrity and authenticity for their apps. However, there are realistic threat models under which these signatures cannot be fully trusted. One example is an inadvertently leaked signing key that allows an attacker to distribute malicious updates to an existing app; other examples are intentional key sharing as well as insider attacks. Recent app store policy changes like Google Play Signing (and other similar OEM and free app stores like F-Droid) are a practically relevant case of intentional key sharing: such distribution systems take over key handling and create app signatures themselves, breaking up the previous end-to-end verifiable trust from developer to end-user device. This paper addresses these threats by proposing a system design that incorporates transparency logs and end-to-end verification in mobile app distribution systems to make unauthorized distribution attempts transparent and thus detectable. We analyzed the relevant security considerations with regard to our threat model as well as the security implications in the case where an attacker is able to compromise our proposed system. Finally, we implemented an open-source prototype extending F-Droid, which demonstrates the practicability, feasibility, and performance of our proposed system.
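The core of the transparency-log approach can be illustrated with a minimal append-only log of app releases. This sketch is only the general technique; the proposed F-Droid extension uses Merkle trees so that inclusion and consistency can be proven efficiently, which the linear structure below does not capture.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Minimal append-only transparency log: each entry records a release
# (package id + APK hash); the head commits to every entry so far, so
# the log operator cannot silently rewrite history.
class TransparencyLog:
    def __init__(self):
        self.entries = []
        self.head = h(b"empty-log")

    def append(self, package: str, apk_hash: str) -> None:
        entry = f"{package}:{apk_hash}".encode()
        self.entries.append(entry)
        self.head = h(self.head + h(entry))  # hash chain over all entries

    def contains(self, package: str, apk_hash: str) -> bool:
        return f"{package}:{apk_hash}".encode() in self.entries

log = TransparencyLog()
log.append("org.example.app", "sha256-aaa")
# A device receiving an update can require that the release appear in the
# log; a malicious release signed with a stolen or shared key would have
# to be logged publicly to be accepted, making the attack detectable.
assert log.contains("org.example.app", "sha256-aaa")
assert not log.contains("org.example.app", "sha256-evil")
```

The security argument is detectability rather than prevention: unauthorized distribution attempts become visible to the developer and to auditors monitoring the log.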
Abstract
Users or devices regularly need to demonstrate who they are on the Internet to enable decisions like whether they can access a certain resource such as a service. This often involves dedicated issuing authorities or identity providers (IdP), like Facebook or Google, who issue digital credentials for this purpose. Such credentials are important in securing access to traditional online services such as banking or email. However, they are becoming increasingly important in other non-digital areas such as travel (digital passports and driving licenses), physical door access (building or car keys), or digital health/vaccination credentials. At the same time, in addition to creating single points of failure, relying on large and centralized identity providers raises concerns about the privacy of user data and the required trust in those central points. In particular, users lose control over their digital identity and disclose private data to authorities, which increases the severity of data breaches.
In this thesis, we investigate privacy-enhancing technologies to achieve the benefits of digital identity while preserving user privacy. By leveraging efficient cryptographic tools, e.g., signatures, zero-knowledge proofs, commitments, and encryption schemes, we propose secure protocols that safeguard user privacy while remaining practical. In particular, we focus on Anonymous Credentials (AC) as a basis for authentication and authorization, which have emerged as a promising solution for proving possession of credentials and attributes while preserving user privacy. Additionally, ACs can enable individuals to control their personal information and limit its collection and use by third parties. We develop and extend AC schemes regarding various properties while optimizing their efficiency as follows:
Issuer-Hiding Multi-Authority AC: We introduce the concept of Issuer-Hiding Multi-Authority Anonymous Credentials (IhMA), which addresses the critical concerns of Multi-Authority (MA) and Issuer Hiding (IH) that have not yet been adequately addressed. MA means proving possession of attributes from multiple independent credential issuers more efficiently than showing multiple independent credentials. IH allows users to prove the validity of their credentials by revealing only that they have been issued by some issuer in the set of acceptable issuers, but not the exact issuers. This protects the user’s privacy, especially in decentralized settings where many issuers are involved, as verifying a user’s credential may require knowledge of the issuer’s public key, which could inadvertently disclose private information about the user. Our proposed solution involves the development of two new primitives, which are of independent interest:
- Aggregate Signatures with Randomizable Tags and Public Keys called AtoSa, where the aggregation and tag are useful for MA, and the latter feature (randomizable public keys) is essential for realizing the IH feature.
- Aggregate Tag-based Mercurial Signatures called ATMS, which extend AtoSa to additionally support the randomization of messages, achieving equivalence-class signatures (SPSEQ) and thus a version of mercurial signatures that is aggregatable and has randomizable tags, in order to provide issuer hiding and unlinkability in the multi-authority setting.
Delegatable AC: We present a novel delegatable anonymous credential (DAC) scheme that allows the owners of credentials to delegate the obtained credential to other users. It supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and comes with efficient construction. In particular, our DAC credentials do not grow with delegations, i.e., they are of constant size. Our approach builds on a new primitive:
- Structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme to sign vectors of set commitments which can be extended by additional set commitments. Signatures additionally include a user’s public key, which can be switched. Similar to conventional SPSEQ signatures, the signatures and messages can be publicly randomized and thus allow unlinkable showings in the DAC system.
Threshold Delegatable AC: We present a novel AC system with threshold issuance that additionally provides credential delegation and thus represents the first decentralized and delegatable AC. We provide a rigorous formal framework for such threshold delegatable anonymous credentials (TDAC). Our concrete approach departs from previous delegatable ACs and is inspired by the concept of predicate encryption and, in particular, functional credentials and builds upon the following primitive:
- A threshold delegatable subset predicate encryption (TDSPE) scheme, in which partial decryption keys are issued in a threshold way by multiple authorities and from which users can then generate decryption keys. We also show how one can use any existing AC system (not necessarily one of the above-developed AC systems) with login credentials (e.g., password and biometric) to provide privacy-preserving single sign-on.
Privacy-Preserving Single Sign-On: We construct a novel decentralized privacy-preserving single sign-on mechanism using a combination of existing AC systems and OPRF schemes with Multi-Factor Authentication, where the process of user authentication no longer depends on a single trusted third party (i.e., the IdP) in control of the whole authentication process. It also permits services where authenticating users remain anonymous within a group of users. Moreover, our scheme does not require the IdP to be online during verification (passive verification).
Recovery of Encrypted Mobile Device Backups (eID): We propose a secure protocol for users to recover their electronic identity (eID) data in case of smartphone loss or malfunction. We leverage biometric authentication and auxiliary devices to allow clients to recover their secret keys from partially trusted servers using a Fuzzy Extractor.
We formalize all concepts and provide rigorous security definitions for all our proposed primitives and AC protocols. To validate the efficacy of our proposed solutions, we present efficient instantiations of the primitives and protocols. We also conduct performance benchmarking based on a prototype implementation, made available as an open-source Python package, to demonstrate the practical efficiency of our protocols and primitives.
Abstract
Anonymous credentials (AC) have emerged as a promising privacy-preserving solution for user-centric identity management. They allow users to authenticate in an anonymous and unlinkable way such that only required information (i.e., attributes) from their credentials is revealed. With the increasing push towards decentralized systems and identity, e.g., self-sovereign identity (SSI) and the concept of verifiable credentials, this also necessitates the need for suitable AC systems. For instance, when relying on existing AC systems, obtaining credentials from different issuers requires the presentation of independent credentials, which can become cumbersome. Consequently, it is desirable for AC systems to support the so-called multi-authority (MA) feature. It allows a compact and efficient showing of multiple credentials from different issuers. Another important property is called issuer hiding (IH). This means that, when showing a set of credentials, it is not revealed which issuer has issued which credential, but only whether a verifier-defined policy on the acceptable set of issuers is satisfied. This issue becomes particularly acute in the context of MA, where a user could be uniquely identified by the combination of issuers in their showing. Unfortunately, there are no AC schemes that satisfy both these properties simultaneously.
To close this gap, we introduce the concept of Issuer-Hiding Multi-Authority Anonymous Credentials (IhMA). Our proposed solution involves the development of two new signature primitives with versatile randomization features, which are of independent interest:
- 1) Aggregate Signatures with Randomizable Tags and Public Keys (AtoSa), and 2) Aggregate Mercurial Signatures (ATMS), which extends the functionality of AtoSa to additionally support the randomization of messages and yields the first instance of an aggregate (equivalence-class) structure-preserving signature. These primitives can be elegantly used to obtain IhMA with different trade-offs, but also have applications beyond.
We formalize all notions and provide rigorous security definitions for our proposed primitives. We present provably secure and efficient instantiations of the two primitives as well as corresponding IhMA systems. Finally, we provide benchmarks based on an implementation to demonstrate the practical efficiency of our constructions.
Abstract
In this thesis we propose an access control system for the peer-to-peer filesystem Hyperdrive that utilizes an approach based on cryptographic capabilities. The overall goal is to simplify the development of local-first software, following the principle of prioritizing local resources instead of relying on centralized services. Hyperdrive can be a useful foundation for local-first software, but it provides only very limited access control functionality. Sharing read or write capabilities with other users is crucial for applications that enable collaborative work or social interactions. Fine-grained access control is therefore a central requirement for such use cases.
Our proposed system utilizes a graph data structure for key management and enables per-file and per-directory control of read and write permissions. Additionally, it provides a simple user system that keeps track of a user’s friends and other contacts. It includes a system for initial key exchange and asynchronous communication that also works with sporadic internet access. Read and write permissions can be shared with known users, read permissions also by URL.
We implemented the proposed system as a NodeJS module and published it as an open-source library called CertaCrypt. In addition, we published a demonstrator application, the CertaCrypt-Filemanager. From a user perspective it looks like the web interface of a cloud-storage solution similar to Dropbox or Google Drive, hiding the fact that it works in a completely decentralized fashion using Peer-to-Peer (P2P) technology. This demonstrates the potential of P2P systems for implementing local-first software that replaces Software-as-a-Service (SaaS) applications.
Abstract
This bachelor thesis aims to extend the Personal Identity Agent of the Digidow project by adding two new authentication methods based on FIDO2 tokens. So far, users had to use a password for the authentication process; a method for authenticating with FIDO2 tokens had not been implemented yet. Therefore, the authentication process was enhanced by implementing authentication with security keys. Initially, two-factor authentication with security keys as the second factor was implemented. In addition, the application now fulfills the requirement of passwordless authentication. First, this bachelor thesis describes the theoretical background of FIDO2 token authentication. Second, it gives a detailed overview of the functionality of FIDO2 token authentication. Additionally, the design choices for the implementation and the individual implementation steps are outlined. Furthermore, an evaluation concerning the Tor Browser, the WebAuthn standard, the security key setup, and implementation options is conducted.
Abstract
Biometric data are among the most sensitive data under data protection law. More and more systems process such data. Since these systems (at least technically) do not have to disclose their existence, there can be no complete list of the systems that process one's personal data. At least those systems that cause real-world consequences, however, are known to the public. What options are there to defend oneself against the risks of such systems?
Abstract
Anonymous credential (AC) systems are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This makes it possible to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators can also be hidden. In this paper we present a novel DAC scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments, where signatures can be extended by additional set commitments. Signatures additionally include a user’s public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ, the signatures and messages can be publicly randomized and thus allow unlinkable delegation and showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable efficient selective showing of attributes in the DAC without using costly zero-knowledge proofs. We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Abstract
Anforderungen an Datenschutz und Informationssicherheit, aber auch an Datenaktualität und Vereinfachung bewirken einen kontinuierlichen Trend hin zu plattformübergreifenden ID-Systemen für die digitale Welt. Das sind typischerweise föderierte Single-Sign-On-Lösungen großer internationaler Konzerne wie Apple, Facebook und Google. Dieser Beitrag beleuchtet die Frage, wie ein dezentrales, offenes, globales Ökosystem nach dem Vorbild des Single-Sign-On für die digitale, biometrische Identifikation in der physischen Welt aussehen könnte. Im Vordergrund steht dabei die implizite Interaktion mit vorhandener Sensorik, mit der Vision, dass Individuen in der Zukunft weder Plastikkarten noch mobile Ausweise am Smartphone mit sich führen müssen, sondern ihre Berechtigung für die Nutzung von Diensten rein anhand ihrer biometrischen Merkmale nachweisen können. Während diese Vision bereits jetzt problemlos durch Systeme mit einer zentralisierten Datenbank mit umfangreichen biometrischen Daten aller Bürger*innen möglich ist, wäre ein Ansatz mit selbstverwalteten, dezentralen digitalen Identitäten erstrebenswert, bei dem die Nutzer*in in den Mittelpunkt der Kontrolle über ihre eigene digitale Identität gestellt wird und die eigene digitale Identität an beliebigen Orten hosten kann. Anhand einer Analyse des Zielkonflikts zwischen umfangreichem Privatsphäreschutz und Praktikabilität, und eines Vergleichs der Abwägung dieser Ziele mit bestehenden Ansätzen für digitale Identitäten wird ein Konzept für ein dezentrales, offenes, globales Ökosystem zur privaten, digitalen Authentifizierung in der physischen Welt abgeleitet.
Abstract (English)
Requirements on data privacy and information security, as well as data quality and simplification, cause a continuous trend towards federated identity systems for the digital world. These are often the single sign-on platforms offered by large international companies like Apple, Facebook and Google. This article evaluates how a decentralized, open, and global ecosystem for digital biometric identification in the physical world could be designed based on the model of federated single sign-on. The main idea behind such a concept is implicit interaction with existing sensors, in order to get rid of plastic cards and smartphone-based mobile IDs in the future. Instead, individuals should be capable of proving their permissions to use a service solely based on their biometrics. While this vision is already proven feasible using centralized databases collecting biometrics of the whole population, an approach based on self-sovereign, decentralized digital identities would be favorable. In the ideal case, users of such a system would retain full control over their own digital identity and would be able to host their own digital identity wherever they prefer. Based on an analysis of the trade-off between privacy and practicability, and a comparison of this trade-off with observable design choices in existing digital ID approaches, we derive a concept for a decentralized, open, and global-scale ecosystem for private digital authentication in the physical world.
Abstract
Biometrics are among the most privacy-sensitive data. Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used. Moreover, in comparison with a centralized system, designs with higher end-user freedom often incur additional network overhead. Therefore, when using face recognition for biometric authentication, an efficient way to compare faces is important in practical deployments, because it reduces both network and hardware requirements that are essential to encourage device diversity. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis on different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
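One simple aggregation strategy in this setting is to average several unit-length embeddings of the same person into a single template and renormalize. The sketch below illustrates that idea with toy 3-dimensional vectors in plain Python; it is an assumption-laden illustration, not the exact aggregation method evaluated in the paper.

```python
import math

def normalize(v):
    # Scale a vector to unit length so cosine similarity is a plain dot product.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def aggregate(embeddings):
    # Element-wise mean of several unit-length embeddings, renormalized,
    # yields a single compact template for one person.
    dim = len(embeddings[0])
    mean = [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]
    return normalize(mean)

def cosine(a, b):
    # Cosine similarity of two unit vectors.
    return sum(x * y for x, y in zip(a, b))

# Toy "embeddings" of the same person under different conditions.
samples = [normalize([1.0, 0.1, 0.0]), normalize([0.9, 0.0, 0.2])]
template = aggregate(samples)
probe = normalize([1.0, 0.05, 0.1])   # a new reading of the same person
score = cosine(template, probe)
```

A sensor then only needs to compare a probe against one template per person instead of against every enrolled image, which is what reduces the network and hardware requirements mentioned above.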
Event
Abstract
Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used, which in turn often incurs additional network overhead. Therefore, when using face recognition, an efficient way to compare faces is important in practical deployments. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis on different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
Event
Abstract
When it comes to vision-based gait recognition, one of the biggest problems is the variance introduced by different camera viewing angles. We generate 3D human models from single RGB person image frames, rotate these 3D models into the side view, and compute gait features used to train a convolutional neural network to recognize people based on their gait information. In our experiment we compare our approach with a method that recognizes people under different viewing angles and show that even for low-resolution input images, the applied view-transformation 1) preserves enough gait information for recognition purposes and 2) produces recognition accuracies just as high without requiring samples from each viewing angle. We believe our approach will produce even better results for higher resolution input images. As far as we know, this is the first appearance-based method that recreates 3D human models using only single RGB images to tackle the viewing-angle problem in gait recognition.
2022
Abstract
Biometrics are among the most privacy-sensitive data. Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used. Moreover, in comparison with a centralized system, designs with higher end-user freedom often incur additional network overhead. Therefore, when using face recognition for biometric authentication, an efficient way to compare faces is important in practical deployments, because it reduces both network and hardware requirements that are essential to encourage device diversity. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis on different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
Event
Abstract
Signatures are annoying when you are trying to build software reproducibly, especially when they are deeply embedded in the output artifact. Let’s look at how we can tackle this problem elegantly with Nix.
Reproducibility means that someone else can independently recreate exactly the same binary artifact.
This is very useful for confidently knowing what source code a binary artifact was built from.
People analyze artifacts with tools like diffoscope to locate the exact differences between two artifacts built using the same build instructions. For complex projects, even when looking at an exact difference in the output that way, it is not always easy to find the cause of that difference. In general, using Nix to split the build instructions into smaller steps can make this process easier, because we can notice differences at the end of the intermediary step that introduced them, as long as we re-run the right build steps with nix build --rebuild.
Even then, signatures are still a problem, because we can never really reproduce a signed artifact without access to the signing key, and even with access to the key, not all popular signing schemes produce signatures deterministically. We either have to substitute in the expected signatures or keep track of those expected differences.
There is a nice pattern that we can use for always substituting the correct signatures with Nix, which makes it easy to verify embedded signatures as part of such an independent recreation process even for a large and complicated artifact. The same pattern also takes advantage of Nix’s binary caches to automatically obtain all the required signatures, which are ideally the only thing we cannot reproduce.
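To illustrate the underlying idea of treating embedded signatures as expected differences, here is a small, hypothetical Python sketch: it zeroes out a signature region at a known offset before hashing, so two independent builds can be compared byte for byte everywhere except the signature field. The file layout and offsets are invented for illustration and this is not the Nix pattern itself, which instead substitutes in the correct signatures obtained from a binary cache.

```python
import hashlib

def digest_with_signature_masked(artifact: bytes, sig_offset: int, sig_len: int) -> str:
    # Hash the artifact with the (possibly non-deterministic) embedded
    # signature zeroed out, so two independent builds can be compared
    # byte for byte everywhere except the signature field itself.
    masked = (artifact[:sig_offset]
              + b"\x00" * sig_len
              + artifact[sig_offset + sig_len:])
    return hashlib.sha256(masked).hexdigest()

# Two hypothetical builds, identical except for a 4-byte signature at offset 8.
build_a = b"HEADERXX" + b"SIG1" + b"PAYLOAD"
build_b = b"HEADERXX" + b"SIG2" + b"PAYLOAD"

identical = build_a == build_b                                 # False
match = (digest_with_signature_masked(build_a, 8, 4)
         == digest_with_signature_masked(build_b, 8, 4))       # True
```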
Abstract
Distributed systems are widely considered more privacy friendly than centralized systems because there is no central authority with access to all of the information. However, this does not consider the importance of network privacy. If users establish peer-to-peer connections to each other, adversaries monitoring the network can easily find out who is communicating with whom, at which times, and for how long, even if the communication is end-to-end encrypted. For digital identity systems this is especially critical, because knowledge about when and where an individual uses their digital identity is equivalent with knowing what the individual is doing.
The research presented in this thesis strives to design a distributed digital identity system that remains resilient against passive adversaries by instrumenting the anonymity network Tor. Significant efforts were dedicated to analyze how suited the Tor network is for supporting such distributed systems by measuring the usage of onion services and the time needed to start a new onion service. While this analysis did not detect any privacy issues within the current Tor network, it revealed several shortcomings in regard to the network latency of Tor onion services, which are addressed in the final parts of this thesis. Several modifications are proposed that are shown to significantly reduce the waiting times experienced by users of privacy preserving distributed digital identity systems.
Abstract
A Personal Identity Agent (PIA) is a digital representative of an individual and enables their authentication in the physical world with biometrics. Crucially, this authentication process maximizes privacy of the individual via data minimization. The PIA is an essential component in a larger research project, namely the Christian Doppler Laboratory for Private Digital Authentication in the Physical World (Digidow). While the project is concerned with the overall decentralized identity system, spanning several entities (e.g. PIA, sensor, verifier, issuing authority) and their interactions meant to establish trust between them, this work specifically aims to design and implement a PIA for Android. The latter entails three focus areas: First, an extensive analysis of secret storage on Android for securely persisting digital identities and/or their sensitive key material. Specifically, we are looking at the compatibility with modern cryptographic primitives and algorithms (group signatures and zero-knowledge proofs) to facilitate data minimization. Second, we reuse existing Rust code from a different PIA variant. In doing so, we analyze and adopt a solution for language interoperability between the safer systems programming language Rust and the JVM. And third, we strengthen the trust in our Android PIA implementation by evaluating the reproducibility of the build process. As part of the last focus area we uncovered and fixed a non-determinism in a large Rust library and subsequently achieved the desired reproducibility of the Android PIA variant.
Event
Abstract
Especially in times of a pandemic, during which wearing masks has at times become mandatory, the question arises of what influence covering different parts of the face has on modern face recognition. Are there parts of the face that are of particular importance for recognition? We evaluate this objectively using a commonly used dataset and three different modern face recognition systems. In the process, we discovered a behavior of a state-of-the-art face recognition algorithm that reproducibly fails to recognize people in certain specific situations. Furthermore, we live in a world in which more and more data is generated, so the chance is high that face recognition systems possess several images of the same person. Compared to a single image, several different images very likely carry additional information. How can images be combined to use this additional information without generating considerably more (time) effort?
Abstract
Anonymous credential (AC) systems are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This makes it possible to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators are also hidden. A credential issuer’s identity may convey information about a user’s identity even when all other information about the user is protected.
We present a novel delegatable anonymous credential scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. In particular, our DAC credentials do not grow with delegations, i.e., are of constant size. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments which can be extended by additional set commitments. Signatures additionally include a user’s public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ signatures, the signatures and messages can be publicly randomized and thus allow unlinkable showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable selective, efficient showings in the DAC without using costly zero-knowledge proofs. We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Event
Abstract
This work proposes a modular automation toolchain to analyze current state and over-time changes of reproducibility of build artifacts derived from the Android Open Source Project (AOSP). While perfect bit-by-bit equality of binary artifacts would be a desirable goal to permit independent verification of whether binary build artifacts really are the result of building a specific state of source code, this form of reproducibility is often not (yet) achievable in practice. Certain complexities in the Android ecosystem make assessment of production firmware images particularly difficult. To overcome this, we introduce “accountable builds” as a form of reproducibility that allows for legitimate deviations from 100 percent bit-by-bit equality. Using our framework that builds AOSP in its native build system, automatically compares artifacts, and computes difference scores, we perform a detailed analysis of differences, identify typical accountable changes, and analyze current major issues leading to non-reproducibility and non-accountability. We find that pure AOSP itself builds mostly reproducibly and that Project Treble helped through its separation of concerns. However, we also discover that Google’s published firmware images deviate from the claimed codebase (partially due to side-effects of Project Mainline).
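As a rough illustration of the kind of difference score such a framework can compute, the following sketch compares two builds file by file and reports the fraction of differing files. The file names, contents, and the scoring itself are invented for illustration; the actual toolchain computes more detailed scores.

```python
import hashlib

def difference_score(build_a: dict, build_b: dict) -> float:
    # Fraction of file paths whose contents differ between two builds;
    # a file present in only one build counts as differing.
    paths = set(build_a) | set(build_b)
    differing = sum(
        1 for p in paths
        if hashlib.sha256(build_a.get(p, b"")).digest()
           != hashlib.sha256(build_b.get(p, b"")).digest()
    )
    return differing / len(paths)

# Hypothetical builds: only the build fingerprint in build.prop differs,
# an "accountable" deviation rather than a real code change.
a = {"boot.img": b"kernel-v1", "system.img": b"system", "build.prop": b"fingerprint=1"}
b = {"boot.img": b"kernel-v1", "system.img": b"system", "build.prop": b"fingerprint=2"}
score = difference_score(a, b)   # 1 of 3 files differs
```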
Abstract
Web technologies have evolved rapidly in the last couple of years and applications have gotten significantly bigger. Common patterns and tasks have been extracted into numerous frameworks and libraries, and especially JavaScript frameworks seem to be recreated daily. This poses a challenge to many developers who have to choose between the frameworks, as a wrong decision can negatively influence the path of a project.
In this thesis, the three most popular front-end frameworks Angular, React and Vue are compared by extracting relevant criteria from the literature and evaluating the frameworks against these criteria. Angular is then used to develop a web application for displaying data from the Android Device Security Rating.
Abstract
Smartphones generate an abundance of network traffic while active and during software updates. With such a large amount of data, it is hard for humans to comprehend the processes behind the traffic and find points of interest that could compromise device security. To solve this problem, this thesis proposes a system to automatically monitor the traffic of Android clients, store it in a database, and perform a first analysis of the network data. For the capturing and monitoring tasks, we decided to use the full packet capture system Arkime and expand its functionality with a custom tool built in the course of this thesis. To be able to gain relevant insights, the system monitors the traffic over a long time frame, which prevents false data caused by holes in the data stream or one-time events. All Android devices are separated from each other by assigning each device to a separate VLAN. For each session, the system produces custom tags, low-level statistical data, and high-level classification data. Further, the system provides a solution to apply custom rules in which data from sessions can be freely accessed and modified. Additionally, tags can be set by matching host names against custom regular expressions or update information stored in the database. The system uses only the captured data, so that changes that can occur later on, like DNS resolution, do not affect the accuracy of the outcome.
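The rule-based tagging described above can be pictured as matching each session's hostname against a list of regular expressions. A minimal sketch, with tag names and patterns invented for illustration:

```python
import re

# Hypothetical rules: each tag is applied when its pattern matches the
# session's hostname (names and patterns are illustrative only).
RULES = [
    ("android-update", re.compile(r"(^|\.)android\.com$")),
    ("ads", re.compile(r"(^|\.)doubleclick\.net$")),
]

def tags_for(hostname: str) -> list:
    # Collect all tags whose regular expression matches the hostname.
    return [tag for tag, pattern in RULES if pattern.search(hostname)]

update_tags = tags_for("ota.android.com")
ad_tags = tags_for("stats.doubleclick.net")
other_tags = tags_for("example.org")
```

Anchoring the patterns at a dot or the start of the string avoids false matches such as "notandroid.com".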
Abstract
Digital identity documents provide several key benefits over physical ones. They can be created more easily, incur less costs, improve usability and can be updated if necessary. However, the deployment of digital identity systems does come with several challenges regarding both security and privacy of personal information. In this paper, we highlight one challenge that digital identity systems face if they are set up in a distributed fashion: Network Unlinkability. We discuss why network unlinkability is so critical for a distributed digital identity system that wants to protect the privacy of its users and present a specific definition of unlinkability for our use-case. Based on this definition, we propose a scheme that utilizes the Tor network to achieve the required level of unlinkability by dynamically creating onion services and evaluate the feasibility of our approach by measuring the deployment times of onion services.
Abstract
Debiting money “in passing”, reading or copying cards by briefly placing a smartphone on them, eavesdropping on transactions from a distance; these are all frequently mentioned attack scenarios in connection with near-field communication (NFC) payments. But do these scenarios pose a serious security risk? Are there further critical security aspects? Do payments with a plastic card differ from those with a smartphone in this regard? The following article gives an overview of NFC payments and their potential security risks.
Abstract
The Digital Shadow project, developed at the Institute for Networks and Security, requires verifiable trust in many areas in order to recognize and authorize users based on their biometric data. This trust should give the user the opportunity to check the correctness of the system quickly and easily before he or she provides the system with biometric data. This master’s thesis deals with the existing tools that can create such trust. The implemented system combines these tools in order to identify users in the Digital Shadow network with their biometric data. Incorrect use of this sensitive data should be excluded and the smallest possible set of metadata should be generated. Based on the implemented system, we discuss the properties of a trustworthy environment for software and explain the necessary framework requirements.
Abstract
In current single sign-on authentication schemes on the web, users are required to interact with identity providers securely to set up authentication data during a registration phase and receive a token (credential) for future access to services and applications. This type of interaction can make authentication schemes challenging in terms of security and availability. From a security perspective, a main threat is theft of authentication reference data stored with identity providers. An adversary could easily abuse such data to mount an offline dictionary attack for obtaining the underlying password or biometric. From a privacy perspective, identity providers are able to track user activity and control sensitive user data. In terms of availability, users rely on trusted third-party servers that need to be available during authentication. We propose Decentralized Anonymous Multi-Factor Authentication (DAMFA), a novel decentralized privacy-preserving single sign-on scheme in which identity providers no longer require sensitive user data and can no longer track individual user activity. Moreover, our protocol eliminates dependence on an always-on identity provider during user authentication, allowing service providers to authenticate users at any time without interacting with the identity provider. Our approach builds on threshold oblivious pseudorandom functions (TOPRF) to improve resistance against offline attacks and uses a distributed transaction ledger to improve availability. We prove the security of DAMFA in the universal composability (UC) model by defining a UC definition (ideal functionality) for DAMFA and formally proving the security of our scheme via ideal-real simulation. Finally, we demonstrate the practicability of our proposed scheme through a prototype implementation.
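At the core of the offline-attack resistance is the oblivious PRF: the client learns F_k(x) without the server ever seeing x. The toy sketch below shows the blinding/unblinding mechanics of a single-server (non-threshold) OPRF in a tiny multiplicative group; the parameters are deliberately small and insecure, and a real deployment would use a large elliptic-curve group and the threshold variant (TOPRF) from the paper.

```python
import hashlib
import secrets

# Toy parameters, for illustration only: a tiny safe prime p = 2q + 1.
# A real OPRF would use a large elliptic-curve group; this is NOT secure.
P = 1019
Q = 509  # prime order of the quadratic-residue subgroup of Z_P*

def hash_to_group(x: bytes) -> int:
    # Map the input into the order-Q subgroup by squaring modulo P.
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % P
    return pow(max(h, 2), 2, P)

def oprf(x: bytes, k: int) -> int:
    # The function evaluated obliviously: F_k(x) = H(x)^k mod P.
    return pow(hash_to_group(x), k, P)

k = secrets.randbelow(Q - 1) + 1   # server's secret key
r = secrets.randbelow(Q - 1) + 1   # client's fresh blinding factor

# 1) The client blinds H(x), so the server never sees the input.
blinded = pow(hash_to_group(b"correct horse"), r, P)
# 2) The server exponentiates the blinded element with its key.
evaluated = pow(blinded, k, P)
# 3) The client removes the blinding with r^-1 mod Q, recovering F_k(x).
result = pow(evaluated, pow(r, -1, Q), P)
```

Because the server only ever sees H(x)^r for a fresh random r, it cannot run a dictionary attack against the client's password or biometric.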
2021
Abstract
This bachelor thesis looks at the development of securely exporting single conversations in instant messaging (IM) apps for Android, specifically for the private messenger Signal-Android, a cross-platform, centralized, encrypted messaging service that is free and open source. Initially, this paper looks at existing messenger apps and their chat export tools that allow users to obtain results similar to the designed feature. This document evaluates the user’s risks when trusting these other services, but does not reflect much on their components or technological aspects. The present paper ignores operating systems and platforms other than Android, considering that the main point of the work lies in the security of the IM app and not in device reliability.
The thesis re-examines the characteristics of different messenger apps and presents a simple solution for exporting single chats in Signal. In this context, other apps that present alternatives to the designed feature were also researched. The proposal of the chat export feature for Signal-Android begins in the next section, covering the primary functionality for exporting chats, including the functionality analysis, the solution design, and the implementation. The following chapter focuses on developing this new function added to the Signal-Android app and compares the obtained outcomes with results of a similar solution. The final part summarizes the project, explains the problems, and reviews possible improvements and subsequent development steps.
Event
Abstract
Every distributed system needs some way to list its current participants. The Tor network’s consensus is one way of tackling this challenge. But creating a shared list of participants and their properties without a central authority is a challenging task, especially if the system is constantly targeted by state-level attackers. This work carefully examines the Tor consensuses created in the last two years, identifies weaknesses that have already impacted users, and proposes improvements to strengthen the Tor consensus in the future. Our results show undocumented voting behavior by directory authorities and suspicious groups of relays that try to conceal the fact that they are all operated by the same entity.
Abstract
This master’s thesis engages in reverse-engineering the low-level Wi-Fi protocol of Da Jiang Innovation (DJI) drones. Using deductive reasoning, we try to establish logical connections between drone control instructions and the corresponding network packets sent. We further cluster UDP packets based on their payload length and perform bit-precise reasoning on payloads of interest. We unveil the protocol’s core structure, which enables pixel-perfect camera-feed and telemetry data extraction. Finally, we introduce a custom software solution to capture, analyse, and post-process network packets relevant to drone operation.
Abstract
A service that has to interact with multiple potential biometric sensors needs to share information about an individual with them. Even though there may never be an interaction with such a sensor, the data is shared nevertheless. Any shared piece of an individual’s biometric data could lead to a potential leakage of sensitive data. To prevent this, we introduce a fuzzy hash, which avoids this problem by generating a hash that cannot be traced back to the original biometric data. Still, this hash can be compared against other embeddings, which allows the sensor to interact with the correct service without an interactive protocol.
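The abstract does not specify the hashing construction. As one hedged illustration of the general idea — reducing an embedding to a non-invertible code that still supports similarity comparison — a random-hyperplane locality-sensitive hash (SimHash) maps nearby embeddings to bit strings with small Hamming distance. All dimensions, seeds, and values below are invented for this sketch:

```python
import random

def simhash(embedding, hyperplanes):
    """Map a real-valued embedding to a bit string: one bit per
    hyperplane, set by which side of the plane the vector falls on."""
    return [1 if sum(e * h for e, h in zip(embedding, plane)) >= 0 else 0
            for plane in hyperplanes]

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

random.seed(42)
dim, bits = 8, 32
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

base = [random.gauss(0, 1) for _ in range(dim)]
near = [x + random.gauss(0, 0.05) for x in base]   # similar embedding
far  = [random.gauss(0, 1) for _ in range(dim)]    # unrelated embedding

h_base, h_near, h_far = (simhash(v, planes) for v in (base, near, far))
# A similar embedding lands much closer in Hamming space than an unrelated one:
print(hamming(h_base, h_near) < hamming(h_base, h_far))
```

The hash reveals only coarse geometric position relative to random planes, not the embedding itself, which is the property the abstract relies on.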
Event
Abstract
With the deprecation of V2 onion services right around the corner, it is a good time to talk about V3 onion services. This post will discuss the most important privacy improvements provided by V3 onion services as well as their limitations. Aware of those limitations, our research group at the Institute of Network and Security at JKU Linz conducted an experiment that extracts information about how V3 onion services are being used from the Tor network.
Abstract
In order to increase the accuracy of SOTA face recognition pipelines, it intuitively makes sense not to use only a single image as the reference embedding (template), but to combine multiple embeddings from different images (different pose, angle, setting) to create a more accurate and robust template. To objectively evaluate our different proposed combinations of embeddings, we benefit from having a single metric that tells how well a template performs on our dataset. Certain applications (e.g. opening doors) require a low false-positive rate, while other situations (e.g. sensor-contacting PIAs) require a low false-negative rate. Therefore, in this document we try to balance these different requirements by using the harmonic mean of recall and precision.
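The harmonic mean of recall and precision is the familiar F1 score. A minimal sketch of why it balances the two error-rate requirements (the example values are arbitrary):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall. Unlike the arithmetic
    mean, it is dominated by the weaker of the two values, so a template
    cannot score well by optimizing only one error rate."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.9, 0.9))   # balanced template: ~0.9
print(f1_score(0.99, 0.5))  # ~0.66: high precision cannot mask poor recall
```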
Event
Abstract
Tor onion services are a challenging research topic because they were designed to reveal as little metadata as possible which makes it difficult to collect information about them. In order to improve and extend privacy protecting technologies, it is important to understand how they are used in real world scenarios. We discuss the difficulties associated with obtaining statistics about V3 onion services and present a way to monitor V3 onion services in the current Tor network that enables us to derive statistically significant information about them without compromising the privacy of individual Tor users. This allows us to estimate the number of currently deployed V3 onion services along with interesting conclusions on how and why onion services are used.
Abstract
Mobile device authentication has been a highly active research topic for over 10 years, with a vast range of methods proposed and analyzed. In related areas, such as secure channel protocols, remote authentication, or desktop user authentication, strong, systematic, and increasingly formal threat models have been established and are used to qualitatively compare different methods. However, the analysis of mobile device authentication is often based on weak adversary models, suggesting overly optimistic results on their respective security. In this article, we introduce a new classification of adversaries to better analyze and compare mobile device authentication methods. We apply this classification to a systematic literature survey. The survey shows that security is still an afterthought and that most proposed protocols lack a comprehensive security analysis. The proposed classification of adversaries provides a strong and practical adversary model that offers a comparable and transparent classification of security properties in mobile device authentication.
Abstract
This work proposes a modular automation toolchain to analyze the current state and measure over-time improvements of reproducibility of the Android Open Source Project (AOSP). While perfect bit-by-bit equality of binary artifacts would be a desirable goal to permit independent verification that binary build artifacts really are the result of building a specific state of source code, this form of reproducibility is often not (yet) achievable in practice. In fact, binary artifacts may have to be designed in a way that makes it impossible to simply remove all sources of non-determinism and all non-reproducible build inputs (such as private signing keys). We introduce “accountable builds” as a form of reproducibility that allows such legitimate deviations from 100 percent bit-by-bit equality. Based on our framework, which builds AOSP with its native build system, automatically compares artifacts, and computes difference scores, we perform a detailed analysis of discovered differences, identify typical accountable changes, and analyze current major issues that lead to non-reproducibility. While we find that AOSP currently builds neither fully reproducibly nor fully accountably, we derive a trivial weighted change metric to continuously monitor changes in reproducibility over time.
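As a hedged sketch of what such a weighted change metric could look like: each artifact gets a weight, and the score is the weighted fraction of artifacts that differ between two builds. The artifact names and weights below are invented for illustration and are not the thesis’ actual metric:

```python
def change_score(diffs, weights):
    """Weighted fraction of artifacts that differ between two builds.

    diffs:   artifact name -> True if the two builds differ for it
    weights: artifact name -> relative importance (hypothetical values)
    """
    total = sum(weights.values())
    changed = sum(w for name, w in weights.items() if diffs.get(name, False))
    return changed / total

weights = {"system.img": 5, "boot.img": 3, "build.prop": 1}
diffs   = {"system.img": False, "boot.img": True, "build.prop": True}

print(change_score(diffs, weights))  # (3 + 1) / 9
```

Tracking this score across successive source states gives the continuous over-time monitoring the abstract describes, even while the score stays above zero.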
Abstract
This document tries to find simple heuristics on face images to differentiate between successful and unsuccessful face recognition. Intuitively, the camera-face angle might play an important role: full-frontal images contain a lot of information, in contrast to full-profile images where at least half of the face is hidden. Therefore, as a proxy for this angle we focus on these metrics: (1) the distance between the eyes, relative to the face width, (2) the distance between the center of the eyes and the mouth, relative to the face height, and (3) the face size.
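A minimal sketch of how these three metrics could be computed from 2D landmark coordinates. The landmark names and example coordinates are hypothetical, not taken from the document:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_heuristics(landmarks, face_w, face_h):
    """Return (eye distance / face width,
               eye-center-to-mouth distance / face height,
               face size as bounding-box area)."""
    eye_dist = dist(landmarks["left_eye"], landmarks["right_eye"]) / face_w
    eye_mid = tuple((l + r) / 2 for l, r in
                    zip(landmarks["left_eye"], landmarks["right_eye"]))
    eye_mouth = dist(eye_mid, landmarks["mouth"]) / face_h
    return eye_dist, eye_mouth, face_w * face_h

# Hypothetical landmarks in pixel coordinates:
lm = {"left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0), "mouth": (50.0, 90.0)}
print(face_heuristics(lm, face_w=100.0, face_h=120.0))
```

In a frontal image the relative eye distance is large; as the head turns toward profile it shrinks toward zero, which is what makes it usable as an angle proxy.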
Event
Abstract
Tor onion services utilize the Tor network to enable incoming connections on a device without disclosing its network location. Decentralized systems with extended privacy requirements like metadata-avoiding messengers typically rely on onion services. However, a long-lived onion service address can itself be abused as identifying metadata. Replacing static onion services with dynamic short-lived onion services may be a way to avoid such metadata leakage. This work evaluates the feasibility of short-lived dynamically generated onion services in decentralized systems. We show, based on a detailed performance analysis of the onion service deployment process, that dynamic onion services are already feasible for peer-to-peer communication in certain scenarios.
Event
Abstract
Most state-of-the-art face detection algorithms are usually trained with full-face pictures, without any occlusions. The first novel contribution of this paper is an analysis of the accuracy of three off-the-shelf face detection algorithms (MTCNN, Retinaface, and DLIB) on occluded faces. In order to determine the importance of different facial parts, the face detection accuracy is evaluated in two settings: Firstly, we automatically modify the CFP dataset and remove different areas of each face: We overlay a grid over each face and remove one cell at a time. Similarly, we overlay a rectangle over the main landmarks of a face – eye(s), nose and mouth. Furthermore, we simulate a face mask by overlaying a rectangle starting from the bottom of the face. Secondly, we test the performance of the algorithms on people with real-world face masks. The second contribution of this paper is the discovery of a previously unknown behaviour of the widely used MTCNN face detection algorithm – if there is a face inside another face, MTCNN does not detect the larger face.
Abstract
In our digitized society, in which different organizations attempt to control and monitor Internet use, anonymity is one of the most desired properties that ensures privacy on the Internet. One of the technologies that can be used to provide anonymity is the anonymization network Tor, which obfuscates the connection data of communications in a way that its initiator cannot be identified. However, since this only protects the initiator without protecting further communication participants, Tor Onion Services were developed, which ensure the anonymity of both the sender and the recipient. Due to the metadata created when using these Onion Services, adversaries could still be able to identify participants in a communication by using additional sources of information.
In the course of this thesis, a protocol was developed that reduces metadata leading to the identification of communication participants as far as possible. For this purpose, a two-stage addressing scheme was employed that allows users to obtain an individual address for a service via its public service address, which cannot be traced back. To prove its technical feasibility, a prototype of the protocol was implemented in Python. Since latency is one of the decisive criteria in the usage decision of services, a performance analysis was carried out to measure the provisioning time of onion services, since this has a significant influence on the duration of address issuing. The architecture and procedure for this had to be specially designed and implemented, as at the time of writing no research existed on the provisioning time of onion services in their current version.
A statistical analysis of the results revealed that the duration of issuing individual addresses using the proposed protocol, at 6.35 seconds, exceeds the acceptance threshold of users. However, this does not apply to service access using the individual address, implying that the use of the protocol is possible after improving the address issuance procedure. This would reduce the metadata when accessing an onion service and thus help improve the anonymity of communication participants.
Abstract
Various forms of digital identity increasingly act as the basis for interactions in the “real” physical world. While transactions such as unlocking physical doors, verifying an individual’s minimum age, or proving possession of a driving license or vaccination status without carrying any form of physical identity document or trusted mobile device could be easily facilitated through biometric records stored in centralized databases, this approach would also trivially enable mass surveillance, tracking, and censorship/denial of individual identities.
Towards a vision of decentralized, mobile, private authentication for physical world transactions, we propose a threat model and requirements for future systems. Although it is yet unclear if all threats listed in this paper can be addressed in a single system design, we propose this first draft of a model to compare and contrast different future approaches and inform both the systematic academic analysis as well as a public opinion discussion on security and privacy requirements for upcoming digital identity systems.
Abstract
This Bachelor’s thesis covers the development of a secure export of chat history from the messenger app Wire. Wire is an end-to-end encrypted audio/video/chat service for various platforms. The aim of this thesis is to expand the open source Android client in such a way that a secure export of an entire (group) conversation, including the media it contains, is possible. Restrictions such as time-limited messages are also addressed. The export is done as a ZIP file, which contains the messages in an XML document as well as the media files. Additionally, an HTML viewer can be included to view the exported data.
Abstract
Android is the most widely deployed end-user focused operating system. With its growing set of use cases encompassing communication, navigation, media consumption, entertainment, finance, health, and access to sensors, actuators, cameras, or microphones, its underlying security model needs to address a host of practical threats in a wide variety of scenarios while being useful to non-security experts. The model needs to strike a difficult balance between security, privacy, and usability for end users, assurances for app developers, and system performance under tight hardware constraints. While many of the underlying design principles have implicitly informed the overall system architecture, access control mechanisms, and mitigation techniques, the Android security model has previously not been formally published. This article aims to both document the abstract model and discuss its implications. Based on a definition of the threat model and Android ecosystem context in which it operates, we analyze how the different security measures in past and current Android implementations work together to mitigate these threats. There are some special cases in applying the security model, and we discuss such deliberate deviations from the abstract model.
Abstract
Face recognition pipelines are under active development, with many new publications every year. The goal of this report is to give an overview of a modern pipeline and recommend a state-of-the-art approach while optimizing for accuracy and performance on low-end hardware, such as a Jetson Nano.
Abstract
Monitoring the activities of onion services by deploying multiple HSDir nodes has been done repeatedly in the past. With v3 onion services, Tor mitigated such attacks by blinding the public keys of onion services before uploading them. This effectively prevents the collection of onion addresses, but it does not prevent the collection of blinded public key uploads and downloads, which provide statistical insight into how onion services are being used. Additionally, it is possible to identify and link blinded keys derived from well-known onion services, providing a solid estimate of how often they are accessed. This report presents our setup to collect statistically significant information on v3 onion service usage without compromising the privacy of Tor users.
Abstract
This work focuses on methods to capture and analyze data transmitted by Wireless Local Area Network (WLAN) clients in order to track them. This includes the evaluation of methods that require neither control of the Access Point (AP) infrastructure nor clients being connected to a WLAN network. This mainly involves data in probe requests, which are transmitted by clients when actively searching for WLAN APs. To evaluate this in a real-world scenario, a setup consisting of multiple distributed capture devices and a central analysis system is introduced. The captured data is analyzed to verify theoretical concepts. A large share of WLAN client devices still leaks lists of stored SSID values when actively scanning for WLAN networks. MAC address randomization helps to protect privacy if enabled. User identities for EAP authentication, however, are still leaked in the default configuration by all major operating systems. Finally, some extension ideas and current trends and developments are presented.
2020
Event
Abstract
Token-based authentication is usually applied to enable single-sign-on on the web. In current authentication schemes, users are required to interact with identity providers securely to set up authentication data during a registration phase and receive a token (credential) for future accesses to various services and applications. This type of interaction can make authentication schemes challenging in terms of security and usability. From a security point of view, one of the main threats is the compromise of identity providers. An adversary who compromises the authentication data (password or biometric) stored with the identity provider can mount an offline dictionary attack. Furthermore, the identity provider might be able to track user activity and control sensitive user data. In terms of usability, users always need a trusted server to be online and available while authenticating to a service provider.
In this paper, we propose a new Decentralized Anonymous Multi-Factor Authentication (DAMFA) scheme where the process of user authentication no longer depends on a trusted third party (the identity provider). Also, service and identity providers do not gain access to sensitive user data and cannot track individual user activity. Our protocol allows service providers to authenticate users at any time without interacting with the identity provider. Our approach builds on a Threshold Oblivious Pseudorandom Function (TOPRF) to improve resistance to offline attacks and uses a distributed transaction ledger to improve usability. We demonstrate the practicality of our proposed scheme through a prototype.
Abstract
Confidentiality of data stored on mobile devices depends on one critical security boundary in case of physical access, the device’s lockscreen. If an adversary is able to satisfy this lockscreen challenge, either through coercion (e.g. border control or customs check) or due to their close relationship to the victim (e.g. intimate partner abuse), private data is no longer protected. Therefore, a solution is necessary that not only renders secrets inaccessible, but also allows the user to plausibly deny their very existence. This thesis proposes an app-based system that hides sensitive apps within Android’s work profile, with a strong focus on usability. It introduces a lockdown mode that can be triggered inconspicuously from the device’s lockscreen, for example by entering a wrong PIN. Usability, security, and current limitations of this approach are analyzed in detail.
Abstract
Reproducible builds enable the creation of bit-identical artifacts by performing a fully deterministic build process. This is especially desirable for any open source project, including the Android Open Source Project (AOSP). Initially we cover reproducible builds in general and give an overview of the problem space and typical solutions. We then present Simple Opinionated AOSP builds by an external Party (SOAP), a simple suite of shell scripts used to perform AOSP builds and compare the resulting artifacts against Google references. This is utilized to create a detailed report of the differences. The qualitative part of this report attempts to gain insight into the origin of the differences, while the quantitative part provides a quick summary.
Abstract
Mobile device authentication has been a highly active research topic for over 10 years, with a vast range of methods having been proposed and analyzed. In related areas such as secure channel protocols, remote authentication, or desktop user authentication, strong, systematic, and increasingly formal threat models have already been established and are used to qualitatively and quantitatively compare different methods. Unfortunately, the analysis of mobile device authentication is often based on weak adversary models, suggesting overly optimistic results on their respective security. In this article, we first introduce a new classification of adversaries to better analyze and compare mobile device authentication methods. We then apply this classification to a systematic literature survey. The survey shows that security is still an afterthought and that most proposed protocols lack a comprehensive security analysis. Our proposed classification of adversaries provides a strong uniform adversary model that can offer a comparable and transparent classification of security properties in mobile device authentication methods.
Event
Abstract
We are very pleased to welcome you to the 2nd ACM Workshop on Wireless Security and Machine Learning. This year’s WiseML is a virtual workshop and we are both excited to try out this workshop format and regretful not to be able to welcome you in the beautiful city of Linz, Austria, due to the ongoing COVID-19 pandemic. ACM WiseML 2020 continues to be the premier venue to bring together members of the AI/ML, privacy, security, wireless communications and networking communities from around the world, and to offer them the opportunity to share their latest research findings in these emerging and critical areas, as well as to exchange ideas and foster research collaborations, in order to further advance the state-of-the-art in security techniques, architectures, and algorithms for AI/ML in wireless communications. The program will be presented online in a single track. WiseML 2020 will be open at no extra cost to everyone and we are trying out new formats such as a mixture of live streams, pre-recorded talks, and interactive Q/A sessions.
Event
Abstract
We are very pleased to welcome you to the 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks. This year’s WiSec marks the first virtual WiSec conference and we are both excited to try out this conference format and regretful to not be able to welcome you in the beautiful city of Linz, Austria, due to the ongoing SARS-CoV-2 pandemic. ACM WiSec 2020 continues to be the premier venue for research dedicated to all aspects of security and privacy in wireless and mobile networks, their systems, and their applications. The program will be presented online in a single track, along with a poster and demonstration session. WiSec 2020 will be open at no extra cost to everyone and we are trying out new formats such as a mixture of live streams, pre-recorded talks, and interactive Q/A sessions.
Abstract
Methods for recognizing people are both heavily researched and widely used in practice, for example by governments and police. People can be recognized using various methods, such as face, fingerprint, and iris recognition, which differ massively in their requirements. Gait recognition allows identifying people despite large distances, hidden body parts, and arbitrary camera angles, which makes it a naturally attractive method of identifying people. This approach exploits the uniqueness of every person’s gait. Most of the current literature focuses on hand-crafted features, such as step and stride length, cadence, speed, and hip angle. This thesis proposes a way of performing gait recognition using neural networks. Hence, features no longer have to be specified manually, while also improving on the current state-of-the-art accuracy in recognizing people. First, in order to increase robustness against clothing changes, the silhouette of a person is extracted using Mask R-CNN. To capture spatial information about the subject, a convolutional neural network creates a gait embedding from each silhouette. To improve quality, the next step takes temporal information into account, using a long short-term memory network that consumes the single-image embeddings of multiple images and computes its own, enhanced embedding. Finally, the network should not have to be trained from scratch for every new person. Thus, a Siamese network is trained to distinguish two people that the network has (probably) never seen before.
Abstract
Contact tracing is one of the main approaches widely proposed for dealing with the current, global SARS-CoV-2 crisis. As manual contact tracing is error-prone and doesn’t scale, tools for automated contact tracing, mainly through smart phones, are being developed and tested. While their effectiveness—also in terms of potentially replacing other, more restrictive measures to control the spread of the virus—has not been fully proven yet, it is critically important to consider their privacy implications from the start. Deploying such tools quickly at mass scale means that early design choices may not be changeable in the future, and potential abuse of such technology for mass surveillance and control needs to be prevented by their own architecture.
Many different implementations are currently being developed, including international projects like PEPP-PT/DP-3T and national efforts like the “Stopp Corona” app published by the Austrian Red Cross. In this report, we analyze an independent implementation called NOVID20 that aims to provide a common framework for on-device contact tracing embeddable in different apps. That is, NOVID20 is an SDK and not a complete app in itself. The initial code drop on GitHub was released on April 6, 2020, without specific documentation on the intent or structure of the code itself. All our analysis is based on the Android version of this open source code alone. Given the time period, our analysis is neither comprehensive nor formal, but summarizes a first impression of the code.
NOVID20 follows a reasonable privacy design by exchanging only pseudonyms between the phones in physical proximity and recording them locally on-device. However, there is some room for improvement: (a) pseudonyms should be generated randomly on the phone, and not on the server side; (b) transmitted pseudonyms should be frequently rotated to avoid potential correlation; (c) old records should automatically be deleted after the expunge period; (d) absolute location tracking, while handled separately from physical proximity and only optionally released, can be problematic depending on its use—absolute location data must be protected with additional anonymization measures such as Differential Privacy, which are left to the application/server and may, therefore, not be implemented correctly; and (e) device analytics data, while helpful during development and testing, should be removed for real deployments. Our report gives more detailed recommendations on how this may be achieved.
We explicitly note that all of these points can be fixed based on the current design, and we thank the NOVID20 team for openly releasing their code, which made this analysis possible in a short time window.
Event
Abstract
How can we use digital identity for authentication in the physical world without compromising user privacy? Enabling individuals to – for example – use public transport and other payment/ticketing applications, access computing resources on public terminals, or even cross country borders without carrying any form of physical identity document or trusted mobile device is an important open question. Moving towards such a device-free infrastructure-based authentication could be easily facilitated by centralized databases with full biometric records of all individuals, authenticating and therefore tracking people in all their interactions in both the digital and physical world. However, such centralized tracking does not seem compatible with fundamental human rights to data privacy. We therefore propose a fully decentralized approach to digital user authentication in the physical world, giving each individual better control over their interactions and data traces they leave.
In project Digidow, we assign each individual in the physical world with a personal identity agent (PIA) in the digital world, facilitating their interactions with purely digital or digitally mediated services in both worlds. We have two major issues to overcome. The first is a problem of massive scale, moving from current users of digital identity to the whole global population as the potential target group. The second is even more fundamental: by moving from trusted physical documents or devices and centralized databases to a fully decentralized and infrastructure-based approach, we remove the currently essential elements of trust. In this poster, we present a system architecture to enable trustworthy distributed authentication and a simple, specific scenario to benchmark an initial prototype that is currently under development. We hope to engage with the NDSS community to both present the problem statement and receive early feedback on the current architecture, additional scenarios and stakeholders, as well as international conditions for practical deployment.
2019
Abstract
The Digidow project aims to provide a decentralized solution for digital identity management. A key feature is to provide a service for authentication along with the identification of individual persons based on biometric features.
At the center of this idea, a so-called personal agent provides this decentralized functionality for each individual user. The sensitive nature of the data this agent handles demands high security standards for both the implementation and the surrounding system.
This master’s thesis evaluates the programming language Rust as a potential platform choice for the personal agent. We discuss the features for which Rust was chosen and which additional frameworks were selected and used to create the prototype we used for the evaluation. Furthermore, we dive into the details of our prototype and present the implemented concepts. Moreover, we test our implementation and discuss our achievements, such as isolated access to the hard drive, the concept behind the architecture, and how incoming data is verified. Finally, we discuss how future work can build on the introduced concepts.
Event
Abstract
The Digidow architecture is envisioned to tie digital identities to physical interactions using biometric information without the need for a central collection of biometric templates. A key component of the architecture is the distributed service discovery for establishing a secure and private connection between a prover, a verifier, and a sensor if none of them knows the others ahead of time. In this paper we analyze the requirements of the service discovery with regard to functionality and privacy. Based on typical use cases, we evaluate the advantages and disadvantages of letting each of the actors be the initiator of the discovery process. Finally, we outline how existing technologies could be leveraged to achieve our requirements.
Abstract
The prediction of future locations can be useful in various settings, one being the authentication process of a person. In this thesis, we perform the prediction of next places with the help of an HMM. We focus on models with a discrete state space and thus need to discretise the data. This is done by pre-processing the raw, continuous location data in two steps. The first step is the extraction of stay-points, i.e. regions in which a person stays for a given time period. In the second step, multiple stay-points are grouped with the clustering algorithm DBSCAN to form significant places. After pre-processing, we train an HMM with a state and observation space that corresponds to the extracted significant places. Based on the previously observed location, our model predicts the next place for a person. In order to find good models for next place prediction, we conducted experiments with two datasets. The first one is the Geolife GPS trajectory dataset from Microsoft, which consists of GPS traces. The second dataset was self-collected and contains additional data obtained from WiFi and cell towers. Our final model achieves a validation accuracy higher than 0.95 on both datasets. However, a prediction accuracy ranging from 0.8 to 0.99 for a model that solely predicts noise as the next location leads us to the conclusion that the datasets, as well as the pre-processing step, need further refinement for our HMM to capture more valuable information.
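A simplified sketch of the stay-point extraction step described above: a stay-point is the centroid of a maximal run of consecutive track points that remain close together for long enough. The thresholds, coordinates, and timing below are illustrative, not the thesis’ parameters:

```python
import math

def stay_points(track, dist_thresh, time_thresh):
    """Extract stay-points from a track of (timestamp, x, y) tuples.

    A stay-point is emitted when consecutive points all lie within
    dist_thresh of the run's first point and the run spans at least
    time_thresh seconds; its location is the run's centroid.
    """
    points, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and math.hypot(track[j][1] - track[i][1],
                                   track[j][2] - track[i][2]) <= dist_thresh:
            j += 1
        if track[j - 1][0] - track[i][0] >= time_thresh:
            xs = [p[1] for p in track[i:j]]
            ys = [p[2] for p in track[i:j]]
            points.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return points

# 30 minutes of samples near the origin, then a quick pass-through elsewhere:
track = [(t * 60, 0.001 * t, 0.0) for t in range(30)]
track += [(1800 + t * 60, 5.0 + 1.0 * t, 5.0) for t in range(3)]
print(stay_points(track, dist_thresh=0.5, time_thresh=600))
```

Only the long dwell produces a stay-point; the pass-through is discarded. Clustering many such stay-points (e.g. with DBSCAN) then yields the significant places that form the HMM’s state space.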