Publications
Publications by category in reverse chronological order, generated by jekyll-scholar.
2025
- [PETS] Beyond Noise: Privacy-Preserving Decentralized Learning with Virtual Nodes. Sayan Biswas, Mathieu Even, Anne-Marie Kermarrec, and 4 more authors. In Proceedings of the Privacy Enhancing Technologies Symposium, 2025.
Decentralized learning (DL) enables collaborative learning without a server and without training data leaving the users’ devices. However, the models shared in DL can still be used to infer training data. Conventional privacy defenses such as differential privacy and secure aggregation fall short in effectively safeguarding user privacy in DL. We introduce Shatter, a novel DL approach in which nodes create virtual nodes (VNs) to disseminate chunks of their full model on their behalf. This enhances privacy by (i) preventing attackers from collecting full models from other nodes, and (ii) hiding the identity of the original node that produced a given model chunk. We theoretically prove the convergence of Shatter and provide a formal analysis demonstrating how Shatter reduces the efficacy of attacks compared to when exchanging full models between participating nodes. We evaluate the convergence and attack resilience of Shatter with existing DL algorithms, with heterogeneous datasets, and against three standard privacy attacks, including gradient inversion. Our evaluation shows that Shatter not only renders these privacy attacks infeasible when each node operates 16 VNs but also exhibits a positive impact on model convergence compared to standard DL. This enhanced privacy comes with a manageable increase in communication volume.
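The core mechanic, splitting a model into chunks that virtual nodes forward on the owner's behalf, can be illustrated with a small sketch. This is not the authors' implementation; the chunking granularity, the neighbour set, and the dissemination rule below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): a node splits its
# flattened model into chunks and hands each chunk to a virtual node (VN),
# which forwards it to a random neighbour, so no single recipient sees the
# full model. Names and the forwarding rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_chunks(model_vec, num_vns):
    """Split a flattened parameter vector into one chunk per virtual node."""
    return np.array_split(model_vec, num_vns)

def disseminate(chunks, neighbours):
    """Each VN sends its chunk to one randomly chosen neighbour."""
    outbox = {}
    for vn_id, chunk in enumerate(chunks):
        target = rng.choice(neighbours)
        outbox.setdefault(target, []).append((vn_id, chunk))
    return outbox

model = rng.normal(size=1_000)                     # toy flattened model
vn_chunks = make_chunks(model, num_vns=16)
messages = disseminate(vn_chunks, neighbours=["A", "B", "C", "D"])
print({peer: len(msgs) for peer, msgs in messages.items()})
```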
2024
- [Middleware] QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation. Akash Dhasade, Yaohong Ding, Song Guo, and 3 more authors. In Proceedings of the 25th International Middleware Conference, 2024.
Federated Unlearning (FU) aims to delete specific training data from an ML model trained using Federated Learning (FL). However, existing FU methods suffer from inefficiencies due to the high costs associated with gradient recomputation and storage. This paper presents QuickDrop, an original and efficient FU approach designed to overcome these limitations. During model training, each client uses QuickDrop to generate a compact synthetic dataset, serving as a compressed representation of the gradient information utilized during training. This synthetic dataset facilitates fast gradient approximation, allowing rapid downstream unlearning at minimal storage cost. To unlearn specific knowledge from the trained model, QuickDrop clients execute stochastic gradient ascent with samples from the synthetic datasets instead of the training dataset. The tiny volume of synthetic data significantly reduces computational overhead compared to conventional FU methods. Evaluations with three standard datasets and five baselines show that, with comparable accuracy guarantees, QuickDrop reduces the duration of unlearning by 463x compared to retraining the model from scratch and 65x-218x compared to prominent FU approaches. QuickDrop supports both class- and client-level unlearning, handles multiple unlearning requests, and supports relearning of previously erased data.
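The unlearning step itself, gradient ascent on distilled synthetic samples, is easy to illustrate. The sketch below uses PyTorch with a placeholder model and a random stand-in for the distilled dataset; it is not the paper's code or configuration.

```python
# Minimal sketch of the gradient-ascent unlearning step described above.
# The model, the synthetic set, and the hyperparameters are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Tiny synthetic dataset standing in for the distilled samples of the
# class (or client) to be unlearned.
x_syn = torch.randn(64, 32)
y_syn = torch.randint(0, 10, (64,))

for _ in range(10):                      # a few ascent steps
    opt.zero_grad()
    loss = loss_fn(model(x_syn), y_syn)
    (-loss).backward()                   # ascend: maximise loss on forgotten data
    opt.step()
```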
- [FGCS] Light-HIDRA: Scalable and decentralized resource orchestration in Fog-IoT environments. Carlos Núñez-Gómez, Martijn de Vos, Jérémie Decouchant, and 3 more authors. Future Generation Computer Systems, 2024.
With the proliferation of Internet of Things (IoT) ecosystems, traditional resource orchestration mechanisms, executed on fog devices, encounter significant scalability, reliability and security challenges. To tackle these challenges, recent decentralized algorithms in Fog-IoT use Distributed Ledger Technologies to orchestrate resources and payments between peers. However, while distributed ledgers provide many desirable properties, their consensus mechanism introduces a performance bottleneck. This paper introduces Light-HIDRA, a consensus-less and decentralized resource orchestration system for Fog-IoT environments. At its core, Light-HIDRA uses Byzantine Reliable Broadcast (BRB) to coordinate actions without centralized control, therefore drastically reducing communication overhead and latency compared to consensus-based solutions. Light-HIDRA coordinates the scheduling and execution of workloads, and securely manages the payments that peers receive for dedicating resources to workloads. Light-HIDRA further increases performance and reduces overhead by grouping peers into distinct domains. We conduct an in-depth analysis of the protocol’s security properties, investigating its efficiency and robustness in diverse situations. We evaluate the performance of Light-HIDRA against HIDRA, a state-of-the-art baseline that uses smart contracts. Our experiments demonstrate that Light-HIDRA reduces bandwidth usage by up to 57x, reduces the latency of workload offloading by up to 142x, and achieves higher throughput than HIDRA.
- [AlgoTel] Auditer l’équité : l’union fait-elle la force? Martijn de Vos, Akash Dhasade, Jade Garcia Bourrée, and 4 more authors. In AlgoTel 2024: 26èmes Rencontres Francophones sur les Aspects Algorithmiques des Télécommunications, 2024.
Typically, the agents that audit the fairness of algorithms study them independently, using their own data. In this work, we consider several agents auditing the same algorithm to achieve different objectives. The agents can influence the audit through two levers: a collaboration strategy with the other agents, with or without prior coordination, and the choice of a sampling method. We study the possible interactions that result. We prove that, counter-intuitively, coordination can come at the expense of audit accuracy, whereas uncoordinated collaboration generally leads to good results. Experiments on real datasets confirm this observation: we observe that the accuracy of uncoordinated collaboration reaches that of optimal collaborative sampling.
- [ECAI] Fairness auditing with multi-agent collaboration. Martijn de Vos, Akash Dhasade, Jade Garcia Bourrée, and 4 more authors. In Proceedings of the 27th European Conference on Artificial Intelligence, 2024.
Existing work in fairness auditing assumes that each audit is performed independently. In this paper, we consider multiple agents working together, each auditing the same platform for different tasks. Agents have two levers: their collaboration strategy, with or without coordination beforehand, and their strategy for sampling appropriate data points. We theoretically compare the interplay of these levers. Our main findings are that (i) collaboration is generally beneficial for accurate audits, (ii) basic sampling methods often prove to be effective, and (iii) counter-intuitively, extensive coordination on queries often deteriorates audit accuracy as the number of agents increases. Experiments on three large datasets confirm our theoretical results. Our findings motivate collaboration during fairness audits of platforms that use ML models for decision-making.
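A toy sketch of the uncoordinated-collaboration setting: several agents each query the platform on their own random sample, the answers are pooled, and every agent estimates a fairness metric from the pooled data. The platform model and the demographic-parity metric below are illustrative assumptions, not the paper's setup.

```python
# Toy sketch of uncoordinated collaborative auditing under assumed names.
import numpy as np

rng = np.random.default_rng(0)

def platform(x, group):
    # Hypothetical audited model: slightly favours group 0.
    return (x + 0.1 * (group == 0) > 0.5).astype(int)

def audit_sample(size):
    x, group = rng.random(size), rng.integers(0, 2, size)
    return group, platform(x, group)

# Three agents each draw their own queries, then pool the answers.
samples = [audit_sample(500) for _ in range(3)]
group = np.concatenate([g for g, _ in samples])
decision = np.concatenate([d for _, d in samples])

gap = abs(decision[group == 0].mean() - decision[group == 1].mean())
print(f"estimated demographic-parity gap: {gap:.3f}")
```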
- [arXiv] Harnessing Increased Client Participation with Cohort-Parallel Federated Learning. Akash Dhasade, Anne-Marie Kermarrec, Tuan-Anh Nguyen, and 2 more authors. arXiv preprint, 2024.
Federated Learning (FL) is a machine learning approach where nodes collaboratively train a global model. As more nodes participate in a round of FL, the effectiveness of individual model updates diminishes. In this study, we increase the effectiveness of client updates by dividing the network into smaller partitions, or cohorts. We introduce Cohort-Parallel Federated Learning (CPFL): a novel learning approach where each cohort independently trains a global model using FL until convergence, and the models produced by the cohorts are then unified using one-shot Knowledge Distillation (KD) and a cross-domain, unlabeled dataset. The insight behind CPFL is that smaller, isolated networks converge more quickly than a single network in which all nodes participate. Through exhaustive experiments involving realistic traces and non-IID data distributions on the CIFAR-10 and FEMNIST image classification tasks, we investigate the balance between the number of cohorts, model accuracy, training time, and compute and communication resources. Compared to traditional FL, CPFL with four cohorts, non-IID data distribution, and CIFAR-10 yields a 1.9x reduction in train time and a 1.3x reduction in resource usage, with a minimal drop in test accuracy.
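The unification step, one-shot knowledge distillation from the per-cohort models into a single model via an unlabeled transfer set, can be sketched as follows. The architectures, temperature, and transfer set are placeholders, not the paper's configuration.

```python
# Sketch of one-shot distillation that fuses per-cohort teachers into a
# single student on an unlabeled transfer set. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

teachers = [make_net() for _ in range(4)]   # one converged model per cohort
student = make_net()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x_unlabeled = torch.randn(256, 32)          # cross-domain, unlabeled transfer set
T = 2.0                                     # softening temperature

for _ in range(20):
    with torch.no_grad():
        # Average the teachers' softened predictions (ensemble target).
        target = torch.stack([F.softmax(t(x_unlabeled) / T, dim=1)
                              for t in teachers]).mean(dim=0)
    opt.zero_grad()
    log_p = F.log_softmax(student(x_unlabeled) / T, dim=1)
    loss = F.kl_div(log_p, target, reduction="batchmean")
    loss.backward()
    opt.step()
```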
- [SRDS] PeerSwap: A Peer-Sampler with Randomness Guarantees. Rachid Guerraoui, Anne-Marie Kermarrec, Anastasiia Kucherenko, and 2 more authors. In Proceedings of the 43rd Symposium on Reliable Distributed Systems, 2024.
The ability of a peer-to-peer (P2P) system to effectively host decentralized applications often relies on the availability of a peer-sampling service, which provides each participant with a random sample of other peers. Despite the practical effectiveness of existing peer samplers, their ability to produce random samples within a reasonable time frame remains poorly understood from a theoretical standpoint. This paper contributes to bridging this gap by introducing PeerSwap, a peer-sampling protocol with provable randomness guarantees. We establish execution time bounds for PeerSwap, demonstrating its ability to scale effectively with the network size. We prove that PeerSwap maintains the fixed structure of the communication graph while allowing sequential peer position swaps within this graph. We do so by showing that PeerSwap is a specific instance of an interchange process, a renowned model for particle movement analysis. Leveraging this mapping, we derive execution time bounds, expressed as a function of the network size N. Depending on the network structure, this time can be as low as a polylogarithmic function of N, highlighting the efficiency of PeerSwap. We implement PeerSwap and conduct numerical evaluations using regular graphs with varying connectivity and containing up to 32768 (2^15) peers. Our evaluation demonstrates that PeerSwap quickly provides peers with uniform random samples of other peers.
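The underlying interchange process, peers occupying the vertices of a fixed graph and swapping positions along randomly chosen edges, can be simulated in a few lines. This is a toy illustration of the model, not the PeerSwap protocol itself.

```python
# Toy interchange-process simulation on a fixed communication graph:
# peers sit on vertices and repeatedly swap positions along random edges.
import random
import networkx as nx

n = 64
graph = nx.random_regular_graph(d=4, n=n, seed=1)    # fixed topology
position = {v: f"peer-{v}" for v in graph.nodes}      # who sits where
edges = list(graph.edges)

for _ in range(10_000):
    u, v = random.choice(edges)
    position[u], position[v] = position[v], position[u]   # one swap

print(position[0])   # after enough swaps, roughly uniform over all peers
```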
- [arXiv] Fair Decentralized Learning. Sayan Biswas, Anne-Marie Kermarrec, Rishi Sharma, and 2 more authors. arXiv preprint, 2024.
Decentralized learning (DL) is an emerging approach that enables nodes to collaboratively train a machine learning model without sharing raw data. In many application domains, such as healthcare, this approach faces challenges due to the high level of heterogeneity in the training data’s feature space. Such feature heterogeneity lowers model utility and negatively impacts fairness, particularly for nodes with under-represented training data. In this paper, we introduce Facade, a clustering-based DL algorithm specifically designed for fair model training when the training data exhibits several distinct features. The challenge of Facade is to assign nodes to clusters, one for each feature, based on the similarity in the features of their local data, without requiring individual nodes to know a priori which cluster they belong to. Facade (1) dynamically assigns nodes to their appropriate clusters over time, and (2) enables nodes to collaboratively train a specialized model for each cluster in a fully decentralized manner. We theoretically prove the convergence of Facade, implement our algorithm, and compare it against three state-of-the-art baselines. Our experimental results on three datasets demonstrate the superiority of our approach in terms of model accuracy and fairness compared to all three competitors. Compared to the best-performing baseline, Facade on the CIFAR-10 dataset also reduces communication costs by 32.3% to reach a target accuracy when cluster sizes are imbalanced.
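The dynamic cluster-assignment idea can be illustrated with a toy sketch in which a node evaluates every cluster's current model on its local data and joins the best-fitting cluster. The linear models and squared-error loss below are assumptions for illustration only.

```python
# Hedged sketch of cluster self-assignment by lowest local loss.
import numpy as np

rng = np.random.default_rng(0)

def local_loss(model, x, y):
    return float(np.mean((x @ model - y) ** 2))        # toy squared error

num_clusters, dim = 3, 20
cluster_models = [rng.normal(size=dim) for _ in range(num_clusters)]

# A node's local data, drawn from the feature distribution of cluster 1.
x_local = rng.normal(size=(100, dim))
y_local = x_local @ cluster_models[1] + 0.01 * rng.normal(size=100)

losses = [local_loss(m, x_local, y_local) for m in cluster_models]
chosen = int(np.argmin(losses))
print(f"node joins cluster {chosen}")   # expected: cluster 1
```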
- [IPDPSW] Energy-Aware Decentralized Learning with Intermittent Model Training. Martijn de Vos, Akash Dhasade, Paolo Dini, and 5 more authors. In Proceedings of the International Parallel and Distributed Processing Symposium Workshops, 2024.
SkipTrain is a novel Decentralized Learning (DL) algorithm that minimizes energy consumption by strategically skipping some training rounds and substituting them with synchronization rounds. These training-silent periods not only save energy but also allow models to mix better, producing higher accuracy than typical DL algorithms. Our empirical evaluations with 256 nodes demonstrate that SkipTrain reduces energy consumption by 50% and increases model accuracy by up to 12% compared to D-PSGD, the conventional DL algorithm.
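The round structure, alternating training rounds with training-silent synchronization rounds, is sketched below. The fixed skip schedule, the stand-in SGD step, and plain model averaging are illustrative assumptions, not SkipTrain's actual policy.

```python
# Hedged sketch of the training/synchronization round structure described
# above; the schedule and averaging rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_train(model):
    return model - 0.1 * rng.normal(size=model.shape)   # stand-in SGD step

def synchronize(model, neighbour_models):
    return np.mean([model, *neighbour_models], axis=0)  # plain model averaging

model = rng.normal(size=100)
for rnd in range(200):
    train_round = (rnd % 2 == 0)          # e.g. skip every other training round
    if train_round:
        model = local_train(model)
    neighbour_models = [rng.normal(size=100) for _ in range(3)]  # placeholders
    model = synchronize(model, neighbour_models)
```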
2023
- [FGCS] DeScan: Censorship-resistant indexing and search for Web3. Martijn de Vos, Georgy Ishmaev, and Johan Pouwelse. Future Generation Computer Systems, 2023.
The popularity of blockchain technology has bootstrapped many “Web3” applications, e.g., Ethereum and IPFS, that apply distributed ledger technology to store transactions. The amount of transactions generated and stored in such Web3 applications is significant and, in its raw form, usually not searchable by users. Existing Web3 transaction indexing and search engines are predominantly centralized and, therefore, can manipulate search results or censor particular queries. With the proliferation of Web3 transactions and applications, a decentralized and censorship-resistant search primitive is becoming essential. We present DeScan, a decentralized and censorship-resistant indexing and search engine for Web3. Users index their local Web3 transactions using custom rules that output triplets. Generated triplets are bundled in a distributed transaction graph that is searchable by other users. To coordinate search and distribute the storage of the transaction graph over peers in the network, we build upon a Skip Graph (SG) data structure. Since the Skip Graph does not provide any resilience against adversarial peers that censor searches, we propose four modifications to improve its robustness. We implement DeScan and conduct experiments with up to 12800 peers and 10 million Ethereum transactions. Our experiments show that DeScan with our modifications enabled can tolerate 20% adversarial peers and 35% unresponsive peers without disruption. Moreover, we find that searches in DeScan are usually completed well within a second, even when the network grows. Finally, we show that storage and network costs are evenly distributed amongst peers as the network grows.
- [PERCOM] A deployment-first methodology to mechanism design and refinement in distributed systems. Martijn de Vos, Georgy Ishmaev, Johan Pouwelse, and 1 more author. 2023.
Catalyzed by the popularity of blockchain technology, there has recently been a renewed interest in the design, implementation and evaluation of decentralized systems. Most of these systems are intended to be deployed at scale and in heterogeneous environments with real users and unpredictable workloads. Nevertheless, most research in this field evaluates such systems in controlled environments that poorly reflect the complex conditions of real-world deployments. In this work, we argue that deployment is crucial to understanding decentralized mechanisms in a real-world environment and an enabler for building more robust and sustainable systems. We highlight the merits of deployment by comparing this approach with other experimental setups and show how our lab applied a deployment-first methodology. We then outline how we use Tribler, our peer-to-peer file-sharing application, to deploy and monitor decentralized mechanisms at scale. We illustrate the application of our methodology by describing a deployment trial in experimental tokenomics. Finally, we summarize four lessons learned from multiple deployment trials where we applied our methodology.
- [NeurIPS] Epidemic Learning: Boosting Decentralized Learning with Randomized Communication. Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, and 3 more authors. Advances in Neural Information Processing Systems, 2023.
We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches. At each round of EL, each node sends its model updates to a random sample of s other nodes (in a system of n nodes). We provide an extensive theoretical analysis of EL, demonstrating that its changing topology culminates in superior convergence properties compared to the state-of-the-art (static and dynamic) topologies. Considering smooth non-convex loss functions, the number of transient iterations for EL, i.e., the rounds required to achieve asymptotic linear speedup, is in O(n³/s²), which outperforms the best-known bound of O(n³) by a factor of s², indicating the benefit of randomized communication for DL. We empirically evaluate EL in a 96-node network and compare its performance with state-of-the-art DL approaches. Our results illustrate that EL converges up to 1.7x quicker than baseline DL algorithms and attains 2.2% higher accuracy for the same communication volume.
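One EL-style round, every node pushing its model to a random sample of s other nodes and then averaging what it received, can be sketched in a few lines of numpy. The uniform averaging and the absence of local training steps are simplifications for illustration.

```python
# Minimal numpy sketch of a single round of randomized push-and-average
# communication; weights and local updates are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

n, s, dim = 96, 8, 50
models = rng.normal(size=(n, dim))
inbox = {i: [models[i]] for i in range(n)}

for i in range(n):
    targets = rng.choice([j for j in range(n) if j != i], size=s, replace=False)
    for j in targets:
        inbox[j].append(models[i])

models = np.stack([np.mean(inbox[i], axis=0) for i in range(n)])
```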
2022
- [DAPPS] Gromit: Benchmarking the Performance and Scalability of Blockchain Systems. Bulat Nasrulin, Martijn de Vos, Georgy Ishmaev, and 1 more author. In IEEE International Conference on Decentralized Applications and Infrastructures, 2022.
The growing number of implementations of blockchain systems stands in stark contrast with still limited research on a systematic comparison of performance characteristics of these solutions. Such research is crucial for evaluating fundamental trade-offs introduced by novel consensus protocols and their implementations. These performance limitations are commonly analyzed with ad-hoc benchmarking frameworks focused on the consensus algorithm of blockchain systems. However, comparative evaluations of design choices require macro-benchmarks for uniform and comprehensive performance evaluations of blockchains at the system level rather than performance metrics of isolated components. To address this research gap, we implement Gromit, a generic framework for analyzing blockchain systems. Gromit treats each system under test as a transaction fabric where clients issue transactions to validators. We use Gromit to conduct the largest blockchain study to date, involving seven representative systems with varying consensus models. We determine the peak performance of these systems with a synthetic workload in terms of transaction throughput and scalability and show that transaction throughput does not scale with the number of validators. We explore how robust the evaluated systems are against network delays and reveal that the performance of permissioned blockchains is highly sensitive to network conditions.
- [ECRA] Decentralizing Components of Electronic Markets to Prevent Gatekeeping and Manipulation. Martijn de Vos, Georgy Ishmaev, and Johan Pouwelse. Electronic Commerce Research and Applications, 2022.
The landscape of electronic marketplaces has been monopolized by a handful of market operators that have accumulated tremendous power during the last decades. This trend raises concerns about fairness and market manipulation by these operators acting as gatekeepers. These concerns have recently been outlined in the EU Digital Markets Act (DMA). In this work, we highlight how the technological logic of separation, understood in the framework of decentralization, can address manipulation concerns. As a first step, we devise a reference model of electronic marketplaces, containing six functional components, and outline how control over these components enables different manipulative practices by gatekeepers. We identify two dimensions of decentralization that can counterbalance monopolistic abuse of marketplace components. We then present a software implementation of our reference model and demonstrate how decentralization and unbundling of market components can alleviate manipulation and fairness concerns. We end our work with a review of related approaches and conclude that modular and interoperable marketplaces can enable an open ecosystem of fair electronic markets envisioned by the DMA.
2021
- [Applied Energy] A Novel Decentralized Platform for Peer-to-peer Energy Trading Market with Blockchain Technology. Ayman Esmat, Martijn de Vos, Yashar Ghiassi-Farrokhfal, and 2 more authors. Applied Energy, 2021.
Peer-to-Peer (P2P) energy trading, which allows energy consumers/producers to directly trade with each other, is one of the new paradigms driven by the decarbonization, decentralization, and digitalization of the energy supply chain. Additionally, the rise of blockchain technology suggests unprecedented socio-economic benefits for energy systems, especially when coupled with P2P energy trading. Despite such future prospects in energy systems, three key challenges might hinder the full integration of P2P energy trading and blockchain. First, it is quite complicated to design a decentralized P2P market that keeps a fair balance between economic efficiency and information privacy. Second, with the proliferation of storage devices, new P2P market designs are needed to account for their inter-temporal dependencies. Third, a practical implementation of blockchain technology for P2P trading is required, which can facilitate efficient trading in a secured and fraud-resilient way, while eliminating any intermediaries’ costs. In this paper, we develop a new decentralized P2P energy trading platform to address all the aforementioned challenges. Our platform consists of two key layers: market and blockchain. The market layer features a parallel and short-term pool-structured auction and is cleared using a novel decentralized Ant-Colony Optimization method. This market arrangement guarantees a near-optimally efficient market solution, preserves players’ privacy, and allows the trading of inter-temporal market products. The blockchain layer offers a high level of automation, security, and fast real-time settlements through smart contract implementation. Finally, using real-world data, we simulate the functionality of the platform regarding energy trading, market clearing, smart contract operations, and blockchain-based settlements.
- [WWWJ] XChange: A Universal Mechanism for Asset Exchange between Permissioned Blockchains. Martijn de Vos, Can Umut Ileri, and Johan Pouwelse. World Wide Web, 2021.
Permissioned blockchains are increasingly being used as a solution to record transactions between companies. Several use cases that leverage permissioned blockchains focus on the representation and management of real-world assets. Since the number of incompatible blockchains is quickly growing, there is an increasing need for a universal mechanism to exchange, or trade, digital assets between these isolated platforms. There currently is no universal mechanism for inter-blockchain asset exchange without a requirement for trusted authorities that coordinate the trade. We address this shortcoming and present XChange, a universal mechanism for asset exchange between permissioned blockchains. To achieve universality and to avoid trusted authorities that coordinate a trade, XChange does not provide atomic guarantees but leverages risk mitigation strategies to reduce value at stake. Our mechanism records the specifications and progression of each trade within records on a distributed log. XChange reduces the economic gains of adversaries by bounding the total amount of fraud they can commit at any time. After having committed fraud, an adversary is forced to finish its ongoing trades before it can engage in new trades. We first present a four-phased protocol that coordinates an asset exchange between two traders. We then outline how trade records can be stored on TrustChain, which is a lightweight distributed ledger specifically built for the tamper-proof storage of data elements. We implement XChange and conduct experiments. Our experiments demonstrate that XChange is capable of reducing the economic gains of adversaries by more than 99.9% when replaying a real-world trading dataset. A deployment on low-resource devices reveals that the latency added to a trade by XChange is only 493 milliseconds. Finally, our scalability evaluation shows that XChange achieves over 1’000 trades per second and that its throughput, in terms of trades per second, scales linearly with the system load.
- [Computer Netw.] ConTrib: Maintaining Fairness in Decentralized Big Tech Alternatives by Accounting Work. Martijn de Vos and Johan Pouwelse. Computer Networks, 2021.
“Big Tech” companies provide digital services used by billions of people. Recent developments, however, have shown that these companies often abuse their unprecedented market dominance for selfish interests. Meanwhile, decentralized applications without central authority are gaining traction. Decentralized applications critically depend on their users working together. Ensuring that users do not consume too many resources without reciprocating is a crucial requirement for the sustainability of such applications. We present ConTrib, a universal mechanism to maintain fairness in decentralized applications by accounting the work performed by peers. In ConTrib, participants maintain a personal ledger with tamper-evident records. A record describes some work performed by a peer and links to other records. Fraud in ConTrib occurs when a peer illegitimately modifies one of the records in its personal ledger. This is detected through the continuous exchange of random records between peers and by verifying the consistency of incoming records against known ones. Our simple fraud detection algorithm is highly scalable, tolerates significant packet loss, and exhibits relatively low fraud detection times. We experimentally show that fraud is detected within seconds and with low bandwidth requirements. To demonstrate the applicability of our work, we deploy ConTrib in the Tribler file-sharing application and successfully address free-riding behaviour. This two-year trial has resulted in over 160 million records, created by more than 94’000 users.
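A minimal sketch of the record structure and the consistency check described above: hash-linked records in a personal ledger, and a fork exposed by two records from the same peer with the same sequence number but different contents. Field names are assumptions for illustration; signatures and the record-exchange protocol are omitted.

```python
# Illustrative tamper-evident record chain and a simple fork check.
import hashlib
import json

def record(peer, seq, prev_hash, work):
    body = {"peer": peer, "seq": seq, "prev": prev_hash, "work": work}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def detect_fork(r1, r2):
    """Fraud: same peer and sequence number, but different record contents."""
    return (r1["peer"] == r2["peer"] and r1["seq"] == r2["seq"]
            and r1["hash"] != r2["hash"])

genesis = record("alice", 0, "0" * 64, {"uploaded_mb": 10})
honest = record("alice", 1, genesis["hash"], {"uploaded_mb": 25})
forged = record("alice", 1, genesis["hash"], {"uploaded_mb": 999})
print(detect_fork(honest, forged))   # True: two versions of record 1 exist
```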
- [DICG] UniCon: Universal and Scalable Infrastructure for Digital Asset Management. Pablo Rodrigo, Johan Pouwelse, and Martijn de Vos. In Proceedings of the 2nd International Workshop on Distributed Infrastructure for Common Good, 2021.
Non-Fungible Tokens (NFTs) leverage blockchain technology to certify and transfer ownership of digital assets to individuals. NFTs on the Ethereum blockchain have garnered significant attention recently, with a trading volume of over $2 billion in Q1 2021 alone. At the same time, established NFT solutions have low flexibility, limited scalability, and high transaction fees. These deficiencies make them impractical to use at a larger scale to manage digital assets. We present UniCon, a universal and scalable infrastructure for digital asset management. The key idea of UniCon is to track asset ownership in a tracking blockchain while making minimal assumptions on the capabilities of this blockchain. UniCon enables the exchange of asset ownership in any digital currency, unlike current NFT platforms. We devise a system architecture and build a prototype of UniCon. We use a scalable distributed ledger that is highly suitable for the tracking of asset ownership. Our prototype enables a decentralized ecosystem to manage and trade assets.
2020
- [FGCS] TrustChain: A Sybil-resistant Scalable Blockchain. Pim Otte, Martijn de Vos, and Johan Pouwelse. Future Generation Computer Systems, 2020.
TrustChain is capable of creating trusted transactions among strangers without central control. This enables new areas of blockchain use with a focus on building trust between individuals. Our innovative approach offers scalability, openness and Sybil-resistance while replacing proof-of-work with a mechanism to establish the validity and integrity of transactions. TrustChain is a permission-less tamper-proof data structure for storing transaction records of agents. We create an immutable chain of temporally ordered interactions for each agent. It is inherently parallel and every agent creates its own genesis block. TrustChain includes a novel Sybil-resistant algorithm named NetFlow to determine the trustworthiness of agents in an online community. NetFlow ensures that agents who take resources from the community also contribute back. We demonstrate that irrefutable historical transaction records offer security and seamless scalability, without requiring global consensus. Experimentation shows that the transaction throughput of TrustChain surpasses that of traditional blockchain architectures like Bitcoin. Using data extracted from a live network, we show that TrustChain is sufficiently informative to identify free-riders, leading to refusal of service.
- [arXiv] XChange: A Blockchain-based Mechanism for Generic Asset Trading in Resource-constrained Environments. Martijn de Vos, Can Umut Ileri, and Johan Pouwelse. arXiv preprint, 2020.
An increasing number of industries rely on Internet-of-Things devices to track physical resources. Blockchain technology provides primitives to represent these resources as digital assets on a secure distributed ledger. Due to the proliferation of blockchain-based assets, there is an increasing need for a generic mechanism to trade assets between isolated platforms. To date, there is no such mechanism without reliance on a trusted third party. In this work, we address this shortcoming and present XChange. XChange mediates trade of any digital asset between isolated blockchain platforms while limiting the fraud conducted by adversarial parties. We first describe a generic, five-phase trading protocol that establishes and executes trade between individuals. This protocol accounts full trade specifications on a separate blockchain. We then devise a lightweight system architecture, composed of all required components for a generic asset marketplace. We implement XChange and conduct real-world experimentation. We leverage an existing, lightweight blockchain, TrustChain, to account all orders and full trade specifications. By deploying XChange on multiple low-resource devices, we show that a full trade completes within half a second. To quantify the scalability of our mechanism, we conduct further experiments on our compute cluster. We conclude that the throughput of XChange, in terms of trades per second, scales linearly with the system load. Furthermore, we find that XChange exhibits higher throughput and lower order fulfilment latency than related decentralized exchanges, BitShares and Waves.
- [Middleware] MATCH: A Decentralized Middleware for Fair Matchmaking in Peer-to-Peer Markets. Martijn de Vos, Georgy Ishmaev, and Johan Pouwelse. In Proceedings of the 21st International Middleware Conference, 2020.
Matchmaking is a core enabling element in peer-to-peer markets. To date, matchmaking is predominantly performed by proprietary algorithms, fully controlled by market operators. This raises fairness concerns as market operators effectively can hide, prioritize, or delay the orders of specific users. Blockchain technology has been proposed as an alternative for fair matchmaking without a trusted operator but is still vulnerable to specific fairness attacks. We present MATCH, a decentralized middleware for fair matchmaking in peer-to-peer markets. By decoupling the dissemination of potential matches from the negotiation of trade agreements, MATCH empowers end-users to make their own educated decisions and to engage in direct negotiations with trade partners. This approach makes MATCH highly resilient against malicious matchmakers that deviate from a specific matching policy. We implement MATCH and evaluate our middleware using real-world ride-hailing and asset trading workloads. It is demonstrated that MATCH maintains high matching quality, even when 75% of all matchmakers are malicious. We also show that the bandwidth usage and order fulfilment latency of MATCH are orders of magnitude lower than for matchmaking on an Ethereum blockchain.
- [DICG] ConTrib: Universal and Decentralized Accounting in Shared-Resource Systems. Martijn de Vos and Johan Pouwelse. In Proceedings of the 1st International Workshop on Distributed Infrastructure for Common Good, 2020.
Preventing the abuse of resources is a crucial requirement in shared-resource systems. This concern can be addressed through a centralized gatekeeper, yet it enables manipulation by the gatekeeper itself. We present ConTrib, a decentralized mechanism for tracking resource usage across different shared-resource systems. In ConTrib, participants maintain a personal ledger with tamper-proof records. A record describes a resource consumption or contribution and links to other records. Fraud, i.e., maintaining multiple copies of a personal ledger, is detected by users themselves through the continuous exchange of records and by validating their consistency against known ones. We implement ConTrib and run experiments. Our evaluation with up to 1’000 instances reveals that fraud can be detected within 22 seconds and with moderate bandwidth usage. To demonstrate the applicability of our work, we deploy ConTrib in a Tor-like overlay and show how resource abuse by free-riders is effectively deterred. This longitudinal, large-scale trial has resulted in over 137 million records, created by more than 86’000 volunteers.
2019
- [DAppCon] DevID: Blockchain-based Portfolios for Software Developers. Martijn de Vos, Mitchell Olsthoorn, and Johan Pouwelse. In IEEE International Conference on Decentralized Applications and Infrastructures, 2019.
Decentralized applications, also known as dApps, are the new paradigm for writing business-critical software. Recruiting developers with appropriate qualifications and skills for this activity is key, yet challenging. The main problem is that the portfolio of developers is usually scattered across centralized platforms like GitHub and LinkedIn, and vendor-locked. This can result in an incomplete impression of their capabilities. We address this problem and introduce DevID, a blockchain-based portfolio for developers. Over time, this portfolio enables developers to build up a trustworthy collection of records that showcase their capabilities and expertise. They can import data assets from third parties into a unified DevID portfolio, add projects and skills, and receive endorsements. All portfolio records are stored on a scalable distributed ledger and owned by developers themselves. The essential idea is to exploit the tamper-proof property of the blockchain while providing durable storage. To demonstrate the practical value of DevID, we build dAppCoder, a competition-based platform for developing decentralized applications. On dAppCoder, clients can submit their ideas and developers can find work. dAppCoder utilizes DevID portfolios to match these clients and developers. We fully implement our ideas and conduct a deployment trial. Our trial demonstrates that DevID is efficient at storing portfolio records.
2018
- [IFIP Networking] Real-time Money Routing by Trusting Strangers with your Funds. Martijn de Vos and Johan Pouwelse. In IFIP Networking, 2018.
We explore a new stage in the evolution of digital trust: trusting strangers with your funds. We address the trust issues when giving money to others and relying on them to forward it. For fraud identification, we leverage our deployed blockchain which gradually builds trust between interacting strangers. Our blockchain fabric, called TrustChain, records interactions between entities in a scalable manner. This work represents a small step towards a generic infrastructure for trust, moving beyond proven single vendor platforms like eBay, Uber and Airbnb. Expanding upon established trust relations, we designed, implemented and evaluated an overlay network: Internet-of-Money. Internet-of-Money routes money to different banks through individuals, so-called money routers. This removes the need for central banks to handle a payment. Our network reduces the duration of traditional inter-bank payments from up to a day, and even a few days during weekends, to mere seconds. Internet-of-Money is fully decentralized, scalable and privacy-preserving. With real-world experiments, we show that Internet-of-Money enables fast money forwarding. We show that the overlay network is capable of discovering a majority of available money routers within a minute. Finally, we demonstrate how the profit of cheating routers is limited and how misbehaviour is punished.
2017
- [EPLJ] Laws for Creating Trust in the Blockchain Age. Johan Pouwelse, André Kok, Joost Fleuren, and 3 more authors. European Property Law Journal, 2017.
Humanity’s notion of trust is shaped by new platforms operating in the emerging sharing economy, acting as intermediate matchmakers for ride sharing, housing facilities, or freelance labour, effectively creating an environment where strangers trust each other. While millions of people worldwide rely on online sharing activities, such services are often facilitated by a few predatory companies managing trust relations. This centralization of responsibility raises questions about ethical and political issues like regulatory compliance, data portability and monopolistic behaviour. Recently, blockchain technology has gathered a significant amount of support and adoption, due to its inherent decentralized and tamper-proof structure. We present a blockchain-powered blueprint for a shared and public programmable economy. The focus of our architecture is on four essential primitives: digital identities, blockchain-based trust, programmable money and marketplaces. Trust is established using only historical interactions between strangers to estimate trustworthiness. Every component of our proposed technology stack is designed according to the defining principles of the Internet itself: self-governance, autonomy and shared ownership. Real-world viability of each component is demonstrated with a functional prototype or running code. Our vision is that the highlighted technology stack enables trust, new acts, principles and rules beyond the possibilities of current economic, legal and political systems.
2014
- [arXiv] The Fifteen Year Struggle of Decentralizing Privacy-enhancing Technology. Rolf Jagerman, Wendo Sabee, Laurens Versluis, and 2 more authors. arXiv preprint, 2014.
Ever since its introduction, the internet has been largely devoid of privacy. The majority of internet traffic currently is, and always has been, unencrypted. A number of anonymous communication overlay networks exist whose aim is to provide privacy to their users. However, due to the nature of the internet, it is difficult to make these networks both decentralized and anonymous. We list reasons for having anonymous networks, discern the problems in achieving decentralization, and sum up the biggest initiatives in the field and their current status. To do so, we use one exemplary network, the Tor network. We explain how Tor works, what vulnerabilities this network currently has, and possible attacks that could be used to violate privacy and anonymity. The Tor network is used as a key comparison network in the main part of the report: a tabular overview of the major anonymous networking technologies in use today.