M.Sc. Thesis: Adversarial Workload Matters - Executing a Large-Scale Poisoning Attack against Learned Index Structures
M.Sc. Thesis, The University of Melbourne, 2021
Databases rely on indexes to quickly locate and retrieve data stored on disk. While traditional database indexes use tree data structures such as B+ Trees to find the position of a given query key, a learned index structure treats this problem as a prediction task and uses a machine learning model to “predict” the position of the query key. This novel approach to implementing database indexes has inspired a surge of recent research into the effectiveness of learned index structures. However, while the main advantage of learned index structures is their ability to adapt to the data via their underlying ML model, this adaptivity also carries the risk of exploitation by a malicious adversary. This thesis investigates that risk by executing a large-scale poisoning attack against learned index structures.
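To make the prediction framing concrete, below is a minimal, hypothetical Python sketch of the core idea: a single linear model fit over the sorted keys predicts a query key's position, and a bounded local search corrects the prediction. The class name `SimpleLearnedIndex` and its methods are illustrative only and are not taken from the thesis; practical learned indexes (e.g., recursive model indexes) use hierarchies of models rather than one linear fit.

```python
import bisect

class SimpleLearnedIndex:
    """Illustrative sketch: fit position ~ slope * key + intercept over a
    sorted key array, then correct the model's guess with a bounded
    binary search. Hypothetical example, not the thesis's implementation."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Ordinary least-squares fit of position against key value.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        var_k = sum((k - mean_k) ** 2 for k in self.keys)
        cov = sum((k - mean_k) * (p - mean_p) for p, k in enumerate(self.keys))
        self.slope = cov / var_k if var_k else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # The worst-case prediction error over the data bounds the
        # search window needed to guarantee a correct lookup.
        self.max_err = max(abs(self._predict(k) - p)
                           for p, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        """Return the position of key in the sorted array, or None."""
        guess = self._predict(key)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        pos = bisect.bisect_left(self.keys, key, lo, hi)
        if pos < len(self.keys) and self.keys[pos] == key:
            return pos
        return None

# Usage: lookup cost depends on max_err, which is a property of the data.
idx = SimpleLearnedIndex([2, 3, 5, 7, 11, 13, 17, 19, 23, 29])
print(idx.lookup(13))  # -> 5
```

Note how the search window, and hence lookup cost, is determined entirely by how well the model fits the key distribution; this is the data dependence that a poisoning attack can target by inserting keys chosen to inflate the model's error.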
Recommended citation: Bachfischer, M. (2021). Adversarial Workload Matters - Executing a Large-Scale Poisoning Attack against Learned Index Structures. M.Sc. Thesis, The University of Melbourne.