- Jun 11, 2019
- Security
- #machine-learning, #ml
My coauthors and I will be presenting an extended abstract at the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '19) in August. Below is a preview:
Title: MLsploit: A Framework for Interactive Experimentation with Adversarial Machine Learning Research
Authors: Nilaksh Das, Siwei Li, Chanil Jeon, Jinho Jung, Shang-Tse Chen, Carter Yagemann, Evan Downing, Haekyu Park, Evan Yang, Li Chen, Michael Kounavis, Ravi Sahita, David Durham, Scott Buck, Polo Chau, Taesoo Kim, Wenke Lee
Abstract: We present MLsploit, the first user-friendly, cloud-based system that enables researchers and practitioners to rapidly evaluate and compare state-of-the-art adversarial attacks and defenses for machine learning (ML) models. As recent advances in adversarial ML have revealed that many ML techniques are highly vulnerable to adversarial attacks, MLsploit meets the urgent need for practical tools that facilitate interactive security testing of ML models. MLsploit is jointly developed by researchers at Georgia Tech and Intel, and is open-source. Designed for extensibility, MLsploit accelerates the study and development of secure ML systems for safety-critical applications. In this showcase demonstration, we highlight the versatility of MLsploit in performing fast-paced experimentation with adversarial ML research that spans a diverse set of modalities, such as bypassing Android and Linux malware detection, or attacking and defending deep learning models for image classification. We invite the audience to perform experiments interactively in real time by varying different parameters of the experiments or using their own samples, and finally to compare and evaluate the effects of such changes on the performance of the ML models through an intuitive user interface, all without writing any code.
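For readers unfamiliar with the kind of adversarial attacks MLsploit lets users run interactively, below is a minimal sketch of the fast gradient sign method (FGSM), a representative attack on an image classifier. This is an illustrative example only, not MLsploit's implementation; it assumes PyTorch and torchvision are available, and the pretrained ResNet-18 and the epsilon value are arbitrary choices for the sketch.

```python
# Minimal FGSM sketch: perturb an input in the direction of the loss gradient
# sign to flip a classifier's prediction. Illustrative only; not MLsploit code.
import torch
import torch.nn.functional as F
from torchvision import models


def fgsm_attack(model, x, label, epsilon):
    """Return x perturbed by epsilon * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Single-step perturbation that increases the classification loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)       # stand-in for a real input image
    label = model(x).argmax(dim=1)        # model's original prediction
    x_adv = fgsm_attack(model, x, label, epsilon=0.03)
    print("original:", label.item(),
          "adversarial:", model(x_adv).argmax(dim=1).item())
```

In MLsploit, an experiment like this is configured entirely through the web interface, so attack parameters such as epsilon can be varied and the resulting change in model accuracy compared without writing any code.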