Xun Xian

ECE, University of Minnesota, Twin Cities.


I am Xun Xian, a fourth-year Ph.D. student at the University of Minnesota, supervised by Prof. Jie Ding and Prof. Mingyi Hong.

My research focuses on the trustworthiness and safety of large language models (LLMs) and diffusion models. Currently, I am developing watermarking techniques for generative models, covering both diffusion models and LLMs, to support the safe use of generative AI.

Contact: xian0044@umn.edu

news

Jan 16, 2024 Our paper titled ‘Demystifying Poisoning Backdoor Attacks from a Statistical Perspective’ has been accepted at ICLR 2024. In this paper, we developed a fundamental understanding of backdoor attacks for generative models, including both diffusion models and large language models.
Oct 24, 2023 Glad to receive the NeurIPS 2023 Scholar Award!
Sep 30, 2023 Our paper titled ‘A Unified Detection Framework for Inference-Stage Backdoor Defenses’ has been accepted at NeurIPS 2023. In this paper, we developed a unified framework for defending against backdoor attacks in both computer vision (CV) and natural language processing (NLP) applications, achieving up to a 300% improvement in detection performance.
May 30, 2023 A paper has been accepted at ICML 2023.
Dec 30, 2022 Glad to receive the Cisco Graduate Research Award!

selected publications

  1. NeurIPS’23
    A Unified Detection Framework for Inference-Stage Backdoor Defenses
    Xun Xian, Ganghua Wang, Jayanth Srinivasa, and 4 more authors
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023
  2. ICML’23
    Understanding Backdoor Attacks through the Adaptability Hypothesis
    Xun Xian, Ganghua Wang, Jayanth Srinivasa, and 4 more authors
    In International Conference on Machine Learning, 2023
  3. NeurIPS’20
    Assisted learning: A framework for multi-organization learning
    Xun Xian, Xinran Wang, Jie Ding, and 1 more author
    In Thirty-fourth Conference on Neural Information Processing Systems, 2020
  4. ICLR’24
    Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
    Ganghua Wang, Xun Xian, Jayanth Srinivasa, and 4 more authors
    In International Conference on Learning Representations, 2024