Xun Xian
I am Xun Xian, a fourth-year Ph.D. student at the University of Minnesota, advised by Prof. Jie Ding and Prof. Mingyi Hong.
My research focuses on the trustworthiness and safety of large language models (LLMs) and diffusion models. Currently, I am developing watermarking techniques for generative models, covering both diffusion models and LLMs, to support the safe use of generative AI.
Contact: xian0044@umn.edu
news
| Date | News |
| --- | --- |
| May 16, 2024 | I will be joining AWS Bedrock as an Applied Scientist intern for Summer 2024, under the supervision of Dr. Yanjun Qi. |
| Apr 16, 2024 | Our multi-year collaboration with Cisco Research has been featured in the news. The Head of Cybersecurity Research remarked, “This collaboration is one of the most successful at Cisco Research and an example that industry research and academic research could follow.” As one of the leading student researchers, I am very proud to be part of the team! |
| Jan 16, 2024 | Our paper “Demystifying Poisoning Backdoor Attacks from a Statistical Perspective” has been accepted at ICLR 2024. In it, we develop a fundamental understanding of backdoor attacks on generative models, including both diffusion models and large language models. |
| Oct 24, 2023 | Glad to receive the NeurIPS 2023 Scholar Award! |
| Sep 30, 2023 | Our paper “A Unified Detection Framework for Inference-Stage Backdoor Defenses” has been accepted at NeurIPS 2023. In it, we develop a unified framework for defending against backdoor attacks in both computer vision (CV) and natural language processing (NLP) applications, achieving up to a 300% improvement in detection performance. |