OutSafe-Bench: A Benchmark for Multimodal Offensive Content Detection in Large Language Models