Som Banerji(ee)
Research Scholar @IIT Kharagpur (CNeRG Group)
Software Engineer Technical Leader @Cisco Systems, Inc.

Hello নমস্কার (greetings),
I’m Somnath Banerjee (সোমনাথ ব্যানার্জি), a research scholar working on identifying vulnerabilities in large language models (LLMs) and transforming them into systems that are safe, coherent, and factually grounded. Under the guidance of Prof. Animesh Mukherjee, I aim to make generative AI more equitable and impactful.
In parallel with my research, I work as a Software Engineer Technical Leader at Cisco, where I lead high-impact projects and drive technological excellence. My journey has also included roles as a Software Technical Engineer Expert at Fujitsu Labs, where I led AI initiatives for Australia’s largest telecom organization, and as a Technical Lead and Software Engineer at Cognizant and IBM India.
I hold an M.Tech in Computer Science and Engineering from IIT (ISM) Dhanbad, where I explored extreme classification and earned the university gold medal 🥇 for academic distinction.
Beyond my professional life, I am a fitness enthusiast, an avid reader of Bengali literature (mainly শরদিন্দু বন্দ্যোপাধ্যায়, বিভূতিভূষণ বন্দ্যোপাধ্যায়, হেমেন্দ্রকুমার রায়, রূপক সাহা, and অভীক সরকার), and a budding guitarist. My love for exploration extends to photography, where I capture life’s fleeting moments with my Fujifilm and Nikon cameras. Whether on my bike or in my car, I seek out lesser-traveled roads to uncover hidden stories.
Be it technology, political debates, traveling, or lifting weights, I believe in living with purpose and passion.
News
Feb 22, 2025 | 🎉 Two new papers are out! 🎯 "Soteria: Language-Specific Functional Parameter Steering for Multilingual Safety Alignment" and "MemeSense: An Adaptive In-Context Framework for Social Commonsense Driven Meme Moderation"

Feb 13, 2025 | 🎉 Paper accepted at NAACL 2025 Industry Track! 🎯 "Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance"

Jan 23, 2025 | 🎉 Paper accepted at NAACL 2025 Main! 🎯 "Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models"

Dec 14, 2024 | 🎉 Paper accepted at AAAI 2025 AI Alignment Track! 🎯 "SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models"

Nov 15, 2024 | 🎉 Paper accepted at ICWSM 2025! 🎯 "How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries"

Oct 15, 2024 | 🎉 Paper accepted at EMNLP 2024 Industry Track! 🎯 "Context Matters: Pushing the Boundaries of Open-Ended Answer Generation with Graph-Structured Knowledge Context"

Sep 15, 2024 | 🎉 Paper accepted at EMNLP 2024 Main! 🎯 "Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations"