The 2025 Singapore Symposium on Natural Language Processing

April 23, 2025, SUTD

Welcome!

We are excited to announce the Singapore Symposium on Natural Language Processing 2025 (SSNLP'25), which will take place on Wednesday, April 23, 2025, as a full-day event. SSNLP is a premier platform for academic and industrial researchers in Singapore to present ongoing and upcoming work, fostering community building, idea exchange, and collaboration. SSNLP'25 is an excellent opportunity for faculty and students to gain international exposure and engage with leading experts in the field.

Our in-person registration is now open. This year's event will be held at SUTD, 8 Somapah Rd, Singapore 487372. Registration is free of charge, but seating is limited, so please complete the registration form to secure your spot. The registration deadline is April 4, 2025, 23:59 (SGT).

Registration Closed

Thank you for your interest in the Call for Presentations for SSNLP'25. The submission deadline (March 7, 2025, 23:59 SGT) has now passed, and we are no longer accepting new submissions. We appreciate all the contributions and look forward to an exciting lineup of poster presentations.

Call for Presentation Closed

Directions


Take a cab to the drop-off point at SUTD, 8 Somapah Rd, Singapore 487372. The Campus Centre will be just to your right as you get off, while Albert Hong Lecture Theatre 1 is down the path to your left on the ground floor.




Latest news


Mar. 05, 2025 — Open for Registration

Feb. 14, 2025 — Call for Presentations: we welcome research publications from related conferences on Natural Language Processing

Jan. 24, 2025 — The date is confirmed: April 23, 2025

Programme

This year, all paper presentations will take the form of posters, allowing for better audience engagement and encouraging in-depth discussions. With distinguished researchers attending ICLR 2025 during the same week, we have increased the number of keynote talks to provide greater exposure to emerging research trends and foster international collaboration. The SSNLP 2025 organising committee expresses its sincere gratitude to IMDA for their generous sponsorship, which has made this series of keynote talks possible.

Time Event
09:00 - 09:20 Registration
Location:   Outside Albert Hong Lecture Theatre 1
09:20 - 09:30 Opening Remarks
Location:   Albert Hong Lecture Theatre 1
09:30 - 10:30 Keynote & Technical Sharing Session 1
Speaker:   Junyang Lin
10:30 - 10:45 Short Break
10:45 - 11:30 Poster Spotlight
  (5 min each)
11:30 - 13:00 Poster Session w/ Buffet Lunch
13:00 - 14:00 Keynote & Technical Sharing Session 2
Speaker:   Nanyun (Violet) Peng
14:15 - 15:15 Keynote & Technical Sharing Session 3
Speaker:   Faeze Brahman
15:15 - 16:00 Coffee Break
16:00 - 17:00 Keynote & Technical Sharing Session 4
Speaker:   Mohit Bansal
17:00 - 17:10 Closing Remarks

Note: The keynote talks are co-hosted with IMDA, as part of the Technical Sharing Session (TSS) Series.

Poster Presentations

This year, we are introducing a new format that includes a 5-minute poster spotlight and a poster session. Each presenter will give a brief research presentation (poster spotlight) to provide a high-level overview of their work, followed by an opportunity for in-depth discussions with event participants during the poster session. Each poster board can accommodate posters sized up to 1 m (L) × 2 m (H).

Spotlight Timeslot Title & Authors
10:45-10:50 Soft Syntactic Reinforcement for Neural Event Extraction
Anran Hao, Jian Su, Shuo Sun, Teo Yong Sen
10:50-10:55 Unmasking Implicit Bias: Evaluating Persona-Prompted LLM Responses in Power-Disparate Social Scenarios
Bryan Chen Zhengyu Tan, Roy Ka-Wei Lee
10:55-11:00 AdaMergeX: Cross-Lingual Transfer with Large Language Models via Adaptive Adapter Merging
Yiran Zhao, Wenxuan Zhang, Huiming Wang, Kenji Kawaguchi, Lidong Bing
11:00-11:05 Enhancing Vision-Language Compositional Understanding with Multimodal Synthetic Data
Haoxin Li, Boyang Li
11:05-11:10 LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models
Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuanhan Zhang, Jingkang Yang, Chunyuan Li, Ziwei Liu
11:10-11:15 Just What You Desire: Constrained Timeline Summarization with Self-Reflection for Enhanced Relevance
Muhammad Reza Qorib, Qisheng Hu, Hwee Tou Ng
11:15-11:20 CDB: A Unified Framework for Hope Speech Detection Through Counterfactual, Desire and Belief
Tulio Ferreira Leite da Silva, Gonzalo Freijedo Aduna, Farah Benamara, Alda Mari, Zongmin Li, Li Yue, Jian Su
— Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing
Fangkai Jiao, Chengwei Qin, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty
— Decomposition Dilemmas: Does Claim Decomposition Boost or Burden Fact-Checking Performance?
Qisheng Hu, Quanyu Long, Wenya Wang
— Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding
Zhihan Zhang, Yixin Cao, Chenchen Ye, Yunshan Ma, Lizi Liao, Tat-Seng Chua
— A Survey of Ontology Expansion for Conversational Understanding
Jinggui Liang, Yuxia Wu, Yuan Fang, Hao Fei, Lizi Liao

Keynote Speakers

The following speakers from both academia and industry are invited to give keynotes at SSNLP 2025.


Speaker: Junyang Lin @ Alibaba Group

Title: Qwen: Towards Generalist Models

Abstract: Since Alibaba launched the Qwen series in 2023, its large language models and multimodal models have been continuously updated and improved. This presentation will introduce the latest developments in the Qwen series, including the current performance of, and the technical implementation behind, the large language model Qwen2.5, the vision-language model Qwen2.5-VL, and the omni model Qwen2.5-Omni. It will also cover the future development directions of the Qwen series.


Bio: Junyang Lin, a senior staff engineer at Alibaba, currently serves as the tech lead for Qwen. His research areas include natural language processing and multimodal representation learning, with a particular focus on large-scale foundation models. He has published papers in top-tier conferences such as NeurIPS, ICML, and ACL, and his Google Scholar citation count exceeds 13,000. Since 2023, he has primarily been responsible for the development, open-sourcing, and application of the Qwen series of large foundation models, including the large language model Qwen2.5, the vision-language model Qwen2.5-VL, the omni model Qwen2.5-Omni, and the coding model Qwen2.5-Coder.




Speaker: Nanyun (Violet) Peng @ UCLA

Title: Controllable and Creative Natural Language Generation

Abstract: Recent advances in large language models (LLMs) have demonstrated strong results in natural language processing (NLP) applications. With the improving capability of LLMs, there is a growing need for controllable generation to produce reliable and tailored outputs, especially in applications requiring adherence to specific constraints or creativity within defined boundaries. However, the auto-regressive nature of LLMs, i.e., generating tokens one by one from left to right, makes it challenging to impose structural or content constraints on the model. In this talk, I will present our recent work on controllable natural language generation (NLG) that transcends the conventional auto-regressive formulation, aiming to improve both the reliability and the creativity of generative models. We introduce controllable decoding-time algorithms that steer auto-regressive models to better conform to specified constraints. Our approach enables more reliable and creative outputs, with applications to creative generation, formality-controlled machine translation, and commonsense-compliant generation.


Bio: Nanyun (Violet) Peng is an Associate Professor of Computer Science at the University of California, Los Angeles. Her research focuses on controllable and creative language generation, multilingual and multimodal models, and the development of automatic evaluation metrics, with a strong commitment to advancing robust and trustworthy artificial intelligence (AI). Her work has been recognized with multiple paper awards, including an Outstanding Paper Award at NAACL 2022, three Outstanding Paper Awards at EMNLP 2024, Oral Papers at NeurIPS 2022 and ICML 2023, as well as several Best Paper Awards at workshops affiliated with top AI and NLP conferences. She was featured in the IJCAI 2022 Early Career Spotlight. Her research has received support from an NSF CAREER Award, an NIH R01, DARPA and IARPA grants, and multiple industrial research awards. She is serving as a Program Chair for ICLR 2025 and EMNLP 2025, and as a board member for NAACL.




Speaker: Faeze Brahman @ Ai2

Title: Open Language Model Adaptation and Reliable Evaluation

Abstract: In this talk, I explore two crucial frontiers in AI development: democratizing language model adaptation and enhancing their reliability in real-world deployment. I will introduce Tulu 3, a family of fully-open post-trained language models. While post-training techniques are critical for refining behaviors and unlocking new capabilities in language models, open recipes significantly lag behind proprietary ones. Tulu 3 addresses this gap by providing complete transparency into data, code, and training methodologies, yielding models that outperform comparable open-weight alternatives while narrowing the gap with proprietary systems. As we increase the capabilities of these models through better post-training techniques, it is important to ensure their responsible deployment. In the second part, I will briefly discuss two projects on balancing reliability and compliance in LMs: a taxonomy of contextual noncompliance that identifies when models should handle out-of-scope queries, and a selective evaluation framework enabling models to abstain from judgments when lacking confidence, achieving stronger alignment with human evaluators.


Bio: Dr. Faeze Brahman is a Research Scientist at the Allen Institute for AI (Ai2). Prior to that, she was a postdoctoral researcher at Ai2 and the University of Washington and received her PhD from UCSC. Her research focuses on constrained reasoning and generation, understanding LLMs' capabilities and limitations, and bridging the capability gap between humans and models beyond scaling through developing resource-efficient algorithms. She is also interested in designing human-centered AI systems that are reliable and safe for real-world applications.




Speaker: Mohit Bansal @ UNC Chapel Hill

Title: Trustworthy Planning Agents for Collaborative Reasoning and Multimodal Generation

Abstract: In this talk, I will present our journey of developing trustworthy and adaptive AI planning agents that can reliably communicate and collaborate for uncertainty-calibrated reasoning (on math, commonsense, coding, tool use, etc.) as well as for interpretable, controllable multimodal generation (across text, images, videos, audio, layouts, etc.). In the first part, we will discuss how to teach agents to be trustworthy and reliable collaborators via social/pragmatic multi-agent interactions (e.g., confidence calibration via speaker-listener reasoning and learning to balance positive and negative persuasion), as well as how to acquire and improve agent skills needed for efficient and robust perception and action (e.g., learning reusable, verified abstractions over actions & code, and adaptive data generation based on discovered weak skills). In the second part, we will discuss interpretable and controllable multimodal generation via LLM-agents based planning and programming, such as layout-controllable image generation and evaluation via visual programming (VPGen, VPEval, DSG), consistent video generation via LLM-guided multi-scene planning, targeted corrections, and retrieval-augmented motion adaptation (VideoDirectorGPT, VideoRepair, DreamRunner), and interactive and composable any-to-any multimodal generation (CoDi, CoDi-2).


Bio: Dr. Mohit Bansal is the John R. & Louise S. Parker Distinguished Professor and the Director of the MURGe-Lab (UNC-NLP Group) in the Computer Science department at UNC Chapel Hill. He received his PhD from UC Berkeley in 2013 and his BTech from IIT Kanpur in 2008. His research expertise is in natural language processing and multimodal machine learning, with a particular focus on multimodal generative models, grounded and embodied semantics, reasoning and planning agents, faithful language generation, and interpretable, efficient, and generalizable deep learning. He is an AAAI Fellow and a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), the IIT Kanpur Young Alumnus Award, the DARPA Director's Fellowship, the NSF CAREER Award, a Google Focused Research Award, a Microsoft Investigator Fellowship, an Army Young Investigator Award (YIP), a DARPA Young Faculty Award (YFA), and outstanding paper awards at ACL, CVPR, EACL, COLING, CoNLL, and TMLR. He has been a keynote speaker at the AACL 2023, CoNLL 2023, and INLG 2022 conferences. His service includes serving as Program Co-Chair for EMNLP and CoNLL, as a member of the ACL Executive Committee and the ACM Doctoral Dissertation Award Committee, as ACL Americas Sponsorship Co-Chair, and as Associate/Action Editor for the TACL, CL, IEEE/ACM TASLP, and CSL journals.



Sponsors

Organizing Committee

General Chair:

Boyang Albert Li, Nanyang Technological University

Program Chairs:

Wenya Wang, Nanyang Technological University

Lizi Liao, Singapore Management University

Local Chairs:

Ming Shan Hee, Singapore University of Technology and Design

Jinggui Liang, Singapore Management University

Industry Relations Chairs:

Yixin Cao, Fudan University

Wenxuan Zhang, Singapore University of Technology and Design

International Outreach Chair:

Nancy F. Chen, A*STAR Institute for Infocomm Research

Publicity Chair:

Shengqiong Wu, National University of Singapore

Registration Chair:

Zhihan Zhang, Singapore Management University

Advisory Committee:

Min-Yen Kan, National University of Singapore

Soujanya Poria, Singapore University of Technology and Design

Roy Lee, Singapore University of Technology and Design

Location

SSNLP 2025 will be held at SUTD, 8 Somapah Rd, Singapore 487372.

Past SSNLP

Contact Us

Please feel free to reach out if you have any inquiries: Lizi Liao, Albert Li and Wenya Wang.