Keynote Speakers
Preslav Nakov
Department Chair and Professor of Natural Language Processing
Research Interests
Professor Nakov’s research interests include computational linguistics and natural language processing; detection of disinformation, propaganda, fake news, and media bias; fact checking; machine translation; question answering; sentiment analysis; lexical semantics; and biomedical text processing.
Contact: preslav.nakov@mbzuai.ac.ae
Prior to joining MBZUAI, Professor Nakov was a principal scientist at the Qatar Computing Research Institute (QCRI), HBKU. Previously, he was a research fellow at the National University of Singapore (2008–2011) and a researcher at the Bulgarian Academy of Sciences (2008). He has been an honorary lecturer at Sofia University, Bulgaria, since 2014.
Professor Nakov authored a Morgan & Claypool book, Semantic Relations Between Nominals (2nd edition, 2021), and two books on computer algorithms. He was also the first recipient of the Bulgarian President’s John Atanasoff award, named after the inventor of the first automatic electronic digital computer.
Professor Nakov is one of the leading experts on “fake news”, disinformation, fact checking, propaganda, and media bias detection, and has published dozens of research papers on solutions and stop-gaps for the ever-growing online social media infodemic.
He has served on the program committees of the major conferences in computational linguistics and artificial intelligence. Most recently, he was a program committee chair of the annual conference of the Association for Computational Linguistics (ACL 2022).
Professor Nakov is president of ACL SIGLEX, the Special Interest Group (SIG) on the Lexicon of the Association for Computational Linguistics (ACL), and secretary of SIGSLAV, the ACL SIG on Slavic Natural Language Processing. He also serves on the advisory board of the European Chapter of the ACL and is secretary of the Truth and Trust Online (TTO) board of trustees. He is an Action Editor for the Computational Linguistics (CL) journal and for the Transactions of the Association for Computational Linguistics (TACL); an Associate Editor for the Transactions on Audio, Speech and Language Processing (TASLP) journal, the IEEE Transactions on Affective Computing (TAFFC) journal, the Frontiers in Artificial Intelligence journal (sections: 1. Language and Computation; 2. Natural Language Processing), and the AI Communications (AIC) journal; a member of the Editorial Board of Computer Speech and Language (CSL) and of the Journal of Natural Language Engineering (NLE); and an Editorial Board member of the Language Science Press book series on Phraseology and Multiword Expressions.
His research has been featured in more than 100 news outlets, including MIT Technology Review, Communications of the ACM (Research Highlights), Forbes, Boston Globe, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget.
Google scholar: https://scholar.google.com/citations?user=DfXsKZ4AAAAJ
ResearchGate: https://www.researchgate.net/profile/Preslav-Nakov
Scopus: https://www.scopus.com/authid/detail.uri?authorId=15043153900
Source: https://mbzuai.ac.ae/study/faculty/preslav-nakov/
Salima Lamsiyah
NLP and Machine Learning Researcher, University of Luxembourg
Research Interests
Salima Lamsiyah is an NLP and machine learning researcher at the University of Luxembourg. Her research interests include automatic text summarization, deep learning models, transfer learning, and natural language processing.
Title: From General LLMs to Cyber-Native Language Models: Specialization, Robustness, and Trust in Cybersecurity AI
Abstract: Recent advances in Large Language Models (LLMs) have created new opportunities for cybersecurity applications, ranging from phishing and malicious URL detection to cyber threat intelligence analysis. However, cybersecurity data differs significantly from natural language, as it often contains highly structured, adversarial, and rapidly evolving patterns that are not always well captured by general-purpose LLMs. This talk explores the transition from generic LLMs toward cyber-native language models designed for cybersecurity tasks. Drawing on recent work on domain-specific transformer models for malicious domain and URL detection, it highlights the importance of specialized pretraining, cybersecurity-aware tokenization, and domain-adaptive representation learning. The talk also discusses emerging challenges related to robustness, interpretability, safe adaptation, and trustworthy deployment in real-world cybersecurity systems.
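To make the idea of cybersecurity-aware tokenization concrete, the following is a minimal, hypothetical sketch (not code from the talk): instead of treating a URL as one opaque string, a domain-aware tokenizer splits it into structural components such as the scheme, host labels, path segments, and query parameter names, which are the kinds of patterns a cyber-native model could learn from.

```python
# Hypothetical illustration of URL-aware tokenization for malicious URL
# detection. A generic whitespace tokenizer would treat the entire URL as a
# single token; splitting on URL structure exposes suspicious host labels
# (e.g. look-alike brand names) as individual tokens.
from urllib.parse import urlparse
import re

def url_tokens(url: str) -> list[str]:
    """Split a URL into scheme, host labels, path segments, and query keys."""
    parts = urlparse(url)
    tokens = [parts.scheme] if parts.scheme else []
    if parts.hostname:
        tokens += parts.hostname.split(".")          # host labels
    tokens += [seg for seg in parts.path.split("/") if seg]  # path segments
    tokens += re.findall(r"[\w-]+(?==)", parts.query)        # query parameter names
    return tokens

url = "http://login.paypa1-secure.example.com/verify/account?id=42&token=x"
print(url_tokens(url))
# → ['http', 'login', 'paypa1-secure', 'example', 'com',
#    'verify', 'account', 'id', 'token']
```

Here the typosquatted label "paypa1-secure" surfaces as its own token, so a downstream classifier can attach weight to it directly; a production system would instead learn a subword vocabulary over large URL corpora, but the structural splitting shown above is the core intuition.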
