Topics

Societal and Human Security and Safety

    • Content legitimacy & quality
        • Detection and mitigation of hate speech & offensive language
        • Fake news, deepfakes, mis- & disinformation
        • Detection of machine-generated language (text, speech, gesture)
        • Trust & credibility of online information
    • User Security and Safety
        • Cyberbullying & offender identification
        • Monitoring extremist fora
        • Suicide prevention
        • Clickbait & scam detection
        • Fake profile detection in social networks
    • Technical Measures and Solutions
        • Social engineering identification, phishing detection
        • NLP for risk assessment
        • Controlled languages for safe messages
        • Prevention of malicious use of AI models
        • Forensic linguistics
    • Human Factors in Cyber Security

Speech Technology and Multimodal Investigations for Cyber Security

    • Voice-based security: analysis of voice recordings or transcripts for security threats
    • Detection of machine-generated language in multimodal contexts (text, speech and gesture)
    • NLP and biometrics in multimodal contexts

Data and Software Security

    • Cryptography
    • Digital forensics
    • Malware detection, obfuscation
    • Models for documentation
    • NLP for data privacy and leakage prevention (DLP)
    • Addressing dataset “poisoning” attacks

Human-Centric Security and Support

    • Natural language understanding for chatbots: NLP-powered chatbots for user support and security incident reporting
    • User behaviour analysis: analysing user-generated text data (e.g., chat logs and emails) to detect insider threats or unusual behaviour
    • Human supervision of technology for Cyber Security

Anomaly Detection and Threat Intelligence

    • Text-Based Anomaly Detection
        • Identification of unusual or suspicious patterns in logs, incident reports or other textual data
        • Detecting deviations from normal behaviour in system logs or network traffic
    • Threat Intelligence Analysis
        • Processing and analysing threat intelligence reports, news articles and blogs on the latest Cyber Security threats
        • Extracting key information and indicators of compromise (IoCs) from unstructured text

Systems and Infrastructure Security

    • Systems Security
        • Anti-reverse engineering for protecting privacy and anonymity
        • Identification and mitigation of side-channel attacks
        • Authentication and access control
        • Enterprise-level mitigation
        • NLP for software vulnerability detection
    • Malware Detection through Code Analysis
        • Analysing code and scripts for malware
        • Detection using NLP to identify patterns indicative of malicious code

Financial Cyber Security

    • Financial fraud detection
    • Financial risk detection
    • Algorithmic trading security
    • Secure online banking
    • Risk management in finance
    • Financial text analytics

Ethics, Bias, and Legislation in Cyber Security

    • Ethical and Legal Issues
        • Digital privacy and identity management
        • The ethics of NLP and speech technology
        • Explainability of NLP and speech technology tools
        • Legislation against malicious use of AI
        • Regulatory issues
    • Bias and Security
        • Bias in Large Language Models (LLMs)
        • Bias in security-related datasets and annotations

Datasets and Resources for Cyber Security Applications

Specialised Security Applications and Open Topics

    • Intelligence applications
    • Emerging and innovative applications in Cyber Security

Special Theme Track – Future of Cyber Security in the Era of LLMs & Generative AI

NLPAICS 2026 will feature a special theme track aimed at stimulating discussion around Large Language Models (LLMs), Generative AI and ensuring their safety. The latest generation of LLMs, such as ChatGPT, Gemini, DeepSeek, LLaMA and open-source alternatives, has showcased remarkable advancements in text and image understanding and generation. However, as we navigate this uncharted territory, it becomes imperative to address the challenges associated with employing these models in everyday tasks, focusing on aspects such as fairness, ethics and responsibility. The theme track invites studies on how to ensure the safety of LLMs in various tasks and applications, and on what this means for the future of the field. Possible topics of discussion include (but are not limited to) the following:

    • Detection of LLM-generated language in multimodal contexts (text, speech and gesture)
    • LLMs for forensic linguistics
    • Bias in LLMs
    • Safety benchmarks for LLMs
    • Legislation against malicious use of LLMs
    • Tools to evaluate safety in LLMs
    • Methods to enhance the robustness of language models