What Are Cyber Social Threats?

The rise of online platforms as a primary, daily communication tool has coincided with a sharp increase in their misuse, threatening society at large. These platforms have been implicated in promoting hate speech, radicalization, harassment, cyberbullying, fake news, human trafficking, drug dealing, gender-based stereotyping, and violence, among other ills, with a significant impact on individual and community well-being.

Cyber social threats during political conflicts present a critical area of concern, as online platforms are increasingly used to disseminate misinformation, amplify propaganda, and coordinate malicious activities that exploit political and social divisions. These threats can exacerbate polarization, fuel unrest, and have profound real-world consequences, and may be amplified by the widespread adoption of generative artificial intelligence technology. Understanding and mitigating such behaviors will be of particular interest for CySoc 2025. The inherent complexity of these interactions requires novel computational approaches and interdisciplinary collaboration to analyze, interpret, and counteract their impacts effectively.

These challenges have brought the analysis of online communications to growing prominence in academia, politics, homeland security, and industry, drawing on computational techniques from natural language processing, statistics, network science, data mining, machine learning, computational linguistics, human-computer interaction, and cognitive science. To meet these challenges, this workshop aims to stimulate research on the social, cultural, emotional, communicative, and linguistic aspects of harmful conversations on online platforms, and to develop novel approaches to analyze, interpret, and understand them.

The workshop welcomes papers that employ quantitative, qualitative, analytical, and/or theoretical approaches to examine a diverse range of issues related to harmful online communications. Papers on resources/data and tools are also welcome, either as demos or as short/regular talks.

Why attend the CySoc Workshop?

This workshop will bring together researchers and practitioners in the computer and social sciences, from both academia and industry, to exchange ideas on understanding the multi-faceted aspects of harmful content while leading the discussion on building novel computational methods to reliably detect, interpret, understand, and counter it. Participants will have opportunities to present and hear about fundamental research and emerging applications, to exchange ideas and experiences, and to identify new opportunities for cross-disciplinary collaboration. Researchers and practitioners from a wide range of disciplines are strongly encouraged to attend, including (but not limited to) behavioral science, computer and information sciences, psychology, sociology, political science, cognitive science, cultural studies, information systems, terrorism and counter-terrorism, operations research, communication, medicine, and public health.


Themes & Topics

We are interested in both computing and social science approaches to the research directions above, based on quantitative, qualitative, and mixed research methods. We expect to receive submissions and lead discussions on novel analytic methods, tools, and datasets.

Spotlight topic

Parallel to the main themes, this year, we will have a spotlight topic: Political Conflicts in Online Platforms in the Era of Gen-AI.

Themes

The CySoc workshop has three main themes:

  • Detection and prediction of content, users, and communities
  • Countering harmful narratives
  • Ethical considerations and handling bias

Topics

Topics for research and discussion on the challenges of dealing with harmful online content include (but are not limited to):

  • Spotlight topic: Political Conflicts in Online Platforms in the Era of Gen-AI
  • Online extremism
  • Harassment and cyberbullying
  • Hate speech
  • Gender-based violence
  • Human trafficking
  • Illicit drug trafficking
  • Mental health implications of social media
  • Ethical considerations on privacy-preserving social media analytics
  • Emotional and psychological support
  • Trust relationship and community dynamics
  • Relationship of the social web and mainstream news media
  • Cultural implications of social web usage
  • Influencer identification and community detection for movements
  • Misinformation and disinformation (e.g., epidemics of fake news, images and videos, during a disaster, health issues and elections)

Important Dates


Paper submissions due: March 31, 2025 (extended to April 10, 2025, AoE)
We are no longer accepting submissions.

Submission Instructions


This year, CySoc will consider research, position, short, and demo paper submissions for acceptance.

Please reference the ICWSM 2025 Submission Guidelines for all information.

As per the AAAI-ICWSM guidelines: All papers must be submitted as high-resolution PDF files, formatted in AAAI two-column, camera-ready style, for US Letter (8.5" x 11") paper (see the ICWSM 2025 Submission Guidelines for the most up-to-date templates).

  • Full papers are recommended to be 8 pages long and must be at most 11 pages long, including only the main text and the references.
  • Position and short papers must be no longer than 5 pages (including references), and demo descriptions must be no longer than 4 pages (including references).
  • Revision papers and final camera-ready full papers can be up to 12 pages.
  • The mandatory Ethics Checklist (and a brief additional Ethics Statement, if desired), optional appendices, etc., do not count toward the page limit and should be placed after the references. Appendices, if present, should be placed after the Ethics Checklist. Note that reviewers are not obliged to review appendices, and excessively long appendices can be grounds for rejection.
  • No source files (Word or LaTeX) are required at the time of submission for review; only the PDF file is permitted.
  • The copyright slug may be omitted in the initial submission phase, and no copyright form is required until a paper is accepted for publication.

Anonymity: The review is double-blind. Therefore, please anonymize your submission: do not put the author names or affiliations at the start of the paper, and do not include funding or other acknowledgments in papers submitted for review.

For all other matters, please rely on the ICWSM 2025 Submission Guidelines.

Papers should be submitted via EasyChair.org.

Workshop Program

Venue Location

A. C. Meyers Vænge 15, 2450, Building A
København, Denmark, Room 3.0.084A

See the conference website venue information for the latest.

Note: All times in GMT+1 timezone

The program below is tentative and subject to change.
If you are a presenter and notice a mistake, please contact us via email.

🙏 08.30 - 08.45 AM | Welcome workshop attendees
🎤 08.45 - 09.45 AM | Keynote I: Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence.

Factuality Challenges in the Era of Large Language Models
We will discuss the risks, the challenges, and the opportunities that Large Language Models (LLMs) bring regarding factuality. We will then present some recent work on using LLMs for fact-checking, on detecting machine-generated text, and on fighting the ongoing misinformation pollution with LLMs. Finally, we will present a number of LLM fact-checking tools recently developed at MBZUAI: (i) LM-Polygraph, a tool to predict an LLM's uncertainty in its output using cheap and fast uncertainty quantification techniques, (ii) Factcheck-Bench, a fine-grained evaluation benchmark and framework for fact-checking the output of LLMs, (iii) Loki, an open-source tool for fact-checking the output of LLMs, developed based on Factcheck-Bench and optimized for speed and quality, (iv) OpenFactCheck, a framework for fact-checking LLM output, for building customized fact-checking systems, and for benchmarking LLMs for factuality, (v) LLM-DetectAIve, a tool for machine-generated text detection, and (vi) FRAPPE, a FRAming, Persuasion, and Propaganda Explorer.


📄 09.45 - 10.35 AM | Paper Session I (Computational Methodology): Two full papers, 15 minutes each; two short papers, 10 minutes each, including Q/A.
  • (Full) The Viral Nature of Symbols: Epidemiological Modeling of Visual Elements in Digital Information Campaigns. Sayantan Bhattacharya, Nitin Agarwal, and Diwash Poudel. DOI: 10.36190/2025.21
  • (Full) StressSGCL: A Stress-Specific Embeddings Learning Approach using Contrastive Learning over Skip-Gram Model. Muhammad Abulaish, Rumjot Kaur, and Amit Kumar Sah. DOI: 10.36190/2025.17
  • (Short) PRISM: Perceptual Recognition for Identifying Standout Moments in Human-Centric Keyframe Extraction. Mert Can Cakmak, Nitin Agarwal, and Diwash Poudel. DOI: 10.36190/2025.16
  • (Short) Evaluating Counter-Argument Strategies for Logical Fallacies: An Agent-Based Analysis of Persuasiveness and Polarization. Keisuke Toyoda, Tomoki Fukuma, Koki Noda, Kyosuke Kambe, Yu Masubuchi, Hiroshi Someda, and Fujio Toriumi. DOI: 10.36190/2025.12
☕️ 10.35 - 11.00 AM | Coffee Break
💻 11.00 - 12.00 PM | Demo presentations: 30 minutes each, including Q/A.
📄 12.00 - 12.35 PM | Paper Session II (Elections): One full paper, 15 minutes; two short/dataset papers, 10 minutes each, including Q/A.
  • (Full) Investigating Coordinated Inauthentic Behavior on Alternative Platforms During the 2024 U.S. Election. Carlo Colizzi, Antonio Alessio Della Sala, Giuseppe Fenza, and Lukasz Gajewski. DOI: 10.36190/2025.19
  • (Short) Political Biases on X before the 2025 German Federal Election. Tabia Tanzin Prama, Chhandak Bagchi, Vishal Kalakonnavar, Paul Krauß, and Przemyslaw Grabowicz. (Preprint)
  • (Dataset) A Public Dataset Tracking Social Media Discourse about the 2024 U.S. Presidential Election on Twitter/X. Ashwin Balasubramanian, Vito Zou, Hitesh Narayana, Christina You, Luca Luceri, and Emilio Ferrara. DOI: 10.36190/2025.15
🥪 12.35 - 01.30 PM | Lunch Break
🎤 01.30 - 02.30 PM | Keynote II: Debora Nozza, Bocconi University.

Roadmap to universal hate speech detection
The proliferation of hate speech on social media platforms has been rising, with (pseudo-)anonymity allowing individuals to target others without being recognized or easily traced. While this societal issue has garnered significant attention in the NLP community, it presents three major challenges. Hate speech detection models need to be fair, work across all languages, and incorporate personalization while balancing privacy concerns. Addressing these challenges will revolutionize the field of hate speech detection and contribute to the development of a "universal" model that can adapt to individual user perspectives. In this talk, I will present my contributions in this area along with my perspectives on future directions.

👥 02.30 - 03.30 PM | Panel: Political Conflicts in Online Platforms in the Era of Gen-AI: 10-minute talks per panelist, then moderated Q&A.
📄 03.30 - 03.55 PM | Paper Session III (Misinformation): One full paper, 15 minutes; one short paper, 10 minutes, including Q/A.
  • (Full) "We're watching for cheating": The Role of Visual Media in a U.S. Election Online Rumoring Campaign. Kristen Engel, Ashlyn Aske, Kayla Duskin, Sukrit Venkatagiri, and Emma Spiro. DOI: 10.36190/2025.13
  • (Short) Towards Protecting Youth from Mis- and Disinformation. Matteo Große-Kampmann, and Omed Abed. DOI: 10.36190/2025.20
☕️ 03.55 - 04.15 PM | Coffee Break
📄 04.15 - 05.00 PM | Paper Session IV (Content Analysis): Three full papers, 15 minutes each, including Q/A.
  • (Full) Navigating Hate Speech: Bridging LLMs and Human Expertise in Public Officials' Online Communication. Nitheesha Nakka, Isaac Pollert, Lingyu Fuca, and Cassandra Tai. DOI: 10.36190/2025.11
  • (Full) Gender Inequalities in Content Collaborations: Asymmetric Creator Synergy and Symmetric Audience Biases. Mingyue Zha, and Herbert Chang. DOI: 10.36190/2025.14
  • (Full) Mapping Hate and Extremism: How Brazilian Reddit Communities Spread Toxic Discourse. Pedro Bento, Yan Aquino, Arthur Buzelin, Pedro Robles Dutenhefner, Arthur Chagas, Victoria Estanislau, Lucas Dayrell, Samira Malaquias, Caio Santana, Marcelo Sartori Locatelli, Gisele L. Pappa, Wagner Meira Jr., and Virgilio Almeida. DOI: 10.36190/2025.18
🙇 05.00 - 05.05 PM | Closing Remarks

Registration

Please check the conference site for registration details.

Organizers

Ugur Kursuncu

Georgia State University, GA, USA
Contact Email

Kaicheng Yang

Northeastern University, MA, USA
Contact Email

Francesco Pierri

Politecnico di Milano, Milano, Italy
Contact Email

Matthew DeVerna

Indiana University, Bloomington, IN, USA
Contact Email

Yelena Mejova

ISI Foundation, Turin, Italy
Contact Email

Jeremy Blackburn

State University of New York at Binghamton, NY, USA
Contact Email




Steering Committee

Program Committee

More names may be added once confirmed.