What Are Cyber Social Threats?
The rise of online platforms as a prime, daily communication tool has coincided with a sharp rise in
their misuse, threatening our society at large.
These platforms have been implicated in promoting hate speech, radicalization, harassment,
cyberbullying, fake news, human trafficking, drug dealing, gender-based stereotyping, and
violence, among other ills, with a significant impact on individual and community
well-being.
Cyber social threats during political conflicts present a critical area of concern, as online
platforms are increasingly used to disseminate misinformation, amplify propaganda, and coordinate
malicious activities that exploit political and social divisions. These threats can exacerbate polarization,
fuel unrest, and have profound real-world consequences, and may be amplified by the widespread adoption of
generative artificial intelligence technology. Understanding and mitigating such behaviors will be
of particular interest for CySoc 2026. The inherent complexity of these interactions requires novel
computational approaches and interdisciplinary collaboration to analyze, interpret, and counteract their impacts effectively.
These challenges have made the analysis of online communications increasingly prominent in academia,
politics, homeland security, and industry, drawing on computational techniques from natural language
processing, statistics, network science, data mining, machine learning, computational linguistics,
human-computer interaction, and cognitive science.
To meet these challenges, this workshop aims to stimulate research on the social, cultural, emotional,
communicative, and linguistic aspects of harmful conversations on online platforms, and on
novel approaches to analyze, interpret, and understand them.
The workshop welcomes papers that employ quantitative and/or qualitative, analytical, or theoretical approaches examining a diverse range of issues related to harmful online communication. Papers on resources/data and tools are also welcome, either as demos or as short/regular talks.
Why attend the CySoc Workshop?
This workshop will bring together researchers and practitioners in computer and social sciences from both academia and industry to exchange ideas on understanding the multi-faceted aspects of harmful content, while leading the discussion on building novel computational methods to reliably detect, derive meaning from, interpret, understand, and counter it. Participants will find opportunities to present and hear about fundamental research and emerging applications, to exchange ideas and experiences, and to identify new opportunities for collaboration across disciplines. Researchers and practitioners from various disciplines are strongly encouraged to attend, including (but not limited to) behavioral science, computer and information sciences, psychology, sociology, political science, cognitive science, cultural studies, information systems, terrorism and counter-terrorism, operations research, communication, medicine, and public health.
Themes & Topics
We are interested in both computing and social science approaches that study the above research directions, using quantitative, qualitative, and mixed research methods. We expect to receive submissions and lead discussions on the topics of novel analytic methods, tools, and datasets.
Spotlight topic
This year's workshop spotlights Human–AI Relationships: Risks, Rewards, and Responsibilities. The growing integration of generative and interactive AI systems into everyday social life, communication, and decision-making creates new opportunities for support, creativity, and efficiency. At the same time, it amplifies cyber social threats, including manipulation, mental health risks, over-reliance and dependency, misleading or deceptive information, harassment, and the normalization of harmful behaviors. Understanding how humans perceive, engage with, rely on, and are influenced by AI systems is therefore essential to assessing their broader societal impact.
In addition to the spotlight topic, the workshop will also cover the following themes and topics:
Themes
The CySoc workshop has three main themes:
- Detection and prediction of content, users, and communities
- Countering harmful narratives
- Ethical considerations and handling bias
Topics
Topics for research and discussions on challenges in dealing with online harmful content include (but are not limited to):
- Online extremism
- Harassment and cyberbullying
- Hate speech
- Gender-based violence
- Human trafficking
- Illicit drug trafficking
- Mental health implications of social media
- Ethical considerations on privacy-preserving social media analytics
- Emotional and psychological support
- Trust relationships and community dynamics
- Relationship of the social web and mainstream news media
- Cultural implications of social web usage
- Influencer identification and community detection for movements
- Misinformation and disinformation (e.g., epidemics of fake news, images, and videos during disasters, health crises, and elections)
Important Dates
Submission deadline: March 25, 2026
Author notification: April 8, 2026
We are no longer accepting submissions.
Submission Instructions
This year, CySoc will consider research, position, short, and demo paper submissions for acceptance.
Accepted papers will be published in the workshop proceedings of ICWSM 2026.
Non-archival submissions: We also welcome non-archival submissions of previously published or under-review work relevant to the workshop themes. Authors interested in a non-archival submission should submit a two-page extended abstract (including references) following the same AAAI two-column, camera-ready format described below. Non-archival submissions will undergo the same peer-review process for relevance and scientific rigor, but will not appear in the workshop proceedings. Authors will have the chance to present their work at the workshop.
Please reference the ICWSM 2026 Submission Guidelines for all information.
As per the AAAI-ICWSM guidelines: All papers must be submitted as high-resolution PDF files, formatted in AAAI two-column, camera-ready style, for US Letter (8.5" x 11") paper (see the ICWSM 2026 Submission Guidelines for the most up to date templates).
Full papers are recommended to be 8 pages long and must be at most 11 pages long, including only the main text and the references.
Position and Short papers must be no longer than 5 pages (including references), and Demo descriptions must be no longer than 4 pages (including references).
The mandatory Ethics Checklist (and brief additional Ethics Statement, if desired, see below), optional appendices, etc., do not count toward the page limit and should be placed after the references.
Appendices, if they exist, should be placed after the Ethics Checklist.
Revision papers and final camera-ready full papers can be up to 12 pages.
Note that reviewers are not bound to review appendices, and excessively long appendices can be grounds for rejection.
No source files (Word or LaTeX) are required at the time of submission for review; only the PDF file is permitted.
Finally, the copyright slug may be omitted in the initial submission phase, and no copyright form is required until a paper is accepted for publication.
Anonymity: The review is double-blind.
Therefore, please anonymize your submission: do not put the author names or affiliations at the start of the paper, and do not include funding or other acknowledgments in papers submitted for review.
Use of AI tools: Authors are expected to ensure the accuracy and integrity of all content in their submissions, including references. Submissions found to contain fabricated or unverifiable content may be subject to desk rejection.
For all other matters, please rely on the ICWSM 2026 Submission Guidelines.
Papers should be submitted via EasyChair.org
Workshop Program
Note: This program is preliminary and subject to change.
8:30 – 8:45 AM | Welcome Workshop Attendees
8:45 – 9:45 AM | Keynote I: Angel Hsing-Chi Hwang (in-person).
Title TBA
Abstract TBA
9:45 – 10:35 AM | Paper Session I (Theme TBA): Two full papers, 15 min each; two short papers, 10 min each, incl. Q&A.
- Paper details TBA
10:35 – 10:50 AM | Coffee Break
10:50 AM – 12:00 PM | Demo Presentations: Demo I: TBA; Demo II: TBA (25 min each, incl. Q&A).
12:00 – 12:35 PM | Paper Session II (Theme TBA): One full paper, 15 min; two short papers, 10 min each, incl. Q&A.
- Paper details TBA
12:35 – 1:00 PM | Lunch Break
1:00 – 2:00 PM | Keynote II: Tiziano Piccardi.
Tuning Our Algorithmic Amplifiers: Embedding Pro-Social Values Into Online Platforms
The design of online platforms shapes public discourse, yet these systems can contribute to misinformation, polarization, and social harm. In this talk, I discuss how to reorient platforms toward positive societal outcomes by rethinking their algorithmic objectives and design principles. I introduce a framework for large-scale reranking field experiments without platform collaboration and present results from two randomized studies on X, demonstrating that feed interventions can reduce affective polarization and improve well-being. I then present a general paradigm for embedding societal values into algorithms, enabling user agency through customizable feed ranking. By translating social science constructs into real-time AI systems, I show how platform design can support healthier information ecosystems.
2:00 – 3:00 PM | Panel Discussion: Yutong Zhang.
3:00 – 3:25 PM | Paper Session III (Theme TBA): One full paper, 15 min; one short paper, 10 min, incl. Q&A.
- Paper details TBA
3:25 – 3:45 PM | Coffee Break
3:45 – 4:30 PM | Paper Session IV (Theme TBA): Three full papers, 15 min each, incl. Q&A.
- Paper details TBA
4:30 – 4:45 PM | Synthesis & Brainstorming Impactful Research
4:45 – 5:00 PM | Research Discussion
5:00 – 5:30 PM | Closing Remarks
Registration
Please check the conference site for registration details.
Organizers
Steering Committee
Program Committee
Previous Editions
Explore past CySoc workshops to see previous programs, keynotes, accepted papers, and discussions.