
Call for Papers and Submission Guidelines

The irreversible dependence on computing technology has paved the way for cybersecurity’s rapid emergence as one of modern society’s grand challenges. To combat the ever-evolving, highly dynamic threat landscape, numerous academics and industry professionals are systematically searching through billions of log files, social media platforms (e.g., the Dark Web), malware files, and other data sources to preemptively identify, mitigate, and remediate emerging threats and key threat actors. Artificial Intelligence (AI)-enabled analytics has started to play a pivotal role in sifting through large quantities of these heterogeneous cybersecurity data to execute fundamental cybersecurity tasks such as asset management, vulnerability prioritization, threat forecasting, and control allocation. However, the volume, velocity, variety, and veracity of cybersecurity data contrast sharply with those of conventional data sources. Moreover, industry and academic AI-enabled cybersecurity analytics are often siloed. To this end, this workshop aims to convene academics and practitioners (from industry and government) to share, disseminate, and communicate completed research papers, work in progress, and review articles about AI-enabled cybersecurity analytics. Areas of interest include, but are not limited to:

Each manuscript must clearly articulate its data (e.g., key metadata, statistical properties), analytical procedures (e.g., representations, algorithm details), and evaluation setup and results (e.g., performance metrics, statistical tests, case studies). Providing these details will help reviewers better assess the novelty, technical quality, and potential impact of the work. Making data, code, and processes publicly available to facilitate scientific reproducibility is not required; however, it is strongly encouraged, as it helps foster a data/code sharing culture in this quickly developing discipline.

All submissions must be in PDF format and formatted according to the new Standard ACM Conference Proceedings Template. Initial submissions are limited to 4 pages, excluding references and supplementary materials. Upon acceptance, authors may add one page (5 pages total) to the camera-ready version to account for reviewer comments. Authors should use supplementary material only for minor details that do not fit within the 4 pages but enhance the scientific reproducibility of the work (e.g., model parameters). Since all reviews are double-blind, author names and affiliations should NOT be listed. For accepted papers, at least one author must attend the workshop to present the work. Based on the reviews received, accepted papers will be designated either as contributed talks (four total, 15 minutes each) or as posters. All accepted papers will be posted on the workshop website but will not appear in the proceedings, per ACM KDD workshop regulations.


Agenda

This workshop will be held in person on 8/7 from 8:00 AM to 12:00 PM Pacific Time.

Room: Long Beach Convention & Entertainment Center, Room 202B

An agenda for the workshop is as follows:

8:00 - 8:10 am PT Session 0
Welcome and Overview
8:10 - 9:30 am PT Session 1

8:10 - 8:30 am PT
State-aware Anomaly Detection for Massive Sensor Data in Internet of Things; Jiafan He, Lu-An Tang, Peng Yuan, Yuncong Chen, Haifeng Chen, Yuji Kobayashi, and Quanquan Gu. (Paper)

8:30 - 8:50 am PT
Detecting Robotic and Compromised IPs in Digital Advertising; Rohit R. R., Rakshith V., and Agniva Som. (Paper)

8:50 - 9:10 am PT
Robust Fraud Detection via Supervised Contrastive Learning; Vinay M.S., Shuhan Yuan, and Xintao Wu. (Paper)

9:10 - 9:30 am PT
Improving Email Filtering Systems: A Graph Neural Network Approach; William Arliss. (Paper)
9:50 - 11:10 am PT Session 2

9:50 - 10:10 am PT
Learning Explainable Network Request Signatures for Robot Detection; Rajat Agarwal, Sharad Chitlangia, Anand Muralidhar, Adithya Niranjan, Abheesht Sharma, Koustav Sadhukhan, and Suraj Sheth. (Paper)

10:10 - 10:30 am PT
A Transformer-based User Behavior Representation for Peer Grouping in Threat Detection; Xiao Lin, Glory Avina, and Stanislav Miskovic. (Paper)

10:30 - 10:50 am PT
Assessing Large Language Model’s Knowledge of Threat Behavior in MITRE ATT&CK; Ethan Garza, Erik Hemberg, Stephen Moskal, and Una-May O’Reilly. (Paper)

10:50 - 11:10 am PT
Hybrid Attack Graph Generation with Graph Convolutional Deep-Q Learning; Sam Donald, Rounak Meyur, and Sumit Purohit. (Paper)
11:20 am - 12:00 pm PT Session 3
A casual conversation about how Large Language Models (LLMs) will affect AI for Cybersecurity Research (all are invited)
- Moderated by Prof. Jay Yang, Global Cybersecurity Institute, Rochester Institute of Technology


Key Dates


Submission Site

Submission Site: EasyChair Submission


Workshop Co-Chairs

Dr. Sagar Samtani
Indiana University

Dr. Jay Yang
Rochester Institute of Technology

Dr. Hsinchun Chen
University of Arizona


Program Committee (listed alphabetically based on last name)