Call for Papers

ACM TRUST 2027 • Call for Papers

ACM Conference on Trustworthy and Responsible AI and Computing Systems 2027

ACM TRUST 2027 will bring together researchers, practitioners, policymakers, and system builders working at the intersection of trustworthy AI, security, resilience, governance, and dependable computing infrastructure. The conference aims to foster rigorous dialogue on how next-generation AI systems can be designed, verified, deployed, and governed in ways that are safe, transparent, accountable, and robust in real-world environments.

The 2027 edition will emphasize both foundational advances and system-level practice, creating a strong venue for work that spans theory, methods, infrastructure, benchmarking, and domain-specific applications. We welcome contributions from academia, industry, and government that address the full lifecycle of trustworthy AI, from formal assurance and secure deployment to responsible use in healthcare, cyber-physical systems, national security, finance, public-sector infrastructure, and sustainability.

Conference Overview

Research + Practice

ACM TRUST 2027 is envisioned as a premier forum dedicated to trustworthy AI and computing systems, with a strong focus on methods, infrastructure, and governance that make advanced AI dependable in practice. The conference is intended to serve as a meeting point for communities that are often separated across machine learning, systems, cybersecurity, software assurance, governance, and domain applications.

In addition to technical rigor, the conference values work with measurable impact, reproducibility, and deployment relevance. Papers may contribute new theories, algorithms, architectures, benchmarks, evaluation frameworks, certification approaches, or application-driven studies that strengthen trust in AI-enabled systems across critical and high-impact domains.

Anticipated Audience

Academia + Industry

Researchers and practitioners across the full breadth of academia and industry who are advancing the design, development, verification, and use of trustworthy AI and computing systems.

Solicitation
How papers will be solicited: via the ACM webpage, targeted email lists, and social media channels including LinkedIn, X, Instagram, and Facebook.

Review
How papers will be selected: all research paper submissions will undergo rigorous double-blind peer review organized by the Technical Program Committee.

Topics of Interest

Six Research Pillars

We welcome submissions across six key pillars that collectively define the technical, operational, and societal foundations of trustworthy AI systems.

P1

Foundations of Trustworthy AI

  • Formal verification of AI and ML systems
  • Robustness, generalization, and adversarial resilience
  • Interpretability and explainability foundations
  • Uncertainty quantification and reliability metrics
  • Causal reasoning and trustworthy inference
  • Neuro-symbolic and hybrid AI approaches
  • Formal proofs of safety and correctness
  • Trust metrics and measurable assurance criteria
P2

Secure and Resilient AI Systems

  • Adversarial machine learning
  • Data poisoning and model backdoor defenses
  • Secure model deployment pipelines
  • Privacy-preserving AI, federated learning, differential privacy, and encrypted computation
  • AI supply chain security
  • Runtime monitoring and anomaly detection
  • AI for cybersecurity and cyber-physical systems
  • Red-teaming and stress testing methodologies
P3

Trustworthy AI in Systems and Infrastructure

  • AI in cyber-physical systems
  • Edge AI and trustworthy IoT
  • Cloud and edge orchestration for safe AI
  • Resource-aware and energy-aware trustworthy AI
  • AI reliability in distributed systems
  • Human-in-the-loop system architectures
  • Trustworthy autonomous systems including vehicles, robotics, and UAVs
  • AI lifecycle management and MLOps assurance
P4

Responsible AI, Governance and Compliance

  • Fairness, bias mitigation, and accountability
  • Transparency and auditability
  • AI risk management frameworks
  • Compliance with emerging AI regulations
  • Standards and certification of AI systems
  • AI governance in large-scale deployments
  • Ethical system design methodologies
  • Benchmarking responsible AI practices
P5

Evaluation, Benchmarking and Assurance

  • Trustworthiness benchmarks
  • Stress-testing frameworks
  • Reproducibility and replicability in AI research
  • Certification frameworks for AI systems
  • Dataset integrity and data governance
  • Risk assessment methodologies
  • System-level validation and verification pipelines
P6

Domain-Specific Trustworthy Applications

  • Healthcare AI safety
  • Smart grid and critical infrastructure
  • Industrial AI and cyber-physical systems
  • Financial AI risk control
  • Public-sector AI systems
  • Defense and national security AI
  • Climate and sustainability systems

Submission Process

Double-Blind Review

Authors are invited to submit original, unpublished research papers aligned with the conference themes. All submissions should present significant technical contributions, clear validation, and relevance to trustworthy, secure, and responsible AI systems.

  • Prepare your manuscript: Follow the official ACM conference template and anonymization requirements for double-blind review.
  • Submit through the conference system: Upload your paper, metadata, and required declarations through the designated submission portal.
  • Peer review and decision: Each submission will be evaluated by expert reviewers coordinated by the Technical Program Committee.
  • Revise and finalize: Accepted papers will complete the camera-ready process and be included in the conference proceedings.

Why Submit to TRUST 2027?

TRUST 2027 is designed to highlight impactful work that advances trustworthy AI not only in theory, but also in deployable systems, governance frameworks, and high-stakes applications.

  • Six technical pillars spanning foundations to applications
  • Balanced scope across AI algorithms, systems infrastructure, and governance
  • Rigorous double-blind peer review with strong technical scrutiny
