ACM Conference on Trustworthy and Responsible AI and Computing Systems (ACM TRUST 2027)


ACM TRUST 2027 will bring together researchers, practitioners, policymakers, and system builders to shape the future of trustworthy, secure, resilient, and responsible AI and computing systems across high-impact domains.

Washington, DC • 2027 • Global Forum on Trustworthy AI, Systems, Security, and Governance
CMT ACKNOWLEDGMENT: The Microsoft CMT service was used for managing the peer-review process for this conference. This service was provided free of charge by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.

About TRUST 2027

Premier ACM Event

TRUST 2027 is designed as a premier ACM venue for foundational advances, deployment-ready systems, governance frameworks, and responsible innovation in AI and computing systems.

Conference Vision

The conference promotes strong technical rigor and real-world relevance across trustworthy AI, dependable systems, security, resilience, compliance, transparency, and responsible deployment. It aims to connect communities across machine learning, cybersecurity, systems, policy, and applied domains.

Who Should Attend

  • Researchers in AI, systems, and cybersecurity
  • Industry practitioners and product builders
  • Government and public-sector stakeholders
  • Standards, assurance, and governance experts
  • Students and early-career scholars

Research Focus Areas

Trustworthy AI + Systems

TRUST 2027 welcomes foundational, system-level, and domain-driven contributions across the following directions.

  • Trustworthy AI Foundations: Verification, explainability, uncertainty, robust inference, and measurable trust criteria.
  • Secure and Resilient AI: Adversarial ML, model security, privacy-preserving learning, runtime monitoring, and red-teaming.
  • AI in Systems and Infrastructure: Edge AI, cloud orchestration, cyber-physical systems, autonomous systems, and dependable MLOps.
  • Responsible AI and Governance: Fairness, compliance, auditability, policy alignment, governance, and certification frameworks.
  • Evaluation and Assurance: Benchmarks, reproducibility, validation pipelines, system-level assurance, and testing frameworks.
  • High-Impact Applications: Healthcare, public-sector AI, smart infrastructure, defense, finance, and sustainability.