
Building Trust in the Age of AI

Oct 2, 2025

Singapore and Adobe are pioneering responsible content through verification.
Source: Adobe.

Singapore sees generative AI as a tool of immense potential, but not one to be wielded recklessly. In a recent national address, Prime Minister Lawrence Wong underlined a guiding principle: innovation without trust is fragile. Singapore's vision for AI is rooted in ethics, human agency, and accountability, not just algorithms, Adobe notes.

Deepfakes and manipulated media now blur reality. Whether used to mislead voters or distort public discourse, such content erodes trust. In the Asia Pacific, concern is high: 80% of respondents in Australia and New Zealand, for example, flagged AI-generated misinformation as a major threat. Adobe, already invested in content creation tools, sees this as a moment to shift the narrative from “what AI can do” to “how reliably and responsibly it does it.”

To anchor trust in digital content, Adobe backs the Content Authenticity Initiative (CAI). This system embeds cryptographically signed metadata, or "content credentials," that record who created a file, when and where it was created, and whether AI was involved in its creation. This metadata accompanies the content, allowing recipients to verify its authenticity and origin. In Singapore, Adobe has formalized this approach through a memorandum with the Centre for Advanced Technologies in Online Safety (CATOS). The goal: integrate provenance technologies across public communications, journalism, and education, reinforcing Singapore's AI governance frameworks.
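In simplified terms, a content credential binds a provenance record to the exact bytes of an asset, so any later modification becomes detectable. The sketch below illustrates that idea in Python with a plain hash check; the field names are hypothetical, and the real C2PA standard used by the CAI defines a far richer, digitally signed manifest format rather than this bare-bones approach.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(asset_bytes: bytes, creator: str, ai_involved: bool) -> dict:
    """Build a simplified provenance manifest for an asset.

    Field names are illustrative only; they do not follow the C2PA schema.
    """
    return {
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_involved": ai_involved,
        # The hash binds this manifest to the exact bytes of the asset.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset has not changed since the manifest was created."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

if __name__ == "__main__":
    image = b"\x89PNG...fake image bytes..."
    manifest = make_manifest(image, creator="Studio A", ai_involved=True)
    print(json.dumps(manifest, indent=2))
    print(verify_manifest(image, manifest))         # unchanged bytes verify
    print(verify_manifest(image + b"x", manifest))  # altered bytes fail
```

A hash alone only proves integrity; production content credentials add a digital signature over the manifest so the recipient can also verify who issued it, which is the part that makes provenance trustworthy rather than merely tamper-evident.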

On Adobe's side, responsible AI is a design priority rather than an add-on. Its Firefly models are trained on licensed and public-domain content, and AI-generated outputs are automatically tagged with content credentials. A new web app now allows creators to embed their own attribution details and even opt out of having their work used in AI training, preserving control and transparency.

In public services, digital trust is essential. AI assists in multilingual call-center transcripts, diagnostics, and citizen engagement, but verification is what ensures those outputs hold weight. Singapore and Adobe are aligning to make authenticity intrinsic to digital experiences, not an afterthought.

As AI becomes more pervasive, its success depends on more than capability. It hinges on how confidently people can trust what they see, share, or act upon. That requires technical safeguards, yes, but also media literacy, ethical standards, and a culture comfortable questioning what appears real. The future of AI is not solely about power; it’s about accountability.