S'pore launches new governance framework for generative AI

Source: The Straits Times
Author: Zhaki Abdullah

SINGAPORE - Transparency about where and how content is generated is crucial in the global fight against misinformation, which has been exacerbated by generative artificial intelligence (gen AI), and is one of nine areas highlighted in an ethical framework for gen AI launched on May 30.

Launched by Deputy Prime Minister Heng Swee Keat on May 30, the Model Governance Framework for Generative AI aims to address concerns around the nascent technology, which has taken the world by storm since late 2022 because of its ability to quickly create realistic content.

"Good governance is crucial. With the right guardrails in place, we create conditions to innovate safely, responsibly, and for a common purpose," said Mr Heng, speaking during the opening ceremony of the fourth annual Asia Tech x Singapore (ATxSG) event, held at the Capella Singapore hotel on Sentosa.

"The borderless nature of tech also means this must be a shared endeavour," he added.

Developed by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), the framework identifies a total of nine areas - including accountability, trusted data for AI training and content provenance - where governance of gen AI can be strengthened.

The framework - developed in consultation with some 70 organisations, ranging from tech giants Microsoft and Google to government agencies such as the US Department of Commerce - also seeks to balance governance with the need to facilitate innovation.

In the area of trusted data, for example, the framework calls on policymakers to clarify how existing personal data laws apply to gen AI, which often draws on large amounts of data.

It also suggests that governments could work with communities to curate repositories of training datasets relevant to their specific contexts, such as those in "low-resource languages" - languages that are not well represented online - which would make gen AI accessible to a greater number of people.

The framework also identifies content provenance as an area of concern, pointing to the increasing difficulty people face in identifying AI-generated content, given the technology's ability to rapidly produce realistic output.

It points to the need for regulators to work with publishers on incorporating technical solutions such as digital watermarking and cryptographic provenance - which can track and verify the origin of digital content - to flag content created or modified by AI.
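The framework does not prescribe any particular implementation, but a minimal sketch of the cryptographic provenance idea - signing a hash of a piece of content at publication so that later edits, including AI modifications, can be detected - might look like the following, assuming Python's third-party cryptography library and an Ed25519 keypair held by the publisher (both illustrative assumptions, not requirements of the framework).

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Publisher side: hash the content and sign the digest at creation time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    content = b"Original article text or image bytes from the publisher"
    signature = private_key.sign(hashlib.sha256(content).digest())

    # Verifier side: recompute the hash and check it against the publisher's
    # public key. Any change to the content invalidates the signature.
    def is_authentic(data: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
        try:
            pub.verify(sig, hashlib.sha256(data).digest())
            return True
        except InvalidSignature:
            return False

    print(is_authentic(content, signature, public_key))               # True
    print(is_authentic(content + b" edited", signature, public_key))  # False

Real-world provenance schemes attach such signatures as metadata travelling with the content, so that downstream platforms can flag material whose signature is missing or no longer verifies.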

The Model Governance Framework for Generative AI builds on an existing framework - originally published in 2019 - which covers only traditional AI.

The two differ in that while traditional AI typically can only analyse given data, gen AI is able to draw on vast amounts of data to generate original content.

The new framework builds on policy ideas highlighted in the IMDA's 2023 discussion paper on gen AI, and also draws on international feedback from discussions with researchers and AI organisations.

It will also be aligned with international AI principles, such as the Hiroshima AI Process announced during the 2023 G7 summit, which calls for the development of interoperable global standards for AI governance frameworks.