South Korea introduces world’s first comprehensive AI safety law
South Korea has officially enacted the world’s first comprehensive law regulating artificial intelligence (AI) safety, the government announced Thursday. The new legislation, known as the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, establishes mandatory safeguards for AI use across multiple sectors.
Under the law, companies must label AI-generated content with clear watermarks, manage high-risk AI systems responsibly, and put safety measures in place for applications that affect daily life, such as hiring, loan approvals, and medical advice.
International firms operating in South Korea with annual global revenue exceeding $680 million, domestic sales above $6.8 million, or more than one million daily users must appoint a local representative, a requirement that covers global technology companies including OpenAI and Google.
Violations of the law can result in fines of up to $20,400, though authorities have indicated a one-year grace period to give businesses time to comply. The science minister must also present a new policy blueprint every three years to promote responsible AI development.
Officials emphasized that watermarks and transparent labeling are the minimum safeguards needed to curb risks such as deepfakes and AI-generated misinformation.