New technology is moving at warp speed, and so are the threats that come with it.
Alarm bells over the latest form of artificial intelligence (AI) — generative AI — are deafening, and they are loudest from the developers who designed it.
These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.
We must take those warnings seriously. Our proposed Global Digital Compact, New Agenda for Peace, and Accord on the global governance of AI will offer multilateral solutions based on human rights.
But the advent of generative AI must not distract us from the damage digital technology is already doing to our world.
The proliferation of hate and lies in the digital space is causing grave global harm — now.
It is fueling conflict, death and destruction — now. It is threatening democracy and human rights — now. It is undermining public health and climate action — now.
When social media emerged a generation ago, digital platforms were embraced as exciting new ways to connect.
And, indeed, they have supported communities in times of crisis, elevated marginalized voices and helped to mobilize global movements for racial justice and gender equality.
Social media platforms have helped the United Nations to engage people around the world in our pursuit of peace, dignity and human rights on a healthy planet. But today, this same technology is often a source of fear, not hope.
Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people.
Some of our own United Nations peacekeeping missions and humanitarian aid operations have been targeted, making their work even more dangerous.
This clear and present global threat demands clear and coordinated global action.
Our policy brief on information integrity on digital platforms puts forward a framework for a concerted international response.
Its proposals aim to create guardrails that help Governments come together around guidelines promoting facts, exposing conspiracies and lies, and safeguarding freedom of expression and information. They also aim to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem.
Governments have sometimes resorted to drastic measures — including blanket Internet shutdowns and bans — that lack any legal basis and infringe on human rights.
Around the world, some tech companies have done far too little, too late to prevent their platforms from contributing to violence and hatred.
The recommendations in this brief seek to make the digital space safer and more inclusive while vigorously protecting human rights.
They will inform a United Nations Code of Conduct for Information Integrity on Digital Platforms that we are developing ahead of next year’s Summit of the Future. The Code of Conduct will be a set of principles that we hope Governments, digital platforms and other stakeholders will implement voluntarily.
The proposals in this policy brief, in preparation for the Code of Conduct, include:
A commitment by Governments, tech companies and other stakeholders to refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.
A pledge by Governments to guarantee a free, viable, independent, and plural media landscape, with strong protections for journalists.
The consistent application by digital platforms of policies and resources across all countries and languages, to eliminate the double standards that allow hate speech and disinformation to flourish in some languages and countries while being prevented more effectively in others.
Agreed protocols for a rapid response by Governments and digital platforms when the stakes are highest — in times of conflict and high social tensions.
And a commitment from digital platforms to build safety, privacy and transparency into all their products.
That includes urgent and immediate measures to ensure that all AI applications are safe, secure, responsible and ethical, and comply with human rights obligations.
The brief proposes that tech companies should undertake to move away from damaging business models that prioritize engagement above human rights, privacy, and safety.
It suggests that advertisers — who are deeply implicated in monetizing and spreading damaging content — should take responsibility for the impact of their spending.
It recognizes the need for a fundamental shift in incentive structures.
Disinformation and hate should not generate maximum exposure and massive profits.
The brief suggests that users — including young people who are particularly vulnerable — should have more influence on policy decisions, and it proposes that digital platforms make a commitment to data transparency.
Users should be able to access their own data. Researchers should have access to the vast quantities of data generated by digital platforms, while respecting user privacy.
I hope this policy brief will be a helpful contribution to discussions ahead of the Summit of the Future.
We are counting on broad engagement and strong contributions from all stakeholders as we work towards a United Nations Code of Conduct for Information Integrity on Digital Platforms.
We don’t have a moment to lose, and I thank you for your attention and for your presence.