Elon Musk & Kamala Harris To Join Chinese Alibaba & Tencent Reps At UK’s AI Safety Summit, As Creative Industries Gather At AI Fringe


Elon Musk touched down in his private jet at Luton airport outside London on Tuesday ahead of the UK’s two-day artificial intelligence safety summit, which kicks off on Wednesday.

The Tesla and SpaceX tech billionaire was a late-announced addition to the roster of some 200 participants expected to gather at Bletchley Park, the historic base of the UK's World War Two codebreakers, which was captured on the big screen in the Alan Turing biopic The Imitation Game.


Other high-profile attendees will include U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, Microsoft President Brad Smith, Sam Altman, CEO of ChatGPT developer OpenAI, and UK AI guru Demis Hassabis of Google DeepMind (scroll down for the full list).

The AI Safety Summit, billed as the first global conference of its stature on AI safety, has been spearheaded by UK Prime Minister Rishi Sunak to discuss the risks of AI and how they can be mitigated through internationally coordinated action.

Musk will stay on after the summit officially ends to take part in a public conversation with Sunak, which will be live-streamed on X (formerly Twitter).

The summit focuses on five key areas: the risks posed by frontier AI (highly capable general-purpose AI models); how to set in motion a process for international collaboration on frontier AI safety, in a way that supports national and international frameworks; what measures individual organisations should take to increase frontier AI safety; potential areas for collaboration on AI safety research; and a showcase of how safety will enable AI to be used for good.

The event comes amid a raft of initiatives worldwide to create some sort of regulatory framework around AI technology, amid fears it could have a negative impact on human society.

U.S. President Joe Biden got a jump on other nations on Monday when he signed an Executive Order setting out a sweeping series of priorities aimed at AI safety, with standards around privacy, equity and civil rights, and consumer and worker protections.

At the same time, the European Union is in the final stages of negotiating its inaugural AI Act, with the aim of getting it over the line by the end of the year.

China also unveiled its Global AI Governance Initiative on October 20, declaring: “We must adhere to the principle of developing AI for good, respect the relevant international laws, and align AI development with humanity’s common values of peace, development, equity, justice, democracy, and freedom.”

The country’s presence at the UK’s AI Safety Summit has drawn criticism in some quarters. Former UK Prime Minister Liz Truss warned in a letter posted on X that China had “a fundamentally different attitude to the West about AI, seeing it as a means of state control and a tool for national security.”

The invitation was maintained amid a thaw in relations between the UK and China in recent months.

The Chinese delegation includes representatives of the Chinese Academy of Sciences as well as of tech giants Alibaba and Tencent.

Beyond the main conference, a series of complementary events has also been taking place across London and the UK this week under the banner of the AI Fringe.

Representatives of the creative industries will be discussing the implications of AI for their sectors at one of these events on Wednesday at the British Library.

Panelists will include Equity UK Industrial Official Liam Budd, Creators’ Rights Alliance Chair Nicola Solomon, Association of Photographers CEO Isabelle Doran and Moiya McTier, advisor to the Washington and Austin-based Human Artistry Campaign.

The latter org was launched in March at SXSW with the aim of being a voice for the creative industries worldwide in the debate over the benefits and risks of AI. Its more than 100 member orgs include SAG-AFTRA, IMPALA and the NFL Players Association.

McTier talked about the body’s remit in an interview with the BBC’s Today radio program ahead of the panel.

“We have seven core principles that we think policymakers and AI developers should adopt when trying to create ethical AI, things like making your algorithms transparent so that we know what type of data goes into them and how it’s used,” she said.

Other principles include obtaining an artist’s consent before their work is used to train generative AI models, or is otherwise put into these models, and ensuring that any work so used is credited and compensated.

“The technology is here, and all of our members recognize that AI has potential for positive effect and can be used for good and for fun, but we want to make sure it’s used responsibly,” said McTier. “It’s here to stay so let’s put policies in place that make sure the AI companies have to use it responsibly.”

Official Full List of AI Safety Summit Attendee Bodies & Nations

Academia and civil society

  • Ada Lovelace Institute

  • Advanced Research and Invention Agency

  • African Commission on Human and Peoples’ Rights

  • AI Now Institute

  • Alan Turing Institute

  • Algorithmic Justice League

  • Alignment Research Center

  • Berkman Klein Center for Internet & Society, Harvard University

  • Blavatnik School of Government

  • British Academy

  • Brookings Institution

  • Carnegie Endowment

  • Centre for AI Safety

  • Centre for Democracy and Technology

  • Centre for Long-Term Resilience

  • Centre for the Governance of AI

  • Chinese Academy of Sciences

  • Cohere for AI

  • Collective Intelligence Project

  • Columbia University

  • Concordia AI

  • ETH AI Center

  • Future of Life Institute

  • Institute for Advanced Study

  • Liverpool John Moores University

  • Mila – Quebec Artificial Intelligence Institute

  • Mozilla Foundation

  • National University of Cordoba

  • National University of Singapore

  • Open Philanthropy

  • Oxford Internet Institute

  • Partnership on AI

  • RAND Corporation

  • Real ML

  • Responsible AI UK

  • Royal Society

  • Stanford Cyber Policy Institute

  • Stanford University

  • Technology Innovation Institute

  • Université de Montréal

  • University College Cork

  • University of Birmingham

  • University of California, Berkeley

  • University of Oxford

  • University of Southern California

  • University of Virginia

Governments

  • Australia

  • Brazil

  • Canada

  • China

  • France

  • Germany

  • India

  • Indonesia

  • Ireland

  • Israel

  • Italy

  • Japan

  • Kenya

  • Kingdom of Saudi Arabia

  • Netherlands

  • New Zealand

  • Nigeria

  • Republic of Korea

  • Republic of the Philippines

  • Rwanda

  • Singapore

  • Spain

  • Switzerland

  • Turkey

  • Ukraine

  • United Arab Emirates

  • United States of America

Industry and related organisations

  • Adept

  • Aleph Alpha

  • Alibaba

  • Amazon Web Services

  • Anthropic

  • Apollo Research

  • ARM

  • Cohere

  • Conjecture

  • Darktrace

  • Databricks

  • EleutherAI

  • Faculty AI

  • Frontier Model Forum

  • Google DeepMind

  • Google

  • Graphcore

  • Helsing

  • Hugging Face

  • IBM

  • Imbue

  • Inflection AI

  • Meta

  • Microsoft

  • Mistral

  • Naver

  • Nvidia

  • Omidyar Group

  • OpenAI

  • Palantir

  • Rise Networks

  • Salesforce

  • Samsung Electronics

  • Scale AI

  • Sony

  • Stability AI

  • techUK

  • Tencent

  • Trail of Bits

  • xAI

Multilateral organisations

  • Council of Europe

  • European Commission

  • Global Partnership on Artificial Intelligence (GPAI)

  • International Telecommunication Union (ITU)

  • Organisation for Economic Co-operation and Development (OECD)

  • UNESCO

  • United Nations
