Google lifts its ban on using AI for weapons


Lucy Hooker

BBC Business reporter

Image: A hand holding a Google phone running the Gemini AI application, with an illuminated Google logo in the background (Getty Images)

Google's parent company has lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools after changing its long-standing principles.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

It said: "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.

"And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security," it added.

There is debate among AI experts and professionals over how the powerful new technology should be governed in broad terms, how far commercial gains should be allowed to determine its direction, and how best to guard against risks for humanity in general.

There is also controversy around the use of AI on the battlefield and in surveillance technologies.

The blog - written by senior vice president James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind - said the company's original AI principles published in 2018 needed to be updated as the technology had evolved.

"Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.

"It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself," the blog post said.

As a result, baseline AI principles were also being developed that could guide common strategies, it said.

'Don't be evil'

Originally, long before the current surge of interest in the ethics of AI, Google's founders, Sergey Brin and Larry Page, said their motto for the firm was "don't be evil".

When the company was restructured under the name Alphabet Inc in 2015, the parent company switched to "Do the right thing".

Since then Google staff have sometimes pushed back against the approach taken by their executives.

In 2018, the firm did not renew a contract for AI work with the US Pentagon following resignations and a petition signed by thousands of employees.

They feared "Project Maven" was the first step towards using artificial intelligence for lethal purposes.

The blog was published just ahead of Alphabet's end of year financial report, showing results that were weaker than market expectations, and knocking back its share price.

That was despite a 10% rise in revenue from digital advertising, its biggest earner, boosted by US election spending.

In its earnings report the company said it would spend $75bn (£60bn) on AI projects this year, 29% more than Wall Street analysts had expected.

The company is investing in the infrastructure to run AI, AI research, and applications such as AI-powered search.
