AI experts sign doc comparing risk of ‘extinction from AI’ to pandemics, nuclear war

Dozens of artificial intelligence (AI) experts, including the CEOs of OpenAI, Google DeepMind and Anthropic, recently signed an open statement published by the Center for AI Safety (CAIS).

The statement contains a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the document’s signatories are a veritable “who’s who” of AI luminaries, including the “Godfather” of AI, Geoffrey Hinton; the University of California, Berkeley’s Stuart Russell; and the Massachusetts Institute of Technology’s Lex Fridman. Musician Grimes is also a signatory, listed under the “other notable figures” category.

Related: Musician Grimes willing to ‘split 50% royalties’ with AI-generated music

While the statement may appear innocuous on the surface, the underlying message is a somewhat controversial one in the AI community.

A seemingly growing number of experts believe that current technologies may, or inevitably will, lead to the development of an AI system capable of posing an existential threat to the human species.

Their views, however, are countered by a contingent of experts with diametrically opposed opinions. Meta chief AI scientist Yann LeCun, for example, has noted on numerous occasions that he doesn’t necessarily believe AI will become uncontrollable.

To him and others who disagree with the “extinction” rhetoric, such as Andrew Ng, co-founder of Google Brain and former chief scientist at Baidu, AI isn’t the problem; it’s the answer.

On the other side of the argument, experts such as Hinton and Conjecture CEO Connor Leahy believe that human-level AI is inevitable and, as such, the time to act is now.

It is, however, unclear what actions the statement’s signatories are calling for. The CEOs or heads of AI at nearly every major AI company, along with renowned scientists from across academia, are among those who signed, making it clear the intent isn’t to stop the development of these potentially dangerous systems.

Earlier this month, OpenAI CEO Sam Altman, one of the statement’s signatories, made his first appearance before Congress during a Senate hearing to discuss AI regulation. His testimony made headlines after he spent the majority of it urging lawmakers to regulate his industry.

Altman’s Worldcoin, a project combining cryptocurrency and proof-of-personhood, has also recently made the media rounds after raising $115 million in Series C funding, bringing its total funding after three rounds to $240 million.
