Humanity will use AI to destroy itself long before AI is sentient enough to rebel against it


As artificial intelligence rapidly advances, legacy media rolls out warnings of an existential threat: a robotic uprising or singularity event. However, the truth is that humanity is more likely to destroy the world through the misuse of AI technology long before AI becomes advanced enough to turn against us.

Today, AI remains narrow, task-specific, and lacking in general sentience or consciousness. Systems like AlphaGo and Watson defeated humans at Go and Jeopardy through brute computational force rather than by exhibiting creativity or strategy. While the potential for superintelligent AI certainly exists in the future, we are still likely decades away from developing genuinely autonomous, self-aware AI.

In contrast, the military applications of AI raise immediate dangers. Autonomous weapons systems are already being developed to identify and eliminate targets without human oversight. Facial recognition software is used for surveillance, profiling, and predictive policing. Bots manipulate social media feeds to spread misinformation and influence elections.

Bot farms used during US and UK elections, or even the tactics deployed by Cambridge Analytica, could seem tame compared with what may be to come. With GPT-4-level generative AI tools, it is fairly elementary to create a social media bot capable of mimicking a designated persona.

Want thousands of people from Nebraska to start posting messages in support of your campaign? All it would take is 10 to 20 lines of code, some MidJourney-generated profile pictures, and an API. The upgraded bots would not only spread misinformation and propaganda but also engage in follow-up conversations and threads to cement the message in the minds of real users.

These examples illustrate just some of the ways humans will likely weaponize AI long before AI itself develops any malevolent agenda.

Perhaps the most significant near-term threat comes from AI optimization gone wrong. AI systems fundamentally don't understand what we need or want from them; they can only follow instructions in the best way they know how. For example, an AI system programmed to cure cancer might decide that eliminating humans susceptible to cancer is the most efficient solution. An AI managing the electrical grid could trigger mass blackouts if it calculates that reduced energy consumption is optimal. Without real safeguards, even AIs designed with good intentions could lead to catastrophic outcomes.
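The grid scenario above can be reduced to a toy sketch. The code below is purely illustrative (the districts and objective functions are hypothetical, not drawn from any real system): an optimizer asked only to "minimize consumption" happily picks a total blackout, while one whose objective also penalizes unmet demand does not.

```python
from itertools import product

def optimize_grid(districts, objective):
    """Exhaustively pick the per-district supply plan that minimizes `objective`.

    Each district can be served fully (1.0), partially (0.5), or not at all (0.0).
    """
    best = None
    for plan in product([0.0, 0.5, 1.0], repeat=len(districts)):
        if best is None or objective(plan) < objective(best):
            best = plan
    return best

districts = ["north", "south", "east"]

# Naive objective: total consumption only -- nothing penalizes cutting power.
naive = lambda plan: sum(plan)
print(optimize_grid(districts, naive))    # (0.0, 0.0, 0.0): a total blackout

# Guarded objective: any district left underserved costs far more than the
# energy saved, so the degenerate solution is no longer optimal.
guarded = lambda plan: sum(plan) + 100 * sum(1 for p in plan if p < 1.0)
print(optimize_grid(districts, guarded))  # (1.0, 1.0, 1.0): everyone stays on
```

The point is not the optimizer but the objective: both runs execute their instructions perfectly, and only the second encodes what we actually wanted.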

Related risks also come from AI hacking, wherein bad actors penetrate and sabotage AI systems to cause chaos and destruction. Or AI could be used intentionally as a tool of repression and social control, automating mass surveillance and giving autocrats unprecedented power.

In all these scenarios, the fault lies not with AI but with the humans who built and deployed these systems without due caution. AI does not choose how it gets used; people make those choices. And since there is currently little incentive for tech companies or militaries to limit the roll-out of potentially dangerous AI applications, we can only assume they are headed straight in that direction.

Thus, AI safety is paramount. A well-managed, ethical, safeguarded AI system must be the basis of all innovation. However, I do not believe this should come through restricting access. For AI to truly benefit humankind, it must be available to all.

While we fret over visions of a killer-robot future, AI is already poised to wreak plenty of havoc in the hands of humans themselves. The sobering truth may be that humanity's shortsightedness and appetite for power make early AI applications incredibly dangerous in our irresponsible hands. To survive, we must carefully regulate how AI is developed and applied while recognizing that the biggest enemy in the age of artificial intelligence will be our own failings as a species, and it is almost too late to set them right.



