Crypto collapse? Get in loser, we’re pivoting to AI

By Amy Castor and David Gerard

“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

Half of crypto has been pivoting to AI. Crypto’s pretty quiet — so let’s give it a try ourselves!

Turns out it’s the same grift. And frequently the same grifters.

AI is the new NFT

“Artificial intelligence” has always been a science fiction dream. It’s the promise of your plastic pal who’s fun to be with — especially when he’s your unpaid employee. That’s the hype to lure in the money men, and that’s what we’re seeing play out now.

There is no such thing as “artificial intelligence.” Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like General Problem Solver, perceptrons, ELIZA, Lisp machines, expert systems, Cyc, The Last One, Fifth Generation, Siri, Facebook M, Full Self-Driving, Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.
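
The transformer models behind ChatGPT are vastly more elaborate, but the basic move (predict a statistically plausible next word from the training text, with no notion of meaning or truth) can be shown with a toy Markov-chain autocomplete. Here’s a minimal sketch of our own in Python, with made-up training text; it illustrates the general idea, and has nothing to do with OpenAI’s actual code:

```python
# A toy word-level "autocomplete": a Markov chain built from a tiny training
# text. Nothing like a transformer LLM in scale, but the loop is the same idea:
# pick the next word by how often it followed the previous two words in the
# training data. There is no model of truth, only of what tends to come next.
import random
from collections import Counter, defaultdict

training_text = (
    "the bot wrote the essay and the bot wrote the brief "
    "the lawyer trusted the bot and the bot made up the citations"
)

words = training_text.split()
followers = defaultdict(Counter)
for a, b, c in zip(words, words[1:], words[2:]):
    followers[(a, b)][c] += 1   # count what follows each pair of words

def autocomplete(prompt, max_words=12):
    out = prompt.split()
    for _ in range(max_words):
        options = followers.get((out[-2], out[-1]))
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

print(autocomplete("the bot"))
# e.g. "the bot wrote the brief the lawyer trusted the bot made up the citations"
```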

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

You’ll be delighted to hear that blockchain is out and AI is in:

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

I want to believe

The tech itself is interesting and does things. ChatGPT or AI art generators wouldn’t be causing the problems they are if they didn’t generate plausible text and plausible images.

ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity — it’s just making up something that’s structurally plausible.

Users speak of ChatGPT as “hallucinating” wrong answers — large language models make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were “hallucinated” in the same way.

If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool — despite the claims of its promoters.

ChatGPT certainly can’t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that’s a foolish claim, they say they’re sure that’s definitely coming soon!

People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect.

As Joseph Weizenbaum, ELIZA’s author, put it: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
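
ELIZA itself was a few hundred lines of pattern matching and canned reflections, and that was enough to set off the effect. For flavor, here’s a minimal ELIZA-style sketch of our own in Python, a handful of invented rules rather than Weizenbaum’s actual DOCTOR script:

```python
# A few ELIZA-style rules: match a pattern in the user's input and reflect
# their own words back at them. No understanding anywhere, just regexes and
# canned templates, which was enough to induce the ELIZA effect in 1966.
import re

RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when nothing matches

print(respond("I am worried my essay reads like a chatbot wrote it."))
# -> How long have you been worried my essay reads like a chatbot wrote it?
```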

Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:

  • A professor at Texas A&M worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing evidence that they had written their essays themselves. One even asked ChatGPT about the professor’s Ph.D. thesis, and it said it might have written it. The university has reversed the grading. [Reddit; Rolling Stone]
  • Not one but two lawyers thought they could blindly trust ChatGPT to write their briefs. The program made up citations and precedents that didn’t exist. Judge Kevin Castel of the Southern District of New York — who those following crypto will know well for his impatience with nonsense — has required the lawyers to show cause not to be sanctioned into the sun. These were lawyers of several decades’ experience. [New York Times; order to show cause, PDF]
  • GitHub Copilot synthesizes computer program fragments with an OpenAI program similar to ChatGPT, based on the gigabytes of code stored in GitHub. The generated code frequently works! And it has serious copyright issues — Copilot can easily be induced to spit out straight-up copies of its source materials, and GitHub is currently being sued over this massive license violation. [Register; case docket]
  • Copilot is also a good way to write a pile of security holes. [arXiv, PDF, 2021; Invicti, 2022]
  • Text and image generators are increasingly used to make fake news. This doesn’t even have to be very good — just good enough. Deep fake hoaxes have been a perennial problem, most recently with a fake attack on the Pentagon, tweeted by an $8 blue check account pretending to be Bloomberg News. [Fortune]

This is the same risk in AI as the big risk in cryptocurrency: human gullibility in the face of lying grifters and their enablers in the press.

But you’re just ignoring how AI might end humanity!

The idea that AI will take over the world and turn us all into paperclips is not impossible!

It’s just that our technology is not within a million miles of that. Mashing the autocomplete button isn’t going to destroy humanity.

All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word “robot” instead. This subgenre goes back to Rossum’s Universal Robots (1920) and arguably back to Frankenstein (1818).

The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular Harry Potter fanfiction novel. Yudkowsky has literally no other qualifications or experience.

Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is imminent. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.

Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates violence, you understand. [Time; Twitter, archive]

During one recent “AI Safety” workshop, LessWrong AI doomers came up with ideas such as: “Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol.” In Minecraft, we presume. [Twitter]

We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.

Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are true believers, as are Vitalik Buterin and many Ethereum people.

But what about the AI drone that killed its operator, huh?

Thursday’s big news story came from the Royal Aeronautical Society’s Future Combat Air & Space Capabilities Summit in late May, where Colonel Tucker “Cinco” Hamilton, the US Air Force’s chief of AI test and operations, gave a talk: [RAeS]

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Wow, this is pretty serious stuff! Except that it obviously doesn’t make any sense. Why would you program your AI that way in the first place?

The press was fully primed by Yudkowsky’s AI doom op-ed in Time in March. They went wild with the killer drone story because there’s nothing like a sci-fi doomsday tale. Vice even ran the headline “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test.” [Vice, archive of 20:13 UTC June 1]

But it turns out that none of this ever happened. Vice added three corrections, the second noting that “the Air Force denied it conducted a simulation in which an AI drone killed its operators.” Vice has now updated the headline as well. [Vice, archive of 09:13 UTC June 3]

Yudkowsky went off about the scenario he had warned of suddenly playing out. Edouard Harris, another “AI safety” guy, clarified for Yudkowsky that this was just a hypothetical planning scenario and not an actual simulation: [Twitter, archive]

This particular example was a constructed scenario rather than a rules-based simulation … Source: know the team that supplied the scenario … Meaning an entire, prepared story as opposed to an actual simulation. No ML models were trained, etc.

The RAeS has also added a clarification to the original blog post: the colonel was describing a thought experiment as if the team had done the actual test.

The whole thing was just fiction. But it sure captured the imagination.

The lucrative business of making things worse

The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.

Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

The VCs’ actual use case for AI is treating workers badly.

The Writers Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less, in worse conditions, to fix. [Guardian]

Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. “This is about union busting, plain and simple,” said one helpline associate. The bot then gave wrong and damaging advice to users of the service: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” The service has backtracked on using the chatbot. [Vice; Labor Notes; Vice; Daily Dot]

Digital blackface: instead of actually hiring black models, Levi’s thought it would be a great idea to take white models and alter the images to look like black people. Levi’s claimed it would increase diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical “Black voice” instead of hiring an actual black voice actor. [Business Insider, archive]

Genius at work

Sam Altman: My potions are too powerful for you, Senator

Sam Altman, 38, is a venture capitalist and the CEO of OpenAI, the company behind ChatGPT. The media loves to tout Altman as a boy genius. He learned to code at age eight!

Altman’s blog post “Moore’s Law for Everything” elaborates on Yudkowsky’s ideas on runaway self-improving AI. The original Moore’s Law (1965) predicted that the number of transistors that engineers could fit into a chip would double every year. Altman’s theory is that if we just make the systems we have now bigger with more data, they’ll reach human-level AI, or artificial general intelligence (AGI). [blog post]

But that’s just ridiculous. Moore’s Law is slowing down badly, and there’s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as “AI” are still at the extremely unreliable chatbot level.

Altman is also a doomsday prepper. He has bragged about having “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to” in the event of super-contagious viruses, nuclear war, or AI “that attacks us.” [New Yorker, 2016]

Altman told the US Senate Judiciary Subcommittee that his autocomplete system with a gigantic dictionary was a risk to the continued existence of the human race! So they should regulate AI, but in such a way as to license large providers — such as OpenAI — before they could deploy this amazing technology. [Time; transcript]

Around the same time he was talking to the Senate, Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies’ actual problematic behaviors, and not the made-up problems Altman wants them to think about. [Zeit Online, in German, paywalled; Fast Company]

The thing Sam’s working on is so cool and dank that it could destroy humanity! So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.

Occasionally Sam gives the game away that his doomerism is entirely vaporware: [Twitter; archive]

AI is how we describe software that we don’t quite know how to build yet, particularly software we are either very excited about or very nervous about

Altman has a long-running interest in weird and bad parasitical billionaire transhumanist ideas, including the “young blood” anti-aging scam that Peter Thiel famously fell for — billionaires as literal vampires — and a company that promises to preserve your brain in plastic when you die so your mind can be uploaded to a computer. [MIT Technology Review; MIT Technology Review]

Altman is also a crypto grifter, with his proof-of-eyeball cryptocurrency Worldcoin. This has already generated a black market in biometric data courtesy of aspiring holders. [Wired, 2021; Reuters; Gizmodo]


CAIS: Statement on AI Risk

Altman promoted the recent “Statement on AI Risk,” a widely publicized open letter signed by various past AI luminaries, venture capitalists, AI doom cranks, and a musician who met her billionaire boyfriend over Roko’s basilisk. Here is the complete text, all 22 words: [CAIS]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A short statement like this on an allegedly serious matter will usually hide a mountain of hidden assumptions. In this case, you would need to know that the statement was promoted by the Center for AI Safety — a group of Yudkowsky’s AI doom acolytes. That’s the hidden baggage for this one.

CAIS is a nonprofit that gets about 90% of its funding from Open Philanthropy, which is part of the Effective Altruism subculture, which David has covered previously. Open Philanthropy’s main funders are Dustin Moskovitz and his wife Cari Tuna. Moskovitz made his money from co-founding Facebook and from his startup Asana, which was largely funded by Sam Altman.

That is: the open letter is the same small group of tech funders. They want to get you worrying about sci-fi scenarios and not about the socially damaging effects of their AI-based businesses.

Computer security guru Bruce Schneier signed the CAIS letter. He was called out on signing on with these guys’ weird nonsense, then he backtracked and said he supported an imaginary version of the letter that wasn’t stupid — and not the one he did in fact put his name to. [Schneier on Security]

And in conclusion

Crypto sucks, and it turns out AI sucks too. We promise we’ll go back to crypto next time.

“Don’t want to worry anyone, but I just asked ChatGPT to build me a better paperclip.” — Bethany Black

Correction: we originally wrote up the professor story as using Turnitin’s AI plagiarism tester. The original Reddit thread makes it clear what he did.


