rationalism AI cultist creeps

they also really like bayesian inference

iatee, Friday, 5 April 2013 14:07 (eleven years ago) link
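(a minimal sketch of the bayesian updating these folks are so fond of; the biased-coin setup and every number in it are made up for illustration, nothing below comes from LessWrong itself:)

# Toy Bayesian update: how confident should you be that a coin is rigged,
# after watching it come up heads again and again?
def bayes_update(prior, p_heads_if_rigged, p_heads_if_fair):
    """Posterior P(rigged | heads) via Bayes' rule."""
    num = p_heads_if_rigged * prior
    return num / (num + p_heads_if_fair * (1 - prior))

p_rigged = 0.5             # prior: 50/50 the coin is rigged
for flip in range(5):      # observe five heads in a row
    p_rigged = bayes_update(p_rigged, p_heads_if_rigged=0.75, p_heads_if_fair=0.50)
    print(f"after heads #{flip + 1}: P(rigged) = {p_rigged:.3f}")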

haha this makes perfect sense

flopson, Friday, 5 April 2013 19:54 (eleven years ago) link

Exhibit A that these folk are pure nutjobs

http://lesswrong.com/lw/kn/torture_vs_dust_specks/

riverrun, past Steve and Adam's (ledge), Friday, 5 April 2013 22:27 (eleven years ago) link


Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX. This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX - XX = XX.

Chuck E was a hero to most (s.clover), Saturday, 6 April 2013 15:39 (eleven years ago) link

fyi they are among us. there's already an active hidden rationalism AI cultist creep thread on ilx.

Mordy, Saturday, 6 April 2013 15:43 (eleven years ago) link

10 years ago i was reading about yudkowsky when he was "making" an a.i. from what i understand it's a somewhat murky field, so self-made men can sort of rope people into their projects. lots of those "cultists" are into a.i., i forgot most of their names but there was goertzel http://wp.novamente.net/, i think a guy from http://www.cyc.com/faq-page#n496 etc

Sébastien, Saturday, 6 April 2013 18:02 (eleven years ago) link

some of these guys managed to burn some millions on their projects so it was sort of exciting; i was reading that stuff as cool sf with the option of some results. it's been 10 years and i haven't really heard of them since, so...

Sébastien, Saturday, 6 April 2013 18:07 (eleven years ago) link

IBM threw hueg resources into Deep Blue (chess) and Watson (Jeopardy) and came away with a ton of great publicity and some technical expertise it could generalize elsewhere, but you've probably noticed that IBM is not yet selling a version of HAL. AI enthusiasts without an NSA, IBM, Google, or Apple paying the freight are notorious overreachers.

Aimless, Saturday, 6 April 2013 18:17 (eleven years ago) link

otm, more or less, although after the continued black eyes AI received, many people seemed to drop down into subfields like machine learning and data mining, which allowed them to focus on the technical tasks at hand and to avoid using the freighted term "AI" too much. So on the one hand the technical successes of AI may live on under different names; on the other, true believers in the most grandiose philosophical claims still fly the flannel and ask DO U SEE?

Had not known that James Lighthill was one of the first big critics.

What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:12 (eleven years ago) link

"but you've probably noticed that IBM is not yet selling a version of HAL"

i thought they were, except HAL is handling customer service phone trees instead of running space stations.

Philip Nunez, Saturday, 6 April 2013 19:35 (eleven years ago) link

Which is, um, not quite as hard?

What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:37 (eleven years ago) link

I'll say. HAL really fucked up that space gig.

Philip Nunez, Saturday, 6 April 2013 19:49 (eleven years ago) link

1) HAL was doing fine until the unpredictability of his super-human intelligence made him psychotic
2) HAL is a fictional construct
3) Please provide the 1950s era paper in which someone, preferably Alan Turing, states that if in 50 years we have created a machine that can traverse a tree of extremely limited depth and width using a clearly synthetic or prerecorded voice then we can congratulate ourselves for having built something rivaling the human brain itself.

What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 20:22 (eleven years ago) link

Place the fictional HAL beside the equally fictional construct of the "singularity" and HAL seems the more probable of the two.

Aimless, Saturday, 6 April 2013 20:47 (eleven years ago) link

In the movie at least, they trade on the creepiness of HAL's anthropomorphomormomorphization but he's ultimately rendered as just another tool gone on the fritz (complete with bowman as frustrated sys-admin; bowman also demurs when the reporter asks if HAL has a soul), so to the extent that we have things today like apple maps giving terrifyingly bad directions, we have definitely delivered on the promise of HAL.

Philip Nunez, Saturday, 6 April 2013 21:15 (eleven years ago) link

you know, if they would make their workshop into an ebook i would check it out. if it's less than 200 pages.
http://appliedrationality.org/schedule/

Sébastien, Saturday, 6 April 2013 21:39 (eleven years ago) link

Looks like the myriad achievements of poor HAL are ignored as he is shoe-horned into being the latest of a long line of ILX strawmen.

What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 21:41 (eleven years ago) link

http://singularityhub.com/about/
http://lukeprog.com/SaveTheWorld.html

Hardware and software are improving, there are no signs that we will stop this, and human biology and biases indicate that we are far below the upper limit on intelligence. Economic arguments indicate that most AIs would act to become more intelligent. Therefore, intelligence explosion is very likely. The apparent diversity and irreducibility of information about "what is good" suggests that value is complex and fragile; therefore, an AI is unlikely to have any significant overlap with human values if that is not engineered in at significant cost. Therefore, a bad AI explosion is our default future.

it's deeply weird to me how much of this stuff is out there, and how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.

Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:32 (eleven years ago) link
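(the "explosion" step in that argument is just a compounding assumption. a toy model, with the feedback coefficient pulled out of thin air, shows the shape of the claim and nothing more:)

# Toy "intelligence explosion": capability x improves itself, and the
# improvement is assumed proportional to x itself. The 0.1 is invented.
x = 1.0
for step in range(1, 16):
    x = x + 0.1 * x * x    # assumption: smarter systems get better at getting smarter
    print(step, round(x, 2))
# output crawls for ten steps, then runs away (~16,000 by step 15).
# whether any real system has this feedback term is the entire debate.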

* How can we identify, understand, and reduce cognitive biases?
* How can institutional innovations such as prediction markets improve information aggregation and probabilistic forecasting?
* How should an ethically-motivated agent act under conditions of profound moral uncertainty?
* How can we correct for observation selection effects in anthropic reasoning?

http://www.fhi.ox.ac.uk/research/rationality_and_wisdom

Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:34 (eleven years ago) link

how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.

Some of us were traumatised by Servotron at a young age, OK?

Just noise and screaming and no musical value at all. (Colonel Poo), Sunday, 7 April 2013 00:54 (eleven years ago) link

"how much of this stuff is out there" : the big ideas are made by the same few people (yudkowsky, maybe bostrom) and the evangelization is made by about a dozens younger "lesser names" (that probably were hanging out on the sl4 mailing list) on 3 or 4 of their platforms that they rename / shuffle around every few years. they were "theorizing" about friendly a.i. way back then, i doubt they made any breakthroughs since then... how could they?

Sébastien, Sunday, 7 April 2013 01:06 (eleven years ago) link

in a way the "friendly a.i." advocates are like the epicureans who 2300 years ago conceptualized the atom using only their bare eyes and their intuition: some way down the line we sort of proved them right, but back then they really had no good understanding of how it worked. who knows, in the (far) future it's possible some of the stuff they talk about in their conceptualization of a friendly a.i. will be seen as useful and reclaimed.

Sébastien, Sunday, 7 April 2013 02:25 (eleven years ago) link

three months pass...

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]

http://rationalwiki.org/wiki/Roko%27s_basilisk

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 01:49 (ten years ago) link

In LessWrong's Timeless Decision Theory (TDT),[3] this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.

sounds like a great theory to have, sooper sound

j., Thursday, 1 August 2013 02:55 (ten years ago) link
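(the scare works like pascal's wager: once "you might be the simulation" gets a nonzero probability, a large enough punishment swamps everything else in the sum. a toy expected-utility version, every number invented:)

# Toy basilisk wager. None of this is actual TDT, just the Pascal-style
# arithmetic that makes the threat feel forced.
P_SIMULATION = 0.5      # assumed chance you're the copy the future AI simulates
COST_DONATE  = -1.0     # utility lost by handing over your disposable income
COST_TORTURE = -1e9     # utility of being punished by the basilisk

def expected_utility(donate):
    if donate:
        return COST_DONATE                 # pay up: no punishment either way
    return P_SIMULATION * COST_TORTURE     # refuse: tortured if you're the copy

print("donate:", expected_utility(True))    # -1.0
print("refuse:", expected_utility(False))   # -500000000.0
# a big enough imagined punishment dominates at any nonzero probability,
# which is the standard pascal's-mugging structure.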

if you can believe that life is a simulation where you may be punished you might as well believe that there's a god who wants you to act a certain way

Mordy , Thursday, 1 August 2013 02:58 (ten years ago) link

and i thought these people were only slightly crazy

click here to start exploding (ledge), Thursday, 1 August 2013 10:46 (ten years ago) link

at least they seem to be nominally self-policing

j., Thursday, 1 August 2013 10:59 (ten years ago) link

i mean tdt and roko's basilisk basically turn the AI into an arbitrary and all-powerful god, if it wasn't that already

max, Thursday, 1 August 2013 11:03 (ten years ago) link

A god who wants you to send all your money to a particular organisation - so Yudkowsky is the televangelist in this scenario.

Yeah it is interesting that a bunch of self-proclaimed rationalists have managed to reason themselves into a belief system functionally identical to one they would decry as irrational.

click here to start exploding (ledge), Thursday, 1 August 2013 11:49 (ten years ago) link

i love it! it's borgesian

max, Thursday, 1 August 2013 11:52 (ten years ago) link

yeah i think the aporia there might be the "self-proclaimed" bit

phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 12:10 (ten years ago) link

If I were a future AI, I'd recreate the LessWrong memeplex, except I'd call it LessRong. Why? For the lulz...

slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:39 (ten years ago) link

Roko's basilisk sounds like the name of an Italian prog rock band.

slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:48 (ten years ago) link

Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted.[4] Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 12:49 (ten years ago) link

that wiki article is like half of a great ted chiang short story

max, Thursday, 1 August 2013 12:55 (ten years ago) link

Found the following beautiful sentence at the bottom of the LessWrong page:

The basilisk kerfuffle has also alienated fellow cryonicists.

click here to start exploding (ledge), Thursday, 1 August 2013 12:58 (ten years ago) link

Why a basilisk?

wombspace (abanana), Thursday, 1 August 2013 13:03 (ten years ago) link

I'm not sure you should give these guys what they want and proclaim them to be the vanguard of Hard AI proponents... I really don't think anyone who has seriously grappled with the philosophical implications of, say, the physical symbol system hypothesis, could ever proclaim any development or avenue of research to be "provably friendly."

Furthermore I think it's not very fair to suggest that any and all fans, theorists or proponents of AI are as robotic in their thinking as these LessWrong people.

Kissin' Cloacas (Viceroy), Thursday, 1 August 2013 13:41 (ten years ago) link

this is the best part

[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).

because it means that one of the things that caused "severe psychological distress" was the suggestion that posters on rationalism message boards would in the future be punished for being smarter than everyone

what a terrifying perversion of one's value system

These fools are the enemy of the true cybernetic revolution.

Banaka™ (banaka), Thursday, 1 August 2013 17:15 (ten years ago) link

ok sam harris doesn't really belong here but c'mon

http://www.samharris.org/blog/item/free-will-and-the-reality-of-love

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:10 (ten years ago) link

Consider the present moment from the point of view of my conscious mind: I have decided to write this blog post, and I am now writing it. I almost didn’t write it, however. In fact, I went back and forth about it: I feel that I’ve said more or less everything I have to say on the topic of free will and now worry about repeating myself. I started the post, and then set it aside. But after several more emails came in, I realized that I might be able to clarify a few points. Did I choose to be affected in this way? No. Some readers were urging me to comment on depressing developments in “the Arab Spring.” Others wanted me to write about the practice of meditation. At first I ignored all these voices and went back to working on my next book. Eventually, however, I returned to this blog post. Was that a choice? Well, in a conventional sense, yes. But my experience of making the choice did not include an awareness of its actual causes. Subjectively speaking, it is an absolute mystery to me why I am writing this.

this is sub david brooks

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:11 (ten years ago) link

this is not going to shock anyone but this frame of mind/crew of people trends very strongly into some supremely nasty politics

R'LIAH (goole), Thursday, 1 August 2013 19:12 (ten years ago) link

http://lesswrong.com/lw/hcy/link_more_right_launched/

R'LIAH (goole), Thursday, 1 August 2013 19:13 (ten years ago) link

ahahaa "Just so long as we don't end up with an asymmetrical effect, where the PUAs leave but the feminists stay."

stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:32 (ten years ago) link

ah god i don't think i've seen the term "race realism" before

phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 20:04 (ten years ago) link

The all-important gap between labeling yourself as a rationalist and actually using your reason; between labeling yourself as an empiricist and actually studying phenomena.

cardamon, Thursday, 1 August 2013 21:21 (ten years ago) link

AIUI longtermism is one branch of effective altruism. Yes, there are effective altruists who are more into sending deworming pills to schools in Africa and the like.

death generator (lukas), Tuesday, 6 September 2022 15:40 (one year ago) link

They both spring from the same error though, which is that we just need to get some smart people to figure out things for the rest of us.

death generator (lukas), Tuesday, 6 September 2022 15:41 (one year ago) link

Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism.

I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah? Or am I out of line here.

recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:50 (one year ago) link

Maybe I don’t even believe that tbh.

recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:51 (one year ago) link

I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah?

I think if we avoid destroying the earth, yeah I'd agree with this.

Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism.

I had something more like "minimize human domination over other humans" in mind but this works too.

death generator (lukas), Tuesday, 6 September 2022 15:56 (one year ago) link

So here's an effective altruist arguing that longtermism is bs, basically saying your little toy model of the future is useless: https://forum.effectivealtruism.org/posts/RRyHcupuDafFNXt6p/longtermism-and-computational-complexity

Someone makes a brilliant point in the comments: "Loved this post - reminds me a lot of intractability critiques of central economic planning, except now applied to consequentialism writ large."

Given that most EAs are kinda libertarian-leaning (hate central planning when applied to real-world economies) this is ... devastating.

death generator (lukas), Tuesday, 6 September 2022 15:58 (one year ago) link
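(the intractability point is easy to make concrete: if the world can branch even a few ways at each decision point, the number of long-run trajectories an expected-value calculation has to sum over explodes. branching factor and horizons below are arbitrary, not from the linked post:)

# Back-of-the-envelope trajectory counting for a longtermist calculation.
BRANCHING = 3    # assumed ways the world can go at each decision point
for horizon in (10, 50, 100):
    print(f"horizon {horizon:3d}: {BRANCHING ** horizon:.3e} trajectories")
# horizon  10: 5.905e+04
# horizon  50: 7.179e+23
# horizon 100: 5.154e+47  (hopeless to enumerate, never mind scoring)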

xps yeah I didn't realise how much the official EA organisation had been taken over:

https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism

ledge, Tuesday, 6 September 2022 16:10 (one year ago) link

xp that is an exceedingly rigorous formulation of what is a very obvious and common sense objection. (hence far more effective for the intended audience.)

ledge, Tuesday, 6 September 2022 16:23 (one year ago) link

I had something more like "minimize human domination over other humans" in mind but this works too.

Right. Am I perhaps fundamentally misunderstanding rationalism? (Genuine question, I come to these kinds of threads to learn — I may not be totally out of line but I am mostly out of my depth.)

My suggestion was focused on the process while yours seems more goals-oriented. Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?

recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 16:26 (one year ago) link

Well mine is process-oriented too I think ... one of the reasons to oppose human domination over other humans is everyone has a limited view of the world, everyone sees based on their own experiences and interests, so process-wise you should avoid having people make decisions for other people, regardless of how well-meaning they might be.

I may not be totally out of line but I am mostly out of my depth.

lol trust me I have a very shallow understanding of this stuff as well. My indignation, however, is bottomless.

Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?

Utilitarianism, right? (which is related to but I think not the same as consequentialism, but I don't understand the difference)

death generator (lukas), Tuesday, 6 September 2022 16:35 (one year ago) link

Consequentialism just says that the morality of an action resides in its consequences, as opposed to how well it follows some (e.g. god-given) rules or whether it's inherently virtuous (whatever that means). Utilitarianism specifies what the consequences should be.

ledge, Tuesday, 6 September 2022 16:48 (one year ago) link

Which is partly why utilitarianism is so tempting - consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?

ledge, Tuesday, 6 September 2022 17:21 (one year ago) link

Consequentialism just says that the morality of an action resides in its consequences

Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure.

more difficult than I look (Aimless), Tuesday, 6 September 2022 17:30 (one year ago) link

consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?

my uneducated answer here is that if you've arrived at a situation where other people are pawns in your game - even if you mean them well - something has gone wrong upstream.

obviously there are situations where you need to guess what is best for someone else, but we should try to minimize them. it shouldn't be the paradigm example of moral reasoning.

death generator (lukas), Tuesday, 6 September 2022 18:24 (one year ago) link

btw, effective altruism has its own ilx thread.

art is a waste of time; reducing suffering is all that matters

more difficult than I look (Aimless), Tuesday, 6 September 2022 18:38 (one year ago) link

xp
yes, which is why the answer to the Enlightenment: good/bad? question differs depending where in the world you ask it

rob, Tuesday, 6 September 2022 18:39 (one year ago) link

well what could be wrong with maximising happiness?

This was rhetorical but yes treating people as pawns is one major problem, as is the fact that happiness, or whatever your unit of utility is, is not the kind of thing that you can do calculations with. One hundred and one people who are all one percent happy is not at all a better state of affairs than one person who is one hundred percent happy. (Not that there isn't a place for e.g. quality adjusted life years calculations in certain institutional settings.)

ledge, Tuesday, 6 September 2022 18:59 (one year ago) link
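(for what it's worth, the naive sum really does come out that way, which is the objection: treat "percent happy" as a number you can add and you get)

# The aggregation being objected to, done literally. Treating "percent
# happy" as cardinal utility is exactly the dubious move.
many_slightly_happy = 101 * 0.01    # 101 people, each one percent happy
one_fully_happy     = 1 * 1.00      # one person, fully happy
print(many_slightly_happy)    # 1.01
print(one_fully_happy)        # 1.0
# the sum ranks 101 barely-happy people above one fully happy person;
# if that seems absurd, that's the point being made above.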

Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure

I think "the end justifies the means" is a bit more slippery - it's often used to weigh one set of consequences more heavily than another, e.g. bombing hiroshima to end the war. And, well we're talking about human actions and human consequences, I think its fair to restrcit it to humanly measurable ones.

ledge, Tuesday, 6 September 2022 19:12 (one year ago) link

Even human consequences extend indefinitely. Identifying an end point is an arbitrary imposition upon a ceaseless flow, the rough equivalent of ending a story with "and they all lived happily ever after".

more difficult than I look (Aimless), Tuesday, 6 September 2022 20:11 (one year ago) link

so do you never consider the consequences of your actions or do you have trouble getting up in the morning?

ledge, Tuesday, 6 September 2022 20:43 (one year ago) link

I am not engaged in a program of identifying a universal moral framework based upon the consequences of my actions when I get up in the morning, which certainly makes it easier to choose what to wear.

more difficult than I look (Aimless), Tuesday, 6 September 2022 20:47 (one year ago) link

touche!

ledge, Tuesday, 6 September 2022 21:08 (one year ago) link

This is the ideal utilitarian form. You may not like it, but this is what peak performance looks like pic.twitter.com/uHvCp2Cq7y

— MHR (@SpacedOutMatt) September 16, 2022

𝔠𝔞𝔢𝔨 (caek), Saturday, 17 September 2022 16:30 (one year ago) link

incredible

death generator (lukas), Sunday, 25 September 2022 23:20 (one year ago) link

one year passes...

Read this a few days ago. As AI burns through staggering amounts of money with no reasonable use case so far, all your fave fascist tech moguls are gonna hitch themselves to a government gravy train under a Trump administration (gift link): https://wapo.st/3wllikQ

Are you addicted to struggling with your horse? (Boring, Maryland), Sunday, 5 May 2024 14:35 (two weeks ago) link

