How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
http://yudkowsky.net/rational/virtues
http://wiki.lesswrong.com/wiki/FAQ
http://wiki.lesswrong.com/wiki/Sequences
A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? (The Parable of the Dagger.)
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept". (How An Algorithm Feels From Inside.)
i don't even know what's going on with these people. apparently there are many of them at g00gle.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:15 (eleven years ago) link
and then they hired this guy:
http://www.kurzweilai.net/singularity-q-a
Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), “experience beaming” (like “Being John Malkovich”), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:19 (eleven years ago) link
In particular, if you want to do any of the following, consider doing lots of homework and ensure you're not making any standard mistakes:
* claim your god exists
* argue for a universally compelling morality
* claim you have an easy way to make superintelligent AI safe
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:21 (eleven years ago) link
interesting, i'll have to check out these links when i get home. i'm enjoying how things are turning out these days, everything's looking more and more like some techno-dystopia novel
― Spectrum, Friday, 5 April 2013 17:21 (eleven years ago) link
Yudkowsky has also written several works[18] of science fiction and other fiction. His Harry Potter fan fiction story Harry Potter and the Methods of Rationality illustrates topics in cognitive science and rationality (The New Yorker described it as "a thousand-page online 'fanfic' text called 'Harry Potter and the Methods of Rationality', which recasts the original story in an attempt to explain Harry's wizardry through the scientific method"[19])
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:22 (eleven years ago) link
i want holobeer
― ciderpress, Friday, 5 April 2013 17:23 (eleven years ago) link
i have some friends who were into that harry potter thing, had no idea that's what it was about but makes more sense now
robin hanson at www.overcomingbias.com is like their 'respectable' academic figure, he thinks prediction markets can solve every human problem. also peter thiel gives these people money.
― iatee, Friday, 5 April 2013 17:26 (eleven years ago) link
has anyone met these people irl? what are they like? they're speaking like some strange sci-fi language like aliens-of-the-week on star trek voyager or something. do they wear star trek shirts?
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:30 (eleven years ago) link
I'm friends w/one of these guys, he is nice, very socially awkward, believes that the singularity is just round the corner
― c21m50nm3x1c4n (wins), Friday, 5 April 2013 17:41 (eleven years ago) link
he explained what it was all about to me once, it sounded cuckoo but I believe his heart is in the right place
― c21m50nm3x1c4n (wins), Friday, 5 April 2013 17:43 (eleven years ago) link
Logic and rationality are just a nice set of tools among many. Used alone they cannot supply you with a reason for doing anything until you have a set of arbitrary axioms defining what is good. This is nicely illustrated by the childish game of replying to whatever someone says by asking "why?"
imo people who worship rationality as if it were some infallible god are disgusting savages.
― Aimless, Friday, 5 April 2013 17:47 (eleven years ago) link
Aimless, I think you are only half-right, if I may say so.
I believe our shared reality can be divided thusly:
0. Ontology
i. Objectivity
i. Subjectivity
0. Epistemology
i. Objectivity
i. Subjectivity
Where everything that is '0' is on a same level (not hierarchical) and 'i' falls into these categories while still being on the same level with the rest of the i's.
The difficulty arises when trying to define what is ontologically subjective or objective and epistemologically subjective or objective.
Not everything has a truth value. Just as we cannot speak in terms of 'good' and/or 'bad' about everything.
― c21m50nh3x460n, Friday, 5 April 2013 17:54 (eleven years ago) link
Why would anyone try to define what is ontologically subjective or objective and epistemologically subjective or objective?
― Aimless, Friday, 5 April 2013 18:01 (eleven years ago) link
Are you being genuine or "playing a childish game"?
― c21m50nh3x460n, Friday, 5 April 2013 18:03 (eleven years ago) link
well, this is taking a turn.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:03 (eleven years ago) link
oh lord, let's just stick to how freaky these guys are
― Spectrum, Friday, 5 April 2013 18:03 (eleven years ago) link
So while beliefs about the best sport or music may vary by culture, for the purpose of picking good mates or allies you can’t go too wrong by being impressed by whomever impresses folks from other cultures, and you have incentives not to make mistakes. For example, if you are mistakenly impressed by and mate with someone without real sport or music abilities, you may end up with kids who lack those abilities, and fail to impress the next generation.
bet these guys all love the ladder theory
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:05 (eleven years ago) link
bet in high school they all wore geordi glasses and tried to talk like data to their friends.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:06 (eleven years ago) link
P.S. You don't really have to answer that. My only point is that the answer to this question must eventually rest upon a motivation that may be described rationally, but cannot be derived rationally.
― Aimless, Friday, 5 April 2013 18:06 (eleven years ago) link
they also really like bayesian inference
― iatee, Friday, 5 April 2013 18:07 (eleven years ago) link
The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors.
― the late great, Friday, 5 April 2013 18:30 (eleven years ago) link
that's not a good definition of humility imo
― the late great, Friday, 5 April 2013 18:31 (eleven years ago) link
Aimless, does it really matter that a motive can only be described rationally while no motive may be derived from things rationally?
It's all nice as a theoretical and intellectual exercise, but I question its practicality and real-world application, with all due respect.
― c21m50nh3x460n, Friday, 5 April 2013 18:34 (eleven years ago) link
xp
Taking specific actions in anticipation of your own errors is certainly an act requiring a measure of humility. Perhaps this is the only act of humility he has any familiarity with. This is a bit like saying "a bear is a large, brown, powerful, furry creature that lives in a den several miles from my house".
does it really matter that a motive can only be described rationally while no motive may be derived from things rationally?
It only matters if you would like to understand the limitations of rationality as a tool and its proper sphere of functionality. Motives may seem to be a mere rump to most of our mental activity, especially if you value rationality above all else. After all, motives supply themselves in profusion whether you think much about them or not. I would submit that this peculiar fact requires the most careful and patient observation, and understanding the source of motives is far from being a mere theoretical and intellectual exercise.
― Aimless, Friday, 5 April 2013 18:48 (eleven years ago) link
― iatee, Friday, 5 April 2013 14:07
haha this makes perfect sense
― flopson, Friday, 5 April 2013 19:54 (eleven years ago) link
Exhibit A that these folk are pure nutjobs
http://lesswrong.com/lw/kn/torture_vs_dust_specks/
― riverrun, past Steve and Adam's (ledge), Friday, 5 April 2013 22:27 (eleven years ago) link
Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX. This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX - XX = XX.
― Chuck E was a hero to most (s.clover), Saturday, 6 April 2013 15:39 (eleven years ago) link
fyi they are among us. there's already an active hidden rationalism AI cultist creep thread on ilx.
― Mordy, Saturday, 6 April 2013 15:43 (eleven years ago) link
https://www.youtube.com/watch?v=iFjd9IQfjZg
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 17:43 (eleven years ago) link
10 years ago i was reading about yudkowsky when he was "making" an a.i. from what i understand it's a somewhat murky field so self-made men can sort of rope people into their projects. lots of those "cultists" are into a.i., i forgot most of their names but there was Goertzel http://wp.novamente.net/ , i think a guy from http://www.cyc.com/faq-page#n496 etc
― Sébastien, Saturday, 6 April 2013 18:02 (eleven years ago) link
some of these guys managed to burn some millions on their projects so it was sort of exciting; i was reading that stuff as cool sf with the option of some results. it's been 10 years i haven't really heard of them , so...
― Sébastien, Saturday, 6 April 2013 18:07 (eleven years ago) link
http://www.the-rudy.com/images/iggy-pop_rational-ht2.jpg
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 18:13 (eleven years ago) link
IBM threw hueg resources into Deep Blue (chess) and Watson (Jeopardy) and came away with a ton of great publicity and some technical expertise it could generalize elsewhere, but you've probably noticed that IBM is not yet selling a version of HAL. AI enthusiasts without an NSA, IBM, Google, or Apple paying the freight are notorious overreachers.
― Aimless, Saturday, 6 April 2013 18:17 (eleven years ago) link
otm, more or less, although after the continued black eyes AI received many people seemed to drop down into subfields like machine learning and data mining which allowed them to focus on the technical tasks at hand and to avoid using the freighted term "AI" too much. So on the one hand technical successes of AI may live on under different names, on the other true believers of the most grandiose philosophical claims still fly the flannel and ask DO U SEE?
Had not known that James Lighthill was one of the first big critics.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:12 (eleven years ago) link
"but you've probably noticed that IBM is not yet selling a version of HAL"
i thought they were, except HAL is handling customer service phone trees instead of running space stations.
― Philip Nunez, Saturday, 6 April 2013 19:35 (eleven years ago) link
Which is, um, not quite as hard?
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:37 (eleven years ago) link
I'll say. HAL really fucked up that space gig.
― Philip Nunez, Saturday, 6 April 2013 19:49 (eleven years ago) link
1) HAL was doing fine until the unpredictability of his super-human intelligence made him psychotic
2) HAL is a fictional construct
3) Please provide the 1950s era paper in which someone, preferably Alan Turing, states that if in 50 years we have created a machine that can traverse a tree of extremely limited depth and width using a clearly synthetic or prerecorded voice then we can congratulate ourselves for having built something rivaling the human brain itself.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 20:22 (eleven years ago) link
Placing the fictional HAL beside the equally fictional construct of the "singularity", HAL seems to be the more probable.
― Aimless, Saturday, 6 April 2013 20:47 (eleven years ago) link
In the movie at least, they trade on the creepiness of HAL's anthropomorphomormomorphization but he's ultimately rendered as just another tool gone on the fritz (complete with bowman as frustrated sys-admin; bowman also demurs when the reporter asks if HAL has a soul), so to the extent that we have things today like apple maps giving terrifyingly bad directions, we have definitely delivered on the promise of HAL.
― Philip Nunez, Saturday, 6 April 2013 21:15 (eleven years ago) link
you know, if they would make their workshop into an ebook i would check it out. if it's less than 200 pages. http://appliedrationality.org/schedule/
― Sébastien, Saturday, 6 April 2013 21:39 (eleven years ago) link
Looks like the myriad achievements of poor HAL are ignored as he is shoe-horned into being the latest of a long line of ILX strawmen.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 21:41 (eleven years ago) link
http://singularityhub.com/about/
http://lukeprog.com/SaveTheWorld.html
Hardware and software are improving, there are no signs that we will stop this, and human biology and biases indicate that we are far below the upper limit on intelligence. Economic arguments indicate that most AIs would act to become more intelligent. Therefore, intelligence explosion is very likely. The apparent diversity and irreducibility of information about "what is good" suggests that value is complex and fragile; therefore, an AI is unlikely to have any significant overlap with human values if that is not engineered in at significant cost. Therefore, a bad AI explosion is our default future.
its deeply weird to me how much of this stuff is out there, and how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.
― Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:32 (eleven years ago) link
* How can we identify, understand, and reduce cognitive biases?
* How can institutional innovations such as prediction markets improve information aggregation and probabilistic forecasting?
* How should an ethically-motivated agent act under conditions of profound moral uncertainty?
* How can we correct for observation selection effects in anthropic reasoning?
http://www.fhi.ox.ac.uk/research/rationality_and_wisdom
― Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:34 (eleven years ago) link
how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.
Some of us were traumatised by Servotron at a young age, OK?
― Just noise and screaming and no musical value at all. (Colonel Poo), Sunday, 7 April 2013 00:54 (eleven years ago) link
"how much of this stuff is out there" : the big ideas are made by the same few people (yudkowsky, maybe bostrom) and the evangelization is made by about a dozen younger "lesser names" (that probably were hanging out on the sl4 mailing list) on 3 or 4 of their platforms that they rename / shuffle around every few years. they were "theorizing" about friendly a.i. way back then, i doubt they made any breakthroughs since then... how could they?
― Sébastien, Sunday, 7 April 2013 01:06 (eleven years ago) link
in a way the "friendly a.i." advocates are like the epicureans who 2300 years ago conceptualized the atom only by using their bare eyes and their intuition: some time down the line we sort of prove them right but back then they really had no good understanding of how it worked. who knows, in the (far) future it's possible some stuff they talk about in their conceptualization of a friendly a.i. will be seen as useful and recuperated.
― Sébastien, Sunday, 7 April 2013 02:25 (eleven years ago) link
Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]
http://rationalwiki.org/wiki/Roko%27s_basilisk
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 01:49 (ten years ago) link
In LessWrong's Timeless Decision Theory (TDT),[3] this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.
sounds like a great theory to have, sooper sound
― j., Thursday, 1 August 2013 02:55 (ten years ago) link
(will it be less weird than reza negarestani tho)
― goole, Monday, 16 May 2016 15:09 (seven years ago) link
i think philosophy + art theory guy simon o'sullivan is also working on something in that area, maybe developing from this interesting article from a couple of years ago - http://www.metamute.org/editorial/articles/missing-subject-accelerationism
― lazy rascals, spending their substance, and more, in riotous living (Merdeyeux), Monday, 16 May 2016 15:12 (seven years ago) link
i like sandifer's tardis eruditorum. he writes way too much though, like a new essay every day.
― remove butt (abanana), Monday, 16 May 2016 15:28 (seven years ago) link
I've been learning Bayesian methods for work, and these guys have completely co-opted the phrase "Bayesian" across the entire internet. To them it's just an empty tribal indicator--they tie themselves in knots with their endless discussions (the number one hallmark of this kind of guy: BLOVIATION), and obviously have no conception of using bayesian stats to actually, like, Do Science. They're all obsessed with E. T. Jaynes because Eliezer is; and I'm sure Jaynes is a great thinker (haven't read him), but a mention of him is an easy way to tell when someone is full of shit.
― Dan I., Tuesday, 2 August 2016 16:33 (seven years ago) link
cleanse your palate by reading andrew gelman, who's bayesian as hell and in no way affiliated with rationalist AI cultist creeps
― Guayaquil (eephus!), Tuesday, 2 August 2016 16:44 (seven years ago) link
Gelman on Jaynes:
"E. T. Jaynes was a physicist who applied Bayesian inference to problems in statistical mechanics and signal processing. He was an excellent writer with a dramatic style, and some of his work inspired me greatly. In particular, I like his approach of assuming a strong model and then fixing it when it does not fit the data. (This sounds obvious, but the standard Bayesian methodology of 20 years ago did not allow for this.) I don’t think Jaynes ever stated this principle explicitly but he followed it in his examples. I remember one example of the probability of getting 1,2,3,4,5,6 on a roll of a die, where he discussed how various imperfections of the die would move you away from a uniform distribution. It was an interesting example because he didn’t just try to fit the data; rather, he used model misfit as information to learn more about the physical system under study.
That said, I think there’s an unfortunate tendency among some physicists and others to think of Jaynes as a guru and to think his pronouncements are always correct. (See the offhand mentions here, for example.) I’d draw an analogy to another Ed: I’m thinking here of Tufte, who made huge contributions in statistical graphics and also has a charismatic, oracular style of writing. Anyway, back to Jaynes: I firmly believe that much of one’s statistical tastes are formed by exposures to particular applications, and I could imagine that Jaynes’s methods worked particularly well for his problems but wouldn’t directly apply, for example, to data analyses in economics and political science. The general principles still hold—certainly, our modeling advice starting on page 3 of Bayesian Data Analysis is inspired by Jaynes as well as other predecessors—but I wouldn’t treat his specific words (or anyone else’s, including ours) as gospel."
― Guayaquil (eephus!), Tuesday, 2 August 2016 16:45 (seven years ago) link
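For anyone wondering what the actually-doing-science version of "Bayesian" looks like, as opposed to the tribal-indicator version: here is a minimal sketch (my own illustration, not from any of the linked posts) of grid-approximating a posterior for a coin's heads-probability under a uniform prior.

```python
# Grid approximation of a posterior: start from a uniform prior over
# candidate values of a coin's heads-probability p, update on observed flips.
def posterior_grid(heads, tails, grid_size=101):
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    # unnormalized posterior = prior (uniform) * likelihood of the data
    unnorm = [p ** heads * (1 - p) ** tails for p in grid]
    total = sum(unnorm)
    return grid, [w / total for w in unnorm]

grid, post = posterior_grid(heads=7, tails=3)
# Under a uniform prior the posterior mean is (heads + 1) / (flips + 2),
# i.e. Laplace's rule of succession: 8/12 here.
mean = sum(p * w for p, w in zip(grid, post))
```

Gelman's point about Jaynes applies even to a toy like this: if the data looked wildly unlike anything the model could produce, the Bayesian move is to treat the misfit as information about the model, not just to renormalize and carry on.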
a onetime student of mine
i feel like i must have posted this on ANOTHER one of the threads we have for bonkers/fearsome bay-area tech-spirit jibber jabber, but said student has also since attended meetings for some kind of self-optimization circle, a 'total honesty' kind of thing that would have probably included dropping acid and wearing a robe in its 60s version
probably after i mentioned that previously crim5on h3xag0n said 'no those groups can be quite useful'
― j., Tuesday, 2 August 2016 16:54 (seven years ago) link
I've read Gelman's blog religiously for years but for some reason have never read BDA, though I did read parts of his mixed modeling book. Since I'm new to all this, I've been reading Kruschke 2nd ed. (an intro level book) and just picked up Statistical Rethinking which has been getting rave reviews.
― Dan I., Tuesday, 2 August 2016 16:56 (seven years ago) link
reading bayesian statistics religiously is exactly how we got into this mess bro
― Guayaquil (eephus!), Tuesday, 2 August 2016 21:53 (seven years ago) link
so on-brand that when effective altruists discuss possible reasons for time discounting, they ignore the possibility that our models and predictions might be wrong
https://concepts.effectivealtruism.org/concepts/discounting-the-future/
― lukas, Thursday, 22 July 2021 23:34 (two years ago) link
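For reference, the exponential discounting the linked EA article discusses is a one-line formula; the rate and horizon below are made-up numbers, just to show how fast a constant rate shrinks far-future value:

```python
# Exponential (constant-rate) time discounting: the value of a future
# benefit in today's terms. Rate and horizon here are illustrative only.
def present_value(future_value, annual_rate, years):
    return future_value / (1 + annual_rate) ** years

# At a 3% rate, a benefit worth 100 in 50 years is worth roughly 22.8 today;
# over 500 years it discounts to effectively nothing.
pv_50 = present_value(100, 0.03, 50)
pv_500 = present_value(100, 0.03, 500)
```

Which makes lukas's complaint concrete: the arithmetic is exact, but every digit of precision is conditional on the model that produced `future_value` being right in the first place.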
This person has spent the last, like, 20 years planning ways to outsmart skynet: pic.twitter.com/Z1wf1ACD3y— john stuart millennial 🥑 (@js_thrill) August 8, 2021
― Believe me, grow a lemon tree. (ledge), Monday, 9 August 2021 07:38 (two years ago) link
looool
― Clara Lemlich stan account (silby), Monday, 9 August 2021 17:03 (two years ago) link
So you know the thing these guys do, when they reductio ad absurdum themselves without realizing it? ("We probably live in a simulation", "the only rational thing to do is maximize the number of insects", whatever. Boltzmann brains had a separate genesis but I'll allow that concept here.)

I wonder if it's possible to prove, maybe inductively, the existence of an infinite number of these superficially rational conclusions.
― death generator (lukas), Monday, 5 September 2022 20:54 (one year ago) link
Maybe I'm doing them a disservice (lol) (I'm not about to start digging into the forums) but their big idea that if everyone were rational we could build a utopia doesn't seem to take into account the perfectly rational idea of perfectly rational sociopaths.
― ledge, Tuesday, 6 September 2022 07:46 (one year ago) link
God, grant me the serenity to accept the people I cannot change;
The courage to change the person I can;
And the wisdom to know: It's me.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:32 (one year ago) link
If I am not the problem, there is no solution.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:33 (one year ago) link
the existence of an infinite number of these superficially rational conclusions.
i suggest training an AI model to generate them
― ufo, Tuesday, 6 September 2022 11:35 (one year ago) link
believing that you are an entirely rational being is a greater leap of faith than anything found in any major world religion.
― link.exposing.politically (Camaraderie at Arms Length), Tuesday, 6 September 2022 12:01 (one year ago) link
"the only rational thing to do is maximize the number of insects"
What is this in reference to?
― peace, man, Tuesday, 6 September 2022 12:32 (one year ago) link
ah this is the thread for https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/
― TWELVE Michelob stars?!? (seandalai), Tuesday, 6 September 2022 14:05 (one year ago) link
xp lol bernard
― Karl Malone, Tuesday, 6 September 2022 14:19 (one year ago) link
See also seandalai's link but:
(1) effective altruists are almost always utilitarians
(2) they kinda ignore negative utility
(3) so for them, the best thing to do is maximize the number of sentient beings, because more utils
but yeah per seandalai's link, they consider simulated beings just as good as actual beings, so we should aim for a future with lots of computers simulating people etc etc
― death generator (lukas), Tuesday, 6 September 2022 15:22 (one year ago) link
xp I thought the subheading would be enough to give me a grasp of how stupid and sad this is, but no it got much dumber:
Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:25 (one year ago) link
“Rationalism”. Ever since I’ve realized this is an obsession of goons on the dark enlightenment spectrum I use it against them as much as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:28 (one year ago) link
Not to be captain save-a-rationalist but I'm not sure about the overlap between effective altruism and longtermism/transhumanism. The former might typically be utilitarian adjacent but I don't think it's necessarily tied in with the latter, and isn't exclusively the domain of rationalist weirdos.
― ledge, Tuesday, 6 September 2022 15:32 (one year ago) link
AIUI longtermism is one branch of effective altruists. Yes, there are effective altruists who are more into sending deworming pills to schools in Africa and the like.
― death generator (lukas), Tuesday, 6 September 2022 15:40 (one year ago) link
They both spring from the same error though, which is that we just need to get some smart people to figure out things for the rest of us.
― death generator (lukas), Tuesday, 6 September 2022 15:41 (one year ago) link
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism. I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah? Or am I out of line here.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:50 (one year ago) link
Maybe I don’t even believe that tbh.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:51 (one year ago) link
I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah?
I think if we avoid destroying the earth, yeah I'd agree with this.
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism.
I had something more like "minimize human domination over other humans" in mind but this works too.
― death generator (lukas), Tuesday, 6 September 2022 15:56 (one year ago) link
So here's an effective altruist arguing that longtermism is bs, basically saying your little toy model of the future is useless: https://forum.effectivealtruism.org/posts/RRyHcupuDafFNXt6p/longtermism-and-computational-complexity
Someone makes a brilliant point in the comments: "Loved this post - reminds me a lot of intractability critiques of central economic planning, except now applied to consequentialism writ large."
Given that most EAs are kinda libertarian-leaning (hate central planning when applied to real-world economies) this is ... devastating.
― death generator (lukas), Tuesday, 6 September 2022 15:58 (one year ago) link
xps yeah I didn't realise how much the official EA organisation had been taken over:
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
― ledge, Tuesday, 6 September 2022 16:10 (one year ago) link
xp that is an exceedingly rigorous formulation of what is a very obvious and common sense objection. (hence far more effective for the intended audience.)
― ledge, Tuesday, 6 September 2022 16:23 (one year ago) link
I had something more like "minimize human domination over other humans" in mind but this works too.

Right. Am I perhaps fundamentally misunderstanding rationalism? (Genuine question, I come to these kinds of threads to learn — I may not be totally out of line but I am mostly out my depth.)

My suggestion was focused on the process while yours seems more goals-oriented. Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 16:26 (one year ago) link
Well mine is process-oriented too I think ... one of the reasons to oppose human domination over other humans is everyone has a limited view of the world, everyone sees based on their own experiences and interests, so process-wise you should avoid having people make decisions for other people, regardless of how well-meaning they might be.
I may not be totally out of line but I am mostly out my depth.
lol trust me I have a very shallow understanding of this stuff as well. My indignation, however, is bottomless.
Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
Utilitarianism, right? (which is related to but I think not the same as consequentialism, but I don't understand the difference)
― death generator (lukas), Tuesday, 6 September 2022 16:35 (one year ago) link
Consequentialism just says that the morality of an action resides in its consequences, as opposed to how well it follows some (e.g. god given) rules or whether it's inherently virtuous (whatever that means).

Utilitarianism specifies what the consequences should be.
― ledge, Tuesday, 6 September 2022 16:48 (one year ago) link
Which is partly why utilitarianism is so tempting - consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
― ledge, Tuesday, 6 September 2022 17:21 (one year ago) link
Consequentialism just says that the morality of an action resides in its consequences
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 17:30 (one year ago) link
consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
my uneducated answer here is that if you've arrived at a situation where other people are pawns in your game - even if you mean them well - something has gone wrong upstream.
obviously there are situations where you need to guess what is best for someone else, but we should try to minimize them. it shouldn't be the paradigm example of moral reasoning.
― death generator (lukas), Tuesday, 6 September 2022 18:24 (one year ago) link
btw, effective altruism has its own ilx thread.
art is a waste of time; reducing suffering is all that matters
― more difficult than I look (Aimless), Tuesday, 6 September 2022 18:38 (one year ago) link
xp

yes, which is why the answer to the Enlightenment: good/bad? question differs depending where in the world you ask it
― rob, Tuesday, 6 September 2022 18:39 (one year ago) link
well what could be wrong with maximising happiness?

This was rhetorical but yes treating people as pawns is one major problem, as is the fact that happiness, or whatever your unit of utility is, is not the kind of thing that you can do calculations with. One hundred and one people who are all one percent happy is not at all a better state of affairs than one person who is one hundred percent happy. (Not that there isn't a place for e.g. quality adjusted life years calculations in certain institutional settings.)
― ledge, Tuesday, 6 September 2022 18:59 (one year ago) link
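ledge's 101-people example can be made concrete in a couple of lines (my own toy arithmetic, naively treating "percent happy" as an additive quantity, which is exactly the move being objected to):

```python
# Naive util summing over ledge's two states of affairs.
many = [0.01] * 101   # 101 people, each "one percent happy"
one = [1.0]           # one person, "one hundred percent happy"
# The sum ranks the first state higher (1.01 > 1.0): that aggregation
# step is the thing the post says shouldn't be taken at face value.
better_by_sum = sum(many) > sum(one)
```

The arithmetic is trivially valid; the objection is to the premise that happiness is the kind of quantity you can add across people at all.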
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure

I think "the end justifies the means" is a bit more slippery - it's often used to weigh one set of consequences more heavily than another, e.g. bombing hiroshima to end the war. And, well, we're talking about human actions and human consequences, I think it's fair to restrict it to humanly measurable ones.
― ledge, Tuesday, 6 September 2022 19:12 (one year ago) link
Even human consequences extend indefinitely. Identifying an end point is an arbitrary imposition upon a ceaseless flow, the rough equivalent of ending a story with "and they all lived happily ever after".
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:11 (one year ago) link
so do you never consider the consequences of your actions or do you have trouble getting up in the morning?
― ledge, Tuesday, 6 September 2022 20:43 (one year ago) link
I am not engaged in a program of identifying a universal moral framework based upon the consequences of my actions when I get up in the morning, which certainly makes it easier to choose what to wear.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:47 (one year ago) link
touche!
― ledge, Tuesday, 6 September 2022 21:08 (one year ago) link
This is the ideal utilitarian form. You may not like it, but this is what peak performance looks like pic.twitter.com/uHvCp2Cq7y— MHR (@SpacedOutMatt) September 16, 2022
― 𝔠𝔞𝔢𝔨 (caek), Saturday, 17 September 2022 16:30 (one year ago) link
incredible
― death generator (lukas), Sunday, 25 September 2022 23:20 (one year ago) link
Read this a few days ago. As AI burns through staggering amounts of money with no reasonable use case so far, all your fave fascist tech moguls are gonna hitch themselves to a government gravy train under a Trump administration (gift link): https://wapo.st/3wllikQ
― Are you addicted to struggling with your horse? (Boring, Maryland), Sunday, 5 May 2024 14:35 (yesterday) link