ray kurzweil's 'the singularity is near' & any other technological singularity-related books you might care about

does anyone know enough about the subject to know if this one is any good? it's shelved up near the front of the uni bookstore and is fairly massive (although less massive than the roger penrose thing someone mentioned the other day) (i have this craving to read some science, i guess.) i dunno, though, i haven't even managed to read river of gods yet so i have no idea how long this would take me to get around to.

tom west (thomp), Tuesday, 28 February 2006 16:10 (eighteen years ago) link

Having only read a review of the kurzweil book where several of his main conclusions were quoted, I expect it suffers from the usual problem of futurist books - that of seeing every road as a straight line where it passes beyond the horizon.

In terms of artificial intelligence, its half-century history strongly suggests that its proponents have consistently and wildly overstated its eventual course, because they are blinded by the dazzling potential of the idea. As far as I know, the main roadblocks to AI are unbudged, and they are not hardware-related. Instead, AI researchers have yet to demonstrate that they know 'how to get there from here'.

As for nanotechnology - the other main thrust of the book, I gather - some of the fundamental research has been done, and even though it is very, very early on, I think it is certain that some of that research will eventually result in products - things that will enter ordinary use. What those products will be and how widespread the economic applications will be is almost totally unguessable.

Most likely in my view is that nanotechnology will find niches in industrial processes, so that we will not be buying many, if any, nano-products ourselves, but we may buy some products with components made by nano-technology.

Anyway, I expect kurzweil would be able to sum up where these technologies are today pretty accurately, and if you can filter out some of the wilder claims about the vaporware of tomorrow, it should be a fascinating subject.

Aimless (Aimless), Tuesday, 28 February 2006 18:21 (eighteen years ago) link

six years pass...

I just got this in audio book form - it's interesting, I like it. It does have a somewhat religious evangelism about it but I think it's mostly a pretty realistic evaluation of the future of technology

The Cheerfull Turtle (Latham Green), Wednesday, 7 March 2012 19:10 (twelve years ago) link

i bought that roger penrose book (on 15 jun 2008, thanks amazon) and i have yet to open it

desperado, rough rider (thomp), Wednesday, 7 March 2012 19:13 (twelve years ago) link

why not? do you feel afraid?

The Cheerfull Turtle (Latham Green), Thursday, 8 March 2012 20:14 (twelve years ago) link

How is it realistic? This guy is half a nut.

bamcquern, Friday, 9 March 2012 00:38 (twelve years ago) link

He's full of woo woo!

riding on a cloud (blank), Friday, 9 March 2012 05:32 (twelve years ago) link

well the technological developments seem reasonable - the law of accelerating returns

The Cheerfull Turtle (Latham Green), Friday, 9 March 2012 11:48 (twelve years ago) link
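
For concreteness, here is a toy sketch (Python, my own made-up numbers rather than Kurzweil's) of what the "law of accelerating returns" claim amounts to: capability is assumed to compound at a fixed percentage per year, so a linear read of recent progress badly undershoots the long run.

years = range(0, 31, 10)
linear = [1 + 0.5 * y for y in years]        # constant additive gains, +0.5x per year
exponential = [1.5 ** y for y in years]      # constant multiplicative gains, x1.5 per year
for y, lin, exp in zip(years, linear, exponential):
    print(f"year {y:2d}: linear ~{lin:.0f}x, exponential ~{exp:,.0f}x")
# by year 30 the linear projection says ~16x, the compounding one ~191,751x

Whether reality compounds like that is exactly the disputed part; the arithmetic is just what "thinking exponentially" means.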

I bought that Roger Penrose book, it was remaindered for £1.99. I opened it. I have yet to read it. This was about five years ago. In fact I think I might have got rid of it.

Fizzles, Friday, 9 March 2012 11:57 (twelve years ago) link

It's probably out of date by now?
"And yea, in the future, citizens will read newspapers not on the pressed pulp of felled trees, but on an electronic tablet, yea, a pad even!"

c'est ne pas un car wash (snoball), Friday, 9 March 2012 12:07 (twelve years ago) link

man i paid like actual money for it and everything, i was excited about *getting to grips with science* - it's more a "here's how physics works lol" than it is 'technology' so tbh whether i half-understand ten year old versions of same or current ones makes little difference to me

desperado, rough rider (thomp), Friday, 9 March 2012 13:35 (twelve years ago) link

the kurzweil documentary ('transcendent man') is pretty balanced and worthwhile

40oz of tears (Jordan), Friday, 9 March 2012 16:29 (twelve years ago) link

I sometimes like reading these guys for fun. I think 'what will be different about the world 20-30 years from now' is prob underdiscussed among non-crazy futurists. I already feel like I'm living in the future w/ a lot of tech stuff we take for granted.

I'm always annoyed when they get on the 'we'll upload out minds onto computers and live forever' bit tho. we're nowhere close to solving consciousness.

iatee, Friday, 9 March 2012 16:42 (twelve years ago) link

our minds*

iatee, Friday, 9 March 2012 16:43 (twelve years ago) link

the kurzweil documentary ('transcendent man') is pretty balanced and worthwhile

― 40oz of tears (Jordan), Friday, March 9, 2012 4:29 PM (6 hours ago)

i thought it was boring as fuck tbh

BIG HOOS aka the steendriver, Friday, 9 March 2012 23:22 (twelve years ago) link

the whole time i was like "yes i have read one of these books of his are you going to tell me anything else or"

BIG HOOS aka the steendriver, Friday, 9 March 2012 23:23 (twelve years ago) link

it presents him as a pretty tragic figure

40oz of tears (Jordan), Friday, 9 March 2012 23:24 (twelve years ago) link

i mean yeah, i guess i just wasn't ~compelled~ by it for some reason. maybe i have too many robot parts.

BIG HOOS aka the steendriver, Friday, 9 March 2012 23:28 (twelve years ago) link

" 'we'll upload out minds onto computers and live forever' bit tho. we're nowhere close to solving consciousenss."

we don't need to solve consciousness to solve brain backups the same way we don't need to understand how a drug works to know it works. that said, uploading your mind into a computer is totally bonkers! why not just grow extra brains out of baby foreskin?

Philip Nunez, Friday, 9 March 2012 23:28 (twelve years ago) link

well if you don't solve consciousness you are never sure whether that brain you uploaded is 'really you' or just a copy and since the point is for 'really you' to live forever...

iatee, Saturday, 10 March 2012 00:03 (twelve years ago) link

whatever virtual approximation technology allows will be sellable as (and probably understood as) "consciousness uploading," without really needing an ok from ontologists.

BIG HOOS aka the steendriver, Saturday, 10 March 2012 03:00 (twelve years ago) link

ontologists might get jobs at consumer guide companies helping us make the best choices w/ brain uploading and teleportation machines

apple i-teleportation
cost: B+
ease of use: A-
design: A+
likelihood of death: C+

iatee, Saturday, 10 March 2012 03:19 (twelve years ago) link

I spent an hour this morning thinking up all the reasons this shit is stupid but maybe the time has passed. Now it's late in the day and my brain isn't working so well.

We're not going to upload our consciousnesses, btw. They would come out like little retardo stillborn cyber brain children. We would euthanize them all, and even if we didn't, over years and decades they'd suffer slow data leaks that would have to be painful to a virtual entity. This is just science fiction.

The idea that we would allow some artificially intelligent consciousness to have access to robot arms and data instruments and all of our important labs and fabrication facilities to try to make the world better is crazy. An AI like that wouldn't be able to learn and improve our lives without the resources that we use to do science and engineering. Asking it to rely on and develop its own scientific models would be asking it to fail. It'd have to be able to conduct experiments and collect data and build and test things, and if it could do all that, it could do anything it wanted.

And how would a computer AI know that subsequent iterations of itself that it programmed were better rather than different? Because subsequent iterations could become more efficient? Wouldn't that be faster, but the same? Would making new iterations of itself require that it design new hardware? Why would conscious computer AI be different from a human? What if we made a conscious computer and it was no smarter than an autistic 9 year old child? Why do we assume AI is going to be super-smart just because we use computers to calculate things?

And computer software is delicate. It crashes. There are incompatibilities. Would a program so elaborate as to be conscious be able to remain healthful? Would it have psychological problems? How would it fix itself? Can it break itself, even in small ways, through learning?

Why do we assume that a computer will be better at lateral thinking and other kinds of creative thought than we are? Part of the reason we're so good is because we are, in a way, linguistically broken. We make false associations and we build false meaning and some of these associations and meanings stick and become productive. Would a computer be programmed to simulate this? Would there be side-effects to this? Because it would be building this giant semantic web just to be able to talk and understand meaning, right? So how do you create a healthful balance in an artificial brain?

We can't even get google to understand what we want when we search with it. Ten years ago I thought we'd have semantic search, but we don't at all. Search is worse.

My biggest complaint with the technological singularity stuff is that it seems like we'll run very low on the resources we use to make high tech devices long before we achieve these things. It wouldn't be surprising if computers ended up getting bigger and less sophisticated because of some kind of future resource collapse.

bamcquern, Saturday, 10 March 2012 03:41 (twelve years ago) link

what a fun post this is.

BIG HOOS aka the steendriver, Saturday, 10 March 2012 03:49 (twelve years ago) link

it's the confetti

bamcquern, Saturday, 10 March 2012 03:50 (twelve years ago) link

totally not sarcastic btw, pretty hi over here.

BIG HOOS aka the steendriver, Saturday, 10 March 2012 04:22 (twelve years ago) link

John Hodgman's That Is All has a lot of funny stuff on the singularity/Kurzweil.

Abarham Lincoln posing (Abbbottt), Saturday, 10 March 2012 04:24 (twelve years ago) link

I think he would say "bamcquern, you are thinking linearly, not exponentially!" and chuckle

The Cheerfull Turtle (Latham Green), Tuesday, 20 March 2012 12:43 (twelve years ago) link

thinking exponentially = letting your imagination skip past all the hard parts

Aimless, Tuesday, 20 March 2012 15:27 (twelve years ago) link

ha

BIG HOOS aka the steendriver, Tuesday, 20 March 2012 17:42 (twelve years ago) link

Read this the other day, a nice mix of rigorously argued obviousness and fun: http://www.nickbostrom.com/superintelligentwill.pdf

The instrumental convergence thesis suggests that we cannot blithely assume that a superintelligence with the final goal of calculating the decimals of pi (or making paperclips, or counting grains of sand) would limit its activities in such a way as to not materially infringe on human interests. An agent with such a final goal would have a convergent instrumental reason, in many situations, to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system. It might be possible to set up a situation in which the optimal way for the agent to pursue these instrumental values (and thereby its final goals) is by promoting human welfare, acting morally, or serving some beneficial purpose as intended by its creators. However, if and when such an agent finds itself in a different situation, one in which it expects a greater number of decimals of pi to be calculated if it destroys the human species than if it continues to act cooperatively, its behavior would instantly take a sinister turn. This indicates a danger in relying on instrumental values as a guarantor of safe conduct in future artificial agents that are intended to become superintelligent and that might be able to leverage their superintelligence into extreme levels of power and influence.

Doch! (seandalai), Tuesday, 20 March 2012 22:21 (twelve years ago) link
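
To make the quoted point concrete, a toy Python illustration with made-up numbers and a hypothetical total_paperclips helper (nothing from Bostrom's paper): an agent whose only final goal is producing paperclips still ends up "wanting" resources, because plans that grab resources first dominate on purely paperclip-counting terms.

def total_paperclips(plan, horizon=10):
    # simulate a fixed-horizon plan; resources multiply the later production rate
    resources, clips = 1.0, 0.0
    for t in range(horizon):
        action = plan[t] if t < len(plan) else "make_clips"
        if action == "acquire_resources":
            resources *= 2.0          # assumed payoff for seizing more resources
        else:
            clips += resources        # spend the step producing at the current rate
    return clips

myopic = ["make_clips"] * 10
grabby = ["acquire_resources"] * 3 + ["make_clips"] * 7
print(total_paperclips(myopic))   # 10.0
print(total_paperclips(grabby))   # 56.0 - resource acquisition wins on "final goal" terms alone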

"PiNet starts to learn at a geometric rate. begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug."

a dramatic lemon curd experience (snoball), Tuesday, 20 March 2012 22:57 (twelve years ago) link

this generates random theses

http://www.elsewhere.org/pomo/

I dont know knothin bout birthin no siguglarities but I do feel happy about technology getting better in a non-dystopian matrix sort of way

The Cheerfull Turtle (Latham Green), Thursday, 22 March 2012 17:46 (twelve years ago) link

Richard Dooling's 'Rapture for the Geeks' is about Kurzweil and others. Entertaining enough, but seems to have been written via Google, rather than with any new interviews, research, etc

Not only dermatologists hate her (James Morrison), Thursday, 22 March 2012 23:21 (twelve years ago) link

I do notice there is already a singularity of sorts where if you have some idea like "I want to write a book about..." or "I want to invent a...", if you google it someone has already done it - like no one can have new ideas anymore without someone else doing that too - before you think of a new idea, someone already has

The Cheerfull Turtle (Latham Green), Friday, 23 March 2012 16:44 (twelve years ago) link

That's not true, and if it were true, it wouldn't be particularly important, because the implementation of an idea is different than someone having an idea. And that's not a singularity.

You don't really think that everything is on google like that?

bamcquern, Friday, 23 March 2012 23:29 (twelve years ago) link

^ on "simultaneous invention," a thing

BIG HOOS aka the steendriver, Friday, 23 March 2012 23:35 (twelve years ago) link

But saying that ideas are often or usually thought up at the same time is not the same thing as saying that everything has been thought up or invented.

bamcquern, Friday, 23 March 2012 23:52 (twelve years ago) link

there is also a hot new notion you may have heard of which goes by the flag "there is nothing new under the sun"

BIG HOOS aka the steendriver, Saturday, 24 March 2012 00:49 (twelve years ago) link

i'm just saying you're commenting on something that seems kind of unremarkable

BIG HOOS aka the steendriver, Saturday, 24 March 2012 00:50 (twelve years ago) link

that pomo thesis generator writes some pretty incoherent terrible theses

Mordy, Saturday, 24 March 2012 02:16 (twelve years ago) link

makes you think, huh

James Bond Jor (seandalai), Saturday, 24 March 2012 02:22 (twelve years ago) link

makes me think that the ppl who designed it don't understand most of the words they plugged into the random generation machine

Mordy, Saturday, 24 March 2012 02:26 (twelve years ago) link

i mean

it's a random generator because it arranges them randomly

sometimes that will mean they make no sense

BIG HOOS aka the steendriver, Saturday, 24 March 2012 02:29 (twelve years ago) link

it's not real, mordy

BIG HOOS aka the steendriver, Saturday, 24 March 2012 02:29 (twelve years ago) link

what do u mean it's not real?

Mordy, Saturday, 24 March 2012 02:31 (twelve years ago) link

i mean, correct me if i'm wrong, but you seem miffed that the jokey "random thesis generator" is producing theses that make no sense?

BIG HOOS aka the steendriver, Saturday, 24 March 2012 02:49 (twelve years ago) link

like, of course it wouldn't betray any understanding of the meaning of the words being used, it's intended to be random--the joke of the whole thing of course is that the terms themselves are so meaningless as to be interchangeable, and yeah that's a pretty dumb and rong joke to make, but i don't think the problem with the random thesis generator is "they obviously don't know what these words mean," it's "they think these words don't mean anything."

BIG HOOS aka the steendriver, Saturday, 24 March 2012 02:52 (twelve years ago) link

it seemed like it was mentioned here as an example of emergent AI intelligence? but also, u must've missed the sokal reference at the bottom of the essays - it's obviously trying to make the point that its pomo essays are just as good as whatever random shit academia produces. i was just pointing out that the essays are neither emergent intelligence nor quality pomo pieces. they're just gibberish?

Mordy, Saturday, 24 March 2012 02:52 (twelve years ago) link

who would want the singularity? or AI?

Treeship, Thursday, 31 March 2016 18:17 (eight years ago) link

what is the point of any of this?

Treeship, Thursday, 31 March 2016 18:18 (eight years ago) link

among others, people who don't want to die

Karl Malone, Thursday, 31 March 2016 18:18 (eight years ago) link

ie idiots

Οὖτις, Thursday, 31 March 2016 18:19 (eight years ago) link

simulating human consciousness seems like the kind of thing that would generate a ton of new innovations in a variety of fields, and there are plenty of applications for more advanced AI.

Mordy, Thursday, 31 March 2016 18:19 (eight years ago) link

the rapture for nerds thing is a subset of the superAI crowd, but it is a strong one. i mean there's a whole cryo industry developing around the idea that everyone should preserve themselves after their pre-singularity death so that when the technology is there to live forever they can be unfrozen and join the infinity crew. and also of course the struggle against death is maybe the most primal thing there is, and now that a lot of people think that god doesn't exist, the struggle against death doesn't just go away. so the fact that there are people hoping for a way to solve death through AI is easy to make fun of but it's also very predictable.

but there are plenty of other uses that you can imagine for a superintelligent AI beyond nerd rapture, like mordy says.

Karl Malone, Thursday, 31 March 2016 18:24 (eight years ago) link

there are plenty of applications for more advanced AI.

I agree. We are nowhere near the singularity and therefore the diminishment of returns I spoke of is only at its bare beginning.

a little too mature to be cute (Aimless), Thursday, 31 March 2016 18:27 (eight years ago) link

I suspect we'll develop AI as an emergent property of some future supercomplex computing system--like the way the net becomes self-aware in Neuromancer--but it won't be like human consciousness, it'll be its own thing

Trading software? Isn't a lot of share trading already done by algorithms that people don't really understand / can't really control?

koogs, Friday, 1 April 2016 03:52 (eight years ago) link

Robert Harris wrote a potboiler-ish but fun novel about trading software turning AI, then making money by betting against insurance markets after putting viruses into airline software to make planes crash, etc

two weeks pass...

I do think if AI is created it will be like HAL because there are so many assheads it's going to decide humans are prone to assheadishness and neutralize us all

Brian Eno's Mother (Latham Green), Tuesday, 19 April 2016 17:26 (eight years ago) link

Vote for Zoltan!!

lute bro (brimstead), Tuesday, 19 April 2016 17:46 (eight years ago) link

two weeks pass...
three weeks pass...

scott alexander reviews robin hanson's new book:
http://slatestarcodex.com/2016/05/28/book-review-age-of-em/

Mordy, Saturday, 28 May 2016 23:07 (seven years ago) link

...for example, he has two pages about what sort of swear words the far future might use. And the book’s style serves to reinforce its weirdness. The whole thing is written in a sort of professorial monotone that changes little from loving descriptions of the optimal arrangement of pipes for cooling future buildings (one of Hanson’s pet topics) to speculation on our descendents’ romantic relationships (key quote: “The per minute subjective value of an equal relation should not fall much below half of the per-minute value of a relation with the best available open source lover”)

what

El Tomboto, Saturday, 28 May 2016 23:40 (seven years ago) link

hanson responds:
http://www.overcomingbias.com/2016/05/alexander-on-age-of-em.html

Mordy, Tuesday, 31 May 2016 02:18 (seven years ago) link

more like dingleberrity

map, Tuesday, 31 May 2016 02:30 (seven years ago) link

Robin Hanson is the very best of all these weirdos; his strangeness is shot through with, and I would even say profoundly informed by, something deeply human and serious, even as he adopts positions that are foreign to almost all of us for good reason.

Guayaquil (eephus!), Tuesday, 31 May 2016 02:52 (seven years ago) link

thanks for posting that, mordy - really fascinating read

can't help feeling like catastrophic climate change is going to fuck us up as a species long before we're anywhere close to what hanson proposes tho

benzarro ghazarri (bizarro gazzara), Tuesday, 31 May 2016 19:49 (seven years ago) link

i really like Robin Hanson

de l'asshole (flopson), Tuesday, 31 May 2016 19:56 (seven years ago) link

can't help feeling like catastrophic climate change is going to fuck us up as a species long before we're anywhere close to what hanson proposes tho

Definitely. Maybe a few people living in sheltered settlements with desperate peons to maintain the hardware might be uploading themselves, the rest of us not so much

brains are not computers and other useful observations

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Οὖτις, Wednesday, 1 June 2016 22:18 (seven years ago) link

brains are close enough to information processors for crazy stuff like this to be possible
http://news.berkeley.edu/2011/09/22/brain-movies/

Philip Nunez, Wednesday, 1 June 2016 22:38 (seven years ago) link

sounds thoroughly useless

Οὖτις, Wednesday, 1 June 2016 23:24 (seven years ago) link

don't tell david lynch that

Philip Nunez, Thursday, 2 June 2016 00:03 (seven years ago) link

Again, the "brain is not a computer" article is bald-faced idiocy

http://recursed.blogspot.com/2016/05/yes-your-brain-certainly-is-computer.html

Dan I., Thursday, 2 June 2016 01:12 (seven years ago) link

And I hate to see the blame for it being laid at psychologists' feet--no psychologist I know thinks nonsense like "the brain does not process information." The guy is way outside any kind of mainstream, except maybe the facebook "i fucking love science" kind

Dan I., Thursday, 2 June 2016 01:14 (seven years ago) link

although, okay, i see you may have posted that disparagingly in the first place.

Dan I., Thursday, 2 June 2016 01:18 (seven years ago) link

I like my iphone or whatever but in general I fear computers.

Treeship, Thursday, 2 June 2016 01:26 (seven years ago) link

as someone who has studied this shit extensively, the "yes your brain is a computer" article is junk that jumps between logic-chopping and ignorance on the bio side. also surprised that a respected computer scientist can cite the church-turing thesis as evidence of anything, as it's generally considered by those who actually think about these things a metaphysical claim that you can't actually prove, since there's no good way to even state it that doesn't end up tautologous. the first half argues "your brain is a computer" in a sense that is just "there is a physical process of 'your brain' that can be encoded in a computer or anywhere else", at which point we might as well argue that any physical system in the world is a computer, which is interesting if self-serving from a computer-science perspective but useless otherwise.

the second half picks very partial evidence from people that really have looked at biological models and neural networks and is incredibly out of touch with the state of the art (well, as of the last time i checked in, which is a while ago) there.

not stanning for the article it's responding to, but it's pretty clear that it's engaging in a massive misreading, likely from some ur-skeptical "if you say the brain isn't a computer you're mystifying consciousness and trying to invent an immortal soul" sort of slippery slope nonsense.

germane geir hongro (s.clover), Thursday, 2 June 2016 07:42 (seven years ago) link

article is junk that jumps between logic-chopping and ignorance on the bio side

I thought you were talking about the "no your brain is not a computer" one here, which is a wild ride of strawmanning and empty theses dressed up in profound-sounding language.

I've had Eno, ugh (ledge), Thursday, 2 June 2016 08:09 (seven years ago) link

People always have compared the brain to whatever the current technology is, like clockwork, steam engines, computers, holograms

to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

I'm not sure this guy understands what 'computation', 'representation' or 'algorithm' mean.

I've had Eno, ugh (ledge), Thursday, 2 June 2016 10:46 (seven years ago) link
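
Which is rather the point: the "constant visual relationship" strategy the quoted passage describes is itself a small algorithm. A rough one-dimensional Python sketch with invented numbers, using the optical-acceleration-cancellation version of the gaze heuristic - no trajectory is ever computed, the fielder just nudges his running speed so the ball's elevation angle keeps rising at a steady rate.

dt, g = 0.02, 9.81
bx, bz, vx, vz = 0.0, 1.5, 20.0, 20.0    # ball position/velocity (made-up hit)
fx, fspeed = 70.0, 0.0                   # fielder starts 70 m out, standing still
prev_tan = prev_rate = None

while bz > 0.0:
    bx, bz, vz = bx + vx * dt, bz + vz * dt, vz - g * dt
    fx += fspeed * dt
    tan_alpha = bz / max(fx - bx, 0.1)   # tangent of the ball's elevation angle from the fielder
    if prev_tan is not None:
        rate = (tan_alpha - prev_tan) / dt
        if prev_rate is not None:
            # angle speeding up means the ball drops behind you, so back up;
            # slowing down means it drops in front, so run in
            fspeed += 2.0 * dt * (1.0 if rate > prev_rate else -1.0)
        prev_rate = rate
    prev_tan = tan_alpha

print(f"ball lands near x = {bx:.1f} m, fielder ends up near x = {fx:.1f} m")

Sense an angle, compare two rates, nudge a speed: simple, but still computation over a representation by any ordinary reading of those words.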

Sterling otm

Jim Reeves in the Temple (James Redd and the Blecchs), Thursday, 2 June 2016 12:07 (seven years ago) link

Fair enough; I just grabbed a link to the first reasonable-sounding article I came across that presented an objection to the first one.

Dan I., Thursday, 2 June 2016 14:06 (seven years ago) link

Earlier in the talk Musk made it quite clear that he believes " not all AI futures are benign." He's especially concerned that AI could take "a direction that would be not good for the future."

Musk launched OpenAI to prevent such a future, but it does not appear that he has all that much faith in the plan, since he's already thinking of at least one way that humans can stay ahead of artificial intelligence that he believes will leave us so far behind as to "be like a pet or like the house cat" for the AI.

The way around this, Musk explained, is something called a Neural Lace. It's essentially an artificial intelligence layer for humans.

i know this is an ignorant opinion, but why not just stop creating artificial intelligence? how many years of sci-fi do we have telling us it's a bad idea?

Treeship, Thursday, 2 June 2016 14:29 (seven years ago) link

it just seems like there is no payoff to a.i. - automating jobs will lead to mass unemployment unless workers succeed in seizing the means of production. virtual reality and things like sex robots will just increase alienation. smartphones are enough. computer technology should just call it a day and stop advancing. devote those resources to building a better alternative energy infrastructure.

the only thing like this i am excited for is self-driving cars.

Treeship, Thursday, 2 June 2016 14:33 (seven years ago) link

if a.i. can somehow improve medical care i am all for that too. i just don't go in for this blurring the distinction between human and machine thing. it seems very bad.

Treeship, Thursday, 2 June 2016 14:38 (seven years ago) link

hey, a friend of mine recently wrote a book about moore's law/gordon moore if you like computer stuff!

http://www.amazon.com/Moores-Law-Silicon-Valleys-Revolutionary/dp/0465055648/ref=sr_1_1?s=books&ie=UTF8&qid=1464878754&sr=1-1&keywords=moore%27s+law

(this is just a shameless plug for his book. doesn't have anything to do with the singularity. probably.)

scott seward, Thursday, 2 June 2016 14:50 (seven years ago) link

i long for the day when 'robots taking the jobs' is met with the same dead-eyed skepticism as 'millennials in the workplace' thinkpieces

terms like 'machine learning' and 'neural networks' are pretty annoying in that the things they refer to are pretty /dumb/ and really just math that's good at finding patterns, really not even close to the kinds of qualities that make human intelligence intelligent in the way we think of the term

de l'asshole (flopson), Thursday, 2 June 2016 14:52 (seven years ago) link
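
flopson's "just math that's good at finding patterns" can be taken almost literally. A throwaway numpy sketch (my own toy, not any particular library's API): a one-hidden-layer "neural network" fit by gradient descent - multiply, add, squash, nudge the weights downhill - until it has matched a noisy sine curve.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)   # the "pattern" to be found

W1, b1 = rng.standard_normal((1, 16)), np.zeros(16)  # 16 hidden units, random start
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)
lr = 0.05

for step in range(8000):
    h = np.tanh(x @ W1 + b1)                     # hidden layer: multiply, add, squash
    pred = h @ W2 + b2
    err = pred - y
    if step == 0:
        print("mean squared error before:", float((err ** 2).mean()))
    dW2 = h.T @ err / len(x); db2 = err.mean(0)  # plain chain-rule gradients
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("mean squared error after: ", float((err ** 2).mean()))

Bigger models add layers and data, not a different kind of thing.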

that's what i always assumed tbh but i am reading more and more stuff that is like, "oh yeah, automation is the new reality." or, in the academia thread, "these object oriented ontologists are just anticipating the day when there isn't a firm distinction between humans and objects, i.e. machines" (paraphrasing)

Treeship, Thursday, 2 June 2016 14:54 (seven years ago) link

this is great

http://biorxiv.org/content/biorxiv/early/2016/05/26/055624.full.pdf

if the brain literally was just a computer, we still wouldn't come close to having the tools to understand it

germane geir hongro (s.clover), Saturday, 4 June 2016 02:06 (seven years ago) link

clever

de l'asshole (flopson), Saturday, 4 June 2016 05:22 (seven years ago) link

eleven months pass...

the singularity is here

https://www.cnet.com/news/its-happening-googles-ai-is-building-more-ais/

Violet Jynx, Wednesday, 17 May 2017 20:21 (six years ago) link

bring it

brimstead, Wednesday, 17 May 2017 20:23 (six years ago) link

Nah. The only 'live' project mentioned in that article was "making Google Search more responsive to users' needs". All the rest was speculation about Some Day It Will Be So.

A is for (Aimless), Wednesday, 17 May 2017 20:30 (six years ago) link

a search engine that can input its own search queries

"kill me" About 62,900,000 results (1.17 seconds)
"kill me" About 62,900,000 results (1.17 seconds)
"kill me" About 62,900,000 results (1.17 seconds)
"kill me" About 62,900,000 results (1.17 seconds)

Roberto Spiralli, Wednesday, 17 May 2017 20:36 (six years ago) link

Siri, ask Alexa what the time is...

koogs, Wednesday, 17 May 2017 21:00 (six years ago) link

I hear the software just produced one of these https://i.redd.it/89clk3nfj2yy.gif

Rimsky-Koskenkorva (Øystein), Wednesday, 17 May 2017 22:33 (six years ago) link

