New Yorker magazine alert thread

is this the woman who was tweeting that she hoped Goldsmith was murdered?

soref, Friday, 2 October 2015 10:10 (eight years ago) link

cmon man

crime breeze (schlump), Saturday, 3 October 2015 04:19 (eight years ago) link

https://pbs.twimg.com/media/CAQpqqNUcAE5FzS.png

insight into the mind of a killer

crime breeze (schlump), Saturday, 3 October 2015 04:21 (eight years ago) link

"favourites" what is this canadian twitter

go hang a salami I'm a canal, adam (silby), Saturday, 3 October 2015 04:36 (eight years ago) link

three weeks pass...

the first New Yorker Radio Hour podcast came out a few days ago. Starts with an interview with TNC on James Baldwin

Why because she True and Interesting (President Keyes), Tuesday, 27 October 2015 17:41 (eight years ago) link

that's cool. for anyone jonesing for a baldwin kick, american masters is currently streaming a short 1963 film where baldwin interviews people in San Francisco: http://www.pbs.org/wnet/americanmasters/james-baldwin-am-archive-take-this-hammer/2332/

1999 ball boy (Karl Malone), Tuesday, 27 October 2015 18:35 (eight years ago) link

three weeks pass...

i guess i was primed for this article on nick bostrom and superAI because i read his book a few months ago and have been kind of obsessed with the topic lately, but i liked this a lot:

http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

it's kind of a long one, divided up into 3 parts, so for those who want a shorter reading experience (read something besides the new yorker), part 2 could work as a standalone

Karl Malone, Thursday, 19 November 2015 22:57 (eight years ago) link

Very interesting

Why, though, does bostrom WANT to live forever? Never sees his wife and son, eats his disgusting meals out of a blender, spends all his time worrying about ludicrous non-problems... Who wants an infinity of THAT?

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 00:08 (eight years ago) link

it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction.

how do you guys even make it past this sentence - this is like the premise of a billion scifi novels from the 50s on

Οὖτις, Friday, 20 November 2015 00:18 (eight years ago) link

That's what i mean by a non-problem

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 00:20 (eight years ago) link

haha yes

I guess my question is more directed at the people that made his book a best-seller

Οὖτις, Friday, 20 November 2015 00:22 (eight years ago) link

heheh Lanier OTM:

Jaron Lanier, a Microsoft researcher and tech commentator, told me that even framing the differing views as a debate was a mistake. “This is not an honest conversation,” he said. “People think it is about technology, but it is really about religion, people turning to metaphysics to cope with the human condition. They have a way of dramatizing their beliefs with an end-of-days scenario—and one does not want to criticize other people’s religions.”

Οὖτις, Friday, 20 November 2015 00:25 (eight years ago) link

find the section about the great filter v interesting.

Karl Rove Knausgård (jim in glasgow), Friday, 20 November 2015 00:27 (eight years ago) link

I find the idea that a piece of software created by humans could somehow not be full of all kinds of dumb problems and self-destructing errors to be ridiculous on its face. Anybody who really believes that we're on the verge of coding up a resilient, invulnerable, self-correcting computer program has clearly been out of touch with the actual progress of information technology for, like, a pretty long time.

El Tomboto, Friday, 20 November 2015 00:40 (eight years ago) link

I mean the best and most advanced pieces of software in the world today require armies of people to constantly maintain them or else they collapse. Not to mention the poor hardware guys who have to shuttle all over data centers replacing rack components.

El Tomboto, Friday, 20 November 2015 00:45 (eight years ago) link

or maybe we've actually already created the superintelligence multiple times but successive versions of iOS keep breaking it. Thanks Apple for saving us!

El Tomboto, Friday, 20 November 2015 00:53 (eight years ago) link

well, there's the plausibility of creating superintelligence in the first place, and then there are the concerns about security/containment if it did exist. if you accept the former, i'm not sure why anyone wouldn't also accept the latter. but most people just write off the possibility. tombot, even if you're not into the whole AI thing i figure you'd still actually enjoy bostrom's book. iirc you were doing IT security stuff at some point (still are?), right? at least half of superintelligence is all about containment of an unprecedented security risk. it's just inherently interesting stuff, i think.

lanier's not wrong about a lot of the transhuman crowd. the rapture for nerds element is definitely strong with a lot of people. but the possibility of real superintelligence doesn't seem implausible to me.

one thing i liked about bostrom's book is that he assesses 5 or 6 different strands of AI research separately. i feel like when people think about AI they are usually focused on their own idea of what the word means and the pathway to it that makes the most sense to them. like the way that tombot (and most people i guess) talks about it just assumes that AI research is based on programming. but there are other ways that could achieve the same goal of replicating human intelligence. there's the idea of emulating the brain using a computer, or going the other way, augmenting brains. or there's the machine learning path. or combos of those things, along with the traditional programming AI path.

i dunno. the challenges of the programming pathway (like defining basic human terms like happiness) seem potentially impossible to overcome, but the modeling/emulation/augmentation paths just seem like a matter of time, even if it's a long time. if you could ever create even a basic level of self-learning potential, adding processing power x the always-on capability x the access to the internet would do the rest.

his book summarizes several separate paths to AI that seem like they have a non-zero chance of succeeding, each with its own research community:

machine learning
emulating brains
brain augmentation
master algorithm/programming

even if it seems improbable, it doesn't seem impossible. and if you grant even a small margin of probability to any single one of them, the potential consequences are insane.

Karl Malone, Friday, 20 November 2015 03:04 (eight years ago) link

I can definitely see that. It's just that it seems much more likely we'll be living in tent cities drinking our own urine in 50 years rather than worrying about rogue AI

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 08:08 (eight years ago) link

guy is a freak, but a much better and more interesting one than previous new yorker examples like elon musk or marc andreessen

mookieproof, Friday, 20 November 2015 14:49 (eight years ago) link

if you could ever create even a basic level of self-learning potential, adding processing power x the always-on capability x the access to the internet would do the rest.

I'm pretty sure the last one there is what kills it, not even kidding.

El Tomboto, Friday, 20 November 2015 16:16 (eight years ago) link

If it started out on 4chan or something it would definitely be dangerous!

But what if most of the world's books were digitized and it could read the equivalent of a public library every night?

Karl Malone, Friday, 20 November 2015 16:23 (eight years ago) link

It would end up with some fucked up ideas! And probably break.

El Tomboto, Friday, 20 November 2015 16:26 (eight years ago) link

If a human was able to read an entire library over the course of their entire life, would they be fucked up? Probably yes, but just emotionally and socially and psychologically, due to them maniacally reading every second of their waking life until they died. But intellectually, I'm not sure.

I feel like it's a mistake to anthropomorphize computers and AI, too, so my bad there

Karl Malone, Friday, 20 November 2015 16:30 (eight years ago) link

I really need to get around to the Jill Lepore piece about polling - I heard her discussing it on NPR and she was great.

on entre O.K. on sort K.O. (man alive), Friday, 20 November 2015 16:32 (eight years ago) link

Unconstrained input is bad for people and it's bad for computers too. I don't think we are anywhere near being able to engineer a system or system-of-systems that is going to be able to safely ingest and usefully process the world.

El Tomboto, Friday, 20 November 2015 16:45 (eight years ago) link

"I started reading the article because the title implied something interesting. But it's just about this guy who's fucking crazy."

El Tomboto, Friday, 20 November 2015 20:41 (eight years ago) link

AI like Watson can read the equivalent of a public library rn. What it lacks is the exponentially larger data set that is acquired by living in a mobile body with amazingly acute sense apparatus, and interacting in a society and with the physical world, for decades.

Aimless, Friday, 20 November 2015 20:47 (eight years ago) link

And by reading a public library, Watson was able to conclude that Toronto is a city in the US.

El Tomboto, Friday, 20 November 2015 20:50 (eight years ago) link

And actual understanding ... xp

ledge, Friday, 20 November 2015 20:51 (eight years ago) link

to forestall any philosophical objections to the above:

When I asked Poggio about the results, he dismissed them as automatic associations between objects and language; the system did not understand what it saw. “Maybe human intelligence is the same thing, in which case I am wrong, or not, in which case I was right,” he told me. “How do you decide?”

ledge, Friday, 20 November 2015 21:19 (eight years ago) link

Not really a fan of Bostrom, especially when he tries to derive moral theories from wildly speculative probabilities, but I do like his simulation argument which does a pretty effective job of proving that we are probably living in a simulation (google it for more details, it's a doozy).

ledge, Friday, 20 November 2015 21:33 (eight years ago) link

When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand-up comedy

this guy

mookieproof, Saturday, 21 November 2015 00:22 (eight years ago) link

I find the idea that a piece of software created by humans could somehow not be full of all kinds of dumb problems and self-destructing errors to be ridiculous on its face. Anybody who really believes that we're on the verge of coding up a resilient, invulnerable, self-correcting computer program has clearly been out of touch with the actual progress of information technology for, like, a pretty long time.

Not sure if we still say OTM, but OTM. Here's a trenchantly funny take on the current state of brain simulation research:

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

o. nate, Saturday, 21 November 2015 02:01 (eight years ago) link

proving that we are probably living in a simulation

Upon consideration, this assertion doesn't appear to mean anything. If all-there-is is simulated, it could not possibly be all-there-is. But proving the existence of something that is not contained in all-there-is is a contradiction in terms.

Aimless, Saturday, 21 November 2015 02:33 (eight years ago) link

i guess i find myself in the oddly familiar position of defending the idea that there's a nonzero probability of something coming to pass in the distant future, but:

i think very few people are saying we're "on the verge" of discovery. among those in the field, it seems like the median estimate of superintelligence arising comes around 2050-60. there are plenty of researchers who believe it will never happen. and then there are some on the other side of the curve, too, who say maybe 10-15 years. i think people have the idea that everyone who is into AI is like ray kurzweil. from what i've observed, i think the kurzweil people are just the loudest, most obnoxious and visible edge of a community of practice that is much more varied.

i don't really trust anyone who is willing to put a 0% probability on things like this ever occurring. the only way you'd ever be able to do that is if you knew more than everyone else about the topic, which is hard to do because there are so many angles and approaches that people are taking and so many different disciplines that are involved. to reject it out of hand you have to look at a crazy talented field of researchers and phds and bona fide geniuses and textbook writers and philosophers and say, "i know more about this than all of them. every single one of them is wrong. there is a zero percent chance of it happening." i also think that it makes sense that it doesn't ~feel~ like we're close to solving AI. if there was already a semi-functional AI that could do some basic self-learning, we'd be on the very cusp of superAI, because its knowledge growth would be exponential, not linear. it seems unlikely that a low-level AI will be developed that then gets "smarter" in an incremental, predictable way. it seems much more likely that it would come out of nowhere.

http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/PPTExponentialGrowthof_Computing-1.jpg
http://waitbutwhy.com/wp-content/uploads/2015/01/gif
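
here's a quick toy sketch in python of what i mean about linear vs exponential growth (the rates are completely made up, just to show the shape of the curves):

# toy comparison: linear vs exponential capability growth (made-up rates)
linear, exponential = 1.0, 1.0
for year in range(1, 31):
    linear += 1.0         # steady, incremental improvement
    exponential *= 1.5    # compounding self-improvement
    if year % 10 == 0:
        print(f"year {year}: linear={linear:.0f}, exponential={exponential:,.0f}")
# year 10: linear=11, exponential=58
# year 20: linear=21, exponential=3,325
# year 30: linear=31, exponential=191,751
# it looks like nothing is happening for years, and then it comes out of nowhere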

the important part of bostrom's new book is about the containment/security problem of superintelligence and how it needs to be addressed early on in the research cycle (bostrom talks a lot in his book about why it can't wait until later), and i haven't seen anyone say that it wouldn't be an enormous problem IF superintelligence were developed. if there's even a small chance of superintelligence being developed during any of our lifetimes, then i think it's worth thinking about. the consequences are enormous in terms of net present value, because even a tiny probability of it happening would have to be multiplied by the huge effect it would create. it's kind of like a version of pascal's wager, only with something that's actually possible instead of hell/heaven.
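
to put rough numbers on that wager (every number here is invented for illustration, obviously):

# toy expected-value arithmetic (all numbers invented for illustration)
p_superintelligence = 0.001      # say, a 0.1% chance in our lifetimes
people_affected = 7_000_000_000  # everyone alive, for better or worse
expected_impact = p_superintelligence * people_affected
print(f"{expected_impact:,.0f}")  # 7,000,000 -- tiny odds, huge stakes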

Karl Malone, Saturday, 21 November 2015 02:37 (eight years ago) link

however near/far/impossible, it seems like something worth thinking about. (tbh i'm more 'worried' about a cane toads-like mistake with various bioengineering choices)

otm is always otm, and more o. nate is always good

mookieproof, Saturday, 21 November 2015 02:47 (eight years ago) link

is moore's law still operating?

Mordy, Saturday, 21 November 2015 02:54 (eight years ago) link

i keep reading references to rumors that it will slow soon, but still operating for now, 50 years on

Karl Malone, Saturday, 21 November 2015 03:03 (eight years ago) link

It's already yielding diminishing returns, because the improvements in recent years have been about multiple cores on a chip, not making those cores run faster. That's harder to take advantage of.

o. nate, Saturday, 21 November 2015 03:06 (eight years ago) link

http://www.economist.com/blogs/economist-explains/2015/04/economist-explains-17


If Moore’s law has started to flag, it is mainly because of economics. As originally stated by Mr Moore, the law was not just about reductions in the size of transistors, but also cuts in their price. A few years ago, when transistors 28nm wide were the state of the art, chipmakers found their design and manufacturing costs beginning to rise sharply. New “fabs” (semiconductor fabrication plants) now cost more than $6 billion. In other words: transistors can be shrunk further, but they are now getting more expensive. And with the rise of cloud computing, the emphasis on the speed of the processor in desktop and laptop computers is no longer so relevant. The main unit of analysis is no longer the processor, but the rack of servers or even the data centre. The question is not how many transistors can be squeezed onto a chip, but how many can be fitted economically into a warehouse. Moore's law will come to an end; but it may first make itself irrelevant.
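
for scale, here's the idealized doubling-every-two-years arithmetic behind those 50 years (a sketch that ignores all the economics the article is describing):

# idealized Moore's law: transistor counts doubling every two years
base_1965 = 64                     # rough 1965-era chip, order of magnitude only
for year in range(1965, 2016, 10):
    doublings = (year - 1965) // 2
    print(year, f"{base_1965 * 2 ** doublings:,}")
# 1965 -> 64, 1985 -> 65,536, 2005 -> 67,108,864, 2015 -> 2,147,483,648
# 50 years is 25 doublings: a factor of 2**25, about 33 million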

Karl Malone, Saturday, 21 November 2015 03:08 (eight years ago) link

I'm pretty sure that whether or not 'strong' AI can ever be achieved, someone will continue to pursue it. It's too deeply connected to the will to power ever to be laid aside. It's as alluring as perpetual motion.

Aimless, Saturday, 21 November 2015 03:09 (eight years ago) link

remember when that AI beat a turing test last year bc it mimicked teen-speak

Mordy, Saturday, 21 November 2015 03:21 (eight years ago) link

Also, the same guy scared of AI wants to upload himself into computers. At which point what does he think he will be?

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Saturday, 21 November 2015 03:33 (eight years ago) link

it's kind of like a version of pascal's wager, only with something that's actually possible instead of hell/heaven.

Same could be said of go all warming, look how well we've done in mitigating that threat.

remember when that AI beat a turing test last year

lol

ledge, Saturday, 21 November 2015 03:44 (eight years ago) link

go all warming

Alternatively, global

ledge, Saturday, 21 November 2015 03:45 (eight years ago) link

With a purposefully constrained input range and clearly defined objective (win at chess, win at Jeopardy!, pass the Turing Test, navigate a highway) computing is capable of amazing things. Regardless of horsepower, a "superintelligence" that could possibly be a greater existential threat to humankind than climate change, nuclear war or an errant space rock would demand a cosmic leap in information processing techniques so that it doesn't *ever* throw an uncaught exception and fatally shit the bed. Bostrom's a head case.

Kind of like I was saying on the presidential race thread, these people get profiled and written about and people get drawn in because on one level it can be interesting to listen to a sufficiently intelligent nutjob explain their rationale for moving to cloud cuckoo country. But on another level I think writers of these types of pieces (not to mention their conde nast taskmasters, and their readers) are completely sick and tired of climate science; there's no news other than "it's not looking good" and readers are frankly bored by it. So space tycoons and futurist lunatics are great for filling up 17 pages in a magazine. It's not that nobody cares about what's actually most likely going to kill us; we just all have fatigue and need distractions, and a cat like Bostrom fits nicely into the But What If? feature category.

El Tomboto, Saturday, 21 November 2015 03:54 (eight years ago) link

thanks for the new DN there

Eugene Goostman (forksclovetofu), Saturday, 21 November 2015 05:50 (eight years ago) link

Had my suspicions tbh

ledge, Saturday, 21 November 2015 08:52 (eight years ago) link

at the risk of being facile, computers aren't humans. which is to say that when computers make mistakes, they are not the same kind of mistakes humans make. machine intelligence is first and foremost logical, so for instance you would never see a computer which supports donald trump. a computer might, on the other hand, award gaz coombes the mercury prize. a sentient machine also has a non-trivial chance of going all forbin project on us.

rushomancy, Saturday, 21 November 2015 11:01 (eight years ago) link

