New Yorker magazine alert thread


given that Goldsmith is a NYer contributor so they may have reason to take his side

Why because she True and Interesting (President Keyes), Monday, 28 September 2015 18:44 (eight years ago) link

There's a New Movement in American Poetry and It's Not Kenneth Goldsmith

...The more interesting, relevant, and current story is that the poetry world has been riven by a crisis where the old guard—epitomized by Goldsmith—has collapsed. I thought it was essential to contextualize Goldsmith’s scandal within a new movement in American poetry, a movement galvanized by the activism of Black Lives Matter, spearheaded by writers of color who are at home in social media activism and print magazines; some poets are redefining the avant-garde while others are fueling a raw politics into the personal lyric. Their aesthetic may be divergent, but they share a common belief that as poets, they must engage in social practice, whether it is protesting against police brutality or calling out Goldsmith himself who thought it would be a “provocative gesture” to recite an autopsy report of Michael Brown’s body at Brown University.

Of course, it became clear to me in the interview that Wilkinson didn’t want to write about that. His take on Goldsmith was that his Conceptual Poetry represented a new “revolutionary poetry movement,” as he put it in his published piece. But Conceptual Poetry is already dead, I told him. And to write about the scandal, one had to consider the racial unrests that have swept up America and invaded the arts. Poets are challenging the structural inequities within literature. The pushback against Goldsmith was symptomatic of this broader crisis and he did not create this maelstrom.

In fact, even before the performance, Goldsmith’s “brand” was in trouble. His PoMo for Dummies “no history because of the internet” declarations became absurdly irrelevant when black men were dying at the hands of cops. Goldsmith, who previously exhibited zero interest in race, saw that racism was a trending topic and decided to exploit it to foist himself back in the center and people roared back in response. Goldsmith, I kept saying, is one factor to this turbulent rift in the cultural landscape. Writers of color are not bit players in this man’s drama. Don’t whitewash this story, I urged him.

Wilkinson distilled my long interview down to two quotes:

“I am hoping that there has been enough anger that he won’t survive,” Cathy Park Hong, at Sarah Lawrence, told me. “Maybe he really did mean to be sympathetic, who knows. Two, three years ago, it would have been ‘That’s Kenny being Kenny,’ but in this racial climate you don’t get away with it.”
This is how he framed my views:

“He’s received more attention lately than any other living poet,” Cathy Park Hong, a poet and professor at Sarah Lawrence, told me resentfully. (Italics mine.)

1997 ball boy (Karl Malone), Thursday, 1 October 2015 19:58 (eight years ago) link

i actually dearly love poetry but there is little more embarrassing than academic poetry being written about in a way that flatters its pretensions to have any political relevance whatsoever

wizzz! (amateurist), Thursday, 1 October 2015 20:01 (eight years ago) link

and this whole tempest in a teapot is definitely one of those "lock these folks in a room together and toss away the key" affairs

wizzz! (amateurist), Thursday, 1 October 2015 20:02 (eight years ago) link

I made my points calmly, but translated in print, I become resentful (or whiny or hostile or, if I raise my voice slightly, hysterical). Wilkinson discredits my lucid points about institutional inequality by characterizing me as envious of the attention Goldsmith received. Envy is an emotion that is—according to the scholar Sianne Ngai, in her book Ugly Feelings—“unjustified, frustrated, and effete,” a “private dissatisfaction” or “psychological flaw.”

is resentfully/resentment generally seen as a loaded phrase? I guess 'envious' does seem dismissive, boiling down someone's objections to a personal rather than political issue, but resentful doesn't seem to be an exact synonym? but Hong does seem to resent the attention Goldsmith has received over this (I don't mean that in a pejorative sense, I can see why resentment would be justified), that is the whole thrust of the article, isn't it? what would be a better way for the NYer to have put that?

soref, Thursday, 1 October 2015 21:14 (eight years ago) link

i haven't read the nyer thing yet, have only skimmed parts like kg saying that the deep feels of an artist are worth the drowning of one thousand children, but

the most damning thing was that he broke away from his own method of pure appropriation by changing the structure of the autopsy report to feature the description of brown's genitalia as the closing line. his only line of defense was his strict adherence to the concept. without it, the criticism that he was effectively exploiting a black man's tragedy for his own personal gain is convincing. maybe that wasn't his goal, but that was the result.

this is super otm & worse is understated; he also made plain the language of the piece, iirc, like didn't read the text of the report as written, its medical vernacular, but broke it into plain english; beyond everything else this feels aesthetically inferior, to me, like the point of language is its technicality, but it's also just another blunt, shapeless, lazy face of his dull appropriation act. i used to be such a booster for kg because he's ubuweb but it's such offensive, boring, faux-boho garbage i think. the most valuable act of appropriation he could perform is restoring the text of cassandra gillig's twitter account which was deleted either on his account or else generally under his terrible trailing cloak. fuck kenny g imo.

crime breeze (schlump), Friday, 2 October 2015 04:16 (eight years ago) link

is this the woman who was tweeting that she hoped Goldsmith was murdered?

soref, Friday, 2 October 2015 10:10 (eight years ago) link

cmon man

crime breeze (schlump), Saturday, 3 October 2015 04:19 (eight years ago) link

https://pbs.twimg.com/media/CAQpqqNUcAE5FzS.png

insight into the mind of a killer

crime breeze (schlump), Saturday, 3 October 2015 04:21 (eight years ago) link

"favourites" what is this canadian twitter

go hang a salami I'm a canal, adam (silby), Saturday, 3 October 2015 04:36 (eight years ago) link

three weeks pass...

the first New Yorker Radio Hour podcast came out a few days ago. Starts with an interview with TNC on James Baldwin

Why because she True and Interesting (President Keyes), Tuesday, 27 October 2015 17:41 (eight years ago) link

that's cool. for anyone jonesing for a baldwin kick, american masters is currently streaming a short 1963 film where baldwin interviews people in San Francisco: http://www.pbs.org/wnet/americanmasters/james-baldwin-am-archive-take-this-hammer/2332/

1999 ball boy (Karl Malone), Tuesday, 27 October 2015 18:35 (eight years ago) link

three weeks pass...

i guess i was primed for this article on nick bostrom and superAI because i read his book a few months ago and have been kind of obsessed with the topic lately, but i liked this a lot:

http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

it's kind of a long one, divided up into 3 parts, so for those who want a shorter reading experience (read something besides the new yorker), part 2 could work as a standalone

Karl Malone, Thursday, 19 November 2015 22:57 (eight years ago) link

Very interesting

Why, though, does bostrom WANT to live forever? Never sees his wife and son, eats his disgusting meals out of a blender, spends all his time worrying about ludicrous non-problems... Who wants an infinity of THAT?

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 00:08 (eight years ago) link

it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction.

how do you guys even make it past this sentence - this is like the premise of a billion scifi novels from the 50s on

Οὖτις, Friday, 20 November 2015 00:18 (eight years ago) link

That's what i mean by a non-problem

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 00:20 (eight years ago) link

haha yes

I guess my question is more directed at the people that made his book a best-seller

Οὖτις, Friday, 20 November 2015 00:22 (eight years ago) link

heheh Lanier OTM:

Jaron Lanier, a Microsoft researcher and tech commentator, told me that even framing the differing views as a debate was a mistake. “This is not an honest conversation,” he said. “People think it is about technology, but it is really about religion, people turning to metaphysics to cope with the human condition. They have a way of dramatizing their beliefs with an end-of-days scenario—and one does not want to criticize other people’s religions.”

Οὖτις, Friday, 20 November 2015 00:25 (eight years ago) link

find the section about the great filter v interesting.

Karl Rove Knausgård (jim in glasgow), Friday, 20 November 2015 00:27 (eight years ago) link

I find the idea that a piece of software created by humans could somehow not be full of all kinds of dumb problems and self-destructing errors to be ridiculous on its face. Anybody who really believes that we're on the verge of coding up a resilient, invulnerable, self-correcting computer program has clearly been out of touch with the actual progress of information technology for, like, a pretty long time.

El Tomboto, Friday, 20 November 2015 00:40 (eight years ago) link

I mean the best and most advanced pieces of software in the world today require armies of people to constantly maintain them or else they collapse. Not to mention the poor hardware guys who have to shuttle all over data centers replacing rack components.

El Tomboto, Friday, 20 November 2015 00:45 (eight years ago) link

or maybe we've actually already created the superintelligence multiple times but successive versions of iOS keep breaking it. Thanks Apple for saving us!

El Tomboto, Friday, 20 November 2015 00:53 (eight years ago) link

well, there's the plausibility of creating superintelligence in the first place, and then there are the concerns about security/containment if it did exist. if you accept the former, i'm not sure why anyone wouldn't also accept the latter. but most people just write off the possibility. tombot, even if you're not into the whole AI thing i figure you'd still actually enjoy bostrom's book. iirc you were doing IT security stuff at some point (still are?), right? at least half of superintelligence is all about containment of an unprecedented security risk. it's just inherently interesting stuff, i think.

lanier's not wrong about a lot of the transhuman crowd. the rapture for nerds element is definitely strong with a lot of people. but the possibility of real superintelligence doesn't seem implausible to me. one thing i liked about bostrom's book is that he assesses 5 or 6 different strands of AI research separately. i feel like when people think about AI they are usually focused in on their own idea of what the word means and the pathway to it that makes the most sense to them. like the way that tombot (and most people i guess) talks about it just assumes that AI research is based on programming. but there are other ways that could achieve the same goal of replicating human intelligence. there's the idea of emulating the brain using a computer, or going the other way, augmenting brains. or there's the machine learning path. or combos of those things, along with traditional programming AI path. i dunno. the challenges of the programming pathway (like defining basic human terms like happiness) seem potentially impossible to overcome, but the modeling/emulation/augmentation paths just seem like a matter of time, even if it's a long time. if you could ever create even a basic level of self-learning potential, adding processing power x the always-on capability x the access to the internet would do the rest.

his book summarizes several separate paths to AI that seem like they have a non-zero chance of succeeding, and with different research communities:

machine learning
emulating brains
brain augmentation
department of commerce
master algorithm/programming

even if it seems improbable, it doesn't seem impossible. and if you grant even a small margin of probability to any single one of them, the potential consequences are insane.

Karl Malone, Friday, 20 November 2015 03:04 (eight years ago) link

I can definitely see that. It's just that it seems much more likely we'll be living in tent cities drinking our own urine in 50 years rather than worrying about rogue AI

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Friday, 20 November 2015 08:08 (eight years ago) link

guy is a freak, but a much better and more interesting one than previous new yorker examples like elon musk or marc andreessen

mookieproof, Friday, 20 November 2015 14:49 (eight years ago) link

if you could ever create even a basic level of self-learning potential, adding processing power x the always-on capability x the access to the internet would do the rest.

I'm pretty sure the last one there is what kills it, not even kidding.

El Tomboto, Friday, 20 November 2015 16:16 (eight years ago) link

If it started out on 4chan or something it would definitely be dangerous!

But what if most of the world's books were digitized and it could read the equivalent of a public library every night?

Karl Malone, Friday, 20 November 2015 16:23 (eight years ago) link

It would end up with some fucked up ideas! And probably break.

El Tomboto, Friday, 20 November 2015 16:26 (eight years ago) link

If a human was able to read an entire library over the course of their entire life, would they be fucked up? Probably yes, but just emotionally and socially and psychologically, due to them maniacally reading every second of their waking life until they died. But intellectually, I'm not sure.

I feel like it's a mistake to anthropomorphize computers and AI, too, so my bad there

Karl Malone, Friday, 20 November 2015 16:30 (eight years ago) link

I really need to get around to the Jill Lepore piece about polling - I heard her discussing it on NPR and she was great.

on entre O.K. on sort K.O. (man alive), Friday, 20 November 2015 16:32 (eight years ago) link

Unconstrained input is bad for people and it's bad for computers too. I don't think we are anywhere near being able to engineer a system or system-of-systems that is going to be able to safely ingest and usefully process the world.

El Tomboto, Friday, 20 November 2015 16:45 (eight years ago) link

"I started reading the article because the title implied something interesting. But it's just about this guy who's fucking crazy."

El Tomboto, Friday, 20 November 2015 20:41 (eight years ago) link

AI like Watson can read the equivalent of a public library rn. What it lacks is the exponentially larger data set that is acquired by living in a mobile body with amazingly acute sense apparatus, and interacting in a society and with the physical world, for decades.

Aimless, Friday, 20 November 2015 20:47 (eight years ago) link

And by reading a public library, Watson was able to conclude that Toronto is a city in the US.

El Tomboto, Friday, 20 November 2015 20:50 (eight years ago) link

And actual understanding ... xp

ledge, Friday, 20 November 2015 20:51 (eight years ago) link

to forestall any philosophical objections to the above:

When I asked Poggio about the results, he dismissed them as automatic associations between objects and language; the system did not understand what it saw. “Maybe human intelligence is the same thing, in which case I am wrong, or not, in which case I was right,” he told me. “How do you decide?”

ledge, Friday, 20 November 2015 21:19 (eight years ago) link

Not really a fan of Bostrom, especially when he tries to derive moral theories from wildly speculative probabilities, but I do like his simulation argument which does a pretty effective job of proving that we are probably living in a simulation (google it for more details, it's a doozy).

ledge, Friday, 20 November 2015 21:33 (eight years ago) link

When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand-up comedy

this guy

mookieproof, Saturday, 21 November 2015 00:22 (eight years ago) link

I find the idea that a piece of software created by humans could somehow not be full of all kinds of dumb problems and self-destructing errors to be ridiculous on its face. Anybody who really believes that we're on the verge of coding up a resilient, invulnerable, self-correcting computer program has clearly been out of touch with the actual progress of information technology for, like, a pretty long time.

Not sure if we still say OTM, but OTM. Here's a trenchantly funny take on the current state of brain simulation research:

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

o. nate, Saturday, 21 November 2015 02:01 (eight years ago) link

proving that we are probably living in a simulation

Upon consideration, this assertion doesn't appear to mean anything. If all-there-is is simulated, it could not possibly be all-there-is. But proving the existence of something that is not contained in all-there-is is a contradiction in terms.

Aimless, Saturday, 21 November 2015 02:33 (eight years ago) link

i guess i find myself in the oddly familiar position of defending the idea that there's a nonzero probability of something coming to pass in the distant future, but:

i think very few people are saying we're "on the verge" of discovery. among those in the field, it seems like the median estimate of superintelligence arising comes around 2050-60. there are plenty of researchers who believe it will never happen. and then there are some on the other side of the curve, too, who say maybe 10-15 years. i think people have the idea that everyone who is into AI is like ray kurzweil. from what i've observed, i think the kurzweil people are just the loudest, most obnoxious and visible edge to a community of practice that is much more varied.

i don't really trust anyone who is willing to put a 0% probability on things like this ever occurring. the only way you'd ever be able to do that is if you knew more than everyone else about the topic, which is hard to do because there are so many angles and approaches that people are taking and so many different disciplines that are involved. to reject it out of hand you have to look at a crazy talented field of researchers and phds and bona fide geniuses and textbook writers and philosophers and say, "i know more about this than all of them. every single one of them is wrong. there is a zero percent chance of it happening." i also think that it makes sense that it doesn't ~feel~ like we're close to solving AI. if there was already a semi-functional AI that could do some basic self-learning, we'd be on the very cusp of superAI, because its knowledge growth would be exponential, not linear. it seems unlikely that a low-level AI will be developed that then gets "smarter" in an incremental, predictable way. it seems much more likely that it would come out of nowhere.

http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/PPTExponentialGrowthof_Computing-1.jpg
http://waitbutwhy.com/wp-content/uploads/2015/01/gif
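(the exponential-vs-linear point above can be sketched with toy numbers; the growth rates here are invented purely for illustration, not drawn from any research:)

```python
# Toy comparison of linear capability growth vs. compounding
# (self-improving) growth. All rates and units are made up.
linear = [1 + 0.5 * year for year in range(21)]    # +0.5 units per year
compound = [1.5 ** year for year in range(21)]     # +50% per year, compounding

# For the first couple of years the two curves look almost identical...
assert abs(linear[2] - compound[2]) < 0.5
# ...but two decades on, the compounding curve has left the linear one
# behind by orders of magnitude -- it "comes out of nowhere".
assert compound[20] > 100 * linear[20]
```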

the important part of bostrom's new book is about the containment/security problem of superintelligence and how it needs to be addressed early on in the research cycle (bostrom talks a lot about why it can't wait until later in his book), and i haven't seen anyone say that it wouldn't be an enormous problem IF superintelligence were developed. if there's even a small chance of superintelligence being developed during any of our lifetimes, then i think it's worth thinking about. the consequences are enormous in terms of net present value, because even a tiny probability of it happening would have to be multiplied by the huge effect it would create. it's kind of like a version of pascal's wager, only with something that's actually possible instead of hell/heaven.
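(the pascal's-wager arithmetic can be written out; every number below is an assumption pulled out of the air to show the shape of the argument, nothing more:)

```python
# Expected-value sketch with invented numbers: a tiny probability times an
# enormous loss can still dwarf a modest up-front cost.
p_superintelligence = 0.001   # assumed: 0.1% chance in our lifetimes
cost_if_unmanaged = 1e12      # assumed: arbitrarily large loss if it goes wrong
cost_of_research = 1e6        # assumed: modest early containment research

expected_loss_doing_nothing = p_superintelligence * cost_if_unmanaged

# On these (made-up) figures, the expected loss of doing nothing
# exceeds the cost of the safety work by a wide margin.
assert expected_loss_doing_nothing > cost_of_research
```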

Karl Malone, Saturday, 21 November 2015 02:37 (eight years ago) link

however near/far/impossible, it seems like something worth thinking about. (tbh i'm more 'worried' about a cane toads-like mistake with various bioengineering choices)

otm is always otm, and more o. nate is always good

mookieproof, Saturday, 21 November 2015 02:47 (eight years ago) link

is moore's law still operating?

Mordy, Saturday, 21 November 2015 02:54 (eight years ago) link

i keep reading references to rumors that it will slow soon, but still operating for now, 50 years on

Karl Malone, Saturday, 21 November 2015 03:03 (eight years ago) link

It's already yielding diminishing returns, because the improvements in recent years have been about multiple cores on a chip, not making those cores run faster. That's harder to take advantage of.

o. nate, Saturday, 21 November 2015 03:06 (eight years ago) link

http://www.economist.com/blogs/economist-explains/2015/04/economist-explains-17


If Moore’s law has started to flag, it is mainly because of economics. As originally stated by Mr Moore, the law was not just about reductions in the size of transistors, but also cuts in their price. A few years ago, when transistors 28nm wide were the state of the art, chipmakers found their design and manufacturing costs beginning to rise sharply. New “fabs” (semiconductor fabrication plants) now cost more than $6 billion. In other words: transistors can be shrunk further, but they are now getting more expensive. And with the rise of cloud computing, the emphasis on the speed of the processor in desktop and laptop computers is no longer so relevant. The main unit of analysis is no longer the processor, but the rack of servers or even the data centre. The question is not how many transistors can be squeezed onto a chip, but how many can be fitted economically into a warehouse. Moore's law will come to an end; but it may first make itself irrelevant.
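(the doubling arithmetic behind the quote is simple enough to write down; the 1971 Intel 4004 baseline of roughly 2,300 transistors and the two-year doubling period are the usual ballpark figures, not precise data:)

```python
# Naive Moore's-law projection: transistor counts doubling roughly every
# two years, starting from the ~2,300-transistor Intel 4004 (1971).
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project a transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years of doubling every two years is 25 doublings:
# about a 33-million-fold increase.
assert transistors(2021) / transistors(1971) == 2 ** 25
```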

Karl Malone, Saturday, 21 November 2015 03:08 (eight years ago) link

I'm pretty sure that whether or not 'strong' AI can ever be achieved, someone will continue to pursue it. It's too deeply connected to the will to power ever to be laid aside. It's as alluring as perpetual motion.

Aimless, Saturday, 21 November 2015 03:09 (eight years ago) link

remember when that AI beat a turing test last year bc it mimicked teen-speak

Mordy, Saturday, 21 November 2015 03:21 (eight years ago) link

Also, the same guy scared of AI wants to upload himself into computers. At which point what does he think he will be?

as verbose and purple as a Peter Ustinov made of plums (James Morrison), Saturday, 21 November 2015 03:33 (eight years ago) link

