A Guy Trained a Machine To "Watch" Blade Runner. Then Things Got Seriously Sci-Fi.

Aja Romano, writing for Vox:

Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn't actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.

Instead, it was part of a unique machine-learned encoding project, one that had attempted to reconstruct the classic Philip K. Dick android fable from a pile of disassembled data.

In other words: Warner had just DMCA'd an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn't distinguish between the simulation and the real thing.
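If "machine-learned encoding project" sounds hand-wavy, the core technique behind a project like this is an autoencoder: a network that squeezes each film frame down to a small code, then tries to redraw the frame from that code alone. Train it on every frame of the movie, stitch the decoder's outputs back together, and you get an eerie, smeared reconstruction of the film. Here's a minimal sketch of the idea in PyTorch; to be clear, this is the general technique, not Broad's actual model.

    import torch
    import torch.nn as nn

    class FrameAutoencoder(nn.Module):
        """Compress a 64x64 RGB frame to a small code, then redraw it."""
        def __init__(self, latent_dim=200):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
            )

        def forward(self, frame):
            return self.decoder(self.encoder(frame))

    model = FrameAutoencoder()
    frame = torch.rand(1, 3, 64, 64)  # stand-in for one film frame, values in [0, 1]
    recon = model(frame)
    loss = nn.functional.mse_loss(recon, frame)  # training minimizes reconstruction error

The "pile of disassembled data," in this picture, is the trained weights plus the per-frame codes; the uncanny footage Warner flagged is whatever falls out of the decoder.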

I don't understand—couldn't they have picked, like, Home Alone? Why Blade Runner, of all the movies, for this specific project? Oh, I see:

In other words, using Blade Runner had a deeply symbolic meaning relative to a project involving artificial recreation. "I felt like the first ever film remade by a neural network had to be Blade Runner," Broad told Vox.

Mark this one down in the event that it's the beginning of the end.

§

Elon Musk Thinks We All Live in a Video Game. So What If We Do?

David Roberts, writing for Vox:

Everything we know about the world comes to us through our five senses, which we experience internally (as neurons firing, though Descartes wouldn't have put it that way). How do we know those firing neurons correspond to anything real out in the world?

After all, if our senses were being systematically and ubiquitously deceived, whether by demon or daemon, we would have no way of knowing. How would we? We have no tools other than our senses with which to fact-check our senses.

Because we can't rule out the possibility of such deception, we can't know for certain that our world is the real world. We could all just be suckers.

This kind of skepticism sent Descartes on an internal journey, searching for something he could know with absolute confidence, something that could serve as a foundation upon which to build a true philosophy. He ended up with cogito, ergo sum — "I think, therefore I am" — but that has not fared well with subsequent philosophers.

Start your Friday off right and get those meaning-of-existence wheels spinning in your brain!

(Bonus comment: imagine trying to explain this concept to Donald Trump. Better yet, imagine Donald Trump trying to explain it to someone else.)

§

The End of Facts

Jill Lepore, writing for The New Yorker:

A “fact” is, etymologically, an act or a deed. It came to mean something established as true only after the Church effectively abolished trial by ordeal in 1215, the year that King John pledged, in Magna Carta, “No free man is to be arrested, or imprisoned . . . save by the lawful judgment of his peers or by the law of the land.” In England, the abolition of trial by ordeal led to the adoption of trial by jury for criminal cases. This required a new doctrine of evidence and a new method of inquiry, and led to what the historian Barbara Shapiro has called “the culture of fact”: the idea that an observed or witnessed act or thing—the substance, the matter, of fact—is the basis of truth and the only kind of evidence that’s admissible not only in court but also in other realms where truth is arbitrated. Between the thirteenth century and the nineteenth, the fact spread from law outward to science, history, and journalism.

This piece made me think of a line in the Stephanie Vaughn short story “Dog Heaven”:

She believed, like the adults in my family, that a fact was something solid and useful, like a penknife you could put in your pocket in case of emergency.

There have never been more true things than at this point in time. It's a gift and a curse.

§

The Doomsday Invention

Raffi Khatchadourian, writing for The New Yorker:

He believes that the future can be studied with the same meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes.

In the nineteen-nineties, as these ideas crystallized in his thinking, Bostrom began to give more attention to the question of extinction. He did not believe that doomsday was imminent. His interest was in risk, like an insurance agent’s. No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of reducing an existential threat by a billionth of a billionth of one per cent would be worth a hundred billion times the value of a billion present-day lives. Put more simply: he believes that his work could dwarf the moral importance of anything else.
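The arithmetic is worth spelling out, since it's the engine of the whole argument: multiply the odds that the utopian scenario is possible by the number of lives at stake by the sliver of extinction risk you remove. A toy version, with every number an illustrative assumption rather than Bostrom's exact figure:

    # Toy version of the expected-value argument; every number here
    # is an illustrative assumption, not Bostrom's exact figure.
    p_possible = 0.01     # a one-per-cent chance the utopian scenario is possible
    n_lives = 1e52        # potential future lives: digital minds across the cosmos
    risk_reduced = 1e-20  # a billionth of a billionth of one per cent

    expected_lives = p_possible * n_lives * risk_reduced
    print(f"expected lives saved: {expected_lives:.0e}")  # 1e+30

The exact output doesn't matter; the point is that any astronomically large n_lives makes even an absurdly small risk_reduced outweigh anything you could do for the billions alive today. That's how the work gets to "dwarf the moral importance of anything else."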

I don’t remember the last time I read something that affected me on an emotional level this much. I’ve been having dreams about this article. I can’t stop thinking about it.

§

Kierkegaard Explains the Psychology of Bullying and Trolling—in 1847

Maria Popova:

In an immeasurably insightful entry from 1847, 34-year-old Kierkegaard observes a pervasive pathology of our fallible humanity, explaining the same basic psychology that lurks behind contemporary phenomena like bullying, trolling, and the general assaults of the web’s self-appointed critics, colloquially and rather appropriately known as haters. Kierkegaard writes:

“There is a form of envy of which I frequently have seen examples, in which an individual tries to obtain something by bullying. If, for instance, I enter a place where many are gathered, it often happens that one or another right away takes up arms against me by beginning to laugh; presumably he feels that he is being a tool of public opinion. But lo and behold, if I then make a casual remark to him, that same person becomes infinitely pliable and obliging. Essentially it shows that he regards me as something great, maybe even greater than I am: but if he can’t be admitted as a participant in my greatness, at least he will laugh at me. But as soon as he becomes a participant, as it were, he brags about my greatness.

That is what comes of living in a petty community.”

Nailed it.

§

Fighting Cancer by Controlling It, Rather Than Killing It

Jerome Groopman:

The breakthrough is notable in part for the unconventional manner in which the drug attacks its target. There are many kinds of cancer, but treatments have typically combatted them in one way only: by attempting to destroy the cancerous cells. Surgery aims to remove the entire growth from the body; chemotherapy drugs are toxic to the cancer cells; radiation generates toxic molecules that break up the cancer cells’ DNA and proteins, causing their demise. A more recent approach, immunotherapy, coöpts the body’s immune system into attacking and eradicating the tumor.

The Agios drug, instead of killing the leukemic cells—immature blood cells gone haywire—coaxes them into maturing into functioning blood cells. Cancerous cells traditionally have been viewed as a lost cause, fit only for destruction. The emerging research on A.M.L. suggests that at least some cancer cells might be redeemable: they still carry their original programming and can be pressed back onto a pathway to health.

This is a fascinating article. The approach seems so obvious that it feels simultaneously maddening and completely understandable that it took so long to develop. By the end of the piece, though, I wondered what people will make of it in 100 years. Or, worse, in 200. What about 500? So much of the talk and work around cancer proceeds from the assumption that it's something we'll eventually overcome. But what if (and pardon me for getting super nihilistic for a moment) cancer is something we aren't meant to beat, eradicate, or cure? What if we're beating our heads (and wallets) against the wall, fighting an enemy on the molecular level, for nothing?

§

The Biggest Apple Watch Problem? Time Is an Illusion

The announcement of a new Apple product line is always a big event, and the recent unveiling of the Apple Watch (technically, the WATCH) was no exception. I've deliberately avoided forming any real opinions on a product that isn't going to be released for at least another six months. (If you're interested, though, I think the new messaging paradigm they're attempting to introduce with the watch is the real that's-some-Jetsons-shit feature.) That hasn't stopped the rest of the internet from weighing in, however.

Of course, if you want some good, thoughtful reading on the subject, you could start with “A Watch Guy’s Thoughts On The Apple Watch After Seeing It In The Metal” (via Daring Fireball). I know I shouldn't be surprised, but damn—there's an entire Watch Nerd World out there, apparently.

Or, if you’re (like me) nerdy and lame and not totally all there, you could dive into Dylan Matthews’ take:

There are plenty of reasons to be skeptical of the just-revealed Apple Watch. Who wants to use a touchscreen that tiny? Why would I want to send my heartbeat to my friends (a real feature of the watch emphasized in the product's unveiling)? Why should people who aren't titans of finance spend $349 on a watch?

But the best reason for skepticism is that it's, at root, a watch, with the primary purpose of telling time. And time is an illusion.

Now that’s a think piece.

§