“Dreams seen by a man-made machine
How does it seem, how does it seem
That we can see each other’s dreams”
– CAN, “Last Night’s Sleep” from Until the End of the World soundtrack
It’s a rainy Friday here in western MA, the perfect day to curl up with some scientific papers. There are two papers sparking a lot of online discussion today. One is CERN’s fascinating announcement of neutrinos that appear to travel faster than the speed of light. I know perfectly well that I’m not going to understand that paper, so I’m waiting for Chad Orzel at Uncertain Principles to explain it to me. (After all, he’s the guy who explains relativity and quantum mechanics to his dog – I suspect he’ll get me to understand the paper and the controversy.)
Another is this fascinating paper from the Gallant Lab at UC Berkeley, where the authors experimented on themselves, imaging their brains with functional MRI (fMRI) while they watched movies of the natural world. By analyzing the blood flow to their brains as they concentrated on these videos, they built a set of models that allow them to reconstruct visual imagery from brain response.
The reconstructed images are built from a library of 18 million seconds of YouTube video. A Bayesian system matches the brain signals to 100 likely videos and averages those videos into a visualization of what the scientists saw. We see the video the experimenters watched, then the reconstruction made from the brain signals and the collection of YouTube clips.
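To make the idea concrete, here is a toy sketch of that matching-and-averaging step. This is not the Gallant Lab’s actual pipeline – the function names, the encoding-model interface, and the distance-based scoring are all my own simplifying assumptions – but it captures the shape of the procedure described above: score every library clip by how well a per-subject encoding model predicts the observed brain response from it, then average the best-matching clips into one reconstruction.

```python
import numpy as np

def reconstruct_frame(brain_response, clip_library, encoding_model, k=100):
    """Toy sketch of a Bayesian-style reconstruction (assumed API, not the
    lab's code): pick the k library clips whose predicted brain responses
    best match the observed one, and average them pixel-wise."""
    # Predicted fMRI response for each candidate clip
    # (encoding_model is a hypothetical per-subject model).
    predicted = np.array([encoding_model(clip) for clip in clip_library])
    # Likelihood proxy: negative distance between prediction and observation.
    scores = -np.linalg.norm(predicted - brain_response, axis=1)
    top_k = np.argsort(scores)[-k:]
    # Pixel-wise average of the best-matching clips becomes the "visualization".
    return np.mean([clip_library[i] for i in top_k], axis=0)
```

Averaging a hundred clips is why the published reconstructions look so blurry and dreamlike: no single YouTube clip matches the brain signal exactly, so the output is a haze of the closest candidates.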
In other words, the scientists are using fMRI to pull images they’ve seen out of their brains and put them onto screens. That’s a pretty mind-blowing idea. And if you read the paper, you’ll see that the researchers aren’t just thinking about reconstructing images from sight: “This is a critical step toward obtaining reconstructions of internal states such as imagery, dreams and so on.”
The paper led Bob Moon on Marketplace last night to declare that scientists were exploring concepts he’d never even thought about.
It’s something I’d thought about, but only because Wim Wenders had thought about it at length.
“Until the End of the World” is one of the strangest and most beautiful movies ever made. It’s a deeply flawed gem, a commercial disaster remembered more for its soundtrack than its narrative. That’s in part because the film that screened, at over two and a half hours, was the “Reader’s Digest” version of a 280 minute film that Wenders shot and realized he could never get shown in theaters. For years, the only way to see the full film was to catch one of Wenders’s screenings at universities. (You can now purchase a set of region 2 (European) DVDs, which won’t play on a US DVD player, but which will let you see the full film on a laptop or other player.)
In an odd way, the 280 minute version is, itself, an abbreviation of Wenders’s vision. Wenders is a master of the road film, and Until the End of the World wanders from Europe to North America to China to Australia, and eventually into orbit above the planet. But Wenders had hoped to cover all continents, and the film was originally intended to end in the Congo, echoing a musical motif of pygmy lullabies introduced in the first act of the film. I’ve seen the short cut of the film dozens of times, the director’s cut twice, and feel like I’m starting to be able to imagine the film that might have been, had not money, logistics and sanity intervened.
The plot is convoluted, but at its core, it’s about a technology invented by reclusive scientist Henry Farber to allow his wife Edith to see, though she’s been blind since childhood. The camera he invents records both pictures and the brain activity of the videographer – afterwards, the images can be transmitted from the brain of the videographer (Farber’s son, Sam, and his lover Claire) to Edith’s brain, allowing her to see video interviews with her family, who are scattered around the world.
Rather than being exhilarated by the return of her vision, Edith is overwhelmed by how much everyone has aged and the ugliness of the world – she dies shortly after. Claire, who’s proven a better operator of the camera than Edith’s son Sam, becomes addicted to another use of the camera: watching her dreams from the previous night. The third act of the film is dominated by the abstract imagery of Claire’s dreams, reconstructed on film from her neural activity. (The still above is from that section of the film.)
Part of what makes Wenders’s vision of the future so compelling for me is that the technology his scientist creates is far from magical – it requires intense concentration from camera operators to capture images, and Sam briefly goes blind from the strain of capturing them. Transmitting those images takes another profound effort. In a striking parallel, it sounds like the Berkeley process is a lot of work as well:
“It takes several hours to acquire sufficient data to build an accurate motion-energy encoding model for each subject, and naive subjects find it difficult to stay still and alert for this long. Authors are motivated to be good subjects, so their data are of high quality. These high quality data enabled us to build detailed and accurate models for each individual subject.”
If the authors of this paper haven’t seen Until the End of the World, I can only hope they’ll go out and watch it immediately.
It’s a long way from the visualizations produced by the Berkeley researchers to “the disease of images”, the addiction Claire nearly succumbs to. But it’s fascinating for me to see science catch up with the most speculative of speculative fiction. I can only imagine the debates of ethics – and aesthetics – should we reach a point where we can see each other’s dreams. (I suspect YouTube would suddenly become a whole lot more interesting.)