I feed in Quiet Babylon, but I remain uneasy about that. The more time I spend thinking about social networks like LiveJournal, Facebook, Twitter, etc, the more I feel like automatically importing feeds is a bad move. It's why I've largely abandoned Facebook (except for event invites and messaging people whose email I don't know). The site has become a slurried mess of auto-updates, imported garbage, and miscellaneous content which mashes together updates from my dearest friends with people I met once at a social networking workshop.
Indeed, I feel so strongly that auto-importing feeds is a bad move that I've written a manifesto to that effect.
The last time I discussed auto-importing Quiet Babylon, people seemed to think that it wasn't so bad. I was motivated at the time by wanting a bigger audience, and by the fact that you guys were commenting on the automatic QB LJfeed, which gave me no notifications, meaning that I wouldn't see what you had to say. I'd prefer that you comment over there than over here, but who am I to tell you what to do with your lives?
So I'm thinking of turning off the feed entirely. If you wanted to keep reading, you could follow the syndicated feed or just follow the actual site (updated Mondays and Thursdays!). If you wanted to comment, you could send me an email or make a post of your own discussing your ideas or any number of other solutions.
In the summer of 2009, I was in San Francisco for the first time and on my way to meet Alexis Madrigal and Sarah Rich for a drink. Equipped with only a photocopied map and a dumb cellphone, I got off at the appointed BART stop with instructions to head south and no idea which way that was. Ever the intrepid explorer, I worked out the solution using the phone’s clock, the map, and the location of the sun. That’s so remarkable that it’s worth saying a second time: In 2009, in a major metropolitan area, confused and disoriented, I resorted to navigation by the sun.
Here’s how that story goes in Edmonton, a city with which I am equally unfamiliar: I get off at the appointed stop, pull out my smartphone, put in the address, and the phone works out where I am and points me to my destination.
The difference? International roaming charges haven’t crippled me.
The wristwatch/phone hybrid used to be the way forward. Now it’d be considered clunky or annoying to use – either a case of too much bulk or no room for buttons – and associated with all sorts of bizarre RSI. The delicious irony is that today most people use mobile phones to tell the time.
Written by: Andrey Pissantchev
Exactly! I don’t think it’s so much a failure as us realizing we’d rather not have a phone strapped to our wrist. Look at it another way: a cell phone is really a pocket watch converged with a phone. And a camera. And a calendar. And a datebook. And a rolodex.
I’m disappointed that we don’t have these too! But, as the author points out, we have the same functionality, it’s just added to the phone rather than the wristwatch.
I used to coach debating full time, which meant a lot of staring at a countdown to check speech length. I took my watch off so often that I started just carrying it in my pocket. Then I got a phone with a timer function. I don’t have a wristphone, but I do have a pocketwatchphone.
What’s this all about?
In the waning days of 2009, Julian Dibbell mentioned videophones as a holy grail technology that ended up being a b-teamer. I liked the concept so much that I ran a contest on Quiet Babylon, looking for more examples.
The entries were fantastic and after a long and occasionally contentious dinner-meeting with my gracious panel of judges, it’s time to announce the results. I’d like to thank Ryan North of Dinosaur Comics and Project Wonderful and street artist Poster Child for their time and insight.
Beyond the winners presented here, there were 8 short-list finalists. Rather than cram them all into a single long post that no one reads, I’ll be featuring each of the others separately over the coming weeks, along with commentary by the judges.
On to the winners.
Diversity prize: Weather Control
In 1845, it was suggested that a continental meridian of fire—six hundred miles of prairie burning from North to South—could settle weather over the eastern half of the continent. A few years later, Congress considered ordering a great dike built athwart the Gulf Stream in order to gentle seaboard climes. The twentieth century brought cloud-seeding cannon—used most recently in China, where the Army fired silver iodide into clouds during the 2008 Olympics. As holy grails go, this may be the supremely ironic one: while we cannot control the weather, our influence over the climate may be our downfall.
Matthew Battles is the coeditor of hilobrow.com and writes about language, literature, and technology for a variety of publications in print and online.
Really loved this one, and the parallel between wanting to control the weather and ending up with the climate change we’ve got today. I still hope that, one day, I’ll be able to say how it’s too bad the Post Office isn’t as efficient as the Weather Service.
A lot of science fiction cautionary tales are about how attempts to build a controlling technology backfire and we are overwhelmed by the very thing we sought to master: I’m thinking of killer robots, Jurassic Park, and so on. In the science fiction version of the weather control story, those six hundred miles of prairie fire interact with the great dike resulting in thousands of tornados and permanent drought. As Matthew points out, the real story is much more terrifying.
Further irony: we are actually seeding clouds 24/7 as a by-product of air travel. If jet air travel stops as a result of dwindling jet fuel, losing all that artificial, man-made cloud cover may further exacerbate our climate woes.
Grand Prize: Voice Recognition
No luck Fir tree could have been more well come than voice wreck ignition. The eyed yeah that one could control their tipi, con pewter, or even author mobile with a quick Spokane commando was an inversion in futuristic dither furniture; Shirley not every séance fiction right her would-be rung. However, none cold fours sea the the faculties present in trains lathing human speech tooth next. In tend, voice recon it shunt to kits place beside other trot shuck failure soft heck anthology, whore gotten at eels to for the in mediate future.
Written by Robert Ewing of Laughing at Nothing, a group of filmmakers so vagrant that they don’t even have a website right now.
This is so well written! If only my 5th grade teacher could have been as accepting as I am of the absurdist style of an essay written via voice recognition software, I’d have been a much happier 5th grader. I was so sure that we’d have this sorted by the time I was an adult, but I’m still waiting.
Okay, I was the dissenting voice here, mainly because I’ve got a degree in Computational Linguistics, so I am just TOO CLOSE to the problem. The entry was really well-crafted, and the point that voice recognition isn’t really there yet is a good one! But I think it’s a little unfair because voice recognition is a new technology, and nobody is pointing to it and saying “There! FINISHED.” It’s young, it’s new, and there are still lots of challenges left to overcome before we’ll be able to chat up our robot pals.
But then the other judges told me the entry was awesome and I was making excuses for the entire field and I thought, okay, maybe, but let’s see you analyze waveforms to statistically find word boundaries, and then use n-gram processing to figure out the most likely series of words, using that predictively on the candidate word form currently being processed.
And then I was like, man, I’m a cartoonist now.
Anyway, a great entry!
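The n-gram step Ryan mentions can be sketched in a few lines. This is a toy illustration only, not his actual pipeline: the tiny "corpus" and the candidate transcriptions are invented, and real recognizers use far larger models with proper smoothing. The idea is just that, among competing word sequences, the one whose word-to-word transitions are most probable wins.

```python
# Toy bigram language model: score candidate transcriptions by how
# likely each word is given the word before it, then pick the best.
from collections import Counter

# A made-up training corpus for the example.
corpus = "recognize speech with a nice beach and recognize speech again".split()

# Count word pairs (bigrams) and single words (unigrams).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_score(words, smoothing=1e-6):
    """Product of P(w_i | w_{i-1}), with a crude floor for unseen pairs."""
    score = 1.0
    for prev, word in zip(words, words[1:]):
        total = unigrams[prev]
        prob = bigrams[(prev, word)] / total if total else 0.0
        score *= prob or smoothing  # unseen bigram: heavily penalized
    return score

# Two hypothetical outputs of the acoustic stage, one sensible, one not.
candidates = [
    "recognize speech".split(),
    "wreck a nice beach".split(),
]
best = max(candidates, key=bigram_score)
print(" ".join(best))  # prints "recognize speech"
```

The classic "recognize speech" vs. "wreck a nice beach" confusion is exactly the kind of ambiguity this scoring is meant to resolve.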
As you can tell, there was some controversy over the selection of this entry. Ryan wanted to argue that the tech isn’t there yet but Dragon Naturally Speaking is at version 10 and many companies have used voice-control in their phone labyrinths for years. For a technology that Ryan wants to say is not ready for primetime it sure is widely available commercially. This is what makes it so disappointing. The tech is plainly not done, but there’s a group of people with order forms saying “There! FINISHED,” ready to take your money.
Leaving aside the squabbles of the panel, for me what put this entry over the top was the sheer excellence of the “show, don’t tell” writing.
In the Pearson International Airport in Toronto, there’s a walkway that fascinates me. The walkway in question runs from where you get off the plane to the exit. If you get off the plane and have luggage, you proceed down the stairs to the carousels and the herd of humanity. If you don’t have luggage to collect, you can bypass the whole thing and take this walkway. It passes over the luggage claim area and then passes over the people waiting for their loved ones to emerge. A few meters later, its own set of doors opens and you are outside in a loading area, hailing a cab. Unremarkable.
When it comes to video games, creating enemy artificial intelligence for a stealth-action game tends to be much harder than creating the AI for a plain shooter. One reason is a more complex sensory system. Another is the sheer amount of time that you spend in their presence.
In a shooter, the AI is unlikely to spend more than a few seconds alive after they appear on screen. Even when they do, it’s in the context of bullets, rockets, and grenades flying everywhere. There’s a limited emotional and intellectual range required for those circumstances.
In a stealth game, the player is likely to spend several minutes in the presence of the AI, silently observing them. This gives the enemy plenty of opportunities to be unbelievably stupid. In stealth games, the player watches the enemy move around, talk to their friends, get nervous, and investigate sounds. The extra exposure makes it easier for the AI to fall into the uncanny valley because the player has time to get to know them. The more an AI is watched, the easier it is to tell whether it's human.
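To make the contrast concrete, here is a minimal sketch of the kind of state machine a stealth-game guard needs (all names and thresholds are invented for illustration). Where a shooter enemy can get away with a single "attack" behaviour, a guard the player observes for minutes needs a believable ramp from patrol to suspicion to investigation.

```python
# Minimal stealth-guard state machine: suspicion accumulates from
# sounds, direct sight skips the ramp, and suspicion decays over time.
from enum import Enum, auto

class GuardState(Enum):
    PATROL = auto()
    SUSPICIOUS = auto()
    INVESTIGATE = auto()
    ALERT = auto()

class Guard:
    def __init__(self):
        self.state = GuardState.PATROL
        self.suspicion = 0  # grows as the guard hears things

    def hear_sound(self, loudness):
        self.suspicion += loudness
        if self.state == GuardState.PATROL and self.suspicion >= 2:
            self.state = GuardState.SUSPICIOUS
        if self.suspicion >= 5:
            self.state = GuardState.INVESTIGATE

    def see_player(self):
        self.state = GuardState.ALERT  # direct sight skips the slow ramp

    def calm_down(self):
        # Suspicion decays; a guard who finds nothing returns to patrol.
        self.suspicion = max(0, self.suspicion - 1)
        if self.suspicion == 0 and self.state != GuardState.ALERT:
            self.state = GuardState.PATROL

guard = Guard()
guard.hear_sound(3)   # a footstep: the guard becomes suspicious
print(guard.state)    # prints GuardState.SUSPICIOUS
guard.hear_sound(3)   # another noise: now investigating
print(guard.state)    # prints GuardState.INVESTIGATE
```

Every one of those intermediate states is something the player sits and watches, which is exactly why the shooter enemy's binary alive/shooting logic won't do here.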
photo credit: MATEUS_27:24&25
A recurring puzzle of evolution is the persistence of certain entities or behaviours that – at first glance – seem to harm the reproductive fitness of individuals. From the naive standpoint, an individual worker ant makes a mockery of evolution. They’re sterile; a reproductive dead-end.
One way of conceptualizing the answer is the unit of selection. It’s the idea that natural selection happens at a variety of levels: genes, cells, individuals, groups. When you look at ants, you don’t just look at individuals, you also look at colonies. At the colony level, there is an enormous benefit to specialization. Having thousands of sterile disposable workers lets you do all kinds of things that individually self-interested organisms mightn’t.
Here's how to donate to the Canadian Red Cross
Donations as low as $5 accepted.
On January 12th, Google announced that they’d been the target of a series of cyber attacks aimed at accessing the Gmail accounts of Chinese human rights activists. Their response was to stop censoring Google.cn effective immediately. In the same week, Google was granted a patent that would enable them to replace real billboards with virtual ones in Google Street View.
Leaving aside the utter surprise at finding myself on the pro-corporate side of the government v. megacorp sovereignty wars, both of these stories are about the same thing: filtering data streams to match one set of aims or another.