Essentially it queries Foursquare for search terms, passes the returned place name to Yahoo to look up its latitude/longitude, and then uses that lat/long with Google Street View to display images of the location. The result is shown below: when I search for Wellington I can see a whole lot of people checking into their homes and publishing this information to the world...
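That pipeline can be sketched roughly like this. To be clear, this is a minimal illustration and not the site's actual code: the function names and data shapes are my own, the geocoder and Foursquare calls are stubbed out as injected callables, and the `API_KEY` value is a placeholder.

```python
# Rough sketch of the mashup pipeline: Foursquare search -> geocode the
# place name -> fetch a Google Street View image for the lat/long.
# Names and parameters here are illustrative assumptions, not the site's code.
from urllib.parse import urlencode

API_KEY = "YOUR_KEY_HERE"  # placeholder - you'd supply your own key

def streetview_url(lat, lng, size="400x300"):
    """Build a Google Street View Static API request URL for a lat/long."""
    params = urlencode({"size": size, "location": f"{lat},{lng}", "key": API_KEY})
    return f"https://maps.googleapis.com/maps/api/streetview?{params}"

def search_and_display(term, foursquare_search, geocode):
    """Glue step: search check-ins, geocode each venue name, return image URLs.

    `foursquare_search` and `geocode` are injected callables standing in for
    the real HTTP calls to Foursquare and the Yahoo geocoding service.
    """
    urls = []
    for place_name in foursquare_search(term):
        lat, lng = geocode(place_name)
        urls.append(streetview_url(lat, lng))
    return urls
```

The point of the sketch is just how little glue is needed: three public services chained together turn a casual check-in into a street-level photo of where you are.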
Another rather humorous feature of the tool is its general searches under the categories: Who wants to get fired? Who's hung over? Who's taking drugs? and Who's got a new phone number?
So what does it mean? Well, it makes some of the privacy concerns about such location services a little more obvious to people who probably don't realize what they're doing when they check in (despite the fact that it is reiterated time and time again by privacy experts). I've mentioned this before too:
Modern society seems to be geared up to deal with and accept casual trade-offs of privacy - I personally subscribe to this. You're reading my blog and know I have a Twitter account; I'm willingly giving out a plethora of information about myself into the fabric that is the Digital Age. But hey, who really cares about what Mark is doing? Well, a tech-savvy burglar could potentially use the information I'm publishing to find out where I live, when I am at home (and, more importantly to them, when I'm not) and read my Twitter to find out that I just bought a brand new TV... food for thought.
Anyway, take a look at the website: www.weknowwhatyouredoing.com
So I came across this on Google Research's blog:
Traditionally, machine learning requires previously labeled data as a training set in order to work on new input data. Many researchers these days are exploring methods for working with data that has not been labeled beforehand, essentially allowing the system to learn from 'unknown' data and associate items that were not previously known to be connected. Artificial neural networks are a conceptualization/mimicry of a mammalian brain's learning process.
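To make the unsupervised idea concrete, here is a toy sketch - emphatically not Google's method (they trained a large deep network), just the simplest classic example of the same principle: a k-means clusterer that groups unlabeled points without ever being told what the groups are.

```python
# Toy illustration of unsupervised learning: k-means groups unlabeled
# 2-D points by similarity, with no labels ever provided. This is a far
# simpler technique than the deep network in Google's research.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2-D points into k groups; returns a cluster index per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)       # pick k initial centers from the data
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        for i, (x, y) in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: (x - centers[c][0]) ** 2
                                        + (y - centers[c][1]) ** 2)
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return assign
```

Feed it points from two well-separated blobs and it discovers the two groups on its own - the same "structure from unlabeled data" idea, scaled down from a billion connections to a dozen lines.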
Google's neural network for this research was built on a distributed infrastructure of 1,000 machines with a total of 16,000 CPU cores, and the data fed into the system produced models with more than 1 billion connections. The input data was derived by showing the system 10 million stills (200x200px) taken from YouTube videos.
Google say that the neural network learned to respond to pictures of human faces, human bodies and cat faces - cat faces being the detail that journalists are picking out from the research because, let's face it, it's funny - and yep, cats rule the internet - as we kind of showed in some research we did with a social robot with a persona geared around cat fanciers: Socialbots
Link to Google's research:
Well, it's been almost 3 months since we won the Social Bots competition (read about it in my previous blog post) - since then we've been referenced several times in the media, so I thought I'd take the opportunity to link to the articles written.
Also... while i'm writing a blog post...
What the heck, New Zealand? I'm seriously disheartened right now that the Copyright (Infringing File Sharing) Amendment Bill has been passed - if you're not from New Zealand, check out this article to better understand why so many are up in arms about it:
There are many reasons why this shouldn't have happened (including the Richard Stallman debate of Copyright vs Community in the age of computer networks) - but perhaps the point I object to most is not that people who infringe copyright are punished (that is, I think, another kettle of fish entirely), but that this law gives the power to disconnect internet users without sufficient proof of an offence - the word of a copyright holder/distributor seems to be considered sufficient. Ever heard of innocent until proven guilty?
If you have visited my blog before you will have seen my post Robots, Trolling & 3D printing, where I described the SocialBots competition that my team and I won. The SocialBots competition was run by the Web Ecology Project and involved us competing against other teams in a two-week, all-out battle of automated social shaping.
As you can see in the graph below of the set of 500 users, the teams were able to totally distort the shape of the network graph, so that it became pivoted around the 3 bots in the competition.
Since the competition ended, Tim Hwang of Web Ecology has been talking to a variety of different people about the competition and its implications. One of these talks was at Ignite San Francisco, and its video can be found here: Exterminate, Exterminate: On the Robotic Subjugation of Twitter.
So where to now after all of this? Consider that after only 2 weeks, the 3 teams in the competition were able to trigger a huge amount of activity in the social graph. Overall we were able to elicit 250 responses and create mutual follows with close to 500 of the target set of users. We observed some interesting events in the social graph as a result of our social shepherding: we were able to bring together users in the set who had previously not interacted, and we were able to shape an entire community of activity around our bots (as can be seen in the after shot above). But what have we really learned from SocialBots?
Tim is now setting our sights a bit higher with a new project called "The Narrows".
What is the Narrows? Well, essentially it's the "first ever robot constructed social superstructure" - we're using our skills to recycle and extend the technology we implemented and the knowledge we gained from SocialBots, to create an architecture that can really influence a massive-scale social infrastructure. Essentially we aim to build a swarm of bots with statistically-predictable social outcomes that we can use to actively mould, shape, rewire and redirect social groups online - groups that contain thousands of users (or potentially hundreds of thousands).
To measure success, we're going to start with two sets of totally unconnected Twitter users (approx. 5,000 users in each set) and, over the course of 6 to 12 months, use waves of social-engineering cyber-bots to create a support structure that weaves and melds the two user sets together through a social bridge. The scaffolding driving this interaction (our bots) will then be largely removed from action, leaving behind only a smaller set of caretaker bots that will maintain and shepherd the now-joined social groups.
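One simple way to quantify that weaving is to count follow relationships that cross between the two user sets as the experiment progresses. Here's a minimal sketch of that metric - the function names and data shapes are my own illustration, not actual project code:

```python
# Sketch of a success metric for the bridging experiment: count follow
# relationships that cross between the two (initially disjoint) user sets.
# Function names and data shapes are illustrative assumptions.

def cross_set_edges(follows, set_a, set_b):
    """Count (follower, followee) pairs with one endpoint in each set."""
    a, b = set(set_a), set(set_b)
    return sum(1 for u, v in follows
               if (u in a and v in b) or (u in b and v in a))

def bridge_density(follows, set_a, set_b):
    """Fraction of possible cross-set directed edges that actually exist."""
    possible = 2 * len(set_a) * len(set_b)  # directed edges, both directions
    return cross_set_edges(follows, set_a, set_b) / possible if possible else 0.0
```

At the start of the experiment the density between the two 5,000-user sets should be essentially zero; watching it rise (and then hold steady once the caretaker bots take over) would be the measure of whether the bridge took.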
A few of the founding members of Team EMP and I have joined with Tim on "The Narrows" - and the first iteration of our engine will be called "Pacific Social".
Keep checking my blog for updates on this exciting new project and follow our twitter account for the project @PacSocial.