Keep calm and wait for Chrome to sync all your passwords you think you just lost

This morning I opened my Chrome browser on Ubuntu 14.04 the way I do every morning. But all saved passwords were seemingly lost. Granted, I could have recovered them from various lists, reminders and pure memory and logic, but I still consider this the worst nightmare an average computer user can be confronted with these days.

So I started googling the problem, and within a blink I found a horrifying error report from 2012 that describes how Google Chrome on iOS syncs an empty profile over your good one. Since I had opened Chrome on my iPhone after quite a long absence, my first thought was: you can’t be serious. I started to get angry.

My searches went on. Just for those of you who come across a similar problem and land here: make sure to read the last paragraph first (like: now)! Here are some more obvious search hits:

The last paragraph:

And while I was reading all that stuff, crying and pulling my hair, I thought: hey! What if the sync is currently running and catching up while I try to find an obfuscated solution for a problem that doesn’t actually exist? I went back to my Chrome profile settings, searched for Passwords and… voilà. While I had been searching for a solution, Google had recovered all of them in the background; it just took a while.

Phew.

World War Hollywood. You can’t be serious!

Spoiler alert. I mean: if you really dare to go to that movie, do it; but be warned: it’s worse than I could ever have imagined.

I bought Max Brooks’ book World War Z one day after seeing the first trailer on iTunes. I had it on my wishlist anyway, and the story looked promising, so I spent the 10€ on an original version (as you might notice, my English is far from native; as you might’ve guessed, I’m German). I read it within 7 days, and my verdict was better than average: a Zombie novel, but with a great twist. Not the usual pulp story you might’ve expected.

World War Z is a story about a worldwide catastrophe, told in short stories as seen by individuals, assembled chronologically, spanning a time frame of around 10 years. It’s told from an outsider’s perspective. There are some classical horror elements in the novel, and there are some of the old-fashioned Zombie clichés, but what makes the book outstanding are many finely tuned, innovative twists on the genre. To give you some ideas:

Zombies freeze during winter and thaw in spring. That means winter is a good time to hunt for their heads. Zombies die when their heads are ripped off.

Survivors build floating islands out of rogue ships and every vessel that can stay afloat. One of the book’s major turning points is set on one of those islands.

The “Battle of Yonkers”, a huge military confrontation at the beginning of the Zombie war, is a recurring motif in the book. Many individuals refer to it as the turning point of human hope: after the army gets overrun, the place is burned down with a thermobaric weapon. Afterwards, humanity has all but given up on itself. The book mainly revolves around the aftermath of the Yonkers confrontation: if people stand together, they find their way out. As it turns out, the solution is simpler and much more obvious than in any Zombie story ever: go from house to house, rip Zack’s head off and proceed to the next.

What Mr Pitt and his bunch of no-brainer producers/scriptwriters/directors have made out of the original story is a ridiculous attempt to recreate that story in a Hollywood movie. I mean, I’ve seen many bad and even worse movies over the last decade. And don’t get me wrong: Brad Pitt is an outstanding actor, as he made clear in Fight Club and Snatch. If World War Z were just a bad movie, I’d be okay with it. But it’s worse. In the beginning, four studios present their logos: a great indicator that lots of interests have been involved. Then the story starts right into the outbreak in Philadelphia. From one second to the next, Gerry (the name the studio gave the main character, who stays completely unmentioned in the written story) is confronted with hordes of rogue Zombies. So far so good, I thought – that opening was about as predictable as expected.

What follows are two hours of an ongoing hide-and-seek slaughter mess that takes some of the battlegrounds from the novel (esp. Jerusalem) and mixes them with a few images borrowed from the original (some marine ships forming the military headquarters of the world’s last line of defense). Gerry gets ripped away from his family – and Pitt makes clear that he’s the awesome family father he is in real life (I hope he is, and I want to believe so). Hugs his daughter, hugs her again, tells his family he loves them, calls his darling every day.

C’mon. This is a Zombie story, the world’s coming to an end, and the original is written in a documentary style, just reporting. We’ve seen love-and-lost-love stories over the past six decades, and I can’t tell you how fed up I am with the cliché of a neat American family of four. We know that America and Mr Pitt want to tell us that family is the center of your life. What’s worse: you simply won’t buy it from these actors. And Gerry’s wife actually heats up the situation by not wanting her own husband to save the WORLD, but to protect their two little “babies”, one badly performing an asthma attack. For the tale to be told, the family background is absolutely unnecessary: it just had to be scripted to follow the typical storyline of a movie made and produced in bloody California! Just to remind you: the character of Gerry is a bare addition to the original material – there is absolutely no hint of his family bonds, and therefore no believable background can be seen behind Pitt’s acting. I simply couldn’t care about either of the daughters: if you cut them out, the story wouldn’t have lost anything!

From minute 30 onwards the movie gets worse every second. Jerusalem gets overrun because a pile of Zombies floods over the 20m-high protective walls after hearing a loud noise from the inside. Check, effects routine done, money spent on CGI, camera running like in Black Hawk Down (“Why are they burning tyres?”). Before that: Pitt visits South Korea and gets caught up by a rogue military squad that refills his “personal” plane with gasoline (I didn’t get what’s so special about that Gerry character that he’s sent around the world, alone with a couple of shitheads in uniform plus the youngest virus professor alive, to find the roots of all evil when reports just said that the president and the whole administration are dead and Washington is lost), after the last hope for humanity – a 23-year-old medical specialist for viruses who shoots himself in the head – is gone (no word of that situation in the original, of course, even though it’s one of the few memorable moments in the movie). Pitt seems to get a hint of a solution in the Jerusalem scene (oh gosh, no, please don’t give us a “magic” solution, me sinking deeper into my seat), but then he has to crash the Belarusian plane he got on at the last minute with an Israeli hand grenade, after a flight attendant unleashes a single Zombie from the plane’s food elevator, and right after tearing off the arm of his female sidekick (nominated as worst actress in a non-speaking role) with something that looks like a dagger from the Hitlerjugend (sorry for that, that was tasteless, I’ll delete it if you urge me to).

At that point another humiliating fact becomes very clear: the PG-13 rating (in Germany it got an FSK 16, I still wonder why not 12). There’s nearly no blood spilled in the movie. There are lots of corpses flying through the air, there are great makeup effects and there are shootouts, but besides some drops from a shrapnel wound on Pitt’s sleeve there is no blood at all. While I quite like that fact (gore nowadays leads straight to B-movie ratings, and I’m absolutely fed up with it), it feels unrealistic in the scenes where it could’ve been helpful.

The “grand finale” ends up fulfilling the Zombie-genre trap Max Brooks ingeniously avoided: the unavoidable “healing” theory of a Zombie “virus”. If you infect yourself with another deadly virus, the Zombies will simply not be able to “see” you anymore. Guys, come on. This is even greater bullshit than any badass conspiracy theory about a super-powered Umbrella Corporation arising from those “milestones” of B-movies called Resident Evil. Max Brooks’ Zombie contagion is meant to be un-heal-able, spelled like un-avoid-able. There is no remedy besides (I quoted it already) going from house to house and ripping Zacks’ heads off (in the book some guerrilla platoon invents the “Lobotomizer”, a motif that the movie tries to pick up when Pitt arms himself with a gun, a knife and a newspaper). There was no need to fall back on that old explanation pattern.

The reason why I’m writing all this is not so much the disappointing 110 minutes starting from minute 10, where everything begins to drift off, but mainly the last two. Pitt returns to his family, who found shelter in “Nova Scotia”. If you’ve read the book carefully you know: this place would’ve been badly devastated. Not by Zack – Zombies might freeze in that area – but by the people living there, cooking and feeding their children an American stew prepared from their grandparents’ remains.

Instead, in the last minutes, Brad “Gerry” Pitt talks about “hope” and about “fighting” and, worst of all, about how “the war goes on“. If that means you’re planning to create a trilogy out of the bullshit your highly paid scriptwriters made of the novel, be sure: you will not lure a single additional penny out of my pocket for one of your movies again! What follows is a flash-cut overview of fighting scenes that could’ve been shown in the movie but weren’t. The producers obviously didn’t want to put them in, hoping they could reuse them in World War Z Part II and III. Hopefully there isn’t a “Hobbit” tale afterwards. Sigh. It’s only about the money then, isn’t it?

BTW: do WHO clinics in Wales have chargers for American-made satellite telephones in stock?

So, today I paid €22.70 plus €3 in parking fees for the shit you made me wait nearly half a year for, and it made me write these words of hate (and believe me: if my English were better, you would understand my point far better). Here are 5+ better ways to spend that money:

1) Give it to the American Society of Journalists and Authors: http://www.asja.org/index.php

2) Give it to the One Laptop per Child initiative so kids can grow up with more content in their skulls than your bloody scriptwriters: http://one.laptop.org/

3) Give it to a future-oriented initiative like the International Council for Science: http://www.icsu.org/future-earth

4) Fight cancer or some other disease.

5) Save gorillas in Africa, support anti-rifle initiatives in the U.S., send people to the moon.

B) Buy the original and never touch the movie: http://www.amazon.com/World-Market-Movie-Tie-In-Edition/dp/0770437400/

But:

Never. NEVER! Make this kind of movie again. For the sake of humanity. Don’t do it.

Using module.exports the “right way” for service instances and IDE introspection

I’m using service objects in node.js that are responsible for database operations on business entities and also perform some low-level business logic if needed. Recently, while refactoring my code, I came up with this pattern, which I currently consider a “best practice” way of doing things.

Service objects that perform asynchronous actions on remote services, like querying a database, must get their resources at some point. Naively, you could instantiate each service every time you need it and provide it a (fresh) link to your database (which you might want to store globally or in an application instance that you hand around). In Javascript, or more precisely in a node.js / CommonJS environment, there’s a better way of doing that: the module. It is not too obvious for developers coming from a Java-like background that modules can (but don’t have to) be used to instantiate “singleton” services and can serve as single activation points to set your service objects up with their resources. So here’s an example (please note that I’m omitting some “real world” db logic; the mongo connection is there only for illustration):

Your “service module”, responsible for getting a user from a database (“UserService.js”):


var UserService = function() {
  this.db = null;
  this.collectionName = "users";
  this.collection = null;
};

UserService.prototype = {

  connect: function(db) {
    if (this.db != null)
      return;

    this.db = db;
    this.collection = db.collection(this.collectionName);
  }, // don't forget the comma inside the object literal

  getUser: function(id, callback) {
    // query the collection, not the raw db handle
    this.collection.findOne({_id: id}, callback);
  }

};

module.exports.UserService = new UserService();

Your main module (“app.js” or whatever you want to call it):


var mongojs = require('mongojs'),
    UserService = require('./UserService').UserService;

// connect() returns the db handle; the module itself is not one
var db = mongojs.connect("mongodb://a-fancy-server:27109/master");

app.set('mngdb', db); // assuming `app` is an Express instance created elsewhere
UserService.connect(db);

var xId = mongojs.ObjectId("1a2b3c4b5e...");
UserService.getUser(xId, function(err, doc) {
  console.dir(doc);
});

Notice that you initialize (“connect”, in this case) the single instance of UserService in your main module. That means it is ready to go in any other module where you’d like to use it. That’s a good solution for immutable service instances that don’t depend on any state, only on some resources like database connections or global settings.
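
To illustrate the “ready to go in any other module” part: once app.js has run its connect(), any later require of the module sees the same initialized instance, thanks to require() caching. A minimal sketch (file and function names are made up):

// profileController.js - a hypothetical consumer module
var UserService = require('./UserService').UserService;

// no connect() call here: app.js already provided the db handle, and
// require() caching guarantees we get that very same instance
module.exports.printUser = function(id) {
  UserService.getUser(id, function(err, doc) {
    if (err) throw err;
    console.dir(doc);
  });
};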

In the rare case where you’d like to have another user service, you can export the constructor from your service module as well (in UserService.js):


...
module.exports._UserService = UserService;

and if you need another one, you can (anotherModule.js):

var db = require('mongojs').connect("mongodb://a-crazy-server:27110/samples");
// note: `new require('./UserService')._UserService()` would not parse the
// way you expect, so grab the constructor first
var CustomUserService = require('./UserService')._UserService;
var myCustomUserService = new CustomUserService();
myCustomUserService.connect(db);

There’s one little twist I found when playing around that might be helpful when you try this “pattern” on your own. You might be tempted to omit the service’s name in module.exports, like so (UserService.js):

...
//don't do that
module.exports = new UserService();

because then you could (yetAnotherModule.js):

var userService = require('./UserService');
userService.getUser(...)

That code definitely works. But note that you a) cannot export anything else (e.g. the constructor) anymore, and b) your IDE might not be able to resolve the methods (getUser) of that instance (IntelliJ WebStorm, for example, cannot).
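
If you want both conveniences, nothing stops you from combining the two exports; a minimal sketch (same UserService.js as above):

...
module.exports.UserService = new UserService(); // shared, IDE-resolvable singleton
module.exports._UserService = UserService;      // constructor for custom instances

That keeps the singleton introspectable by the IDE while still allowing fresh instances where needed.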

AngelHack Berlin! Team Veebibi: Lessons Learned and how we made it

This weekend I attended AngelHack Berlin, which took place at the ImmobilienScout campus and was part of a worldwide event series leading to the “crown” of hackers.

(Promotion) Personally, I’m absolutely not into the “location” world; I rather work on a project for social content curation called Qurate, which is pretty close to the value proposition of the winning team (Edgar tells…) that won the two-week trip to Silicon Valley. Thanks to Qurate you can relive the event, based on tweets and photos taken by the crowd. Here’s my personal “story” in pictures. If you want to build your own, feel free to do so using our media table at https://www.qurate.de/angelhackber . I’ll ask the YouIsNow team to contribute their photos as well 😉

This time I wanted to do something visual, something I could show off and something that does good for those in need. Two weeks ago the news broke that the “Verkehrsverbund Berlin Brandenburg” (VBB, hence the project name 😉 ) has opened up its data (article on Golem), so just before I went to sleep on May 3rd I had the idea to integrate that data somehow. Here’s what we came up with: http://veebibi.herokuapp.com

The story behind Veebibi

I was born in Berlin, and even as a child I often found myself asking, “Where is this bus going?” (I mean the route, not the destination, obviously). Most of us who live in the inner circle of Berlin use the Metro or the S-Bahn for transit. It’s rather comfortable and – at least the metro – mostly on time and running very frequently, at least during the day. But I found that I had no clue where to find a good bus route overview (obvious solution: look at the wall map, I know; I mean: on my smartphone). Actually, that wouldn’t have solved my problem anyway: as I said, “I want to know which route this bus is taking!“ Another example: once in January (it was very cold) I waited at the main station for the S-Bahn; suddenly the speaker said: “unfortunately your train will be delayed for an unknown time“. I only had to go 3 stations to Hackescher Markt, so I took the U55 to Brandenburger Tor. From there I walked to work and nearly froze an ear and two toes off. I could’ve taken some bus for sure (the TXL maybe?), but finding that out with a smartphone is not as easy as it might sound, especially not if you only have O2 coverage at hand. You usually visit http://www.fahrinfo-berlin.de/, say where you are and where you want to go, and pick a line from the search results. Google Maps doesn’t show VBB transit lines (except the trains) at all.

Whether that kind of app already exists or not wasn’t important for us anyway. We thought: let’s honour the effort of Berlin Brandenburg to finally open up their data set, and utilize that data to draw transit lines on a Google map. Actually, there is a little political background behind the data, too: the VBB never would’ve opened up without the pressure of projects like OpenPlanB. Apps like Öffi (everyone loves you for that one, Andreas!!) had to use unofficial data sets to get transit information, and from what I heard, Deutsche Bahn is far from happy that suddenly thousands of hackers can write apps for its unsatisfied customers (some details at the end of the Golem article mentioned above).

How we did it

We fetched the open data set from the official web site. It happens to be in a standard format called GTFS, which has been adopted globally by public transit authorities. Now, we’re hackers, we wanted to learn and didn’t give a shoot about what the specification says, so we tried to import everything on our own. The data delivered by the VBB is split into 8 CSV files that make up a relational data structure. Relational? Come on, it’s 2013, NoSQL is the big buzz (NewSQL is, but that’s another topic), so we wanted to have the stuff in MongoDB. Don’t shake your head before you’ve seen the results 🙂

Our team member Robert tried to import the data into Mongo directly, but as you can imagine, that was a disaster. Lesson one (for the beginners): don’t import relational data into a document database! You can’t join it anyway. So I suggested a “workaround”: first import the data into a relational system (MySQL is always a good choice), transform it into a document-like representation and import that into Mongo. At that point, Robert decided to skip the timetable data from the set because it would’ve blown the overall result up to millions of rows (it’s basically a cross join of 7 tables, so we reduced it to 5). He exported the result set to CSV, and it looks like this:

CSV:

9087171,1,0,1,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"Flughafen Tegel (Airport) (Berlin)",52.5540690,13.2928370
9019105,2,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"Buchholzweg (Berlin)",52.5469730,13.3177930
9020202,3,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"S Beusselstr. (Berlin)",52.5343140,13.3287030
9002102,4,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"Turmstr./Beusselstr. (Berlin)",52.5273390,13.3287430
9003104,5,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"U Turmstr. (Berlin)",52.5258370,13.3424010
9003204,6,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"Kleiner Tiergarten (Berlin)",52.5249900,13.3457640
9003201,7,0,0,170,"S+U Alexanderplatz via Hauptbahnhof","Bus TXL",BVB---,"S+U Berlin Hauptbahnhof",52.5258470,13.3689240

The first id marks the stop, the next one the position of that stop within the route. Skipping some columns, we find the destination of the line, the line name (“Bus TXL”), the name of the company responsible for it (BVG), the stop’s name and its geocoordinates. Next we needed to transform those lines into JSON documents that fit into MongoDB. With one eye open and one half of his brain already shut down at 2am, Robert hacked a PHP script that did the job pretty well. I don’t know how he made his way home alive after that (he went by bike), but I’m glad he made it! I spent an hour fixing the bugs he left behind and came up with JSON data compatible with mongoimport. Here’s an example document:

{ "target":"Flughafen Tegel (Airport) (Berlin)","line":"Bus TXL","stations":[ 
 { "id":"9003104", "name":"U Turmstr. (Berlin)", "loc": {"lng":13.3424010,"lat":52.5258370} },
 { "id":"9002102", "name":"Turmstr./Beusselstr. (Berlin)", "loc": {"lng":13.3287430,"lat":52.5273390} },
 { "id":"9020202", "name":"S Beusselstr. (Berlin)", "loc": {"lng":13.3287030,"lat":52.5343140} },
 { "id":"9019105", "name":"Buchholzweg (Berlin)", "loc": {"lng":13.3177930,"lat":52.5469730} },
 { "id":"9087171", "name":"Flughafen Tegel (Airport) (Berlin)", "loc": {"lng":13.2928370,"lat":52.5540690} }
 ]
}
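
Robert’s actual converter was PHP; purely for illustration, here’s a hedged Node.js sketch of the same grouping step (column positions follow the CSV sample above, file names are made up, and the naive CSV split only works while quoted fields contain no commas):

// convert.js - group flat CSV rows into one document per (line, target)
var fs = require('fs');

var docs = {}; // "line|target" -> one document per transit line

fs.readFileSync('rows.csv', 'utf8').split('\n').forEach(function(row) {
  if (!row.trim()) return;
  var c = row.split(','); // naive split, see caveat above
  var unquote = function(s) { return s.replace(/"/g, ''); };

  var key = unquote(c[6]) + '|' + unquote(c[5]);
  docs[key] = docs[key] || { target: unquote(c[5]), line: unquote(c[6]), stations: [] };
  docs[key].stations.push({
    pos: parseInt(c[1], 10), // position of the stop on the route, used for sorting
    id: c[0],
    name: unquote(c[8]),
    loc: { lng: parseFloat(c[10]), lat: parseFloat(c[9]) }
  });
});

// sort each line's stops by route position and emit newline-delimited
// JSON documents, which is what mongoimport expects
var out = Object.keys(docs).map(function(key) {
  var doc = docs[key];
  doc.stations.sort(function(a, b) { return a.pos - b.pos; });
  doc.stations.forEach(function(s) { delete s.pos; });
  return JSON.stringify(doc);
}).join('\n');

fs.writeFileSync('converted.json', out);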

It was already 5am, but from then on everything went smoothly. I imported the JSON into a Mongo instance hosted at MongoLab (mongoimport -d mongo -c veebibi converted.json), an addon you can get from Heroku, and put an index on the stations’ loc fields:

db.veebibi.ensureIndex( { "stations.loc" : "2d"} )

That makes querying lines (!) by position as simple as:

db.collection("veebibi").find({
 "stations.loc": { "$near": [54.036022,10.447311] } }
});

This yields an array of up to 100 lines, including all their stops – the perfect foundation for Veebibi, since it’s exactly what we want. Because routes are stored more than once (a bus line might fork depending on the time of day and runs in both directions), I “consolidated” the response data by picking the variant of each line with the most stops.
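
That consolidation is a simple “keep the longest variant” pass; roughly like this (a sketch operating on the array returned by the query above):

// For each line name, keep only the variant document with the most stops.
function consolidate(lineDocs) {
  var byLine = {};
  lineDocs.forEach(function(doc) {
    var best = byLine[doc.line];
    if (!best || doc.stations.length > best.stations.length) {
      byLine[doc.line] = doc;
    }
  });
  return Object.keys(byLine).map(function(line) { return byLine[line]; });
}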

The frontend is a piece of cake, since everything’s just JSON. We let your browser acquire your current position (navigator.geolocation.getCurrentPosition), send it to our backend and transform the resulting coordinates into Google Maps polylines, one for each returned line:

this.locator.findLines(latlng, function(lines) {
 var polyOptions = {
   strokeOpacity: 1.0,
   strokeWeight: 3
 };
 _.each(lines, function(line) {
    polyOptions.strokeColor = VB.Frontend.COLORS[_.random(VB.Frontend.COLORS.length - 1)]; // _.random's upper bound is inclusive
   var poly = new google.maps.Polyline(polyOptions);
   poly.vbbLine = line;
   google.maps.event.addListener(poly,'click', function(e) {
      alert(this.vbbLine.line); //show line details to the user
   });
   self.polyLines.push(poly); //prepare polylines for removal on next click
   poly.setMap(self.gmap);
   var path = poly.getPath();
   _.each(line.stations, function(station) {
      var latLng = new google.maps.LatLng(station.loc.lat, station.loc.lng);
      path.push(latLng);
   });
 });
});

And that’s what you see when you click on the map on Veebibi. Interested readers will notice the use of underscore iterators and the Google Maps V3 API.

[image: veebibi_berlin]

While I was hacking the core of all that stuff, our team member Gabriel (I never managed to remember his name at the venue; now I can) spent some hours writing most of the “frontend” you see when visiting the page for the first time. He used Backbone.js for many elements and tried to make everything normalized and responsive. Here are some lessons he learned while working with me:

1. You should not do git push origin master if your code isn’t working well. Instead, push a branch that the maintainer can merge. The “real way” is actually: fork the project, push to your fork’s master and create a pull request for the maintainer on the root project.

2. You don’t put CSS information inside your main HTML file. All inline <style> is evil.

3. The Javascript mongodb-native driver doesn’t compile on Windows. At least not at 3am.

4. You should configure your git to not ask for username and password every time. If you reject that advice, make sure you don’t accidentally push a new publicly visible branch by typing: git push origin my-branchgabi@somedomain.com-pA22w0rD . It’s very easy to forget to press enter at 6am with no sleep.

Meanwhile, our fourth colleague Alexander did research on the Google Maps API; unfortunately the results he came up with didn’t make it into the final code, but he found that it’s pretty simple to make polylines follow actual streets. If you have a look at the Veebibi output, you’ll notice that bus routes are assembled out of straight lines. Buses usually don’t go right across the Tiergarten lawn, so this obviously can be improved. He sent me this GIST around midnight. It describes how you can utilize the waypoints option to let Google Maps render a correct route along streets. For buses that might not be 100% exact, but it’s totally sufficient to render a nice view.
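
I can’t reproduce the GIST here, but the core of the idea maps onto the DirectionsService of the Maps V3 API: pass the intermediate stops as waypoints and render the street-snapped result. A rough sketch, with our station documents assumed as input (note that the free Directions API caps the number of waypoints per request, so long lines would need chunking):

// Snap a bus line to actual streets via the Directions API.
function renderLineOnStreets(map, stations) {
  var directions = new google.maps.DirectionsService();
  var renderer = new google.maps.DirectionsRenderer({ map: map });

  var toLatLng = function(s) { return new google.maps.LatLng(s.loc.lat, s.loc.lng); };
  var waypoints = stations.slice(1, -1).map(function(s) {
    return { location: toLatLng(s), stopover: true };
  });

  directions.route({
    origin: toLatLng(stations[0]),
    destination: toLatLng(stations[stations.length - 1]),
    waypoints: waypoints,
    travelMode: google.maps.TravelMode.DRIVING
  }, function(result, status) {
    if (status === google.maps.DirectionsStatus.OK) {
      renderer.setDirections(result); // draws the street-following route
    }
  });
}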

Our fifth colleague Zachary, who came a long way from Ohio to join us at AngelHack BER (just kidding, he’s in the city for his studies), was taking care of an idea Gabriel came up with: while the bus lines are pretty uninteresting at first glance, why not spice the view up with a heatmap rendered from a Twitter search result for popular / trending hashtags (e.g. “#party” or “#bbq”), so you know where to go once you’ve figured out how to get there. We actually call that the “party mode” component of Veebibi: buy a beer, get on a bus and head to a party. In Berlin that can be really fun 🙂

We unfortunately never integrated that stuff, but Zachary did an amazing job analysing Google’s Fusion Tables concept, which can be used to generate data sources for map overlays with a huge amount of location data. In our case we could simply have used the standard way of doing things (for a limited set of data, the Google Maps API alone is sufficient to render heatmaps).
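
For a limited data set, that standard way would be roughly this (a sketch; assumes the Maps “visualization” library is loaded with the Maps script and that tweet coordinates come from a Twitter search):

// "Party mode" sketch: render tweet locations as a heatmap overlay.
// tweets is assumed to be an array of { lat: ..., lng: ... } objects.
function showPartyHeatmap(map, tweets) {
  var points = tweets.map(function(t) {
    return new google.maps.LatLng(t.lat, t.lng);
  });
  var heatmap = new google.maps.visualization.HeatmapLayer({
    data: points,
    radius: 30 // pixel radius of influence per point
  });
  heatmap.setMap(map);
}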

The Pitch

I was the lucky one who got to pitch the project on stage, using a presentation assembled by Alexander, and I made clear from the first second that this wasn’t going to be the next “We have a brilliant business concept and here is how to make money with it” pitch. It’s simply the product of some productive minds that used a day and a night to hack the shoot out of their brains. The audience cheered when they saw that you can actually travel from Berlin to Stralsund using only public transit lines, so I’m glad we achieved our goal: we made something to make people cheer!

[image: veebibi_stralsund]

[tweet https://twitter.com/picsoung/status/331035922592301057 ]

Thank You All!

So I can only finish this article with an especially grateful “Thank You!” to Alexander from Westech Ventures, who honored our team’s effort with a special prize for “an idea that could possibly grow into a business”. The core idea of utilizing GTFS data to build a global transit information system is definitely not unique, but it could lead to a possible B2B approach that works worldwide. The way we utilized the data is far from industrial-grade, but we showed that it’s absolutely possible. So we got away with 4 Chinese Android tablets; imho more than we could’ve expected.

I’d like to thank Robert, Gabriel, Alexander (and your girlfriend: thanks for the logo 😉 ) and Zachary for making this possible! Not to forget the orga team of ImmobilienScout24 / You Is Now, who offered a brilliant location for the hackathon and did a great job feeding us the whole time (I won’t be having donuts for the next couple of months!).

PS: don’t forget to visit Qurate and contribute your impressions. And tell your own story if you want to 🙂

[GER] The “Truth” About Instagram’s New Terms of Service

The day before yesterday, Instagram announced new terms of service that all users will implicitly accept with their next click.

Yesterday everyone started to get upset about it, after miserably dumb news agencies spread an interpretation that German media, as always, parroted unfiltered to boost circulation with sensationalism, or at least pre-chewed for the less media-savvy user: Instagram wants to sell your pictures! What bullshit. Instagram adjusted its terms of service to match its corporate parent Facebook. All rights are granted to the platform. First and foremost, a platform protects itself this way against all the idiotic cease-and-desist lawyers who shoot up like weeds whenever anything concerning usage rights is unclear. Who the hell sells square coffee-cup photos anyway? On the other hand: what would be so bad about being able to “sell” them (and giving the user a cut)? That’s 500px’s business model!

Good thing heise took pity and translated Instagram’s better interpretation into German this morning: http://www.heise.de/newsticker/meldung/Instagram-Wir-wollen-ihre-Fotos-nicht-verkaufen-1771758.html. The colleagues at SPON, who filter this nonsense, including highly qualified press reactions, out of the sensation-hungry, investor-infested American tech-news stream, even have the cheek to write: Instagram backs down after photo protest. What rubbish! Nothing is backing down there; they are just spelling out for clueless journalists what they actually do.

And, to say it clearly once more: pictures we put on the net are always public. Sometimes the barriers to access are just higher. And that’s a good thing! After all, we were private for the last 2000 years. Also: I’d wish people were happy when apps find ways to earn money beyond blinking ad banners (which nobody notices anymore these days, let alone is dumb enough to click on). That would finally legitimize tech startups and create highly qualified jobs. But in this country, people always have to complain and point out dangers first, instead of recognizing potential and added value.

Bullshit Dollars B$D: no-money payment for digital goods

This morning I read an article on SPON about the unwillingness of readers to pay for digital journalism: you might’ve heard that recently in Germany two popular and serious newspapers (the Financial Times Deutschland and the famous Frankfurter Rundschau) were forced to shut down because they simply couldn’t monetize their users. The Daily and Newsweek are other popular examples of the journalistic extinction that traditional newspapers and even digital media are confronted with.

I must admit that I too don’t want to pay $/€3.99 a month for articles, even though I know it would be more than worth the price to have a broad variety of independent writers delivering truth to the people of the world. I just don’t feel like doing it.

Idea: B$D

Then I came up with this: since many people seem to feel the same as I do (don’t want to pay, but want to have), let’s start a “virtual” currency that isn’t worth any real money in the first place! For example: I’d like to read an article (which in itself has no physical value at all). The paper offers me to read it for 1 “bullshit dollar” (B$D), which is not worth anything at all – no one has paid for it yet. I don’t even own any B$D when I register for the service. But I can pay with it: my balance goes to -1 B$D and the publisher’s balance to +1 B$D. By paying 1 B$D I can read the article. I can go on and read as many articles as I want, paying 1 B$D each, the paper earning 1 B$D each. Playing that game leads me to a situation where I have lots of negative B$D and the papers have lots of (yet worthless) B$D.

What (all!) digital news publishers – including blogs – could now do is check the user’s current B$D balance and implement higher latency (waiting time) or advertisement cycles for users that have a disproportionately high negative amount of B$D. Those users will be served very slowly, get presented with lots of ads, etc. To make the system work, as many news publishers as possible should offer B$D payments on their sites and implement a growing latency for users with high B$D “debts”. That would mean that users who consume lots of news are served slower and slower over time, affecting their whole internet and news reading experience (on sites that require B$D payments for reading their news).
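
To make the mechanics concrete, here’s a minimal sketch of such a latency rule as a hypothetical Express middleware (all names and tuning values are made up):

// bsdThrottle.js - hypothetical middleware: the deeper a reader's B$D debt,
// the longer every article request is artificially delayed.
var MS_PER_BSD = 50;     // 50ms extra latency per B$D of debt (made-up tuning)
var MAX_DELAY_MS = 8000; // never stall a request longer than 8 seconds

function bsdThrottle(getBalance) {
  return function(req, res, next) {
    getBalance(req.user, function(err, balance) {
      if (err) return next(err);
      var debt = Math.max(0, -balance); // only negative balances hurt
      var delay = Math.min(debt * MS_PER_BSD, MAX_DELAY_MS);
      setTimeout(next, delay);
    });
  };
}

module.exports = bsdThrottle;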

If users are fed up with the high latency, they can pay a certain amount of real money to reduce their negative B$D balance. Each payment is distributed among all publishers in proportion to their B$D balances. Since new B$D are created every day by users reading articles for free, the value of one B$D decreases every day.
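
The buy-back side could look like this (again just a sketch with hypothetical structures): the payment is split among publishers weighted by the B$D they hold.

// Distribute a real-money payment among publishers, weighted by
// their current B$D holdings.
function distributePayment(amountEUR, publishers) {
  var totalBSD = publishers.reduce(function(sum, p) {
    return sum + Math.max(0, p.bsdBalance);
  }, 0);
  if (totalBSD === 0) return;

  publishers.forEach(function(p) {
    var share = Math.max(0, p.bsdBalance) / totalBSD;
    p.earningsEUR = (p.earningsEUR || 0) + amountEUR * share;
  });
}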

Consequences

In this system each click (!) on an article is charged in B$D. This means that everyone who starts a blog starts earning virtual money from the very first click on one of their articles. Since there are millions of publishers and readers, the “real” money value of a B$D is extremely low and the exchange rate highly flexible, since new B$D are created every day. Big publishers with good visibility and high traffic would earn more from paying users, since they hold a proportionally high balance; small blogs at least have the opportunity to earn some money.

No one has to charge users directly: there is simply no contract between the user and the publisher. Real money is distributed among all participants in proportion to their balances.

Everything stays free. The web shows that users simply are not willing to pay for news as long as they can get it for free somewhere else. Implementing the B$D payment on many sites would slowly introduce the parameter of “pain” for non-paying customers. In the long term, users pay for the relief of pain (“quality/speed”), not for individual digital goods.

Conclusion

I think the publishing industry has tried and failed to establish paid news content on the web. This has been proven by so many examples that it is time to rethink the relationship between readers and publishers. The B$D introduces an opportunity to charge users for “quality” that is defined not by the product itself but by the speed of the infrastructure. Users who pay more for news get faster access (it’s like cars: you can get from A to B in an old one, but it’s more fun using a new / expensive one).

This is an open thought and definitely not finalized. Feel free to comment!

Here’s the pointer that led me to the idea: http://www.spiegel.de/wirtschaft/unternehmen/warum-das-zeitungssterben-auch-online-leser-beunruhigen-muss-a-871220.html

[Unanswered] How is Pinterest implementing their Wall-Posts on Facebook?

This morning I stumbled upon a Facebook Wall post from my friend Melanie, who had pinned some pictures to her Pinterest boards. I was pretty amazed by the degree of interactivity the post offers: you can share / aggregate multiple items at once (as done by the aggregation panes in the Timeline view), have individual interactions (Go to, Comment, Like) on each of them, and the post is “branded” with “XXX has XXX on XXX”, which is obviously driven by the Graph API. As a Facebook developer I wasn’t aware that this is (currently and officially) possible on Facebook, and I wasn’t able to find anything in Facebook’s Open Graph documentation that comes even close. This is how Melanie’s actions (pin) on Pinterest’s objects (board) looked on my Wall.
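
For comparison: the only officially documented way to publish structured actions right now is the Open Graph actions endpoint of the Graph API; whether Pinterest’s aggregated Wall posts can be built with it alone is exactly my open question. A basic action publish, for reference (namespace, action name and object URL are placeholders):

// Publish an Open Graph action: POST /me/{namespace}:{action} with the
// object type as a parameter pointing at the OG object URL.
var https = require('https');
var querystring = require('querystring');

var body = querystring.stringify({
  access_token: process.env.FB_ACCESS_TOKEN,
  board: 'http://pinterest.com/melanie/some-board' // placeholder object URL
});

var req = https.request({
  hostname: 'graph.facebook.com',
  path: '/me/myapp:pin', // placeholder {namespace}:{action}
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body)
  }
}, function(res) {
  res.pipe(process.stdout); // responds with the id of the created action
});
req.end(body);

How those single actions end up as one aggregated, multi-item story with per-item buttons is configured (if at all) via the app’s aggregation settings – and that’s the part I couldn’t match to what I saw.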