Two helpful WebStorm plugins: Heroku and MongoDB

I’m using MongoDB every day behind a Node/Express/JavaScript backend. I’ve always found it rather cumbersome to assemble Mongo’s JSON-based queries since there aren’t many good frontends around. Since I’m using JetBrains WebStorm – by far the best choice for JavaScript coders – I recently had a look at their plugin repo, and what did I see? A MongoDB plugin (@WebStorm). It comes with two views: the Explorer lets you pick the collection you want to query, and the Runner executes your queries and shows a JSON-prettified result set. There’s not much more to it, but I like IDE-embedded DB querying a lot!
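Just to give you an idea, this is the kind of query I’d paste into the Runner (collection and field names are made up for illustration):

// hypothetical collection and fields – just the flavor of
// JSON-based query the Runner executes and pretty-prints
db.users.find({ signupDate: { $gte: new Date("2012-01-01") } })
        .sort({ signupDate: -1 })
        .limit(10);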

PS: A fairly feature-complete (but Swing-powered, on top of a proprietary UI framework) UI for MongoDB is UMongo (or if you prefer: the project site). Unfortunately it doesn’t seem to be maintained anymore. If you’re interested in refactoring / rewriting / forking this software, let me know how I could possibly help 🙂 I think the first step would be a working Maven POM, since the author only provides a .netbeans “build script”. Next up would be factoring out the author’s proprietary frontend framework – a nice thought, but no one knows / wants to know how to use it (my two cents) – and replacing it with either native Swing (which sucks) or a more abstract UI layer (web components? Faces? FX? SWT?).

[Screenshot: the MongoDB plugin’s Explorer and Runner in WebStorm]

Another quite helpful tool for WebStorm is the Heroku plugin (github). It lets you connect to your Heroku application instances, watch the logs, scale your dynos, invite collaborators, activate add-ons, restart the container and maintain your environment configuration.

[Screenshot: the Heroku plugin in WebStorm]

It’s much more visual than your command line toolbelt, and since the git support in WebStorm is particularly well done, you can e.g. track which changes you deployed on a certain day without having to leave your IDE!



[GER] The “truth” about Instagram’s new terms of service

The day before yesterday, Instagram announced new terms of service that every user will implicitly accept with their next click.

Yesterday everyone started getting upset about it, after breathtakingly dumb news agencies put an interpretation into the world that German media, as always, parroted unfiltered to boost circulation through sensationalism, or at least pre-chewed for less critically trained readers: Instagram wants to sell your pictures! What bullshit. Instagram adjusted its terms of service to match those of its parent company Facebook. All rights are granted to the platform. A platform does this primarily to protect itself from all the idiotic cease-and-desist lawyers who spring up like weeds whenever anything about content exploitation is unclear. Who the hell buys square photos of coffee mugs anyway? On the other hand: what would be so bad if they could be “sold” (with the user getting a cut)? That’s 500px’s business model!

Good thing heise took pity this morning and translated Instagram’s better interpretation into German: http://www.heise.de/newsticker/meldung/Instagram-Wir-wollen-ihre-Fotos-nicht-verkaufen-1771758.html. The colleagues at SPON, who filter this nonsense – including highly qualified press voices – out of the sensationalist, investor-infested American tech-news stream, cheekily add: Instagram backs down after photo protest. What rubbish! Nobody is backing down; they are merely spelling out for clueless journalists what they are actually doing.

And to say it clearly once more: pictures we put on the net are always public. Sometimes the barriers to access are just higher. And that’s a good thing! After all, we were private for the last 2000 years. Also: I’d wish people were happy when apps find ways to earn money beyond blinking ad banners (which nobody even notices anymore these days, let alone is dumb enough to click). That would finally legitimize tech startups and create highly qualified jobs. But in this country people always have to complain and point out dangers first, instead of recognizing potential and added value.

Bullshit Dollars (B$D): no-money payment for digital goods

This morning I read an article on SPON about readers’ unwillingness to pay for digital journalism: you might have heard that two popular and serious German newspapers (the Financial Times Deutschland and the famous Frankfurter Rundschau) were recently forced to shut down because they simply couldn’t monetize their users. The Daily and Newsweek are other popular examples of the journalistic die-off that traditional newspapers and even digital outlets are confronted with.

I must admit that I also don’t want to pay $/€3.99 a month for articles, even though I know it would be more than worth the price to have a broad variety of independent writers delivering truth to people around the world. I just don’t feel like doing it.

Idea: B$D

Then I came up with this one: since many people seem to feel the same way I do (don’t want to pay, but want to have), let’s start a “virtual” currency that isn’t worth any real money in the first place! For example: say I’d like to read an article (which has no physical value in itself). The paper offers to let me read it for 1 “bullshit dollar” (B$D), which is not worth anything at all – no one has paid for it yet. I don’t even own any B$D when I register for the service. But I can pay with it anyway: my balance goes to -1 B$D and the publisher’s balance to +1 B$D. By paying 1 B$D I can read the article. I can go on and read as many articles as I want, paying 1 B$D each, the paper earning 1 B$D each. Playing that game leaves me with lots of negative B$D and the papers with lots of (as yet worthless) B$D.

What (all!) digital news publishers – including blogs – could now do is check the user’s current B$D balance and add latency (waiting time) or extra advertisement cycles for users with a disproportionately high negative amount of B$D. Those users get served very slowly, get presented lots of ads, etc. To make the system work, as many news publishers as possible should offer B$D payments on their sites and implement a growing latency for users with high B$D “debts”. Users who consume lots of news would thus be served slower and slower over time, affecting their whole internet and news reading experience (on sites that require B$D payments for reading their news).
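Since I’m on Node/Express anyway, here’s a minimal middleware sketch of the latency idea – getBalance() and the delay curve are invented for illustration:

var express = require('express');
var app = express();

// sketch only: getBalance() is a hypothetical async lookup of the
// user's B$D balance in some shared registry
app.use(function (req, res, next) {
  getBalance(req.user, function (err, balance) {
    if (err) return next(err);
    var debt = Math.max(0, -balance);         // how deep in B$D debt?
    var delayMs = Math.min(10000, debt * 50); // grows with debt, capped at 10s
    setTimeout(next, delayMs);                // the "pain" parameter
  });
});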

If the user is fed up with the high latency, he can pay a certain amount of real money to reduce his negative B$D balance. Each payment is distributed equally among all publishers with high B$D balances. Since new B$D are created every day by users reading articles for free, the value of one B$D decreases every day.
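A naive sketch of that distribution step (threshold and numbers invented):

// split a real-money payment equally among all publishers
// above some B$D threshold
function distributePayment(amount, publishers, threshold) {
  var eligible = publishers.filter(function (p) {
    return p.bsdBalance >= threshold;
  });
  var share = amount / eligible.length;
  return eligible.map(function (p) {
    return { name: p.name, payout: share };
  });
}

distributePayment(5, [
  { name: 'bigPaper', bsdBalance: 12000 },
  { name: 'smallBlog', bsdBalance: 300 }
], 1000);
// => [ { name: 'bigPaper', payout: 5 } ]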

Consequences

In this system each click (!) on an article is charged in B$D. This means that everyone who starts a blog earns virtual money from the very first click on one of his articles. Since there are millions of publishers and readers, the “real” money value is extremely low and the exchange rate highly flexible, because new B$D are created every day. Big publishers with good visibility and high traffic would earn more from paying users since they hold a proportionally high balance; small blogs at least get the opportunity to earn some money.

No one has to charge users directly: there is simply no contract between the user and the publisher. Real money is distributed evenly among all participating publishers.

Everything stays free. The web shows that users simply are not willing to pay for news as long as they can get them for free somewhere else. Implementing B$D payments on many sites would slowly introduce the parameter of “pain” for non-paying customers. In the long term users pay for the relief of pain (“quality/speed”), not for distinct digital goods.

Conclusion

I think the publishing industry has tried and failed to establish paid news content on the web. This has been proven by so many examples now that it is time to rethink the relation between readers and publishers. The B$D introduces a way to charge users for “quality” that is defined not by the product itself but by the speed of the infrastructure. Users who pay more for news get faster access (it’s like cars: you can go from A to B in an old one, but it’s more fun in a new / expensive one).

This is an open thought and definitely not finalized. Feel free to comment!

Here’s the pointer that led me to the idea: http://www.spiegel.de/wirtschaft/unternehmen/warum-das-zeitungssterben-auch-online-leser-beunruhigen-muss-a-871220.html

[Unanswered] How does Pinterest implement its Wall posts on Facebook?

This morning I stumbled upon a Wall posting on Facebook from my friend Melanie, who had pinned some pictures to her Pinterest boards. I was pretty amazed by the degree of interactivity the posting offers: you can share / aggregate multiple items at once (as the aggregation panes in the Timeline view do), have individual interactions (Go to, Comment, Like) on each of them, and the post is “branded” with “XXX has XXX on XXX”, which is obviously driven by the Graph API. As a Facebook developer I wasn’t aware that this is (currently and officially) possible on Facebook, and I wasn’t able to find anything in Facebook’s Open Graph documentation that looks even close. This is how Melanie’s actions (pin) on Pinterest’s objects (board) look on my Wall:

[Screenshots: the aggregated Pinterest post on my Facebook Wall]

When an old Mustache partial wants {{.}}, give it {".": v}

JavaScript is full of weird “special” features. I’m reusing my Mustache templates via Hogan.js for Express on the server and on the client side. Lately I noticed that the client-side ICanHaz template loader uses Mustache 0.4.0, a version that’s heavily outdated. Let me show you what I tried.

When traversing arrays, the {{.}} template variable comes in handy: it’s replaced with the “current” value. If you want to render an array of strings, you do this (Mustache 0.4.0):

$ npm install mustache@0.4.0
$ node

var Mu = require('mustache');
var opts = {attrs: ["weird", "crazy","awesome"]};
var tpl = 'Javascript is {{#attrs}} {{.}} {{/attrs}}';
Mu.to_html(tpl, opts); //Mu 0.4.0

> 'Javascript is  weird  crazy  awesome '

More advanced: let’s use partials (preregistered sub-templates):

var partials = {part: "<b> {{.}} </b>" };
var tpl = 'Javascript is {{#attrs}} {{>part}} {{/attrs}}';
Mu.to_html(tpl, opts, partials);

> 'Javascript is  <b> weird </b>  <b> crazy </b>  <b> awesome </b> '

What if you’d like to render the part partial on its own, for just one string, say “queer”?

var qu = "queer";
Mu.to_html(partials.part, qu);

> '<b>  </b>'

Solution:

var qu = { ".":"queer" };
Mu.to_html(partials.part, qu);

> '<b> queer </b>'

At first I thought this was a major “flaw” in Mustache, but on the server side I’m using the latest version (0.7.0), so I had never noticed this behaviour before. Good thing: you can bring your own Mustache for ICanHaz if you don’t want to go the “.” way 🙂 Going to do that now.

In 0.7.0 you simply do:

var Mu = require('mustache'); // npm package name is lowercase
var partials = {part: "<b> {{.}} </b>" };
var qu = "queer";
Mu.render(partials.part, qu);

> '<b> queer </b>'

Lessons learned: object property names can be “.”. Another funny side note: you can also use keywords as property names; e.g. I recently saw someone returning {“return”:”true”} in his AJAX calls. I doubt that this is good practice…
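A quick illustration of both quirks:

// "." and reserved words are perfectly legal property names
var odd = { ".": "dot", "return": "true" };
odd["."];      // => 'dot' (bracket notation is your only option here)
odd["return"]; // => 'true'
odd.return;    // even this works in ES5 engines – but it reads terribly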

Idea: A Business Model Canvas Collaboration and Comparison Platform

If you’re into founding companies and follow best-practice threads, you will have stumbled upon the concept of the Business Model Canvas as proposed by Alexander Osterwalder. The idea of business model canvases is to focus on the cornerstones of your business: what is the value you want to deliver (aka your “product”)? What is the customer focus (B2B/B2C)? Where’s the money coming from? What are you doing every day? What do you essentially need to keep the whole thing running (resources, partners)? A Business Model Canvas aligns all of your answers on a single chart and acts as a discussion tool for your co-founders, partners and investors. Instead of pitching the idea (which basically leads you to explain your product, not your revenue opportunities) you can pitch the canvas – people who know the concept will understand in 3 minutes what you’re talking about, as long as you’ve found the right description for each item.

The Business Model Canvas

The bad thing about business modelling is: not everything that sounds good or valid works in the real world. And in most cases you won’t be able to see that on the canvas. If you’re coworking with experienced people from your business area, they might tell you: “that never worked for me, so don’t do it” or “I’ve tried that in the past, it worked great for me”. But what you still don’t know is whether the market is actually ready to pay for your value, or whether your idea is going to fail simply because it has always failed in the past. Business modelling only works if you can validate your model – greatly simplified, that means: call your customer or partner, ask them if they want to pay, charge them, adjust the price, see if it scales. In reality you still depend on experience. But where do you get the experience?

Since Osterwalder’s canvases structure the concept of a business model very well, they should be comparable. Most canvases I have seen also contain somewhat similar words like “end customer”, “traveller”, “media budget”, “AdSense”, “Affiliates”, “Local Heroes”, “mass market” etc. I think those words could at least be categorized, if not completely reused. For many businesses this is even true for the value proposition perspective: a company that sells personalized cereals is quite similar to a company that sells personalized T-shirts – except for their Key Activities, which in the first case means operating machines or employing people to mix the ingredients, and in the latter having a machine print T-shirts. In the end both companies profile, attract and keep customers and ship a physical product, one on a weekly, the other on a rather spontaneous schedule.

I’d like to state that a main purpose of business model canvases is to make business models comparable. So if you share your model, with standardized captions, with others, you might discover patterns in what works well and what is a rather bad idea instead. If 10 companies that assemble a shippable, personalizable, physical product have failed putting sponsor brands in the Key Partnership or Revenue Stream columns, it’s rather obvious that your business model is also unhealthy if you do the same thing. Of course this example highly depends on the business area you’re working in: Adidas might be far more interested in putting their name on an individual sports drink package than Kellogg’s on your personal cereal mix.

The fact is that some patterns are proven to work better than others, depending on industry, market environment, company size etc. So let’s imagine you start your next idea’s business model canvas by putting phrases (using a certain category / taxonomy) into the right columns. As soon as VP contains [physical, personalizable] and R$ contains [branding], an alert bulb goes yellow to warn you that this combination has failed in ten other cases and only worked in two. You can then switch your view to those cases and see if you have really good reasons to prove the warning incorrect. Furthermore the system could learn from good practices and suggest ideas for new ones: if you have drawn most of the product / value parts but still have no idea what to write into the R$ column (other than “advertising”, which most people do first), the system makes suggestions (“branding”, “one-time fee”) drawn from similar models that have worked.
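A toy sketch of that check – the canvas representation and the pattern store are entirely made up:

// toy model: canvas columns hold tagged phrases; patterns record how
// often a tag combination failed/worked in other people's canvases
var patterns = [
  { vp: ['physical', 'personalizable'], rs: ['branding'], failed: 10, worked: 2 }
];

function warnings(canvas) {
  return patterns.filter(function (p) {
    var vpMatch = p.vp.every(function (t) { return canvas.vp.indexOf(t) !== -1; });
    var rsMatch = p.rs.every(function (t) { return canvas.rs.indexOf(t) !== -1; });
    return vpMatch && rsMatch && p.failed > p.worked;
  });
}

warnings({ vp: ['physical', 'personalizable'], rs: ['branding'] });
// => one match (failed 10 / worked 2) -> yellow bulb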


Using command line tools to retrieve original images from Picasa Web albums

Today I tried to share a bunch of photos from my best friend’s wedding with various internet services. Since Facebook for Android is not capable of uploading multiple images at once, I decided to upload them to Google Picasa Web Albums first. Afterwards I restricted their visibility to a certain circle of users and shared them on Google+. Unfortunately not many wedding guests use G+; most favor Facebook. Lazy as I am, I didn’t want to transfer the photos from my mobile to the desktop and upload them again (the MTP protocol propagated by ICS/JB on the Nexus devices doesn’t work well on Linux anyway), so I looked for a way to download all of the Picasa Web Album shots using the Picasa API.

Here you’ll get an overview of available endpoints: https://developers.google.com/picasa-web/docs/2.0/developers_guide

Note that the Picasa Web Albums Data API deals mostly with XML, media and Atom formats, so it should be rather easy to parse the results. I decided to go for the album list first:

https://picasaweb.google.com/data/feed/api/user/<yourUserId>

You can use the long integer shown in your Google+ profile’s URL as <yourUserId>. That request yields a document containing all albums. Note that you’d have to get a valid OAuth access token to fetch the contents from within your own software – for this simple use case it’s sufficient to use a browser window that holds your Google authentication cookies. To keep the results, simply save the page locally.

That request yields an Atom document containing the authenticated user’s albums. Find the album you want to retrieve pictures from and copy the href link from the <link rel='http://schemas.google.com/g/2005#feed' … > element. Note that with default settings that document won’t contain links to the full-size images. To make the API yield those, add another parameter to the URL, right after the auth key: &imgmax=d. The URL should look like this:

https://picasaweb.google.com/data/feed/api/user/<yourUserId>/albumid/<albumId>?authkey=#####&imgmax=d

Open that one in your browser window again and save its contents. Note that the API restricts you to 30 images by default. If you need more, add yet another parameter: &max-results=1000. Now you can parse the XML result using your favorite tools. Since I’m on Linux I chose xmlstarlet, which can easily be installed via apt:

sudo apt-get install xmlstarlet

This little XML suite contains tools to execute XPath expressions on documents from the command line. Additionally you could also apply XSLT transformations. To get the plain image media sources from the last document retrieved, issue this command:

xmlstarlet sel -t -v "//media:content/@url" your_document.xml > origs.txt

origs.txt now contains a newline-separated list of original image URLs. To finally retrieve them you can use wget (which should come with your standard installation; if not: sudo apt-get install wget):

wget -i origs.txt

and voilà: there are all your images 🙂
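If you’d rather stay in Node than shell out to xmlstarlet and wget, a rough equivalent could look like this (crude regex “parsing” – fine for a one-off, use a real XML parser for anything serious; assumes the feed was saved as album.xml):

var fs = require('fs');
var https = require('https');
var path = require('path');

// pull the media:content URLs out of the saved feed ...
var feed = fs.readFileSync('album.xml', 'utf8');
var urls = [], re = /<media:content[^>]*url=['"]([^'"]+)['"]/g, m;
while ((m = re.exec(feed)) !== null) urls.push(m[1]);

// ... and download each image next to the script
urls.forEach(function (url) {
  var file = fs.createWriteStream(path.basename(url));
  https.get(url, function (res) { res.pipe(file); });
});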
