Facebook JS SDK for Login and Python Backend API Calls with Pyfb

Sometimes you don’t want a redirect from your site to Facebook just to perform the login. Fortunately, Facebook offers a simple solution: the JS SDK provides a login via a popup. The one big problem is that you then have to make API calls with JavaScript right on the client side. This is not the best choice at all; if you aren’t careful, your application might become very vulnerable.

That’s the reason I’ll show you how to use the JS SDK just for the login, and make the API calls through Python backend code using the library I wrote, Pyfb.

First of all you need to write the index.html where the code to achieve the login will be located. It would look like this:

    <html>
    <head><title>Facebook Login with JS SDK</title></head>
    <body>
        <div id="fb-root"></div>
        <script>
        function isConnected(response) {
            return response.status == 'connected';
        }

        function getLoginStatus(FB) {
            FB.getLoginStatus(function(response) {
                if (isConnected(response)) {
                    onLogin(response);
                }
                else {
                    FB.login(onLogin);
                }
            });
        }

        function onLogin(response) {
            if (isConnected(response)) {
                location.href = '/facebook_javascript_login_sucess?access_token=' + response.authResponse.accessToken;
            }
        }

        window.fbAsyncInit = function() {
            FB.init({
                appId      : '{{FACEBOOK_APP_ID}}',
                channelUrl : 'http://localhost:8000/media/channel.html',
                status     : true,
                cookie     : true,
                xfbml      : true,
                oauth      : true
            });
        };

        (function(d) {
            var js, id = 'facebook-jssdk'; if (d.getElementById(id)) {return;}
            js = d.createElement('script'); js.id = id; js.async = true;
            js.src = "http://connect.facebook.net/en_US/all.js";
            d.getElementsByTagName('head')[0].appendChild(js);
        }(document));
        </script>

        <button onclick="getLoginStatus(FB)">Facebook Javascript Login</button>
    </body>
    </html>

As you can see, in the login callback function (onLogin) you receive the access token. This token is what allows you to make backend calls, so don’t lose it! I’d recommend saving it in the session, or storing it in the database, every time a user logs in.
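Keeping the token server-side could be sketched like this (a framework-agnostic sketch: the `store_access_token` helper, the example URL and the plain dict standing in for a session object are all hypothetical, but in Django `request.session` already behaves like that dict):

```python
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

def store_access_token(url, session):
    """Pull ?access_token=... out of the login redirect and keep it server-side."""
    token = parse_qs(urlparse(url).query).get("access_token", [None])[0]
    if token is None:
        raise ValueError("no access_token in the redirect URL")
    session["fb_access_token"] = token
    return token

# A dict standing in for request.session:
session = {}
store_access_token("/facebook_javascript_login_sucess?access_token=XYZ", session)
```

Once the token lives in the session (or the database), every later backend call can fetch it from there instead of trusting a value sent again by the client.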

I will be using Django for this example but you could use whatever you want for the backend. The views.py Django file would look like this:

from pyfb import Pyfb
from django.http import HttpResponse, HttpResponseRedirect
from django.shortcuts import render_to_response

from settings import FACEBOOK_APP_ID


def index(request):
    return render_to_response("index.html", {"FACEBOOK_APP_ID": FACEBOOK_APP_ID})

#Login with the js sdk and backend queries with pyfb
def facebook_javascript_login_sucess(request):

    access_token = request.GET.get("access_token")

    facebook = Pyfb(FACEBOOK_APP_ID)
    facebook.set_access_token(access_token)

    return _render_user(facebook)

def _render_user(facebook):

    me = facebook.get_myself()

    welcome = "Welcome <b>%s</b>. Your Facebook login has been completed successfully!"
    return HttpResponse(welcome % me.name)

Finally just configure the urls.py:

urlpatterns = patterns('',
    (r'^$', 'djangoapp.django_pyfb.views.index'),
    (r'^facebook_javascript_login_sucess/$', 'djangoapp.django_pyfb.views.facebook_javascript_login_sucess'),
)

And don’t forget to have the proper configuration constants in your settings.py:

# Facebook related Settings
FACEBOOK_APP_ID = "<your facebook app id>"

That’s it! Enjoy the Facebook Graph API!


Proxy Dispatcher implemented in PHP

I want to share a piece of code which might be very useful when you have to deal with object introspection in PHP. I played for years with Python’s introspection system and I loved it.

But now I’m back on PHP. A language that has very good metaprogramming tools but which, from my point of view, is less pragmatic than Python or Ruby in this aspect (and maybe in almost all aspects).

In this piece of code I’m trying to replicate Python’s *args with the PHP function call_user_func_array. The functionality behind these different implementations is very similar in the end. But I still think Python’s approach is far better =).

Let the code talk:

/*
* Proxy Dispatcher using php call_user_func_array (http://us2.php.net/manual/en/function.call-user-func-array.php)
*/

class Foo {

    function bar1($arg, $arg2, $arg3, $arg4) {
        return "arg: $arg, arg2: $arg2, arg3: $arg3, arg4: $arg4\n";
    }

    function bar2($arg, $arg2) {
        return "arg: $arg, arg2: $arg2\n";
    }

    function bar3($arg) {
        return "arg: $arg\n";
    }
}

class FooWrapper {

    private $_foo;

    public function __construct() {
        $this->_foo = new Foo();
    }

    // Any undefined method call lands here and is forwarded to the wrapped Foo.
    public function __call($method, $arguments) {
        return call_user_func_array(array($this->_foo, $method), $arguments);
    }
}

$fooWrapper = new FooWrapper();
echo $fooWrapper->bar1(1,2,3,4);
echo $fooWrapper->bar2(1,2);
echo $fooWrapper->bar3(1);

And here is the Python code for the same:

class Foo(object):

    def bar1(self, arg, arg2, arg3, arg4):
        print "arg: %s, arg2: %s, arg3: %s, arg4: %s" % (arg, arg2, arg3, arg4)

    def bar2(self, arg, arg2):
        print "arg: %s, arg2: %s" % (arg, arg2)

    def bar3(self, arg):
        print "arg: %s" % arg

class FooWrapper(object):

    foo = Foo()

    def __getattr__(self, name):
        return lambda *args, **kwargs: getattr(self.foo, name)(*args, **kwargs)

fooWrapper = FooWrapper()
fooWrapper.bar1(1, 2, 3, 4)
fooWrapper.bar2(1, 2)
fooWrapper.bar3(1)

Just Another Real Time Chat Built Over Node-js and Socket.io

I recently wrote another real time chat built over Node-js and Socket.io. Here is the link: https://github.com/jmg/node-simple-chat

I’ve been researching node-js a lot lately and it turned out to be amazing when I have to deal with real time applications. But I think Python’s Eventlet could achieve very good performance too. I really need to reimplement this using eventlet websockets and then do some sort of benchmark.

Probably the topic for my next post =). Keep reading.

New Year’s Post (Part II)

I started this blog exactly one year ago and loved so much sharing posts and getting feedback from the people who read me. I also think I made the best choice when I decided to start blogging with WordPress. WordPress has very nice features that help me write posts and include code snippets in them (by the way, almost all of my posts are about software programming).

This was my first year as a blogger. I learned a lot over the past year and I’m really glad to have a place which allows me to share my acquired knowledge. I hope to be able to continue doing that in this brand new year.

I’m sure this will be a great year for my professional career and I will have a lot to share here. I started a company, got more jobs and more interesting activities to do. That’s awesome for me, but I hope it won’t make me stop writing for lack of time.

Anyway, don’t worry, I’ll always find time to write. I love this so much and I’ll always come back here, to the place I discovered exactly one year ago.

Now, one year after my first post, I’ve promised myself to write a post every first day of every new year. Just to remember, just to recall the time when I started…

Crawley Cloud

I want to share this new web site I’m building. It’s called crawley cloud and it will be a crawling and scraping network built on top of the crawley framework.

The original idea of crawley cloud emerged from the lack of a user friendly interface that allows everyone to search and extract data from the internet. The main goal of this network will be to provide the user with a bunch of tools (like a customized web browser) in order to ease the task of searching and extracting data from web sites.

Users will be able to register on the site, download these tools and store their projects on the server. They will also be able to store the extracted data in their accounts and access it whenever they want!

We’re also thinking about presenting the extracted data in real time, which will make the whole task more interactive.

And most important: it will all be based on an open source framework, so you can contribute whenever you wish!

It’s just the beginning of the project, so if you need to extract data from a specific web site right now you can contact us and we will be glad to help you! Just go to our contact page and send us an email with the details.

Keep reading for updates!

Crawley – A Scraping / Crawling Framework Built On Eventlet

A few weeks ago I started a new project. It is a Crawling / Scraping framework aimed at easing the way we extract data from the web and store it in a relational database.

Today I released the early version 0.0.4 and I wrote several examples which explain what the framework can do. I promise to add more real world examples and more documentation in the next few days. In the meantime you can follow the project’s progress on the official repository at github and play with the examples.

You can also install crawley with pip by running:

~$ pip install crawley 

and check the documentation.

That’s all for now. Keep watching the repository =).