JavaScript Notebook. Saw, Shura, saw!*

JavaScript / UI / web technology notes. * "Saw, Shura, saw! These are golden kettlebells…" (from "The Little Golden Calf" by Ilf and Petrov)

JavaScript Promises: handle repeatedly rejected requests

Let's look at JavaScript promises and try to build a simple flow that handles multiple rejections of the same request.

Why? When I first learned about promises in JavaScript, I was so excited that I decided to use them everywhere: for data, for business logic (flow), basically for everything that requires (or could potentially require) async actions. I don't have strong Node.js experience, but I guess folks with that background deal with promises all the time. What I am not sure about yet is how flexible and convenient promises are. When I first started to work with event streams (Flapjax, RxJS), I felt their power and could see a lot of advantages in defining a flow with them, as well as in building data abstractions. But can we rely on promises in the same way?

GOAL: Starting with this post, I want to look at different use cases for promises and see how well they work out. I will also try to compare a promise-based implementation with an event-stream-based one (using RxJS).

If you are not familiar with promises in JS, I would recommend reading the following posts: JavaScript Promises and Promise Anti-patterns.

Here is a simple scenario we will implement:

GIVEN: A view with a single Login button.

WHEN User clicks Login
AND  fails authorization
THEN User should be able to click Login and attempt 
     another authorization.

WHEN User clicks Login
AND  succeeds authorization
THEN every next click should skip authorization call and 
     just perform the success action.

*** In my code I will be using the Dependency Injection pattern and Kris Kowal's implementation of JavaScript promises (Q) from here.

We can make our user resource look like this:

function User (asyncLogin, q) {
    var loginStatus;

    return {
        isLoggedIn: function () {
            if (!loginStatus) {
                loginStatus = q.defer();
                asyncLogin(loginStatus);
            }
            return loginStatus.promise;
        }
    };
}

And let's use a timeout for our asynchronous login functionality:

function asyncLogin (deferred) {
    setTimeout(function () {
        if (confirm('Do you want to be logged in?')) {
            deferred.resolve(true);
        } else {
            deferred.reject('User did not authorize');
        }
    }, 100);
}

Let our UI look like this: a single "Login" button whose click handler is returned by a LoginHandler factory:

function LoginHandler (user) {
    return function () {
        user.isLoggedIn().then(function () {
            alert('Successfully Logged in');
        }, function (reason) {
            alert('Error: ' + reason);
        });
    }
}

Now if we initialize our mini app:

    var user = User(asyncLogin, Q),
        onClickHandler = LoginHandler(user);

    $('#loginBtn').click(onClickHandler);

If we then click the Login button and log in successfully, every subsequent click will alert "Success…" immediately.
But if we fail to log in the first time, all further clicks will alert the error (also immediately).

This is because once we have created the loginStatus promise (which can be resolved or rejected only once), isLoggedIn() can no longer perform another asyncLogin attempt.
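To see why, here is a minimal sketch of the "settles once" behavior, using native Promises instead of Q so it is self-contained (loginOnce and attempts are invented names for illustration):

```javascript
// A settled promise keeps its value forever: every later handler
// observes the same rejection, so no new login attempt can happen.
let attempts = 0;

function loginOnce() {
    attempts += 1;
    return Promise.reject('User did not authorize');
}

// Cache the promise the way the first User implementation does:
const cached = loginOnce();
cached.catch(() => {});            // silence the initial rejection

// Two later "clicks" reuse the cached promise; loginOnce never runs again.
Promise.all([
    cached.catch(reason => reason),
    cached.catch(reason => reason)
]).then(reasons => {
    console.log(attempts);         // still 1
    console.log(reasons);
});
```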

Let's fix this.

First, let's modify the async login function to create its own deferred object (q is assumed to be in scope here; in the complete code below it is injected via an AsyncLogin factory):

function asyncLogin () {
    var deferred = q.defer();
    setTimeout(function () {
        if (confirm('Do you want to be logged in?')) {
            deferred.resolve(true);
        } else {
            deferred.reject('User did not authorize');
        }
    }, 100);
    return deferred.promise;
}

And let's update our API function like this:

function User (asyncLogin, q) {
    var loginStatus;

    return {
        isLoggedIn: function () {
            if (!loginStatus) {
                loginStatus = q.defer();
                asyncLogin().then(function (result) {
                    loginStatus.resolve(result);

                }, function (reason) {
                    // Notify subscribers about the failure:
                    loginStatus.reject(reason);

                    // And destroy the deferred object for this failed attempt:
                    loginStatus = null;
                });
            }
            return loginStatus.promise;
        }
    };
}

Now, if we start with an unsuccessful login, then every time we click the Login button for another attempt, user.isLoggedIn() will create a new deferred and perform another async login.
And once we succeed in logging in, user.isLoggedIn() will immediately return the resolved promise without making an async login call.
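The same pattern can be sketched with native Promises instead of Q's deferreds (an assumption made only to keep the example self-contained); the flakyLogin stub below fails on the first attempt and succeeds on the second, so we can watch the cache reset and then stick:

```javascript
// Sketch of the caching-with-retry pattern using native Promises.
function User(asyncLogin) {
    let loginStatus = null;                 // cached promise, as in the article

    return {
        isLoggedIn: function () {
            if (!loginStatus) {
                loginStatus = asyncLogin().catch(reason => {
                    loginStatus = null;     // failed: allow another attempt
                    throw reason;           // still notify current subscribers
                });
            }
            return loginStatus;
        }
    };
}

// Stand-in login: rejects on the first call, resolves afterwards.
let calls = 0;
const flakyLogin = () => {
    calls += 1;
    return calls === 1
        ? Promise.reject('User did not authorize')
        : Promise.resolve(true);
};

const user = User(flakyLogin);

user.isLoggedIn()
    .catch(() => user.isLoggedIn())              // first click fails, retry
    .then(() => user.isLoggedIn())               // next click: cached, no new call
    .then(result => console.log(calls, result)); // 2 true
```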

Here is a JSFiddle (http://jsfiddle.net/UYb6e/); all the code together with the HTML is below. Later I will also provide a GitHub link to an AngularJS app using the described pattern.

CONCLUSION

As we can see, promises can handle the described scenario. In terms of reactive programming, what we did was merge two event streams: UI interactions (button clicks to initiate login) and authorization requests (e.g. an AJAX call to a server API). We did have to think of a way of "merging" the streams: we created one series of promises for the login button action and another series of promises for the async call.

<button id="loginBtn">Login</button>

<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.js"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/q.js/0.9.2/q.js"></script>

<script>
console.log('Started');
$(function () {
    console.log('Loaded. Initializing');
    var asyncLogin = AsyncLogin(Q),
        user = User(asyncLogin, Q),
        onClickHandler = LoginHandler(user);

    $('#loginBtn').click(onClickHandler);
});

function User(asyncLogin, q) {
    console.log('User initialized');
    var loginStatus;

    return {
        isLoggedIn: function () {
            if (!loginStatus) {
                loginStatus = q.defer();
                asyncLogin().then(function (result) {
                    loginStatus.resolve(result);

                }, function (reason) {
                    // Notify subscribers about failure:
                    loginStatus.reject(reason);

                    // And destroy the deferred object for this failed attempt:
                    loginStatus = null;
                });
            }
            return loginStatus.promise;
        }
    };
}

function AsyncLogin(q) {
    console.log('AsyncLogin initialized with q: ' + typeof q);
    return function () {
        var deferred = q.defer();
        setTimeout(function () {
            if (confirm('Do you want to be logged in?')) {
                deferred.resolve(true);
            } else {
                deferred.reject('User did not authorize');
            }
        }, 100);
        return deferred.promise;
    }
}

function LoginHandler(user) {
    console.log('LoginHandler initialized with user: ' + typeof user);
    return function () {
        user.isLoggedIn().then(function () {
            alert('Successfully Logged in');
        }, function (reason) {
            alert('Error: ' + reason);
        });
    }
}
</script>

Kooboodle Online UI Architecture

Today we are going to look at the problems we are experiencing with our current Kooboodle website (the UI part) and how they can be addressed. We will also give an overview of a new conceptual architecture we are proposing for the website and see whether we should rebuild the UI as a SPA (Single Page Application). This article covers only the web client-side UI and the related hosting server side; it does not target server API components or desktop/mobile apps.

Why do we need a new UI architecture?

First of all, the current website was built on the OpenPhoto open-source framework and customized for our needs. This served as a good starting point for creating a photo sharing platform. But once the project was branched off and we started to implement new features, over time we lost the ability to pull framework updates from the original source, which was still being actively developed.

Problems we are facing right now that imply high cost

Here is a list of problems that make maintenance cost high and user experience poor.

  • Performance issues
    • Server performance issues – not directly related to the client-side UI, but a well-organized client side can help decrease server-side load (mostly by reducing the number of requests and optimizing resource loading).
    • UI performance problems, mostly on pages where a large number of photos is shown or involved in a single operation (e.g. sharing photos). It takes a lot of resources to run performance tests, and the UI components do not scale to handle the required amount of data (pictures).
  • Buggy functionality due to the lack of UI unit tests, caused by the immaturity of the original framework and the inconsistency of the UI modules.
  • Slow development due to the non-modularity of the UI components, which makes implementing new features very slow.
  • Poor UX due to the impossibility of upgrading the framework and UI dependency libraries. Consuming the old version of the original framework, which is tightly coupled with outdated versions of some libraries (e.g. jQuery!), makes it difficult to develop according to modern standards.
  • Fragility of integration test automation. QA often faces broken tests when new features are introduced or bugs are fixed. I believe this happens because the current website's organization is not suited to automation (e.g. there is no CSS class naming convention, and the HTML structure of components is not generic enough to reliably reference UI elements from automation scripts). This keeps us away from a continuous integration pipeline and makes the release process painful and time consuming.

What’s not possible with current architecture

The current state of the UI components used in the website (HTML templates/pages, CSS and JavaScript) does not allow us to easily execute any of the following changes if they were required by the PO:

  • Rebranding
  • Responsive design
  • User-specific customizations
  • Widget based experience
  • Real-time update experience

To solve all the listed problems and to prepare for possible future features, it is essential to reevaluate the existing architecture and use appropriate tools and libraries to rebuild the application.

Goals to be achieved by the new architecture

Let's now look at the aspects we should consider when proposing an architecture. I would like to refer to these design considerations as the goals the new architecture should target. Achieving these goals should keep maintenance and development costs low.

  1. Modularity. As a rich web application the UI should be built as a set of reusable, self-contained modules. This will give us important benefits such as:
    • Reusability. Why build reusable modules? Because we are already looking at different platforms, like Facebook or Kik (a smartphone messenger with a built-in browser), that we could develop UI apps for (a Facebook canvas app, a Kik card). Once we create a good module, like a gallery, a lightbox, or a comments widget, we can reuse it in any of our web apps.
    • Extensibility
    • Maintainability
    • Testability
  2. Scalability. By scalability in general we mean the "quality of a software system to satisfy goals when characteristics of the execution environment and the system design vary over expected ranges" [4]. At the simplest level, scalability is about doing more of something: responding to more user requests, executing more work, or handling more data [3]. The last point is the most relevant to the UI, since the main job of our software is to serve images, the number of which can be huge.
    • Capability to handle a potentially infinite stream of images.
  3. Robustness / Reliability. This includes:
    • Fault tolerance
    • Non-blocking UI experience
  4. Customizability. The system should be able to serve customized pages based on criteria such as the user's location, previous experience, and preferences, which will allow us to provide:
    • User specific experience
    • A/B testing

Now let's look at the concepts that could allow us to achieve the listed goals.

Design concepts

  • Loose coupling of modules. Modules should not have many dependencies and should be self-contained, so that they are testable and reusable. A module should also expose only the functionality it is built for via a "public API"; this avoids problems when the module's internal functionality has to be updated.
  • Functional reactive approach. See [5] for an explanation of the paradigm using the Flapjax language as an example.
    • Handle events as a stream (e.g. scrolling, mouse clicks, browser window resizing, etc.). What is the main difference between a web app and a "native" app? In the native case it is the OS that handles events and resources for the developer. The browser does this job much worse, so we have to help it by organizing events into streams that can be managed much more effectively. This will allow us to build a "non-blocking" UI experience.
    • Handle data as a stream. If we can abstract data input into a stream that we know how to manage, it will be very helpful for UI components that require fast rendering of potentially unbounded data (like scrolling through a gallery or comments).
  • Use a modern, robust framework that is both actively supported and mature enough for large, scalable applications.
  • Decouple from the server side, e.g. use a REST API for communication. Some exceptions may apply, like rendering some data on the server side to avoid extra HTTP requests, or serving the index page separately so the whole app is not loaded before the user is authorized.
  • Graceful degradation over progressive enhancement.
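To make the "streams" idea concrete, here is a toy sketch (not RxJS, and much simpler) of events flowing through a stream with filter/map operators; the Stream helper and the scroll threshold are invented for illustration:

```javascript
// A stream is just a subscription list; operators like map and filter
// return new streams built on top of the source stream.
function Stream() {
    const listeners = [];
    return {
        push: value => listeners.forEach(fn => fn(value)),
        subscribe: fn => listeners.push(fn),
        map: function (project) {
            const out = Stream();
            this.subscribe(value => out.push(project(value)));
            return out;
        },
        filter: function (predicate) {
            const out = Stream();
            this.subscribe(value => { if (predicate(value)) out.push(value); });
            return out;
        }
    };
}

// Simulated scroll events: only react to positions past a threshold,
// the way an infinite gallery would trigger loading the next page.
const scrolls = Stream();
const loadMore = scrolls
    .filter(pos => pos > 100)
    .map(pos => 'load page for offset ' + pos);

const log = [];
loadMore.subscribe(msg => log.push(msg));

[10, 50, 150, 200].forEach(pos => scrolls.push(pos));
console.log(log); // only the positions past the threshold got through
```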

The goals we discussed and the concepts we would like to use can be applied to both multi-page and single-page applications. Let's take a look at these two approaches and see which one fits our needs better.

Single-Page Application vs Multi-Page Website

1. Resource loading

The main difference between a SPA and a regular website is resource loading. In a multi-page website, every navigation makes the browser reload the whole page, including HTML, CSS and JavaScript. The specifics of our application require a rich user experience, which implies a lot of JavaScript code. With the multi-page approach, the browser not only has to reload all the scripts (which is not that crucial thanks to caching) but also to parse and re-evaluate them, which is quite time consuming.

2. HTML caching

Also, the multi-page approach usually implies rendering user-specific data on the server side, which makes it impossible for the browser or a CDN to cache the pages. It's the opposite in a SPA, where the application usually loads HTML templates, which can be cached successfully regardless of user content, and then injects user-specific information into the DOM. These templates can even be reused by multiple views.

3. Server load, non-blocking UX and offline experience

With a regular website, every time the user navigates from page to page the browser submits a request to the server. To serve the request, the server creates a process in memory and performs all the necessary actions to finally render the output. These server-side actions can take a long time in some cases, and during this period the user is completely blocked on the browser's side.

With a SPA, all actions are executed asynchronously, which means the user can continue to use the application in the browser even when a heavy operation has been initiated, e.g. uploading a large number of photos, sharing them with other users, or performing some non-trivial search. While the browser is waiting for the async operation to be resolved, the user is NOT BLOCKED from the UI.

Also, mobile users often experience connectivity loss, which is fatal for multi-page websites, whereas a SPA can handle this problem with fairly trivial logic. Moreover, modern browsers allow us to use Session and Local storage to store a lot of information, including images, which lets the user keep enjoying the application even with an unreliable internet connection.
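As a hypothetical sketch of that offline idea, the helper below falls back to a cached copy when the network call fails; the storage object is injected so the same code could use window.localStorage in a browser or a plain object elsewhere (CachedFetch and fetchRemote are made-up names, not part of any real API):

```javascript
// Serve fresh data when online, cached data when the network fails.
function CachedFetch(fetchRemote, storage) {
    return function (key) {
        return fetchRemote(key)
            .then(data => {
                storage[key] = JSON.stringify(data);  // refresh the cache
                return data;
            })
            .catch(() => {
                if (key in storage) {
                    return JSON.parse(storage[key]);  // offline: use cache
                }
                throw new Error('offline and no cached copy of ' + key);
            });
    };
}

// Simulate a connection that works once, then drops.
let online = true;
const fetchRemote = key =>
    online ? Promise.resolve({ photos: [key + '.jpg'] })
           : Promise.reject(new Error('network down'));

const storage = {};                          // stand-in for localStorage
const getPhotos = CachedFetch(fetchRemote, storage);

getPhotos('album1')
    .then(() => { online = false; return getPhotos('album1'); })
    .then(data => console.log(data.photos)); // served from the cache
```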

High level diagram of a proposed SPA architecture

[Figure: high-level diagram of the proposed SPA architecture]

The main idea is that every Page (or Page App) is a small (or big) SPA. We configure pages to use certain modules.

For example, a "semi-private" page could serve the case where a Kooboodle user shares photos (or an album) with a non-Kooboodle user (the link has to be unique and known only to this person; that's why we call it "semi-private": it does not require authorization for viewing). This page could include the "gallery", "lightbox", "comments", and "sign-in/sign-up" modules.

Other examples of a Page could be a Facebook Canvas App or a Kik card.

Regardless of the purpose of the Page, it is a SPA configured to include the core module and one or more other modules.

This way we can develop modules and reuse them for different applications.

Development Patterns

  • Module
  • Dependency Injection
  • Mediator
  • Facade

Technology stack

  • AngularJS – as the main application framework
  • RequireJS – as a module and dependency loader
  • RxJS – as a functional reactive library
  • Bootstrap – as a responsive design framework
  • SASS (or LESS, TBD) – as an object-oriented CSS framework
  • Protractor – as an e2e testing framework for development (not for QA)

Literature:

  1. Patterns For Large-Scale JavaScript Application Architecture (url)
  2. Software design, wikipedia article (url)
  3. Scalability Principles (url)
  4. Software System Scalability: Concepts and Techniques (keynote talk at ISEC 2009) (url)
  5. Flapjax: A Programming Language for Ajax Applications (pdf, url)

HTTP Streaming

http://ajaxpatterns.org/HTTP_Streaming

http://infrequently.org/2006/03/comet-low-latency-data-for-the-browser/

http://en.wikipedia.org/wiki/Comet_(programming)

http://dou.ua/lenta/articles/asinhronnyiy-http-no-ne-ajax/

http://stackoverflow.com/questions/11077857/what-are-long-polling-websockets-server-sent-events-sse-and-comet

http://stackoverflow.com/questions/10028770/html5-websocket-vs-long-polling-vs-ajax-vs-webrtc

Comparison of different techniques for real-time communication

  • AJAX – opens a connection to the server on each request, sends the request (with the request method GET, POST, PUT, DELETE, etc., arguments in the URL, and possibly extra data), and gets a response from the server; then the connection closes. It is a single request > response cycle per AJAX call. Supported in all major browsers.
  • Long polling – opens a connection to the server like AJAX does, but keeps the connection alive for some time (not too long, though); while the connection is open, the client can receive data from the server. The client has to reconnect periodically after the connection is closed due to timeouts. On the server side it is still treated as an HTTP request, the same as AJAX. Supported in all major browsers.
  • WebSockets – create a TCP connection to the server and keep it open as long as needed; the server or the client can easily close it. Communication is bidirectional, so server and client can exchange data in both directions at any time. It is very efficient if the application requires frequent messages. WebSockets use data framing that includes masking for each message sent from client to server, so the data is obscured (note that masking is not real encryption). support chart (good)
  • WebRTC – a peer-to-peer type of transport; it is transport-agnostic and can use UDP, TCP or even more abstract layers. By design it allows data to be transported in both reliable and unreliable ways. It is generally used for high-volume data transfer such as video/audio streaming, where reliability is secondary and a few frames or a reduction in quality can be sacrificed in favour of response time and at least delivering something. Both sides (peers) can push data to each other independently. While it can be used completely independently of any centralized server, it still requires some way of exchanging endpoint data, and in most cases developers still use centralized servers to "link" peers. The server is needed only to exchange the data essential for establishing the connection; once the connection is established, it is no longer required. support chart (medium)
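The long-poll reconnect loop described above might be sketched like this, with the HTTP call replaced by an injected poll stub so the example is self-contained (longPoll and the stubbed responses are illustrative, not a real API):

```javascript
// Each request stays open until the server has data (or times out);
// the client processes the message and immediately reconnects.
function longPoll(poll, onMessage, rounds) {
    if (rounds === 0) return Promise.resolve();
    return poll()
        .then(message => { if (message !== null) onMessage(message); })
        .catch(() => {})                 // timeout or error: just reconnect
        .then(() => longPoll(poll, onMessage, rounds - 1));
}

// Simulated server: responds with data on some rounds, "times out"
// (null) on others.
const responses = ['hello', null, 'world'];
let round = 0;
const poll = () => Promise.resolve(responses[round++]);

const received = [];
longPoll(poll, msg => received.push(msg), responses.length)
    .then(() => console.log(received)); // [ 'hello', 'world' ]
```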

Advantages:

The main advantage of WebSockets for the server is that after the handshake it is no longer an HTTP request but a proper message-based communication protocol. This lets you achieve huge performance and architecture advantages. For example, in Node.js you can share the same memory between different socket connections, so they can access shared variables. You don't need to use a database as an exchange point in the middle (as with AJAX or long polling and, for example, PHP); you can store data in RAM, or even republish it between sockets straight away.
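The shared-memory point can be illustrated with a minimal sketch: all connections live in one process, so a message can be republished straight from RAM with no database in between (Hub and the socket objects here are stand-ins, not a real WebSocket API):

```javascript
// A hub keeps every open connection in process memory and rebroadcasts
// messages to all peers except the sender.
function Hub() {
    const sockets = new Set();
    return {
        connect: socket => sockets.add(socket),
        disconnect: socket => sockets.delete(socket),
        broadcast: (from, message) => {
            sockets.forEach(socket => {
                if (socket !== from) socket.send(message);
            });
        }
    };
}

// Fake sockets that just record what they receive.
const makeSocket = () => {
    const inbox = [];
    return { inbox, send: msg => inbox.push(msg) };
};

const hub = Hub();
const alice = makeSocket(), bob = makeSocket(), carol = makeSocket();
[alice, bob, carol].forEach(s => hub.connect(s));

hub.broadcast(alice, 'photo uploaded');
console.log(bob.inbox, carol.inbox, alice.inbox); // everyone but the sender
```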

Security considerations

People are often concerned about the security of WebSockets. In reality it makes little difference, or even makes WebSockets the better option. First of all, with AJAX there is a higher chance of a MITM attack, as each request is a new TCP connection traversing internet infrastructure. With WebSockets, once the connection is established it is far more challenging to intercept, with frame masking additionally enforced when data is streamed from client to server, plus optional compression, which requires more effort to probe the data. Both techniques support plain and encrypted transports (HTTP/HTTPS, WS/WSS).