Earlier this year I ran a series of topcoder challenges to build a Node.js API for a potential topcoder blogging platform called "TopBlogger". I wanted to see how fast and well the community could design and build an API given only a rough set of requirements. We were very happy with the results but I wanted to do more investigation on how to build and iterate better and faster.
TL;DR We rewrote our Express API using LoopBack in half the time with 75% less code.
We've been talking with StrongLoop for a while, but I'd never really had an opportunity to get my hands dirty with their services. I typically build APIs using frameworks like Express, actionhero or hapi, but I thought this would be a great chance to build something more relevant than 'hello world'. That's typically the best way to learn! What would be the effort to rewrite the TopBlogger API using LoopBack, and how could that directly impact the topcoder community?
So, here's a brief overview of how we rolled our own API using Express and the process I went through using LoopBack. Next week I'll go step by step on how I built the API and show how fast and easy it is using LoopBack.
Rolling Our Own API
The first challenge I launched was to design the API from a set of very vague, "customer-like" requirements and generate documentation using Swagger. This documentation would be used by subsequent developers during the build challenges down the road.
The requirements were fairly straightforward given everyone's familiarity with blogging applications (it's not rocket surgery). When blogging, anyone can typically view blogs and comments, but any meaningful interaction requires authentication. Once logged in, users can create new blogs, edit blogs they authored, delete unpublished blogs they authored, upvote or downvote blogs they did not author, comment on blogs, and like or dislike comments they did not author. No one is allowed to delete comments. That would just be crazy.
Given that people want to find and read blog entries, a fair amount of work was needed for discovery. We needed endpoints for keyword search, newest blogs, trending blogs, most popular blogs and blogs by author and tags. All of this of course with pagination. We also wanted a permalink so that people could access a blog by the author username and slug (e.g., http://topblogger.com/jeffdonthemic/hello-world).
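Generating that slug portion of the permalink from a blog title is a small but representative piece of the custom code involved. Here's a minimal sketch of such a helper; this `slugify` function is hypothetical, not the code from the challenge:

```javascript
// Turn a blog title like "Hello World!" into a URL-safe slug like "hello-world".
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation and other special characters
    .replace(/[\s-]+/g, '-');     // collapse whitespace and dashes into single dashes
}
```

A real implementation would also need to guarantee uniqueness per author, typically by appending a counter when a slug collides.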
Over the course of five days, four competitors designed the models, security, parameters, REST endpoints and HTTP status codes for the API. You can view the results of their work by pasting the YAML file directly into the Swagger Editor. Swagger was essential to the entire process as not only does it allow participants to document and collaborate on the API but the generated UI allows you to test without setting up a separate server!
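To give a feel for what the participants were producing, a single endpoint in a Swagger 2.0 YAML document looks roughly like this (a simplified, hypothetical sketch, not the actual TopBlogger definition):

```yaml
paths:
  /blogs/{id}:
    get:
      summary: Get a single blog by id
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        '200':
          description: The requested blog
        '404':
          description: Blog not found
```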
The second challenge was to implement OAuth2 authentication (using JWT) and security so that it could eventually be used by the web app and by developers adding routes and unit tests to the API. This challenge was completed in two days with three participants.
Our final challenge was to actually build the API based upon the design. I entered roughly 15 GitHub issues which corresponded to the functionality specified by the Swagger docs. I added a short description and perhaps some notes for each issue and then referenced the endpoint in the Swagger document. Participants simply picked out an issue they wanted to work on and started writing code. We would document assumptions and discuss any questions in the GitHub issue and then I'd receive a pull request for each issue (i.e., REST endpoint) with their code and unit tests. The challenge was completed in ten days with eleven participants submitting pull requests. See the TopBlogger GitHub repo for the complete code.
Using any framework is somewhat risky and requires some investment. You are buying into the framework's methodologies, conventions and restrictions in return for an anticipated productivity gain. There's a learning curve at first but the overall goal is that a large portion of your code base will be eliminated as the framework provides this functionality. The result should be a smaller, more maintainable code base with less technical debt.
After installing LoopBack, I fired up their CLI, created my app and added the database connector for MongoDB. I then started creating my models, relations and security. The CLI walks you through a series of questions for each one which results in the necessary code being added to your app. Of course you are free to add the code manually but the CLI is super simple and fast. The LoopBack documentation is very well done and contains sample code throughout. As long as you RTFM you'll be fine!
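The CLI's questions boil down to a JSON model definition file. Here's a simplified sketch of what a Blog model might look like in LoopBack 2.x; the field names are my assumptions, not the actual TopBlogger schema:

```json
{
  "name": "Blog",
  "base": "PersistedModel",
  "idInjection": true,
  "properties": {
    "title": { "type": "string", "required": true },
    "slug": { "type": "string" },
    "content": { "type": "string" },
    "createdDate": { "type": "date" }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
```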
Since most of the API's CRUD functionality was handled by LoopBack, I concentrated on writing mocha tests for anything that was outside of this scope or required security. I'd write a failing test and then modify the code or ACLs to make it pass. In no time all of my tests were running successfully and my API was complete! However, in the spirit of full disclosure, I did refactor a few endpoints based upon new assumptions.
Why LoopBack is Better
There are many features of LoopBack that make it awesome, but here were a few that made the strongest argument during my rewrite.
No Custom CRUD
The original CRUD code was no longer needed as LoopBack implemented that functionality. Defining the models implements the REST endpoints automatically. No need to write handlers for each endpoint! Extending models was simple and logical as the code resides in individual model JS files, all neat and tidy.
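To give a sense of what you get for free, defining a Blog model exposes a standard set of REST routes along these lines (the exact paths depend on your model names and configuration):

```
POST    /api/Blogs          create a blog
GET     /api/Blogs          list blogs (with filters)
GET     /api/Blogs/{id}     fetch a blog by id
PUT     /api/Blogs/{id}     update a blog
DELETE  /api/Blogs/{id}     delete a blog
GET     /api/Blogs/count    count blogs
```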
Easily Create Relations Between Models
One of the great things about Rails is the functionality to easily implement and use relations between models. LoopBack offers similar functionality. The framework offers BelongsTo, HasMany, HasManyThrough, HasAndBelongsToMany, Polymorphic and Embedded relations that allow you to connect, query, expose endpoints and perform all sorts of nifty functions with models.
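Relations are declared right in the model JSON. A sketch for the blog domain might look like this (model and foreign key names are my assumptions):

```json
"relations": {
  "author": {
    "type": "belongsTo",
    "model": "User",
    "foreignKey": "authorId"
  },
  "comments": {
    "type": "hasMany",
    "model": "Comment",
    "foreignKey": "blogId"
  }
}
```

Declaring these also exposes related-model endpoints automatically, such as GET /api/Blogs/{id}/comments.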
Query All the Things!
With our original API, a considerable amount of time was spent writing and testing search and pagination logic. All of that code became redundant, as LoopBack provides a simple, efficient and intuitive way to query data. It supports filters for where, fields, include (results from related models), limit, skip and order. Finding data by relations was also a snap!
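A filter like the following can be passed to Blog.find() in code or URL-encoded as the filter query parameter on GET /api/Blogs, which is all the "newest blogs by tag, page 3" endpoint really needs (the field names here are assumptions):

```json
{
  "where": { "tags": "node.js" },
  "order": "createdDate DESC",
  "limit": 10,
  "skip": 20,
  "include": "author"
}
```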
Data Integration Made Easy
LoopBack currently supports Oracle, SQL Server, MongoDB and MySQL databases. Connecting to MongoDB was a breeze: run the datasource connector to generate your code and then add your credentials. You can even hold models in different databases! For instance, you could have users in MySQL, blogs in SQL Server and comments in MongoDB, all with simple configuration. Have you ever tried to connect Mongoose to multiple databases? Not exactly a fun task.
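The multi-database split is just configuration: each datasource (with its connector and credentials) is defined in server/datasources.json, and server/model-config.json points each model at one. A sketch of that mapping, with datasource names I've made up for illustration:

```json
{
  "User":    { "dataSource": "mysqlDs" },
  "Blog":    { "dataSource": "mssqlDs" },
  "Comment": { "dataSource": "mongoDs" }
}
```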
Built-in Authentication and Authorization
LoopBack authenticates apps, users and devices using local credentials or social logins (via Passport). This eliminated a large chunk of our original code and the mocha tests covering it. It was also simple to authorize access to protected resources (models) with granular ACLs, which worked for both built-in CRUD and custom methods, with standard and custom roles.
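ACLs are declared in the model JSON as well. Here's a sketch matching the blog rules described earlier: deny everything by default, allow public reads, allow authenticated users to write, and let owners delete their own records. The role names $everyone, $authenticated and $owner are built into LoopBack:

```json
"acls": [
  {
    "accessType": "*",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "DENY"
  },
  {
    "accessType": "READ",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "ALLOW"
  },
  {
    "accessType": "WRITE",
    "principalType": "ROLE",
    "principalId": "$authenticated",
    "permission": "ALLOW"
  },
  {
    "property": "deleteById",
    "principalType": "ROLE",
    "principalId": "$owner",
    "permission": "ALLOW"
  }
]
```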
Next week we'll get into the specifics of building the API with LoopBack, but in conclusion, the speed and efficiency we achieved was quite impressive. Given the initial learning curve, it still only took me 2-3 days to rebuild and test the API using LoopBack. Most of that was figuring out how to get my mocha tests to run correctly with the ACLs.
My custom code decreased from 1,272 lines to 314, and my mocha tests went from 1,599 lines to 380. A savings of nearly 75%! This will undoubtedly make the API easier to grok, debug, maintain and enhance.
Follow-up article: Building the TopBlogger API with LoopBack