I recently wrote an application called airstrip.io. It is a daily aggregator of digital nomad stories on the Internet.

Do you ever wonder how some applications are conceptualized and developed?

I often do when I see interesting apps. In case anyone feels the same way about my app, here is the story of how I made airstrip.io. It is also open source, and you can visit this GitHub repo to see the code.

Why?

I like reading digital nomad literature of any kind. I think the fad of traveling permanently and working remotely is fascinating, and I want to stay on top of what goes on in the scene. But here is the problem.

There is too much inefficiency in the process of discovering and consuming information about digital nomad culture. Two main reasons stand out.

  1. There are so many sources from which you can obtain information. People write everywhere, and your bookmark list keeps growing.

  2. The information is uncategorized. Is it a blog article, podcast, forum post, or news? I want items categorized so I can get a bird's-eye view of a sorted list.

Because of this inefficiency, some valuable information is always left undiscovered, to the detriment of both content creators and consumers.

I wanted to solve the problem by aggregating digital nomad content in one place every day, in the form of a digest.

Choice: Meteor vs. Ruby on Rails

From the get-go, I was confronted with a choice of technology stack. Rails would have been my default choice because I know my way around it. But I had recently built a todo app with Meteor and was impressed by how lightweight it felt. And I wanted to try something new. So I went with Meteor.

It was hard because many design patterns and language features of Ruby were not applicable in the JavaScript/Meteor world. Frustrated by the steep learning curve, I seriously went to my terminal and typed rails new airstrip a couple of times (in case you are not familiar, that is the command for creating a new Rails application).

Looking back, I am quite happy with my choice. Although it definitely gave me more headaches than necessary while building the prototype, I now have a powerful tool under my belt. The technology war between these frameworks is beyond the scope of this blog post, so let me just say Meteor was a pleasure to work with.

Conceptualizing

Naming

I was not sure what to name my app. I toyed around with namemesh for a while, trying combinations that included ‘nomad’. It seemed natural to name my app ‘nomad[SOMETHING]’ or ‘[SOMETHING]nomad,’ but nothing really clicked.

Then crater.io - where Meteor news lands - gave me an idea. I was browsing the website to read something interesting about Meteor, and suddenly realized that I did not have to use the term ‘nomad’ to name a digital nomad app, just as crater.io did not use ‘meteor’ in its name.

It was just an arbitrary constraint that I imposed on myself.

I thought the word play with ‘crater’ and ‘where Meteor news lands’ was pretty cool. Plus, there was a similarity: my app was where nomad stuff lands, if you will.

So I asked myself, “where do nomad stories land?” and went with ‘airstrip.io’.

Giving personality

Because no one wants to use a boring app, I decided to come up with a metaphor or lingo that resonates with users and makes sense to them. Hopefully that would give the app some personality.

Since the app is called airstrip, I wanted to go with a flight metaphor. The daily news containers would be called ‘flights,’ and they would contain the items aggregated on that day. And they would ‘land’ on the ‘airstrip’.

Designing

I made a mockup using a trial version of Sketch 3. The first mockup looked like this:

After building a boilerplate structure with HTML, I realized infinite scroll was not a good idea because it makes it hard to navigate to a specific date. It also makes it hard to share a link to a particular day.

So I made another mockup:

I was satisfied with it and got to work.
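
Implementation-wise, the second design meant one page per daily flight, so every date gets its own URL that is easy to link to and share. I won't swear to the exact routing code, but with a router like Iron Router the idea is roughly this (collection and route names are illustrative):

Flights = new Mongo.Collection('flights');

Router.route('/flights/:date', {
  name: 'flight',
  data: function () {
    // Each daily flight lives at its own URL, e.g. /flights/20151210,
    // which solves both the navigation and the link-sharing problem.
    return Flights.findOne({ date: this.params.date });
  }
});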

Challenges and code debts

I wanted to get it out as soon as I possibly could, because I wanted to use it myself. Also, I did not want to spend time perfecting an app that nobody wants to use. I did not even know whether people wanted it.

So I spent a week making a barely functioning app that crawled RSS feeds once a day, persisted the data, and displayed it. When I posted it on Reddit, I saw that some people liked the idea.

But the code quality was dismal, with lots of technical debt. Here are the challenges that ensued after the quick release.

Data structure

The data structure was badly designed because I am used to SQL-based databases, whereas Meteor uses MongoDB, which is NoSQL. I needed to restructure the data if I wanted to maintain the app. Here is how I used to store the flight data in MongoDB.

{
  _id: 'mongoDocumentId',
  date: '20151210',
  items: [
    {
      title: 'someTitle',
      description: 'rawDescription',
      guid: 'someGuid',
      link: 'link',
      author: 'author',
      hidden: false
    },
    {
      title: 'someTitle',
      description: 'rawDescription',
      guid: 'someGuid',
      link: 'link',
      author: 'author',
      hidden: false
    },
    ...
  ]
}

With this structure, updating multiple nested documents is a pain because MongoDB does not support it yet. Fetching an individual item is also inconvenient. Moreover, I thought it made more sense for items to have an _id of their own because they are documents in and of themselves, and I found and edited them frequently.

So I made multiple migrations to make the data structure more relational and easier to work with.
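
I won't reproduce the migrations themselves, but the resulting shape is roughly the following: a separate Items collection whose documents reference their flight, so each item has its own _id and can be queried and updated directly (field names below are illustrative):

Flights = new Mongo.Collection('flights');
Items = new Mongo.Collection('items');

// A flight document now only carries its date.
var flightId = Flights.insert({ date: '20151210' });

// Each item references its flight and gets its own _id.
Items.insert({
  flightId: flightId,
  title: 'someTitle',
  description: 'rawDescription',
  guid: 'someGuid',
  link: 'link',
  author: 'author',
  hidden: false
});

// Updating a single item no longer touches the whole flight document.
Items.update({ guid: 'someGuid' }, { $set: { hidden: true } });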

Dumb aggregating algorithm

One of the complaints about the app was that it only fetched items from two sources: Reddit and NomadForum.

Actually, the algorithm was designed to fetch from various sources I had hard-coded, but apart from those two, many of the sources were inactive.

Also, to avoid fetching old items, I just manually fetched 500+ items from all the sources along with their guids and marked them as hidden. That way, the algorithm did not fetch the old items, because there would already be a duplicate item with the same guid.

This was, of course, not the most efficient way, because the fetching algorithm had to iterate through all items in the database for every single item scraped from the feeds.
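
A saner approach would be to let MongoDB do the lookup by guid instead of iterating in application code. A minimal sketch of that idea, not the code I shipped at the time:

// Skip a feed entry if an item with the same guid is already stored.
var isDuplicate = function (entry) {
  return !!Items.findOne({ guid: entry.guid });
};

// A server-side index on guid keeps the lookup fast.
if (Meteor.isServer) {
  Meteor.startup(function () {
    Items._ensureIndex({ guid: 1 });
  });
}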

Unhelpful Tweets

I had automated Tweets so that every time a digest was created at 08:00 GMT, the app would make an API call to Twitter to post a Tweet of the following kind:

But I quickly discovered that not many people clicked on this link. It made more sense to tweet individual items, and feedback from users reinforced that idea.

Iterating

I worked in one-week sprints and iterated for three weeks. At the time of writing this, I am on the third sprint, with all the major improvements completed. Trello was helpful for keeping track of the sprints.

I usually set up four lists: Backlog, Todo, In Progress, and Done, and drag individual tickets to the appropriate list as work gets done.

At the end of the sprint, I create another Done list for the completed sprint, move all the Done tickets to that list, and rename the Todo list to reflect the next sprint.

Summary of sprints

Unit testing

I began by writing unit tests for the core functionality: the aggregator that fetches items from different sources. I wasn’t sure how to write and organize tests in Meteor, and there was a bit of a learning curve. But after I got comfortable, I kept writing tests as I iterated.
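
The actual tests live in the repo, but to give a flavor, a minimal Jasmine-style test of the aggregator's duplicate check might look like this (the isDuplicate helper is named here for illustration, and the real tests may be organized differently):

describe('aggregator', function () {
  it('skips feed entries whose guid is already stored', function () {
    Items.insert({ guid: 'someGuid', title: 'someTitle', hidden: true });

    var entry = { guid: 'someGuid', title: 'someTitle' };
    expect(isDuplicate(entry)).toBe(true);
  });
});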

Admin section

The admin section is where I can manually add, remove, and hide items. It was added to compensate for the dumb aggregator. Rather than investing time in improving the aggregator while end users saw no improvement, I thought it was wiser to hack together a solution for curating content until a better aggregator was ready.
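
Under the hood, curation like this boils down to a few Meteor methods that only an admin account may call. A rough sketch (method and field names are illustrative, not the exact ones in the repo):

Meteor.methods({
  hideItem: function (itemId) {
    // Only a logged-in admin may curate; isAdmin is a flag on the user document.
    var user = Meteor.user();
    if (!user || !user.isAdmin) {
      throw new Meteor.Error('not-authorized');
    }
    Items.update(itemId, { $set: { hidden: true } });
  },
  removeItem: function (itemId) {
    var user = Meteor.user();
    if (!user || !user.isAdmin) {
      throw new Meteor.Error('not-authorized');
    }
    Items.remove(itemId);
  }
});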

Individual Item Tweets

I released individual Tweets for the items in the first sprint. After a few more iterations, this is how it looks now:

I built an ItemTweetFactory that constructs the Tweet status using the item type, the author’s Twitter account, the flight number, and different sentence structures and wordings to make it sound more human.

For instance, this is the code that constructed a part of the above Tweet status:

var getOpening = function (item) {
  // Generic openings that work for any item.
  var possibleOpening = [
    "Check out:",
    "Recent:",
    "New:",
  ];

  // Source-specific variations so the Tweets do not all read the same.
  if (item.sourceName === 'Reddit') {
    possibleOpening.push(
      "Recently on Reddit:",
      "New on Reddit:",
      "Reddit:",
      "New item on /r/digitalnomad",
      "On Reddit:"
    );
  }

  if (item.sourceName === 'NomadForum') {
    possibleOpening.push(
      "Recently on nomadforum:",
      "Check out this thread on nomadforum.",
      "On nomadforum:",
      "New post on nomadforum",
      "New on nomadforum"
    );
  }

  // Type-specific variations.
  if (item.sourceType === 'blog') {
    possibleOpening.push(
      "New blog post:",
      "Meanwhile in a nomad blog:",
      "Blog post:",
      "Check out this post:"
    );
  }

  // Pick one at random (underscore's _.sample).
  return _.sample(possibleOpening);
};
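
The rest of the factory just glues the pieces together into a status. Simplified, and with illustrative field names (a real implementation also has to keep the status within Twitter's length limit):

var buildStatus = function (item, flightNumber) {
  var opening = getOpening(item);
  var credit = item.authorTwitter ? ' by @' + item.authorTwitter : '';

  // e.g. "New on Reddit: someTitle by @author (Flight #42) link"
  return opening + ' ' + item.title + credit +
    ' (Flight #' + flightNumber + ') ' + item.link;
};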

Better aggregator

I stopped hard-coding the source URLs and made a section in the admin interface where feeds can be added, removed, and edited. Each source has an order of importance (which ones to fetch from first) and an item limit per flight, and I have set an overall item limit on the flight itself.

Also, the aggregator rejects items older than 7 days as the picture shows.
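
Conceptually, a source is now just another document, and the age check is a simple date comparison. A rough sketch with illustrative field names:

Sources = new Mongo.Collection('sources');

// importance decides fetch order; itemLimit caps how many items this
// source may contribute to a single flight.
Sources.insert({
  name: 'NomadForum',
  feedUrl: 'http://example.com/feed.rss',
  importance: 1,
  itemLimit: 5
});

// Reject anything older than seven days.
var SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
var isTooOld = function (pubDate) {
  return (Date.now() - new Date(pubDate).getTime()) > SEVEN_DAYS_MS;
};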

Newsletter

I had garnered around 20 subscribers for the newsletter in the first week, but did not build the functionality until the end of the second week. Instead, I just sent the new subscribers confirmation emails using MailChimp, explaining that the newsletter would come soon.

When I finally put together the daily newsletter, it dawned on me that people might not want to receive an email every day. So I decided to ask the users.

According to the few responses I got, users wanted a weekly digest. So I extended the code to support both weekly and daily digests, and made a cron job to build and send the weekly digest emails.

// The schedule is written in Later.js text syntax via the synced-cron package.
SyncedCron.add({
  name: 'Schedule a weekly digest',
  schedule: function (parser) {
    return parser.text('at 7:00 am on Sat');
  },
  job: function () {
    Meteor.call('scheduleWeeklyDigest');
  }
});
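
The scheduleWeeklyDigest method then gathers the week's items and mails them out. The sketch below is simplified (Subscribers, buildDigestHtml, and the sender address are placeholders, not the actual code), but it captures the shape of the job:

Meteor.methods({
  scheduleWeeklyDigest: function () {
    // Collect visible items from the last seven flights.
    var flightIds = Flights.find({}, { sort: { date: -1 }, limit: 7 })
      .map(function (flight) { return flight._id; });

    var items = Items.find({
      flightId: { $in: flightIds },
      hidden: false
    }).fetch();

    // Send one email per weekly subscriber using Meteor's email package.
    Subscribers.find({ frequency: 'weekly' }).forEach(function (subscriber) {
      Email.send({
        to: subscriber.email,
        from: 'digest@example.com',
        subject: 'Your weekly digital nomad digest',
        html: buildDigestHtml(items)
      });
    });
  }
});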

It was released at the end of the second sprint. The first digest was sent at 07:00 GMT on the Saturday of that week. This is how it looked.

Miscellaneous

In the third sprint, I set up SSL using Cloudflare, which can give you a free SSL certificate. I also configured the application so that Google can properly crawl the dynamically generated HTML content.

Lessons

As a closing note, I would like to share the lessons I picked up along the way.

Set up continuous delivery

I worked from places with slow Internet connections. Usually, a slow connection is inconvenient but somehow tolerable. When it comes to deploying code, however, it is a massive bottleneck in productivity.

Especially at the beginning stage, where you need to iterate rapidly and constantly deploy features and fixes, I would argue that continuous delivery is a necessity. Also, if you are constantly on the road while building your product, you will need this (I am looking at you, nomads).

I usually set it up for Rails projects, but I did not know how to do it for Meteor apps. So I would turn on the personal hotspot on my iPhone and deploy my code. It was slow and data-intensive.

In the third week, I sat down and figured out how to set up continuous delivery with Meteor. I wrote a blog article about how I did it.

As you can see in the picture, when I push my code to a certain branch of the GitHub repo, the CI server runs the tests and deploys it instantly (compared to god-knows-how-long using my iPhone tethering).

Release quickly even at the cost of technical debts

You can have the most beautifully designed software with 100% test coverage and flashy patterns… and no users. As an alternative, you can have something that works and that people actually use.

It took me one week to release the MVP of airstrip.io. Usually, it would have taken me much longer. In my usual practice, the code would have had test coverage above 95% and many layers of abstraction, even for a simple app like this one.

But I tried to ship airstrip.io without my usual premature optimization. It turned out fine, and now I have user feedback and engagement.

Admittedly, we all justify premature optimization by thinking, “but what if this is the next Facebook?” But chances are it is not the next Facebook. And in the rare case that it does turn out to be the next Facebook, you will have much bigger problems than some code debt.

That’s it.

That is how airstrip.io was made. Learning a new framework is fun, and this app has been a hell of a great playground for me.

I am quite content with it because it solves my problem and saves me time. But I am even more satisfied that some people actually use it. In my opinion, seeing others use their product is every creator’s greatest delight.