Own Your Own Feedbin Data with 11ty

Introduction

As I mentioned a month ago, I have been getting back into RSS, and loving it. I used to be an RSS fanatic, checking it every day like a social network, but such fervor eventually burned me out. When you're reading anywhere from 50-100 articles a day, it’s tough to take a day off. You know it will mean more work for you later.

Back in the day we used to have Google Reader, but it was discontinued and deleted back in 2013. I still see people talk about it to this day, which goes to show what a beloved tool it was.

But in a way I’m glad it’s gone, because it has prompted other companies to fill the gap, specifically a company I have fallen in love with: Feedbin. Feedbin is truly a dream come true for those of us who grew up hooked on RSS and all the benefits it contains.

One of my favorite things about Feedbin is something I recently discovered: they have a well designed and well documented API. This will become relevant in a moment.

The Idea

Good old Uncle Dave Rupert had an idea to make a page on his website that fetches all of his favorited articles from Feedbin. His page is elegant in its simplicity. It fetches his Feedbin RSS feed using JavaScript, does some lightweight data crunching, and outputs the results. Robin Rendle also posted his own implementation which runs along the same lines.

There are a few problems with these implementations, though, and I want to explain briefly why I decided to go in a different direction with my own:

The Problems

Dave and Robin's implementations work well, but they have a few issues:

  • They are heavily dependent on Feedbin's API. If the API changes or is removed at some point, their pages will break, and all that data will be gone. I am a big fan of owning your own data, so this is a problem for me.
  • Since these are front-end JavaScript solutions, they are inherently pretty slow. A lot of things have to load properly in order for the page to display, which makes this fragile and slower than it should be.
  • They don't account for broken links. Thinking long term, it’s unlikely all of your links will exist as long as your site does, so we need a strategy for handling the ones that break.

Using 11ty to solve these issues

This website is built on 11ty, and as it turns out, the issues I’ve outlined above are actually reasonable to fix with 11ty, and I’m going to show you how.

First of all, here are my goals for this project:

  • I want copies of all my favorited articles. Not only the link, but also the article. This way if the link breaks someday, I can automatically display my own copy of the article (with attribution, of course!)
  • I want to "own my own data". We will fetch the data using the Feedbin API, but then we will save it in our project, so that our page isn't dependent on their API to display properly.
  • Our page should be as fast and performant as every other page on the website. This means no API calls on the page, we will only display the data we already collected from the API.
  • We should be gentle with the Feedbin API. No need to request everything every time: we will only request favorites that we don’t already have stored.

Does all of that sound good to you? Alright then. Let's jump into the details.

Goal #1: Creating copies of all our favorited articles

Initially I was going to use the RSS feed generated by Feedbin, as my forebears did, but the RSS feed is a little too limited. It has only a small subset of the information I want. So instead I looked into the Feedbin API.

Bingo! The Feedbin "Starred Entries" route returns a list of IDs, which you can then send to the Entries route. The Entries route returns everything we need, and more.
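To make that two-step flow concrete, here is a minimal sketch of the two request URLs involved. The routes are the documented Feedbin v2 routes; the helper function names are my own, purely for illustration:

```javascript
const API = 'https://api.feedbin.com';

// Step 1: this route returns an array of starred entry IDs.
function starredUrl() {
  return `${API}/v2/starred_entries.json`;
}

// Step 2: pass those IDs along to the entries route for the full data.
function entriesUrl(ids) {
  return `${API}/v2/entries.json?ids=${ids.join()}`;
}
```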

So in our 11ty project, we need to set ourselves up to fetch this data from the Feedbin API. The API requires authentication, so first we need to create a .env file with our login credentials:

FEEDLY_AUTH="[email protected]:password"

Replace with your own Feedbin credentials. If you're using version control, make sure you ignore this file, so your credentials aren't stored where everyone can see them.
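With git, for example, that means adding a line to your .gitignore (assuming you're working from the project root; the file is created if it doesn't exist yet):

```shell
# Keep the credentials file out of version control
echo ".env" >> .gitignore
```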

Now that we have our private data set up, I’m going to create a file in my 11ty project to contain the logic we need to get this done. I’m going to call it favorites.js, and place it inside the _data folder. The _data folder is special in 11ty: it allows you to create datasets that will be accessible throughout your project, which is exactly what we want.

I’m going to show you the finished file first, and then we’ll walk through and discuss how it works. Here's my v1 favorites.js file:

const fs = require('fs')
const fetch = require('node-fetch')
const unionBy = require('lodash/unionBy')

// Load .env variables with dotenv
require('dotenv').config()

// Define cache location and API endpoints
const CACHE_FILE_PATH = '_cache/favorites.json';
const API = 'https://api.feedbin.com';
const STAR_ROUTE = '/v2/starred_entries.json';
const ENTRIES_ROUTE = '/v2/entries.json';
const AUTH = process.env.FEEDLY_AUTH;

async function fetchFavoriteList() {
  // If we don't have an endpoint or credentials, abort
  if (!API || !STAR_ROUTE || !AUTH) {
    console.warn('>>> unable to fetch favorites: missing endpoint or credentials')
    return []
  }

  let url = `${API}${STAR_ROUTE}`

  const response = await fetch(url, {
    headers: { 'Authorization': 'Basic ' + Buffer.from(AUTH).toString('base64') }
  });
  if (response.status === 200) {
    const feed = await response.json();
    console.log(`>>> ${feed.length} favorites fetched from ${url}`)
    return feed;
  }

  // Request failed: return an empty list so callers can safely iterate
  return []
}

async function fetchFavoriteText(favorites) {
  let url = `${API}${ENTRIES_ROUTE}?ids=` + favorites.join();

  const response = await fetch(url, {
    headers: { 'Authorization': 'Basic ' + Buffer.from(AUTH).toString('base64') }
  });
  if (response.status === 200) {
    const feed = await response.json();
    console.log(`>>> new favorite text fetched from ${url}`)
    return feed;
  }

  // Request failed: return an empty list so callers can safely check .length
  return []
}

function checkForNewFavorites(oldFavorites, feed) {
  var newFavorites = [];
  for (var i = feed.length - 1; i >= 0; i--) {
    var id = feed[i];
    newFavorites.push(id);
    // If the cache already contains this ID, take it back out
    oldFavorites.forEach(oldFav => {
      if (id === oldFav.id) {
        newFavorites.pop();
      }
    });
  }

  if (newFavorites.length) {
    console.log(`>>> ${newFavorites.length} new favorites found`);
  } else {
    console.log(`>>> no new favorites found`);
  }

  return newFavorites;
}

// Merge fresh favorites with cached entries, unique per id
function mergeFavorites(a, b) {
  return unionBy(a, b, 'id');
}

// Save combined favorites in the cache file
function writeToCache(data) {
  const dir = '_cache';
  const fileContent = JSON.stringify(data, null, 2);
  // Create the cache folder if it doesn't exist already
  if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir)
  }
  // Write data to the cache JSON file
  fs.writeFile(CACHE_FILE_PATH, fileContent, err => {
    if (err) throw err;
    console.log(`>>> favorites cached to ${CACHE_FILE_PATH}`);
  })
}

// Get cache contents from the JSON file
function readFromCache() {
  if (fs.existsSync(CACHE_FILE_PATH)) {
    const cacheFile = fs.readFileSync(CACHE_FILE_PATH);
    return JSON.parse(cacheFile);
  }

  // No cache found: return an empty scaffold
  return {
    lastFetched: null,
    children: []
  }
}

module.exports = async function () {
  console.log('>>> Reading favorites from cache...');

  let cache = readFromCache();

  if (cache.children.length) {
    console.log(`>>> ${cache.children.length} favorites loaded from cache`);
  }

  // Only fetch new favorites in production
  if (process.env.NODE_ENV === 'production') {
    console.log('>>> Checking for new favorites...');
    const feed = await fetchFavoriteList();
    const newFavorites = checkForNewFavorites(cache.children, feed);

    if (newFavorites.length) {
      const text = await fetchFavoriteText(newFavorites);
      if (text.length) {
        const favorites = {
          lastFetched: new Date().toISOString(),
          children: mergeFavorites(cache.children, text)
        }

        writeToCache(favorites);
        cache = favorites;
        return favorites;
      }
    }
  }

  return cache;
}

This is definitely going to improve in the future, but for now, it works pretty well. Eventually I want to make a real, bona fide 11ty plugin out of this, but today is not that day.

Let's walk through the heart of this file, the module.exports function. This is where everything ties together; most of the rest of the file is helper functions in service of this one:

module.exports = async function () {
  console.log('>>> Reading favorites from cache...');

  let cache = readFromCache();

  // Only fetch new favorites in production
  if (process.env.NODE_ENV === 'production') {
    console.log('>>> Checking for new favorites...');
    const feed = await fetchFavoriteList();
    const newFavorites = checkForNewFavorites(cache.children, feed);

    if (newFavorites.length) {
      const text = await fetchFavoriteText(newFavorites);
      if (text.length) {
        const favorites = {
          lastFetched: new Date().toISOString(),
          children: mergeFavorites(cache.children, text)
        }

        writeToCache(favorites);
        cache = favorites;
        return favorites;
      }
    }
  }

  return cache;
}

The whole point of this function is to cache pretty heavily, because we want to be gentle with Feedbin's API. So the heart and soul of this project really revolves around the cache, starting with the first line of code: let cache = readFromCache();

The readFromCache method does one of two things: it grabs the contents of the cache file if it exists, and if it doesn't, it returns an empty scaffold for us to fill later.

The next line is another optimization to avoid hitting the Feedbin API too much:

  if (process.env.NODE_ENV === 'production') {

This accesses an environment variable to check whether 11ty is running in production mode. We only want to hit the API if we’re building for production; otherwise we can use our cached data. This is another way we can be "gentle" with an API.
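How NODE_ENV gets set depends on your setup; a common pattern is to bake it into your npm scripts. Here's a sketch (the script names are assumptions, not from this project, and the NODE_ENV=… prefix syntax assumes a POSIX shell):

```json
{
  "scripts": {
    "dev": "eleventy --serve",
    "build": "NODE_ENV=production eleventy"
  }
}
```

With this in place, local `npm run dev` builds always read from the cache, while `npm run build` (say, on your deployment host) is allowed to hit the API.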

const feed = await fetchFavoriteList();
const newFavorites = checkForNewFavorites(cache.children, feed);

These two lines work together. fetchFavoriteList hits the starred_entries.json route that Feedbin provides, which returns a list of IDs, like this: [12345123, 12341234, 5234534, 2345345].

checkForNewFavorites then takes that list, and compares it to our cache. If we find duplicate IDs, we remove them, because we don’t need to fetch the same data more than once. Once we have it stored, we don’t ever need to fetch it again. checkForNewFavorites then returns a list of the new IDs, and stores it in the newFavorites constant.
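The same dedup logic can also be expressed compactly with a Set. Here's an equivalent sketch (minus the logging; it also preserves the feed's original order rather than iterating in reverse, which doesn't matter here since mergeFavorites de-duplicates by id anyway):

```javascript
// Keep only the IDs that aren't already in the cached entries.
function findNewFavorites(oldFavorites, feed) {
  const cachedIds = new Set(oldFavorites.map(fav => fav.id));
  return feed.filter(id => !cachedIds.has(id));
}
```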

if (newFavorites.length) {
  const text = await fetchFavoriteText(newFavorites);
  if (text.length) {
    const favorites = {
      lastFetched: new Date().toISOString(),
      children: mergeFavorites(cache.children, text)
    }

    writeToCache(favorites);
    cache = favorites;
    return favorites;
  }
}

This is perhaps the most complicated bit of the function, so let's go through it line by line.

First we check to see if there are any new favorites, with newFavorites.length.

If there are, then we pass them to the function fetchFavoriteText, which makes another call to the Feedbin API, this time the entries.json route. This returns to us all of the information Feedbin has on that entry, which we then store inside the text constant.

Then we have to check if any text was returned, which we do with text.length.

If text was indeed returned, then we need to merge that text into our existing cache. We do that by creating a new constant called favorites, and using our mergeFavorites utility:

const favorites = {
  lastFetched: new Date().toISOString(),
  children: mergeFavorites(cache.children, text)
}
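Lodash's unionBy keeps the first occurrence of each id, so by passing cache.children first we guarantee that existing entries are never overwritten by refetched ones. Here's a plain-JS stand-in to illustrate the behavior (a sketch, not lodash's actual implementation):

```javascript
// Behaves like lodash's unionBy(a, b, 'id'):
// concatenate both arrays, keeping only the first entry seen per id.
function unionById(a, b) {
  const seen = new Set();
  return [...a, ...b].filter(entry => {
    if (seen.has(entry.id)) return false;
    seen.add(entry.id);
    return true;
  });
}
```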

Once we’ve done that, we can overwrite our existing cache and return the results:

writeToCache(favorites);
cache = favorites;
return favorites;

Using the Favorites data

Once you have this favorites.js file inside your _data folder, you can use that data anywhere you want in your 11ty project. I created a reversed ordered list to contain my favorites:

<ol reversed class="postlist">
  {% for post in favorites.children | reverse %}
    <li class="postlist-item{% if post.url == url %} postlist-item-active{% endif %}">
      <a href="{{ post.url | url }}" class="postlist-link">
        {% if post.title %}{{ post.title }}{% else %}<code>{{ post.url }}</code>{% endif %}
        <p class="postlist-desc">{{ post.url | url }}</p>
      </a>
    </li>
  {% endfor %}
</ol>

This works pretty nicely, as you can see here.

Conclusion

I continue to be blown away by 11ty. I love working with a CMS written in JavaScript for JavaScripters, it makes so much intuitive sense. With any other CMS it would've been far easier for me to do this on the frontend, like Dave and Robin did, but with 11ty there is little distinction between the front and backend. You can do anything you want on either side of the fence, which is very refreshing.

I’m going to continue developing on this idea and try to turn it into a plugin at this GitHub repository. If you're interested, feel free to dig in further there.
