Sasha Sirotkin

Building A Reasonable Website

Oct 3, 2016

I don’t remember the last time I made a website.

The funny thing is that I am a web developer. What the hell have I been building?

I built webapps. They are apparently more complicated than websites. Complicated enough to have their own word.

Now, I want to build a personal website. This website.

Note that I did not say personal webapp. Angular or React are not welcome here. All it needs to do is tell you, the reader, what a satisfactory human being I am and let you read some things I wrote. Well, there is more to it than just that:

- It should be simple and fast.
- I should be able to write posts in Markdown.
- It should deploy itself whenever I push a change.

The last two items make for a good workflow, but they add complexity. That’s fine though. I just don’t want to manually format my blog posts in HTML like some kind of barbarian.

Preface

This article won’t teach you how to write HTML, CSS or JavaScript. It is meant to expose some parts of web development that often get overlooked. And to pad my website with some meaningful content.

CSS & Design

There are two websites that you should know about. Despite being laden with profanity, they make an excellent point: websites are readable and responsive by default. We don’t need a lot of CSS.

The CSS we do write is going to adhere to Harry Roberts’s BEMIT naming convention, an extension of BEM. BEMIT is meant to make CSS class names more readable and ease collaboration. For example, if we see a CSS class called c-post__meta, we can immediately discern the following in plain terms:

- The c- prefix tells us the class is a component.
- post is the block the component describes.
- __meta is an element that belongs to the post block.

The design for the website is going to be simple. My name, then a description and then all my blog posts in chronological order. We are going to ignore pagination for now. We will do that once there is enough content to actually warrant that feature.

Search Engine Optimization

A lot of the best practices for SEO can be found in this comprehensive SEO guide. Here is a less comprehensive summary:

- Use HTML tags for their semantic purpose.
- Give every page a unique, descriptive title and meta description.
- Use clean, human-readable URLs.

My largest SEO concern was duplicated content. Search engines hate it! But this website will host all my blog posts, and the exact same posts will also be published to Medium. That seems pretty bad, right?

Luckily there is no harsh penalty for reposting content. As long as the website follows the above best practices, it should stay on the good side of the search engines.
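One safeguard worth knowing about is a rel=canonical link on the reposted copy, which tells search engines which URL is the authoritative one. A minimal sketch, with a placeholder URL:

```html
<!-- In the <head> of the duplicated copy; the URL is an illustrative placeholder. -->
<link rel="canonical" href="https://example.com/building-a-reasonable-website">
```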

Accessibility

I never want to hear someone say “you would have to be blind not to hire that guy”. I want you to hire me even if you are blind! Assuming I am looking for a job at the time.

We get some accessibility for free by following the SEO best practice of using HTML tags properly. Other examples of such best practices include:

- Giving every image a descriptive alt attribute.
- Associating a label element with every form input.
- Declaring the page language with the lang attribute on the html tag.

The best way to get a feel for whether our site is accessible to the blind is to use the same tool they use to navigate a website: a screen reader. macOS has a feature called VoiceOver, which we can enable by pressing Fn + Command + F5. On Windows we can use Narrator, which is activated with Windows Key + Enter. Notice how they put emphasis on the heading tags when describing content on the screen.

If someone is not fully blind, we can help them out by providing sufficient contrast between our text and background. There are tools to help with color contrast.
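To see what those tools are actually measuring, here is a sketch of the WCAG 2.0 contrast-ratio formula in JavaScript. This is purely illustrative and not part of the site’s code; WCAG recommends at least a 4.5:1 ratio for normal body text.

```javascript
// Linearize an sRGB channel (0–255) per the WCAG 2.0 definition.
function channelLuminance(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function relativeLuminance([r, g, b]) {
  return 0.2126 * channelLuminance(r) +
         0.7152 * channelLuminance(g) +
         0.0722 * channelLuminance(b);
}

// Contrast ratio between a foreground and background color, from 1 to 21.
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black text on a white background scores the maximum 21:1.
```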

Lastly, there are tools to evaluate the compliance of a website with accessibility guidelines. However, they will not catch every violation of accessibility best practices. We must remain ever vigilant.

Analytics

We will use Google Analytics to track the unique visitors this site gets. It’s easy too! Just add the code snippet to the head and we are good to go!
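For reference, the standard analytics.js snippet looks like the following; the UA-XXXXXXX-X tracking ID is a placeholder for your own property ID:

```html
<script>
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

  ga('create', 'UA-XXXXXXX-X', 'auto');
  ga('send', 'pageview');
</script>
```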

If the site had more interactions like button clicks, we would have also added something like Keen.io. But we don’t need that.

Static Generation

There are a lot of static website generators out there. The reasonable thing to do would be to use an existing static website generator like Jekyll or Hugo. However, I think it will be fun to write a small script to do that for us.

Our static website generator will be written in Node, and because we will be doing many asynchronous operations in JavaScript, it will be a series of Promises.

We will read the /posts directory for markdown files. Our markdown files will have front-matter that provides metadata, like the date and description, to our blog posts. For the curious, this is the markdown file for this blog post! The metadata, alongside the actual content of the blog posts, will be used to populate our templates from the /templates directory. This will generate an index file, an error page and an individual page for each blog post inside a /build directory.
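As an illustration, a post’s markdown file might start like this (the slug, description and tags here are made up for the example):

```markdown
---
title: Building A Reasonable Website
slug: building-a-reasonable-website
date: 2016-10-03
description: Notes on building a simple, reasonable personal website.
tags:
  - web
---

I don't remember the last time I made a website.
```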

Fair warning: the code example is written in ES2015 (ES6).

const ejs = require('ejs');
const fs = require('fs-extra');
const matter = require('gray-matter');
const recursive = require('recursive-readdir');
const showdown = require('showdown');
const Promise = require('bluebird');

// Convert Node-style callback functions into promises.
const copyP = Promise.promisify(fs.copy);
const emptyDirP = Promise.promisify(fs.emptyDir);
const readFileP = Promise.promisify(fs.readFile);
const recursiveP = Promise.promisify(recursive);
const renderFileP = Promise.promisify(ejs.renderFile);
const writeFileP = Promise.promisify(fs.writeFile);

const showdownConverter = new showdown.Converter({
  // We have up to three levels of header nesting before a post appears.
  headerLevelStart: 4,
});

const REQUIRED_MD_PROPS = [
  'title',
  'slug',
  'date',
  'description',
  'tags',
];

/*
 * Returns true if the markdown front-matter data contains all required props.
 */
function isValidFrontMatter(data) {
  return REQUIRED_MD_PROPS.every(prop => Boolean(data[prop]));
}

/*
 * Write a file to a given destination by combining an ejs template with a
 * data object.
 */
function compile(dest, template, data) {
  return renderFileP(template, data, {})
  .then(html => writeFileP(dest, html, { encoding: 'utf-8' }));
}

/*
 * Given a filepath to a markdown file, returns an object in the form of
 * { data, content } where `data` is the front-matter and `content` is
 * the markdown content converted to HTML.
 */
function parseMD(filepath) {
  return readFileP(filepath, { encoding: 'utf-8' })
  .then((file) => {
    const md = matter(file);

    if (!isValidFrontMatter(md.data)) {
      return Promise.reject(new Error(`${filepath} is missing required props.`));
    }

    return {
      data: md.data,
      content: showdownConverter.makeHtml(md.content),
    };
  });
}

// Make or clean the `build` directory.
emptyDirP('./build')

// Copy the assets to build.
.then(() => copyP('./assets', './build'))

// Read and process the markdown files.
.then(() => recursiveP('./posts'))
.then(paths => paths.filter(filepath => /\.md$/.test(filepath)))
.then(paths => Promise.map(paths, filepath => parseMD(filepath)))

// Combine the templates and markdown data.
.then((posts) => {
  // Sort all posts by date.
  posts.sort((a, b) => new Date(b.data.date) - new Date(a.data.date));

  // Generate index file.
  return compile('./build/index.html', './templates/index.ejs', { posts })
  // Generate the error file.
  .then(() => compile('./build/error.html', './templates/error.ejs', {}))
  // Generate the post files.
  .then(() => Promise.map(posts, md =>
    compile(`./build/${md.data.slug}.html`, './templates/post.ejs', md)
  ));
})

// Exit with a non-zero status code so CircleCI fails the build.
.catch((err) => {
  console.error(err);
  process.exit(1);
});

You’ll notice that the script does the bare minimum to generate the website. That is the point!

Publishing

Both GitHub and GitLab offer free hosting for static websites and have built-in support for static generation too! Reasonable people should use those services instead. The fancy continuous integration pipeline to S3 only exists because of my perverse definition of fun.

We will use CircleCI to trigger a deploy whenever we merge code into the GitHub repository. The deploy script uploads the contents of the /build directory to S3 and then deletes any orphaned files on S3. The configuration for this step can be found inside the circle.yml file.
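The upload-and-prune behaviour maps neatly onto a single AWS CLI command; a sketch, assuming an illustrative bucket name:

```shell
# Sync ./build to the bucket; --delete removes remote files that no
# longer exist locally (the orphaned files).
aws s3 sync ./build s3://my-website-bucket --delete
```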

The sensitive AWS credentials for our deploy script will be provided using a CircleCI feature called environment variables. However, environment variables only hide our credentials from GitHub. They can still be read by anyone who can access the CircleCI build log!
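Inside the deploy script, reading those variables can be sketched like this. The variable names follow the conventional AWS names, but they are assumptions; they are whatever keys you configure in the CircleCI project settings.

```javascript
// Pull AWS credentials from the environment instead of committing them.
// Throws early so a misconfigured build fails loudly.
function getAwsCredentials(env) {
  const accessKeyId = env.AWS_ACCESS_KEY_ID;
  const secretAccessKey = env.AWS_SECRET_ACCESS_KEY;

  if (!accessKeyId || !secretAccessKey) {
    throw new Error('Missing AWS credentials in environment variables.');
  }

  return { accessKeyId, secretAccessKey };
}

// In the deploy script: const creds = getAwsCredentials(process.env);
```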

Epilogue

This article skips over a lot of details. It is not meant to be an extensive tutorial. If you want to play around with it, the code can be found on GitHub.