Behavioral tests / BDD with TypeScript

For: developers, architects and team leads who want to incorporate behavioral testing in their TypeScript projects

A couple of blog posts ago we set up a basic build pipeline, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing BDD and / or behavioral testing on our builds.

Note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read this if that has your interest.

Setting up the environment for Behavioral Testing

Let’s start by setting up a test suite.

We usually need stuff like:

  • something that connects to a browser
  • something that runs the tests
  • something that interprets the tests
  • something that compiles all needed files

There is this wonderful package called Chimp that already helps us out on most of these facets. It does so by integrating several existing tools (Cucumber and WebdriverIO among them, as we’ll see in the step definitions) and sprinkling magic over them.

Let’s install it and see from there.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD› 
╰─➤ npm i chimp ts-node --save-dev
╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ ./node_modules/.bin/typings i cucumber chai --save-dev --source=dt

Configuring Chimp

Let’s set up Chimp. Chimp is primarily a wrapper around, and seamless integration of, multiple test frameworks, so it might not come as a surprise that we can pass config options through to these individual frameworks. The default configuration options are defined here:

https://github.com/xolvio/chimp/blob/master/src/bin/default.js

These options can be overridden in a file of our own, and we have to override them, because Chimp by default isn’t set up to use TypeScript.

Create a file chimp.conf.js.

module.exports = {

  // - - - - CUCUMBER - - - -
  path: './feature',
  compiler: 'ts:ts-node/register'

};

Extending the Makefile

Add your test routine to the Makefile:

.PHONY: [what was already in there] test bdd

and add the rules (extend test if you’ve also done the TDD post):

test:
    node_modules/.bin/chimp chimp.conf.js --chrome

bdd:
    node_modules/.bin/chimp chimp.conf.js --watch

Let’s also create the proper directories

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ mkdir -p feature/step_definitions

Create some tests

In order to know whether we’ve properly set up the test framework, we want to create some tests. Since we already created some nonsense during the creation of the generic build process, we’ll build on that.

First create the .feature file

The feature file should tell in plain English what feature we expect, and how it behaves in different scenarios.

in: feature/config.feature

@watch @feature

Feature: Seeing the effect of the config on the screen
  In order to know if the config was correctly applied,
  As a Developer
  I want to test some of the aspects of the config on the screen

  Scenario: Check if background color is correct
    Given the config has the color set to blue
    When we look at the website
    Then I should be having a blue background

Then we write the implementation for this feature

The feature test as written cannot be interpreted directly by our test framework. Our script simply doesn’t know what ‘background color’ means, or which element it is meant to check. That’s why we create support for these steps. The nice thing is that you might notice some punch holes in the sentences: ‘blue’ might be switched for another color, and ‘background’ might be ‘font color’ or something along those lines. If you analyse your scenarios cleverly, you’ll start to recognise standard patterns that you can re-use.

Be careful! A common pitfall is that you start writing a language processor. Don’t do it! Tests should:

  • be straightforward
  • be easy to understand
  • have no deep connections with other tests 
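To make that concrete, here is a hedged sketch of the difference. The first step is exactly the shape we’ll use below; the second shows the trap (parseRequirement is a hypothetical function you should never end up writing):

export default function() {

  // Good: one narrow capture group per punch hole, one clear assertion.
  this.Given(/^the config has the color set to ([^\s]+)$/, function (color: string) {
    // ...compare the captured color against the actual config here...
  });

  // Bad: a catch-all step whose handler has to interpret free-form text.
  // this.Given(/^the config (.+)$/, function (requirement: string) {
  //   parseRequirement(requirement); // hypothetical - and the start of a language processor
  // });

}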

Here’s the example implementation of the feature scenario. Put it in feature/step_definitions/config.ts

/// <reference path="../../typings_local/require.d.ts" />
import IConfig from "../../ts/i_config"

let config = <IConfig>require("../../conf/app.json");

export default function() {

  this.Given(/^the config has the color set to ([^\s]+)$/, function (color: string) {
    if (config.color !== color) {
      throw new Error("Color in config mismatches the test scenario");
    }
  });

  this.When(/^we look at the website$/, function () {
    this.browser.url('http://localhost:9080');
    return this.browser.waitForExist('body', 5000);
  });

  this.Then(/^I should be having a ([^\s]+) background$/, function (color: string) {
    let browserResponse = this.browser.executeAsync(function(color: string , done: (response: boolean) => void) {
 
      let compareElem = document.createElement("div");
      compareElem.style.backgroundColor = color;
      document.body.appendChild(compareElem);
 
      let bodyElem = document.querySelector('body');

      done(
        window.getComputedStyle(compareElem).backgroundColor === window.getComputedStyle(bodyElem).backgroundColor
      );
    }, color);

    if (!browserResponse.value) {
      throw new Error("BackgroundColor didn't match!");
    }

  });

}

Running the tests

By now we have set up TDD and BDD tests with TypeScript. A simple

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ make test

should kick off Chimp against Chrome and report the executed scenarios in your console.

Conclusion and notes

We are now fully able to write our tests – feature as well as function – in TypeScript, and have integrated them into our example build process. We can run these tests on our own machine to verify our project locally. BDD and TDD are set up separately so that we keep more grip on each of the testing solutions and prevent coupling where it isn’t needed.

We are however not completely done yet.

  • We will have to set up some CI / CD make-tasks that can be run on a headless server, since we currently leverage the browser in our own OS.
  • We will need to make sure our watchers and compilers are set up properly, in order for BDD and TDD to run nicely in the background while we develop our code.

We will go more in-depth on those aspects when we start hooking our project up to nginx and really start developing an application.

Changes applied in this blog post can be found on GitHub.

Suggestions, comments or requests for topics? Please let me know what you think and leave a comment or contact me directly.

Unit-tests / TDD with TypeScript

For: developers, architects and team leads who want to incorporate unit testing in their TypeScript projects

A couple of blog posts ago we set up a basic build pipeline, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing TDD and / or unit testing on our builds.

Note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read this if that has your interest.

Setting up the environment for unit testing

So, what do we need?

Some testing framework (we go with Jasmine)

There are lots of unit-test tools (Mocha, Chai, Sinon, etc.) out there that are really good. At this moment I prefer Jasmine. It’s well documented, stays relevant, serves an atomic purpose, is configurable, and has its plugins separated through the NPM repo.

Some orchestrator / runner

We need an orchestrator to launch our tests in browsers. We use Karma for this.

Some browsers

There are so many browsers you can use, and also should use. Karma lets you hook up your own browser (open multiple on multiple machines if you want) to test with. If that’s too manual, you can go with solutions like PhantomJS, automated Chrome testing with Selenium / WebDriver, or BrowserStack, and have your code tested on multiple versions of multiple browsers on multiple operating systems and their versions. Lucky you: the runner we chose (Karma) supports interfacing with all of these as part of your test line.

Some reporters

What do we need to get a bit of grip on, and feeling for, our test process?

  • spec – show the entire spec of the units
  • coverage – we want to know if we’ve actually covered most of our logic (again, why you would like to do this will be described in another article)

You convinced me, two thumbs up, let’s do this.

So our lovely NPM can help us quite a bit with this. Do as follows:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ npm i jasmine-core karma karma-browserify karma-browserstack-launcher karma-jasmine karma-phantomjs-launcher karma-coverage karma-typescript-preprocessor phantomjs-prebuilt watchify karma-spec-reporter --save-dev

Next chapter.. ;-).

Instructing the runner

Karma needs to know a bit about what it should do when we’ve asked for a test.

karma.conf.js

module.exports = function (config) {
 config.set({
   basePath: '',
   frameworks: ['browserify', 'jasmine'],
   files: [
     'spec/**/*_spec.ts'
   ],
   exclude: [],
   preprocessors: {
     'spec/**/*.ts': ['browserify','coverage']
   },
   browserify: {
     debug: true,
     plugin: [['tsify', {target: 'es3'}]]
   },
   reporters: ['spec', 'coverage'],
   port: 9876,
   colors: true,
   logLevel: config.LOG_INFO,
   autoWatch: true,
   browserDisconnectTimeout: 1000,
   browserDisconnectTolerance: 0,
   browserNoActivityTimeout: 3000,
   captureTimeout: 3000,
   browserStack: {
     username: "",
     accessKey: "",
     project: "build-process",
     name: "Build-process test runner",
     build: "test",
     pollingTimeout: 5000,
     timeout: 3000
   },
   coverageReporter: {
     type: 'text'
   },
   customLaunchers: {
     ie10: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "7",
       browser: "ie",
       browser_version: "10"
     },
     chrome: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "10",
       browser: "chrome",
       browser_version: "latest"
     },
   },
   browsers: ['PhantomJS'],
   singleRun: false
  })
}

Don’t forget to create the directory that is scanned for your _spec.ts files (a simple mkdir spec will do).
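Once you’ve filled in your BrowserStack username and accessKey, the customLaunchers defined above can also be targeted from the command line; Karma’s --browsers flag accepts the launcher keys (a sketch):

node_modules/.bin/karma start --browsers chrome,ie10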


Extending the Makefile

Add your test routine to the Makefile:

.PHONY: [what was already in there] test tdd

and add the rules:

test:
    node_modules/.bin/karma start --single-run

tdd:
    node_modules/.bin/karma start


Getting definitions of Jasmine

Since your code is written in TypeScript, your tests should preferably also be written in TypeScript. You’ll need some definitions of the capabilities of Jasmine in order to use it properly. Luckily the people of Typings are geniuses and have supplied such a definition for us!

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
 ╰─➤ node_modules/.bin/typings i jasmine --source="dt" --global
 jasmine
 └── (No dependencies)


Test if we can test

Oh boy, that is a nice title :-). Let’s write some nonsense first, so we can write tests for it later.

The nonsense

Now create some simple example module like ts/example_module.ts:

type someCallback = (someString: string) => string;

export default class example_module {

  constructor(private someVar: string, private callback: someCallback) {

  }

  public some_method(){
    console.log('some method ran!');
  }

  public get_string(): string {
    this.some_method();
    return this.callback(this.someVar);
  }

}


There’s a range of nonsense that could be applied in even more bizarre ways, which I don’t intend to pursue, if you don’t mind. This should suffice 🙂

Let’s test this nonsense

Create this test file in spec/example_module_spec.ts

Generally it’s a good idea to separate the tests from the project source, since they otherwise clutter the area you’re working in. But do try to mimic the structure of your normal ts folder; this allows you to find your files efficiently. We append _spec to the filename because, when your project grows, it’s not uncommon to create a helper or two, and those shouldn’t be picked up automatically.

/// <reference path="../typings/index.d.ts" />

import ExampleModule from "../ts/example_module"

describe('A random example module', () => {

  var RANDOM_STRING: string = 'Some String',
      RANDOM_APPENDED_STRING: string = ' ran with callback',

      callback = (someString: string): string => {
        return someString + RANDOM_APPENDED_STRING;
      },
      exampleModule: ExampleModule;

   /**
    * Reset the module before each test case; this ensures that results
    * won't get mixed up between tests.
    */
   beforeEach(() => {
     exampleModule = new ExampleModule(RANDOM_STRING, callback);
     spyOn(exampleModule, 'some_method');
   });

   /**
    * testing the outcome of a module
    *
    * Should be doable for almost all methods of a module
    */
   it('should respond with a callback processed result', () => {
     let response = exampleModule.get_string();

     expect(response).toBe(RANDOM_STRING + RANDOM_APPENDED_STRING);
   });

   /**
    * testing that specific functionality is called
    *
    * You could make use of this, when you expect a module to call
    * another module, and you want to make sure this happens.
    */
  it('should have called a specific method each time the string is retrieved', () => {
    // notice that, because of the beforeEach statement, the spy is reset
    expect(exampleModule.some_method).toHaveBeenCalledTimes(0);

    // execute logic twice
    exampleModule.get_string();
    exampleModule.get_string();

    // expect that the function is called twice.
    expect(exampleModule.some_method).toHaveBeenCalledTimes(2);
  });
});

The result:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ make test
node_modules/.bin/karma start --single-run
08 12 2016 22:58:35.109:INFO [framework.browserify]: bundle built
08 12 2016 22:58:35.115:INFO [karma]: Karma v1.3.0 server started at http://localhost:9876/
08 12 2016 22:58:35.116:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
08 12 2016 22:58:35.130:INFO [launcher]: Starting browser PhantomJS
08 12 2016 22:58:35.380:INFO [PhantomJS 2.1.1 (Linux 0.0.0)]: Connected on socket /#RxSPDX6Lu-LvxyP2AAAA with id 76673218
PhantomJS 2.1.1 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0.04 secs / 0.001 secs)
--------------|----------|----------|----------|----------|----------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
--------------|----------|----------|----------|----------|----------------|
 spec/        |      100 |      100 |      100 |      100 |                |
 app_spec.ts  |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|
All files     |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤

Or as my lovely console likes to tell me in more color:

[screenshot: build-process-tdd.png]


Check the PR for the actual code.

Want more? Any ideas for the next one? Let me know or use the poll on the right side of the screen!

Where business logic isolation fails with RDBMSes

For: software architects, DBAs and backend developers

Databases get a lot of attention, and they should. In a mutual relationship, code gets more and more dependent on the database, and the database grows with the demand for new functionality in the code.

In order to be as resilient as possible to future changes, in RDBMS-land one should strive for the highest normal form for their data. If you’ve already decided that you need an RDBMS for the job, take this advice: screw the good advice ‘never design for the future’. The future of your application will look very grim if you don’t carefully consider your data scheme (I’m assuming considerable longevity and expansion of functionality of the application here).

So.. What’s this blog post about?

There is tight coupling involved between a database structure and an application. There’s no way around it, and that’s okay. What I think is not okay is defining rules about the relations more than once. That is prone to discrepancies in logic between the database and your codebase. There can be countless triggers and foreign-key constraints that block, cascade or set to null, and you won’t know about them unless you perform your action and analyze the result, or mimic the database logic in your code (which will break over time due to those discrepancies).

Making the issue visible

Let’s first fire up a database

╭─tim@The-Incredible-Machine ~
╰─➤ sudo apt-get install mysql-server-5.7

And populate it with some data. I found this example database on GitHub. If you also use it to learn from or with, please give the repo a star so people know they aren’t putting stuff out for nothing.

╭─tim@The-Incredible-Machine ~/Git
╰─➤ git clone git@github.com:datacharmer/test_db.git
Cloning into 'test_db'...
remote: Counting objects: 94, done.
remote: Total 94 (delta 0), reused 0 (delta 0), pack-reused 94
Receiving objects: 100% (94/94), 68.80 MiB | 1.71 MiB/s, done.
Resolving deltas: 100% (50/50), done.
Checking connectivity... done.

╭─tim@The-Incredible-Machine ~/Git/test_db ‹master›
╰─➤ mysql -u root -p < employees.sql
Enter password:
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
INFO
LOADING departments
INFO
LOADING employees
INFO
LOADING dept_emp
INFO
LOADING dept_manager
INFO
LOADING titles
INFO
LOADING salaries
data_load_time_diff
00:01:02

In order to see what we’ve got, I reverse-engineered the database diagram from the database. This sounds harder than it actually is: open MySQL Workbench, make sure you’ve established a connection with your running MySQL service, go to the “Database” menu and use the “Reverse Engineer” feature. Workbench will create a schema diagram for you, a so-called EER (Enhanced Entity Relationship) diagram.

[example_db_structure.png – EER diagram of the example database]

We can basically see that all relations cascade when a delete occurs and restrict when an update occurs. So when an employee gets deleted, he or she will be removed from the department, will no longer be a manager, and all history of salaries and titles will be removed.
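You don’t strictly need Workbench to check these rules, by the way; a plain SHOW statement prints the foreign-key clauses, including their ON DELETE / ON UPDATE behavior:

-- prints the full CREATE statement, including FOREIGN KEY ... ON DELETE CASCADE clauses
SHOW CREATE TABLE dept_manager;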

So now let’s look up one single department manager

mysql> select * from dept_manager limit 1;
+--------+---------+------------+------------+
| emp_no | dept_no | from_date  | to_date    |
+--------+---------+------------+------------+
| 110022 | d001    | 1985-01-01 | 1991-10-01 |
+--------+---------+------------+------------+
1 row in set (0,00 sec)

And ask the database to explain its plan upon deletion of the employee that we know is a manager.

mysql> explain delete from employees where emp_no = 110022;
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table     | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
|  1 | DELETE      | employees | NULL       | range | PRIMARY       | PRIMARY | 4       | const |    1 |   100.00 | Using where |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
1 row in set (0,00 sec)

So what is it I miss?

I am missing an explain-like function that doesn’t give me meta-info about the query optimizer, but actual info on what would happen – in terms of relations – if I were to remove an entity from my database with a specific query.

I would expect that it would return data like:

  • which table
  • how many records will be affected
  • what would be the operation (restrict, cascade, null)

And just like an explain can be extended, running this in an extended mode could also yield a group-concatenated list of affected primary keys, comma-separated with enough CSV fanciness to keep the string intact (string delimiters, escape characters, whatever you feel is needed). Imagine the fanciness you could unleash by actually informing your user which record is the culprit.
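Until something like that exists, the closest approximation I know of is querying information_schema yourself. A sketch: this lists which tables reference employees and which rule fires on delete or update, although – unlike my imaginary explain – it can’t tell how many records a concrete delete would touch:

SELECT rc.TABLE_NAME   AS referencing_table,
       kcu.COLUMN_NAME AS referencing_column,
       rc.DELETE_RULE,
       rc.UPDATE_RULE
FROM   information_schema.REFERENTIAL_CONSTRAINTS rc
JOIN   information_schema.KEY_COLUMN_USAGE kcu
       ON  kcu.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
       AND kcu.CONSTRAINT_NAME   = rc.CONSTRAINT_NAME
WHERE  rc.CONSTRAINT_SCHEMA     = 'employees'
AND    rc.REFERENCED_TABLE_NAME = 'employees';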

Name me some examples where I need this!

Let’s imagine we’ve created a CRM-like system. For now, say we have:

  • organizations
  • addresses
  • invoices
  • contacts
  • notes

Some questions that might arise are:

  • Can I delete the organization, or is there still an unprocessed invoice attached (blocking constraint)? I’d rather withhold the delete button from the user and give a reason why the action cannot be done, than let them run into the problem and roll back the action (a sketch of the corresponding foreign-key rules follows this list).
  • When I delete an organization, what will go with it?
    • Will processed invoices also be removed
      (hope not! Better set to null in this case, the invoice should (besides the org.id) also store a copy of all relevant org data)?
    • Will unprocessed invoices be removed?
      (Maybe best to block in this situation, since there is still money potential  or reason to assume the user doesn’t want this)
    • will notes be removed?
      (You could do this, but only if you are very verbose about it)
    • will contacts be removed?
      (Often you don’t, but since these kinds of relations are often many-to-many, the coupling records should be removed)
  • What do I actually know about this relation? What’s exactly connected? Super nice to dynamically create graphs on how data is connected and how relevant this data actually is.
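Sketching how some of those answers would translate into foreign-key rules (hypothetical table and column names; note that a plain constraint can’t tell a processed invoice from an unprocessed one – that distinction needs a trigger or application logic, which is exactly the pain point of this post):

-- block: never silently lose invoices when an organization goes
CREATE TABLE invoices (
  id     INT PRIMARY KEY,
  org_id INT,
  FOREIGN KEY (org_id) REFERENCES organizations (id) ON DELETE RESTRICT
);

-- cascade: notes may go with the organization (but be verbose about it!)
CREATE TABLE notes (
  id     INT PRIMARY KEY,
  org_id INT NOT NULL,
  FOREIGN KEY (org_id) REFERENCES organizations (id) ON DELETE CASCADE
);

-- many-to-many: the coupling records go, the contacts themselves stay
CREATE TABLE organization_contacts (
  org_id     INT NOT NULL,
  contact_id INT NOT NULL,
  PRIMARY KEY (org_id, contact_id),
  FOREIGN KEY (org_id)     REFERENCES organizations (id) ON DELETE CASCADE,
  FOREIGN KEY (contact_id) REFERENCES contacts (id)      ON DELETE CASCADE
);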

Why solve it in the database?

I have worked on two applications that each counted in excess of 150 tables, all interconnected, and I can really say that investing significant effort in philosophizing about your database schema isn’t a luxury; it’s a must.

A database contains atomic values. It lays the foundation that your code works on top of. That inherently means that whatever we can do at that level to protect it, we should.

A couple of advantages of solving as much as possible in the database

  • If multiple tools or applications connect to the database, they will all have to follow the same conventions.
  • Whatever’s already in the database doesn’t have to be transferred towards the database, and thus saves bandwidth.
  • Foreign-key constraints use indexed columns by nature. Your queries will run faster, since the columns on which your tables relate to each other are already indexed.
  • You don’t have to rely on super-transparent naming (you should do that anyhow, by the way): everyone who looks at your database will understand how the tables relate to each other.

What’s next?

With regard to database-related topics, there are a couple that I’d like to cover in the future:

  • Caches
    • microcaches, denormalizing data
    • cache-renewal strategies
  • Versioning of datastructures
  • Query optimization
    • do’s and don’ts of writing queries
    • methods to optimize

Do you think that I’m off, missing some recent progression in this area or just want to chat? Drop me a line and let me know what you think!

Creating a resilient Front-End build process

For: intermediate front-end developers and expert developers who need a quick reference.

There are lots of ways to get your front-end architecture to the client. Usually the basic concepts seem easy and quickly done, but you will soon find yourself having to

  • make an effort to integrate it in a running environment,
  • make it work for the world (browser compatibility and such),
  • handle external dependencies,
  • write code that’s understandable for your colleagues (and yourself in 6 months).

Yep, you’ve guessed it: with every demand you put on your code, the effort goes up exponentially.

This post is not about how to manage your project; there are many good books (e.g. Eric Ries – The Lean Startup) and articles to be found about that. This post focusses on some core steps that I generally like to take to get an optimal work environment, one that gives me the least amount of friction during development.

During development you will work with lots of individual files, comments, testers and such, to ensure that the project is structurally sound but also understandable for humans. As soon as we deploy for our client, though, there will only be machines interpreting the code, and we’d like to get rid of everything that isn’t strictly necessary and really get the highest performance we can.

Assumptions

I have to make some assumptions about your environment, otherwise I would have to go all the way back to installing Linux. So the things that I assume are:

  • you are developing / deploying on Linux machines of the Debian family (I use Ubuntu)
  • you have NVM (Node Version Manager) installed (https://github.com/creationix/nvm). I don’t include NVM in the build process, since it is installed with a shell script and I consider piping a curl-response to bash as a potential hazard for your organization.
  • you have basic knowledge of Linux, client-server over HTTP, and JavaScript
  • you have installed Sass

Versions

One issue that has been persistent over the years is versioning of dependencies. Sometimes a dependency gets updated and its sub-dependencies cannot be resolved anymore. A solution resides in the combination of NVM, NPM and Bower.

You might notice that I refrain from using global modules. I do this to detach the project as much as possible from anything that is, or should be, available on your machine. This way we can also ensure that we use the correct version, and not an unknown version that happens to be set globally.
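As a small sketch of that pinning in practice: nvm reads a .nvmrc file from the project root, so checking one in makes every machine on the project resolve the same Node version:

echo "7.0.0" > .nvmrc
nvm install   # installs the version named in .nvmrc
nvm use       # switches the current shell to it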

Node

We first need to define which version of Node we’d like to use. Usually, at the time of developing we’d like to use the latest stable.

╭─tim@The-Incredible-Machine ~
╰─➤ nvm install stable
Downloading https://nodejs.org/dist/v7.0.0/node-v7.0.0-linux-x64.tar.xz...
######################################################################## 100,0%
WARNING: checksums are currently disabled for node.js v4.0 and later
Now using node v7.0.0 (npm v3.10.8)

Here you see my machine pulling in the latest version of Node, which at this time is 7.0.0.

You can see which versions are available on your machine like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm ls
   v5.5.0
   v6.8.0
-> v7.0.0
system
node -> stable (-> v7.0.0) (default)
stable -> 7.0 (-> v7.0.0) (default)
iojs -> N/A (default)

And you can switch between them like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm use 7.0.0
Now using node v7.0.0 (npm v3.10.8)

Now that we have Node in place, it can supply us with the tools we use to build our solutions. We’ll first need to create a project.


╭─tim@The-Incredible-Machine ~/Git/build-process ‹master›
╰─➤ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (build-process)
version: (1.0.0)
description: Example build process
entry point: (index.js)
test command:
git repository: (https://github.com/timmeeuwissen/build-process.git)
keywords:
author: Tim Meeuwissen
license: (ISC)
About to write to /home/tim/Git/build-process/package.json:

{
 "name": "build-process",
 "version": "1.0.0",
 "description": "Example build process",
 "main": "index.js",
 "scripts": {
 "test": "echo \"Error: no test specified\" && exit 1"
 },
 "repository": {
 "type": "git",
 "url": "git+https://github.com/timmeeuwissen/build-process.git"
 },
 "author": "Tim Meeuwissen",
 "license": "ISC",
 "bugs": {
 "url": "https://github.com/timmeeuwissen/build-process/issues"
 },
 "homepage": "https://github.com/timmeeuwissen/build-process#readme"
}


Is this ok? (yes)

Node is basically a JavaScript engine running in a Linux environment (e.g. on a server). There are lots of great tools written in Node to interpret and mutate your project’s files to become client-friendly.

Bower

Bower is one of these tools. It enables you to get dependencies for your front-end architecture. Whenever you feel like using jQuery, lodash, React, Material Design or whatever you prefer, you will always need to fetch the dependencies, structure them in a certain way and keep track of their versions. Bower is NPM for front-end components and does exactly that.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i bower --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└── bower@1.7.9

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/bower init
? name build-process
? description Example build process
? main file index.js
? keywords
? authors Tim Meeuwissen
? license ISC
? homepage https://github.com/timmeeuwissen/build-process
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

{
 name: 'build-process',
 description: 'Example build process',
 main: 'index.js',
 authors: [
 'Tim Meeuwissen'
 ],
 license: 'ISC',
 homepage: 'https://github.com/timmeeuwissen/build-process',
 private: true,
 ignore: [
 '**/.*',
 'node_modules',
 'bower_components',
 'test',
 'tests'
 ]
}

? Looks good? Yes

TypeScript

I’d rather use TypeScript than plain JS. TypeScript is a superset of JS that needs to be pulled through a compiler before it can run on a client. There are always reasons not to do stuff, but I’d like to share my reasons to do it anyhow.

  • It is developed and maintained by Microsoft. This isn’t the smallest guy in town, and they do an excellent job at it.
  • It is a superset, and depending on your settings you can or cannot use typing wherever you want (May I kindly request you enforce every variable to be typed strictly, for reasons to follow)
  • Code completion gets a hell of a lot more fun in your IDE (I personally really like Microsoft’s Visual Studio Code on Linux. It’s free, try it!)
  • You can always work in the latest standards. Depending on your compiler arguments, the code will be transpiled to any given EcmaScript standard you require for your company. The TypeScript compiler nicely polyfills whatever isn’t available in that ES version, and as time moves on and browsers get better, your code will age less quickly (you can switch to emitting ES6 on any given day of the week, for example).
  • You can more easily apply your backend architecture skills when you work with the latest ES version.
  • Your fellow developers will know exactly the input and output of each and every function without knowing the intricate details of your application (a tiny sketch follows).
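A tiny sketch of that last point – a hypothetical function, but the signature alone tells a colleague the full contract without reading the body:

// The types document the contract: pass a string and a number, get a string back.
function repeatGreeting(name: string, times: number): string {
  let parts: string[] = [];
  for (let i = 0; i < times; i++) {
    parts.push("Hello " + name);
  }
  return parts.join(", ");
}

// repeatGreeting(2, "Tim");  // caught at compile time: arguments swapped
let greeting: string = repeatGreeting("Tim", 2); // "Hello Tim, Hello Tim"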

Packing and transpiling

It’s a safe bet that we will create lots of documents that all depend on each other. Browserify is able to follow these dependencies and combine them into one file. There is a plugin called tsify, which basically runs the TypeScript compiler on the files before the bundling happens.

Let’s add it to our project:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i browserify tsify typescript --save-dev
build-process@1.0.0 /home/tim/Git/build-process
├─┬ browserify@13.1.1
│ ├── assert@1.3.0
…
…
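With those in place, a minimal stand-alone invocation would look like this once there is a ts/app.ts to bundle (a sketch – the Makefile further down pipes the same output through the Closure Compiler instead of writing the file directly):

node_modules/.bin/browserify -p [ tsify --target es3 ] ts/app.ts -o public/js/app.js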

Get the Google Closure Compiler. We will pipe the output of Browserify through it.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i google-closure-compiler --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ google-closure-compiler@20161024.1.0

Typings

Now, not every project is written in TypeScript, but we would like to be able to rely on the behavior of external dependencies within our files. E.g. jQuery might return some structure after invoking it, and we want to be able to recognize that output and work with it as such. Typings is a library, filled by lots of wonderful people, of interfaces for these external dependencies, so you don’t have to write them yourself!
Let’s get it first:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i typings --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ typings@1.5.0
├── archy@1.0.0
…
…
╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/typings init

Sass

It’s important that every part of our application is structured in such a way that someone else understands what he or she is looking at. This includes the style. Style documents easily get overlooked and deemed less important, but in my experience they are often responsible for the biggest part of the technical debt. Files with thousands of lines of style that interact with the HTML, with no way to understand or properly refactor them, aren’t an uncommon sight.

In a separate post I plan to go deeper into how you can structure your styles in such a way that you won’t build your own little jungle of CSS. Here, I will keep the focus on the build steps and high-level reasoning.

Sharing of configuration and building

It often happens that some variables are shared across the front-end architecture. Examples range from a path to the CDN to the maximum amount of items in a carousel.

Assuming that you already have Sass installed, I’d encourage you to also install sass-json-vars.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ sudo gem install sass-json-vars

Because we always want to compile our files this way, it’s handy to create a helper that picks up all .scss files matching a pattern and converts them to normal .css files.

I created a directory helpers with the file build_css.sh that basically contains this:

#!/bin/bash

# $1 = directory to scan for documents
# $2 = directory to put finished css documents in

find "$1" -name "[^_]*.scss" 2> /dev/null | while read -r input
do
  # map the input path onto the output dir and swap the extension
  output=$(echo "$input" | sed "s@^$1@$2@" | sed 's@\.scss$@.css@')
  outputdir=${output%/*}
  mkdir -p "$outputdir"
  sass -r sass-json-vars -t compressed "$input" "$output"
done

Call it with an input and an output dir as the first and second argument. Nice helper, right?

P.s. don’t forget to make it executable (chmod +x helpers/build_css.sh).

Basic file structure

Now that we have our external dependencies in, our directory structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── helpers
│   └── build_css.sh
├── package.json
└── typings.json


(I’ve omitted some documents from the tree view; you can do this yourself by creating an alias in your .bashrc: alias tree="tree -C -I 'vendor|node_modules|bower_components'")

Let’s add some extra folders to give some direction as to where stuff should go.

First all public stuff.

Here’s where all scripts that can be visited by a browser will reside. All compiled files will be written to these directories. Why not dist? Since these projects are intended to be used as a dependency by other projects, parts like the Sass and TypeScript files are also part of what gets distributed, but those should never go ‘public’ – so there you have it :-).

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p public/api public/css public/js


Now the source stuff.

Situations will occur in which you want a special interface for an external dependency. Typings will create its own dir, but here we have a space in which we can put our own custom ones.

Sass is split into files that don’t have to be converted individually (partials) and functions that cover some visual logic (mixins).

The conf dir will hold ‘constant’ variables that become fixed in the code at compile time.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p sass/partials sass/mixins ts helpers typings_local conf

By now our structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── conf
├── helpers
│   └── build_css.sh
├── package.json
├── public
│   ├── api
│   ├── css
│   └── js
├── sass
│   ├── mixins
│   └── partials
├── ts
├── typings.json
└── typings_local

11 directories, 4 files

Create a .gitignore. It makes things so much nicer if you don’t have to store all external dependencies in your own repo. Notice that all files in public are checked in as normal. Optionally you could run a post-commit hook that builds the project, so you always know that the built files represent the state of the original source files (a sketch follows below the ignore list).

node_modules
bower_components
typings
.sass-cache
ts/**/*js
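The post-commit hook mentioned above can be as small as this sketch (save it as .git/hooks/post-commit and make it executable):

#!/bin/sh
# rebuild after every commit so the checked-in public/ files match the source
make clean build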

Tying it together

There are so, so many ways to run tasks. You can use Grunt, Gulp or all kinds of fancy task runners, and using them can bring many advantages. But to me, a build process is something that should be intuitive whether or not you are familiar with a project, or even with a programming language. Linux has a common make process, and in my opinion whatever you want to expose as build steps should go through that mechanism: the Makefile.

So whether you decide on going with plain bash scripts, Grunt, Gulp or whatever you like, always make sure that your endpoints are also mapped in your Makefile. That way you can reliably build all your projects – and detect issues on all your projects – in the same way, no matter what’s running under the hood.
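For instance, if you were to adopt Gulp later on, the Makefile target would simply delegate to it (a sketch with a hypothetical task name):

build:
    node_modules/.bin/gulp build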

Since there isn’t any really exciting stuff going on in this project, and we have to write a Makefile anyhow, I see no reason to start implementing one of these task runners just yet.

Let’s make a Makefile:

.PHONY: all clean get-deps build build-js build-css serve

all: get-deps build

clean:
    -rm public/css/*.css
    -rm public/js/*.js
    -find ts/ -name "*.js" -type f -delete

get-deps:
    nvm install 7.0.0
    nvm use 7.0.0
    npm i
    node_modules/.bin/bower i
    node_modules/.bin/typings i
    sudo gem install sass-json-vars

build: build-js build-css
build-js:
    node_modules/.bin/browserify -p [ tsify --target es3 ] ts/app.ts \
        | java -jar node_modules/google-closure-compiler/compiler.jar \
        --create_source_map public/js/app.map --source_map_format=V3 \
        --js_output_file public/js/app.js
build-css:
    helpers/build_css.sh sass public/css

serve:
    node_modules/.bin/static-server -i index.htm public

I’ll briefly explain what happens.

Make clean clears the public folders, since they can be regenerated. It also removes .js files in the ts folder; some IDEs create these files to test their validity.

Make get-deps gets the dependencies for the project. This can be run on your test and merge servers every time before building.

Make build builds the JS from TypeScript: it drags it through Browserify to create one file, and drags it again through the Google Closure Compiler to garble and optimize it. Once done, it creates the CSS from Sass.

Make serve starts a super-simple server that enables you to test this front-end application visually on-screen.

This small server can be very handy during development. For now we don’t require any fancy rewriting, proxying or server-side processing, so statically serving assets should suffice. Install the package by running:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i static-server --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ static-server@2.0.3
├─┬ chalk@0.5.1
…
…

Seeing it work

Let’s populate the project with some base values in order to test the build process.

Configuration of the app that’s shared between CSS and JS.

conf/app.json

{
  "color": "blue"
}

Entrypoint for css

sass/app.scss

@import '../conf/app.json';
@import 'partials/_example';

Set the background color to the color in the variable, and center the color name as text on the page

sass/partials/_example.scss

html,
body {
 width: 100%;
 height: 100%;
 line-height: 100%;
 background-color: $color;
 text-align: center;
 font-size: 40vw;
}

An interface to define what can be expected from the configuration.

ts/i_config.ts

interface IConfig {
 color: string
}

export default IConfig;

A basic TypeScript file that replaces the content of the body element to show the config variable “color”.

ts/app.ts

/// <reference path="../typings_local/require.d.ts" />

import IConfig from "./i_config"

let config = <IConfig>require("../conf/app.json");


window.onload = () => {
 document.body.innerHTML = config.color;
}

In order to load the external JSON file:

typings_local/require.d.ts

declare var require: {
 <T>(path: string): T;
 (paths: string[], callback: (...modules: any[]) => void): void;
 ensure: (paths: string[], callback: (require: <T>(path: string) => T) => void) => void;
};

The HTML file that ties it all together:

public/index.htm

<!DOCTYPE html>
<html>
<head>
 <meta charset="UTF-8">
 <title>Build Process</title>
 <link rel="stylesheet" href="css/app.css">
 <script src="js/app.js"></script>
</head>

<body>
 JS not loaded
</body>

</html>

now run:

make clean build serve and see what happens!

What happens afterwards

This sets a base, but it’s far from done. And since this is my first blog post, I’ll first need to assess how it will go.

I plan on writing on lots of topics, but in sequel and relation to this post I’m considering stories about:

  • ServiceWorkers, PWA.
    • TypeScript
    • Caching
    • Cache manipulation
  • TDD, BDD automated testing
    • karma
    • browserstack
    • jasmine
    • chimp

Let me know what you think!

p.s. you can find this code at https://github.com/timmeeuwissen/build-process