Behavioral tests / BDD with TypeScript

For: developers, architects and team leads that want to incorporate behavioral testing in their TypeScript projects

A couple of blog posts ago we set up a basic build line, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing BDD and / or behavioral testing on our builds.

note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read the post on the why and when if that has your interest.

Setting up the environment for Behavioral Testing

Let’s start by setting up a test suite.

We usually need stuff like

  • something that connects to a browser
  • something that runs the tests
  • something that interprets the tests
  • something that compiles all needed files

There is this wonderful package called ChimpJS that already helps us out on most of these facets. It does so by integrating a set of underlying tools (a test runner, Cucumber and a WebDriver-based browser layer) and sprinkling some magic over them.

Let’s install it and see from there.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD› 
╰─➤ npm i chimp ts-node --save-dev
╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ ./node_modules/.bin/typings i cucumber chai --save-dev --source=dt

Configuring Chimp

Let’s set up chimp. Chimp is primarily a wrapper and seamless integration of multiple test frameworks, so it might not come as a surprise that we can pass config options through to these individual frameworks. By default the configuration options are as follows:

https://github.com/xolvio/chimp/blob/master/src/bin/default.js

These options can be overridden in a file of our own, and we have to override some, because chimp isn’t set up to use TypeScript by default.

Create a file chimp.conf.js.

module.exports = {

  // - - - - CUCUMBER - - - -
  path: './feature',
  compiler: 'ts:ts-node/register'

};

Extending the Makefile

Add your test-routine to the makefile

.PHONY: [what was already in there] test bdd

and add the rules (extend test if you’ve also done the tdd post):

 

test:
    node_modules/.bin/chimp chimp.conf.js --chrome

bdd:
    node_modules/.bin/chimp chimp.conf.js --watch

Let’s also create the proper directories

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ mkdir -p feature/step_definitions

Create some tests

In order to know whether we’ve properly set up the test framework, we want to create some tests. Since we already created some nonsense during the creation of the generic build process, we’ll continue from that.

First create the .feature file

The feature file should describe in plain English what feature we expect, and how it behaves in different scenarios.

in: feature/config.feature

@watch @feature

Feature: Seeing the effect of the config on the screen
  In order to know if the config was correctly applied,
  As a Developer
  I want to test some of the aspects of the config on the screen

  Scenario: Check if background color is correct
    Given the config has the color set to blue
    When we look at the website
    Then I should see a blue background

Then we write the implementation for this feature

The feature test as written cannot be interpreted directly by our test framework. Our script simply doesn’t know what ‘background color’ means, or which element it is supposed to check. That’s why we create support for these steps. The nice thing is that you may notice some punch holes in the sentences: ‘blue’ might be switched for another color, and ‘background’ might become ‘font-color’ or something along those lines. If you cleverly analyse your scenarios, you’ll start to recognise standard patterns that you can re-use.

Be careful! A common caveat is that you start writing a language processor. Don’t do it! Tests should:

  • be straightforward
  • be easy to understand
  • have no deep connections with other tests 

Here’s the example implementation of the feature scenario. Put it in feature/step_definitions/config.ts

/// <reference path="../../typings_local/require.d.ts" />
import IConfig from "../../ts/i_config"

let config = <IConfig>require("../../conf/app.json");

export default function() {

  this.Given(/^the config has the color set to ([^\s]+)$/, function (color: string) {
    if (config.color !== color) {
      throw "Color in config mismatches the test scenario";
    }
  });

  this.When(/^we look at the website$/, function () {
    this.browser.url('http://localhost:9080');
    return this.browser.waitForExist('body', 5000);
  });

  this.Then(/^I should see a ([^\s]+) background$/, function (color: string) {
    let browserResponse = this.browser.executeAsync(function(color: string , done: (response: boolean) => void) {
 
      let compareElem = document.createElement("div");
      compareElem.style.backgroundColor = color;
      document.body.appendChild(compareElem);
 
      let bodyElem = document.querySelector('body');

      done(
        window.getComputedStyle(compareElem).backgroundColor == window.getComputedStyle(bodyElem).backgroundColor
      );
    }, color);

    if (!browserResponse.value) {
      throw "BackgroundColor didn't match!";
    }

  });

}

Running the tests

By now we have set up TDD and BDD tests with TypeScript.  A simple

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ make test

Should kick off chimp, run the feature in the browser and report the scenario results in your console.

Conclusion and notes

We are now fully able to write our tests – feature as well as function – in TypeScript, and have integrated them into our example build process. We can run these tests on our own machine to verify our project locally. BDD and TDD are set up separately so that we have more grip on each of the testing solutions and prevent coupling where it isn’t needed.

We are however not completely done yet.

  • We will have to set up some CI / CD make-tasks that can be run on a headless server, since we currently leverage the browser in our own OS.
  • We will need to make sure our watchers and compilers are set up properly, so that BDD and TDD run nicely in the background while we develop our code.

We will go more in-depth on those aspects when we start hooking our project up to nginx and really start developing an application.

Changes applied in this blog post can be found on GitHub.

Suggestions, comments or requests for topics? Please let me know what you think and leave a comment or contact me directly.

Why and When to do Behavioral or Test Driven Development (B/T)DD

For: Teamleads, Architects, Entrepreneurs and QA members that are searching for a path to higher quality.

Generally, testing is perceived as boring, time consuming and ‘expensive’. Cost is also the first thing business people will ask about when you propose it: ‘Fine, but how much does it cost?’. This article should give you some sort of hand-hold to determine what it might bring you and whether the time is right.

Knowing how to test your stuff isn’t as valuable if you don’t have a solid understanding of the why and when. I have written a post on how to implement TDD in a TypeScript build process, which covers the how. Now it’s time for the reasons behind it.

When to start with automated testing

In my opinion it’s foolish to immediately start writing tests when you start creating a new product. Often you really don’t know what the product will become (you might think you do, but usually you don’t; read about this in The Lean Startup by Eric Ries). There will be many pivots that send your application in a completely different direction while you build an MVP (Minimum Viable Product).

But once the product has found its way into the market, and the goals become more and more long-term, your focus starts to shift from creating new stuff to making sure you create the right stuff. Code gets refactored all the time to be more performant and more readable. But stuff will also break all the time.

At this moment your code will – and should – be tested to maintain a certain level of quality. This is the moment Automated Testing steps in.

Why would I do automated testing

I mean, developers know what they’ve done, right? They can check what they’ve created?

That’s the general over-simplification we hear. It’s true in some sense, and it always happens in order to reach your Acceptance Criteria, but it won’t suffice. With automated tests you can:

  • Run automated tests before merging to stable so you know you are safe, and automate rollout to staging or even production (Continuous Integration / Deployment)
  • Test against hundreds of browsers and their versions, on different devices and operating systems
  • Prevent regression
  • The tests define functionality. They are the place you can go to find an example of integration
  • Do code coverage checks that give you information on how much of your code is covered by tests.
  • They cut time and reduce the risks you have when going to production. They build in a certain amount of certainty. Whenever a bug does slip in, you can write a new test to check for that issue, which makes sure the exact same bug doesn’t re-occur (hence: regression tests)

What does T/B DD stand for?

TDD means Test Driven Development

When you read it carefully, you see “Driven Development” trailing the first word “Test”.  This basically means: write your tests before you start writing any code at all. So, what are the benefits of doing this?

  • By writing your test first, you’ll have to think about the thing you are creating. Think about the outlines, the how and what. Since you are still not really hands-on, you also don’t have to improvise and be burdened by hacking stuff in to make it work. You just think about the feature or the unit that you want to add to the system. This makes your code more atomically correct once you start writing and you’ll spot issues before they arise.
  • Now that we know what we can expect from the thing we want to create, we run the test. The test will fail, since the code for your new test isn’t there yet.
  • You iterate on your code to make your tests green, add other tests if the functionality isn’t sufficient yet, and start this process all over again.
  • You’ll deliver code that is restricted to what’s asked of it, not to what future questions might be. You’ll deliver code that works and that others can rely on. All features are documented.

TDD is often a synonym for Unit Testing. Unit testing means that each unit of your code should be testable as a separate black box, apart from the complete system. You’ll see that your tests end up mimicking the file structure of the original project. Don’t merge the two together, although this might be tempting! Tests should run separately from your production code. Your code should never rely on your tests; your tests should only rely on units of your code.
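To make the black-box idea concrete, here is a tiny hypothetical sketch in TypeScript with Jasmine-style syntax (the price formatter and its file names are made up for the example):

// format_price.ts - a unit with no dependencies on the rest of the system
export function formatPrice(cents: number, currency: string = "EUR"): string {
  return `${currency} ${(cents / 100).toFixed(2)}`;
}

// format_price_spec.ts - exercises only the unit's contract, nothing else
import { formatPrice } from "./format_price";

describe("formatPrice", () => {
  it("formats cents as a decimal amount", () => {
    expect(formatPrice(1999)).toBe("EUR 19.99");
  });

  it("respects the requested currency", () => {
    expect(formatPrice(500, "USD")).toBe("USD 5.00");
  });
});

The spec knows nothing about where formatPrice is used; it only pins down its input and output, which is exactly what makes it a black-box test.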

BDD stands for Behavior Driven Development

So the ‘Driven Development’ part is exactly the same as for TDD, but in this case we don’t test units, we test the sum of their outcomes. These units combined create an experience for the user, and your scrum stories rarely contain ACs with the granularity of a unit. To properly test the code, we’ll have to do integral tests that measure whether all criteria of the ACs are met.

With BDD we:

  • click through the website
  • expect functionality to be there
  • finish operations, like creating a basket and ordering
  • test if the outcome is as desired.
  • If not correct, take a screenshot, log errors and raise an alarm.

This means:

  • have all ACs written in a structured way
  • test ACs in human-readable text against our production and test environments
  • develop a feature base (since we already keep track of the ACs) that informs the next person about the what, why and how of all features in our application.

I can recommend ChimpJS to do this for you, while writing your tests in Cucumber syntax. That syntax is friendly for Business as well as Tech. The feature file in the BDD post earlier is a great example of what that looks like!

 

What does testing cost?

So, to finish with the first thing that will be asked: investing in automated testing does cost money, but a lot less than humans doing the same thing over and over.

The question really is, how much can you afford to screw up?

If the answer to that is: I don’t mind, then don’t do it. Because you wouldn’t hire a human to test either.

If the answer is: I do mind a bit but I don’t want to invest too much, then make sure that all software that is tightly related to KPIs is automatically tested. You will see that over time it will give you more than it costs you.

If testing is already an integral, respected part of your deployment routine but not digitized yet, I would say: read this article again and draw your own conclusions.

 

Testing doesn’t give you 100% assurance. Nothing does. But you can always try to become better at what you do, and with that idea in mind make sure that you structurally test whenever changes are made. I once spoke with a CTO who had a complete division of testers that wrote tests apart from the development teams. To my recollection both teams were about equal in size. What we should learn from this is: when there is much at stake, you must do more to make sure things go right.

How much is at stake for you?

Let me know what you think in the comments! Want more of this? Use the Poll on the right of the screen, comment or contact me!

Unit-tests / TDD with TypeScript

For: developers, architects and team leads that want to incorporate unit testing in their TypeScript projects

A couple of blog posts ago we set up a basic build line, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing TDD and / or unit testing on our builds.

note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read the post on the why and when if that has your interest.

Setting up the environment for unit testing

So what do we need:

Some testing framework (we go with Jasmine)

There are lots of unit-test tools out there (mocha, chai, sinon, etc.) that are really good. At this moment I prefer Jasmine. It’s well documented, stays relevant, serves an atomic purpose, is configurable and has its plugins separated through the NPM repo.

Some orchestrator / runner

We need an orchestrator to launch our tests in browsers. We use Karma for this.

Some browsers

There are so many you can use, but also should use. Karma lets you hook up your own browser (open multiple browsers on multiple machines if you want) to test with. If that’s too manual you can go with solutions like PhantomJS, automated Chrome testing with Selenium / WebDriver, or doing it through BrowserStack and have it tested on multiple versions of multiple browsers on multiple operating systems and their versions. Lucky you, the runner we chose (Karma) supports interfacing with all of these as part of your test line.

Some reporters

What do we need to get a bit of grip on, and feeling for, our test process?

  • spec – show the entire spec of the units
  • coverage – we want to know if we’ve actually covered most of our logic (again, why you would like to do this will be described in another article)

 

You convinced me, two thumbs up, let’s do this.

So our lovely NPM can help us quite a bit with this. Do as follows:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ npm i jasmine-core karma karma-browserify karma-browserstack-launcher karma-jasmine karma-phantomjs-launcher karma-coverage karma-typescript-preprocessor phantomjs-prebuilt watchify karma-spec-reporter --save-dev

Next chapter.. ;-).

Instructing the runner

Karma needs to know a bit about what it should do when we’ve asked for a test.

karma.conf.js

module.exports = function (config) {
 config.set({
   basePath: '',
   frameworks: ['browserify', 'jasmine'],
   files: [
     'spec/**/*_spec.ts'
   ],
   exclude: [],
   preprocessors: {
     'spec/**/*.ts': ['browserify','coverage']
   },
   browserify: {
     debug: true,
     plugin: [['tsify', {target: 'es3'}]]
   },
   reporters: ['spec', 'coverage'],
   port: 9876,
   colors: true,
   logLevel: config.LOG_INFO,
   autoWatch: true,
   browserDisconnectTimeout: 1000,
   browserDisconnectTolerance: 0,
   browserNoActivityTimeout: 3000,
   captureTimeout: 3000,
   browserStack: {
     username: "",
     accessKey: "",
     project: "build-process",
     name: "Build-process test runner",
     build: "test",
     pollingTimeout: 5000,
     timeout: 3000
   },
   coverageReporter: {
     type: 'text'
   },
   customLaunchers: {
     ie10: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "7",
       browser: "ie",
       browser_version: "10"
     },
     chrome: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "10",
       browser: "chrome",
       browser_version: "latest"
     },
   },
   browsers: ['PhantomJS'],
   singleRun: false
})}

don’t forget to create the spec directory that is scanned for your _spec.ts files (a simple mkdir spec will do)

 

Extending the Makefile

Add your test-routine to the makefile

.PHONY: [what was already in there] test tdd

and add the rules:

test:
    node_modules/.bin/karma start --single-run

tdd:
    node_modules/.bin/karma start

 

Getting definitions of Jasmine

Since your code is written in TypeScript, your tests are preferably also written in TypeScript. You’ll need some definitions of the capabilities of Jasmine in order to use it properly. Luckily the people of typings are geniuses and have supplied such a definition for us!

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
 ╰─➤ node_modules/.bin/typings i jasmine --source="dt" --global
 jasmine
 └── (No dependencies)

 

Test if we can test

Oh boy that is a nice title :-). Let’s write some nonsense first, so we can write tests for it later.

The nonsense

Now create some simple example module like ts/example_module.ts:

type someCallback = (someString: string) => string;

export default class example_module {

  constructor(private someVar: string, private callback: someCallback) {

  }

  public some_method(){
    console.log('some method ran!');
  }

  public get_string(): string {
    this.some_method();
    return this.callback(this.someVar);
  }

}

 

There’s a range of nonsense that could be applied in even more bizarre ways, which I don’t intend to pursue if you don’t mind. This should suffice 🙂

Let’s test this nonsense

Create this test file in spec/example_module_spec.ts

Generally it’s a good idea to separate the tests from the project, since they otherwise clutter the area you’re working in. But do try to mimic the structure that’s used in your normal ts folder; this allows you to find your files efficiently. We append _spec to the filename because, when your project grows, it’s not uncommon to create a helper or two, which shouldn’t be picked up as tests automatically.

/// <reference path="../typings/index.d.ts" />

import ExampleModule from "../ts/example_module"

describe('A random example module', () => {

  var RANDOM_STRING: string = 'Some String',
      RANDOM_APPENDED_STRING: string = ' ran with callback',

      callback = (someString: string): string => {
        return someString + RANDOM_APPENDED_STRING;
      },
      exampleModule: ExampleModule;

   /**
    * Reset for each testcase the module, this enables that results
    * won't get mixed up.
    */
   beforeEach(() => {
     exampleModule = new ExampleModule(RANDOM_STRING, callback);
     spyOn(exampleModule, 'some_method');
   });

   /**
    * testing the outcome of a module
    *
    * Should be doable for almost all methods of a module
    */
   it('should respond with a callback processed result', () => {
     let response = exampleModule.get_string();

     expect(response).toBe(RANDOM_STRING + RANDOM_APPENDED_STRING);
   });

   /**
    * testing that specific functionality is called
    *
    * You could make use of this, when you expect a module to call
    * another module, and you want to make sure this happens.
    */
  it('should have called a specific method each time the string is retrieved', () => {
    // notice that, because of the beforeEach statement, the spy is reset
    expect(exampleModule.some_method).toHaveBeenCalledTimes(0);

    // execute logic twice
    exampleModule.get_string();
    exampleModule.get_string();

    // expect that the function is called twice.
    expect(exampleModule.some_method).toHaveBeenCalledTimes(2);
  });
});

The result:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ make test
node_modules/.bin/karma start --single-run
08 12 2016 22:58:35.109:INFO [framework.browserify]: bundle built
08 12 2016 22:58:35.115:INFO [karma]: Karma v1.3.0 server started at http://localhost:9876/
08 12 2016 22:58:35.116:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
08 12 2016 22:58:35.130:INFO [launcher]: Starting browser PhantomJS
08 12 2016 22:58:35.380:INFO [PhantomJS 2.1.1 (Linux 0.0.0)]: Connected on socket /#RxSPDX6Lu-LvxyP2AAAA with id 76673218
PhantomJS 2.1.1 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0.04 secs / 0.001 secs)
--------------|----------|----------|----------|----------|----------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
--------------|----------|----------|----------|----------|----------------|
 spec/        |      100 |      100 |      100 |      100 |                |
 app_spec.ts  |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|
All files     |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤

Or as my lovely console likes to tell me in more color:

(screenshot: build-process-tdd.png)

 

Check the PR for the actual code.

Want more? Any ideas for the next one? Let me know or use the poll on the right side of the screen!

Turtles all the way down

A story about the risk of over-abstraction and false assumptions on technical debt.

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

The first time I came into contact with this story was when I read the book Gödel, Escher, Bach. Although it seems hilarious, it also points out that there is no solution for human stubbornness and lack of logical thinking. It is an exceptional example of how the inertia of mankind’s cumulative factual cognition goes uncredited, or of the perseverance of human-made stories one could believe in.

Not long after, I started recognizing some of these silly patterns in my own behavior. Of course regarding personal and behavioral stuff, but it somehow concerned me more that these patterns could easily be found in day-to-day technical tasks. A recurring theme in my software seemed to be some serious over-engineering, with abstractions over abstractions, all to separate concerns wherever possible and isolate whatever could be isolated. The layers of abstraction grew so thick that they became very hard to follow for anyone else, including my future me, and I realized I needed some serious re-prioritization of what I perceived as good practice in software development.

The rule of three

It is so, so hard to leave technical debt alone when you’ve had a history full of it. It becomes second nature for a developer: remove any tech debt up-front before it bites you in the ass afterwards. But there’s a risk in this.

We tend to prematurely optimize our code. The risk here is that we optimize without knowing the full set of features that are required. This is why I introduced the rule of three (which sometimes is ignored, and sometimes becomes the rule of two, but don’t tell anybody okay?).

Only when you’ve seen similar functional demand occur three times, start clustering the functionality and isolate the individual concerns.

By the time you’ve implemented a third similar functionality (which usually needs some adaptations to work in a specific situation), you can tell something about the environment the component should work in.
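To make that concrete, a small hypothetical TypeScript sketch (all names and URLs are made up): only after the third occurrence of the same "fetch JSON, fall back to a default" shape is it clear what actually varies, so only then do we extract the shared helper.

// Occurrences one, two and three all had the same shape: fetch a JSON
// document and fall back to a sane default when that fails. After seeing it
// three times we know what varies (the url and the shape of the data), so
// only now do we cluster it into a helper.
async function loadJson<T>(url: string, fallback: T): Promise<T> {
  try {
    const response = await fetch(url);
    return (await response.json()) as T;
  } catch (err) {
    return fallback;
  }
}

// The three original call sites collapse into one-liners:
loadJson("/conf/app.json", { color: "blue" }).then((conf) => console.log(conf.color));
loadJson("/conf/flags.json", { beta: false }).then((flags) => console.log(flags.beta));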

Set your KPIs

(For those who don’t know: Key Performance Indicators, the stuff that tells you whether you are doing the right thing or should pivot your efforts).

This might seem strange, but make sure you set your definition of done straight before you start the development of new features. The definition should only encompass the creation of functionality. Not the how. Just the what. Don’t create elaborate structures, but try to get to your goal the fastest way possible.

Honoring:

  • transparency
    Read your code, and let someone else read it (peer reviews). If it’s not clear what it does or how it works, it’s not good enough
  • Usage of other modules (DRY (Don’t Repeat Yourself))
    Don’t do work that’s already done
  • Don’t implement features you don’t directly need (KISS, Keep It Simple, Stupid)
    I guarantee you that the functions you consider nice to have but unused, will be the first to bring your code to a grinding halt.

You’ll need these KPIs! Because odds are that you won’t feel good – at all – about the product you’ve just delivered. There’s ALWAYS a better way to do things, and that shouldn’t drag down the real-life value you’ve just created. Satisfy your KPIs and feel satisfied. But stay watchful.

Observe

Take notes along the path of delivery. Mark the project as Concept or MVP (although functionally you might feel you’re there, you can sometimes treat functionalities as separate products) and keep track of it. Observe all the stuff that’s needed in the future and observe whether your suspicions of lacking features, abstractions and re-use of code are right. If so, don’t be shy to become your own PO and create a story that removes the tech debt. If your relationship with your usual PO is one founded on trust, he or she should respect this story as much as any other feature request and allocate time to remove this technical debt.

Apply validated learning

By waiting to apply all these abstractions, you enable validated learning (beautifully described by Eric Ries in The Lean Startup) to more or less scientifically confirm the future of the feature (the standard definition used in validated learning), but also the need for and the focus of the future optimization.

Bottom line: you’ll spend less time on stuff that gets thrown away.

It’s not turtles all the way down anymore. It’s just a bunch of oddly stacked turtles on a ridge in some water on a planet.

What follows after this.

I’d still like to write a blog post about testing code. This article about levels of abstractions relates to that future testing blog post in so many ways.

If you’d like me to put some focus on that, let me know by using the poll on the right side of the screen!

I’m starting an app. What technology should I use?

For: entrepreneurs that need to decide what technology to use for their future app

Mobile first. Everybody does it, wants it, needs it. Simply because your phone’s front camera probably sees your face more often than your spouse does. It’s easy, it’s fun and it’s addictive. So usually going for a mobile-first approach seems like a good thing to do.

It’s – as an entrepreneur – very hard to decide which technology to use. You could be non-technical, but you still have to make a well-informed decision on this technical matter, and you want what’s best for the company. This blog post attempts to give you some insight (boxing gloves) into what options you have, and how they might affect you in the short and long run.

Progressive web apps

pros

  • The app can be developed in a well-known stack. Most developers in this world are web developers.
  • Quicker iterations in development, easier to work with each other
  • Quicker iterations on client-side, codebase can pull the latest version ad hoc, instead of having to wait for store approval
  • No store denial: Apple, Google and others cannot curate your app based on its content. Not only the initial app, but also each update doesn’t have to be reconsidered against policies
  • Cross platform: any browser that supports it, on any platform, will be able to show and serve it (there’s a caveat here, see the cons later on)
  • Fallback for non-compliant browsers, the browser will simply see a website without the offline benefits
  • It’s really (really) fast
  • The app can be visited in the browser, but also installed (it becomes chromeless for a perfect UX; there is no reason your user shouldn’t be able to have the exact experience he or she would have on native technologies).
  • Not only cross-platform, but also re-usability of components cross-ui (reuse business logic for different experiences on mobile, tablet, desktop, television).
  • Easy to apply lean / agile methodology (determine MVP, test / experiment, pivot or accept and enhance)
  • Forcing of https
    • security against eavesdropping and modifications with proxies
    • security all-throughout the communication platform
    • enablement of HTTP/2 (more performance!)
    • more services become active (like GPS) since having them on http would infringe on personal security
  • Progressive compliance with device interfacing rights (e.g. the user will only be asked to allow camera access, when the camera is actually used by the app)
  • All major browser vendors (Google, Microsoft, Firefox, Samsung-web) are actively supporting the adoption of this technology

cons

  • Not all hardware and native functionalities can be accessed as easily as through a native app, though browsers are making a real effort to mitigate this issue. Think of the recent work that’s done in:
    • gps
    • camera
    • push notifications
    • run as a service
    • vibrate
    • multi-touch
    • etc..
  • Not all browsers on mobile fully support this, but it is coming (as discussed at the Google PWA summit 2016 in Amsterdam).
    • Safari is currently not actively pursuing PWA, but recently announced they will start making an effort (no solid promises there). This probably has everything to do with losing grip on the app ecosystem and a possible loss of market share.
  • Some native look and feel for default behavior may differ from the native experience
  • Optimized for 2D web apps. WebAssembly and 3D are coming, but for now one can expect the highest 3D performance on native.

When to apply

  • Do this if you have an informative app. PWA isn’t really suitable for gaming. It’s not that you can’t, it’s just that usually native apps are better at this or have nicer tools to quickly and effectively approach your goal.
  • Do this when all animations and visual elements are designed and accounted for.
  • Do this when your MVP has best chance of surviving when it’s rolled out on more than one device type

Cordova

Cordova might be a good fallback for PWA on iOS to get maximal coverage on all platforms. Cordova basically shows a webview in an app and loads your web app from its static cache.

Pros

  • It mitigates possible current platform issues, since the entire PWA built app can be wrapped in a native container.
  • There are tonnes of plugins that enable direct usage of hardware or other device features
  • Extendable per operating-system (e.g. when a sync-adapter should be shipped with the app)

Cons

  • Webviews use software rendering and are degraded versions of browsers. Therefore the app will never be as fast as a native app or a PWA.
  • Publishing to markets comes with all rigidity of publishing to markets e.g.
    • maintain backwards compatibility on all interfacing endpoints, expect users not to update / upgrade
    • slow iterations and bundling of features (release trains to prevent flooding of updates). This also introduces slowness in acceptance by the market
    • having accounts for all platforms, compiling against all platforms
  • mandatory compliance with permissions and updates of these policies

When to apply

  • As a fallback wrapper for PWA. I wouldn’t apply this anymore without PWA as foundation t.b.h.

Native technologies (per platform)

Pros

  • fast when operational (usually slower boot speeds than PWA), especially for high performance apps like games
  • when standard device interaction is preferred (like how pulldowns look and how some animations are), this is the best way to go.

Cons

  • you need intricate knowledge on the specific
    • languages
    • development processes
    • device capabilities
    • market rules
    • platform updates
  • codebase is not reusable for other platforms
    • that means that the effort will have to be made twice
    • discrepancies in functionalities between platforms will form
    • you’ll need more people in order to maintain multiple platforms / versions
    • double maintenance, updates will become cumbersome and expensive
  • mandatory installing through appstore, no ‘checking out’ through browser before installing
  • slow iterations in releasing, mandatory release train orchestration
  • mandatory compliance with permissions and updates of these policies

When to apply

  • when developing an MVP, a specific device could be targeted. This can only be done if the success and KPIs of the MVP are not dependent on this factor. Do this only when you are sure enough that the other platform either doesn’t matter, or enough resources will be available to also develop for the other platform
  • when developing for e.g. mobile watch technologies
  • when developing a game

Cross-platform technologies (like xamarin)

Pros

  • one language to rule all app-based platforms

Cons

  • no portability to desktop
  • unfamiliar development process. I’ve never seen many people work on this at the same time
  • harder to find people for, thus creating a strong human dependency (SPOF)
  • there’s a transpilation process. Since native technologies are moving very rapidly, it’s the question whether these technologies can keep up with the pace of multiple OS vendors.
  • the longevity of native OSes is high, but there is no guarantee for technologies that depend on these OSes.

When to apply

  • When native is preferred over PWA, but cross-device (excluding desktop usage through browser) is really important
  • When a performant application on iOS is key (since the Cordova fallback webview for PWA doesn’t perform that well)
  • When you develop a game and found people that are dedicated and skilled in developing for these technologies.

Where business logic isolation fails with RDBMSses

For: software architects, DBAs and  backend developers

Databases get a lot of attention, and they should. In a mutual relationship, code gets more and more dependent on a database, and the database grows by demand of new functionality and code.

In order to be as resilient as possible to future changes, in RDBMS-land one should strive for the highest normal form for their data. If you’ve already decided that you have to have an RDBMS to do the job, take this advice: screw the good advice ‘never design for the future’. Because the future of your application will look very grim if you don’t carefully consider your data schema (I assume considerable longevity and expansion of functionality of the application here).

So.. What’s this blog post about?

There is tight coupling involved between a database structure and an application. There’s no way around it. And that’s okay. But what I think is not okay, is defining the rules about the relations more than once. This is prone to discrepancies in logic between the database and your codebase. There can be countless triggers and foreign-key constraints that block, cascade or set to null, and you won’t know about them unless you perform your action and analyze the result, or you’ll have to mimic the database logic in your code (which will break over time due to the discrepancies).

Making the issue visible

Let’s first fire up a database

╭─tim@The-Incredible-Machine ~
╰─➤ sudo apt-get install mysql-server-5.7

And populate it with some data. I’ve found this example database on GitHub. If you also use it to learn from or with, please give the repo a star so people know they aren’t putting stuff out for nothing.

╭─tim@The-Incredible-Machine ~/Git
╰─➤ git clone git@github.com:datacharmer/test_db.git
Cloning into 'test_db'...
remote: Counting objects: 94, done.
remote: Total 94 (delta 0), reused 0 (delta 0), pack-reused 94
Receiving objects: 100% (94/94), 68.80 MiB | 1.71 MiB/s, done.
Resolving deltas: 100% (50/50), done.
Checking connectivity... done.

╭─tim@The-Incredible-Machine ~/Git/test_db ‹master›
╰─➤ mysql -u root -p < employees.sql
Enter password:
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
INFO
LOADING departments
INFO
LOADING employees
INFO
LOADING dept_emp
INFO
LOADING dept_manager
INFO
LOADING titles
INFO
LOADING salaries
data_load_time_diff
00:01:02

In order to see what we’ve got, I reverse-engineered the database diagram from the database. This sounds harder than it actually is. Open MySQL Workbench, make sure you’ve established a connection with your running MySQL service, go to the “Database” menu and use the “Reverse Engineer” feature. Workbench will create a diagram for you, a so-called EER (Enhanced Entity Relationship) diagram.

EER diagram of the example database (example_db_structure.png)

We can basically see that all relations are cascading when a delete occurs, and restricting when an update occurs. So when an employee gets deleted, he or she will be removed from the department, will not be a manager anymore and all history of salaries and titles will be removed.

So now let’s look up one single department manager

mysql> select * from dept_manager limit 1;
+--------+---------+------------+------------+
| emp_no | dept_no | from_date  | to_date    |
+--------+---------+------------+------------+
| 110022 | d001    | 1985-01-01 | 1991-10-01 |
+--------+---------+------------+------------+
1 row in set (0,00 sec)

And ask the database to explain its plan upon deletion of the employee that we know is a manager.

mysql> explain delete from employees where emp_no = 110022;
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table     | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
|  1 | DELETE      | employees | NULL       | range | PRIMARY       | PRIMARY | 4       | const |    1 |   100.00 | Using where |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
1 row in set (0,00 sec)

So what is it I miss?

I am missing an Explain-like function that doesn’t give me meta-info about the query optimizer, but actual info on what would happen (in terms of relations) if I were to remove an entity from my database given a specific query.

I would expect that it would return data like:

  • which table
  • how many records will be affected
  • what would be the operation (restrict, cascade, null)

And just like an EXPLAIN can be extended, running this in an extended mode could also yield a group-concatenated list of primary keys, separated by commas and with CSV fanciness to not break the string (string delimiters, escape characters, whatever you feel is needed). Imagine the fanciness you could unleash by actually informing your user which record is the culprit.
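Until something like that exists, the closest you can get from application code is to read the declared rules back from information_schema yourself. A minimal sketch of that idea in TypeScript (the mysql2 package and the connection settings are assumptions for the example):

import * as mysql from "mysql2/promise";

// List which tables reference the given table and what their declared
// ON DELETE / ON UPDATE rules are. Note that this only surfaces the rules,
// not how many rows a concrete delete would actually touch, which is exactly
// the gap described above.
async function referencingRules(schema: string, table: string) {
  const conn = await mysql.createConnection({ host: "localhost", user: "root" });

  const [rows] = await conn.execute(
    `SELECT TABLE_NAME, DELETE_RULE, UPDATE_RULE
       FROM information_schema.REFERENTIAL_CONSTRAINTS
      WHERE CONSTRAINT_SCHEMA = ?
        AND REFERENCED_TABLE_NAME = ?`,
    [schema, table]
  );

  await conn.end();
  return rows; // e.g. [{ TABLE_NAME: 'salaries', DELETE_RULE: 'CASCADE', UPDATE_RULE: 'RESTRICT' }, ...]
}

referencingRules("employees", "employees").then(console.log);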

Name me some examples where I need this!

Let’s imagine we’ve created a CRM-like system. For now, let’s imagine we have

  • organizations
  • addresses
  • invoices
  • contacts
  • notes

Some questions that might arise are:

  • Can I delete the organization, or is there still an unprocessed invoice attached (blocking constraint)? I’d rather not present the user with the delete button and give a reason why it cannot be done, than conclude this issue afterwards and roll back the action.
  • When I delete an organization, what will go with it?
    • Will processed invoices also be removed
      (hope not! Better set to null in this case, the invoice should (besides the org.id) also store a copy of all relevant org data)?
    • Will unprocessed invoices be removed?
      (Maybe best to block in this situation, since there is still money potential  or reason to assume the user doesn’t want this)
    • will notes be removed?
      (You could do this, but only if you are very verbose about it)
    • will contacts be removed?
      (Often you don’t but since these kinds of relations are often many-to-many, these coupling-records should be removed)
  • What do I actually know about this relation? What’s exactly connected? Super nice to dynamically create graphs on how data is connected and how relevant this data actually is.

Why solve it in the database?

I have worked on two applications that each counted in excess of 150 tables, all interconnected, and I can really say that investing significant effort in philosophizing on your database schema isn’t a luxury, it’s a must.

A database contains atomic values. It lays the foundation that your code works with. That inherently means that whatever we can do at that level to protect it, we should.

A couple of advantages of solving as much as possible in the database

  • If multiple tools or applications connect to the database, they will all have to follow the same conventions
  • whatever is already handled in the database doesn’t have to be transferred back and forth to the database, and thus saves bandwidth.
  • Foreign key constraints use indexed columns by nature. Your queries will run faster, since the columns on which your tables relate to each other are already indexed
  • You don’t have to rely on super-transparent naming (you should do that anyhow by the way), but everyone that looks at your database will understand how the tables relate to each other.

What’s next?

With regard to database-related subjects, there are a couple of topics that I’d like to cover in the future, like:

  • Caches
    • microcaches, denormalizing data
    • cache-renewal strategies
  • Versioning of datastructures
  • Query optimization
    • do’s and don’ts on writing of queries
    • methods to optimize

Do you think that I’m off, missing some recent progression in this area or just want to chat? Drop me a line and let me know what you think!

Creating a resilient Front-End build process

For: intermediate front-end developers and expert developers that need a quick reference.

There are lots of ways to get your front-end architecture to the client. Usually the basic concepts seem easy and quickly done, but you will soon find yourself having to

  • make an effort to integrate it in a running environment,
  • make it work for the world (browser compatibilities and such),
  • handle external dependencies,
  • write code that’s understandable for your colleagues (and yourself in 6 months).

Yep you’ve guessed it, with every demand you put on your code, the effort will go up exponentially.

This post is not about how to manage your project. There are many good books (e.g. Eric Ries – The Lean Startup) and articles to be found about that. This post focuses on some core steps that I generally like to take to get an optimal work environment that gives me the least amount of friction during development.

During development you will work with lots of individual files, comments, testers and such, to ensure that the project is structurally sound, but also understandable for humans. But as soon as we deploy for our client, there will only be machines interpreting your code, and we’d like to get rid of everything that isn’t strictly necessary and really get the highest performance we can.

Assumptions

I have to make some assumptions about your environment, otherwise I would have to go all the way back to installing Linux. So the things that I assume are:

  • you are developing / deploying on Linux machines of the Debian family (I use Ubuntu)
  • you have NVM (Node Version Manager) installed (https://github.com/creationix/nvm). I don’t include NVM in the build process, since it is installed with a shell script and I consider piping a curl-response to bash as a potential hazard for your organization.
  • you have basic knowledge of Linux, client-server over HTTP and JavaScript
  • you have Sass installed

Versions

One issue that has been persistent over the years is versions of dependencies. Sometimes a dependency gets updated and sub-dependencies cannot be resolved anymore. A solution for that lies in the combination of NVM, NPM and Bower.

You might notice that I refrain from using global modules. I do this to detach the project as much as possible from anything that is, or should be, available on your machine. This way, we can also ensure that we use the correct version, and not an unknown version that is set globally.

Node

We first need to define which version of Node we’d like to use. Usually, at the time of development, we’d like to use the latest stable release.

╭─tim@The-Incredible-Machine ~
╰─➤ nvm install stable
Downloading https://nodejs.org/dist/v7.0.0/node-v7.0.0-linux-x64.tar.xz...
######################################################################## 100,0%
WARNING: checksums are currently disabled for node.js v4.0 and later
Now using node v7.0.0 (npm v3.10.8)

Here you see my machine pulling in the latest version of Node, which at this time is 7.0.0.

You can see which versions are available on your machine like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm ls
   v5.5.0
   v6.8.0
-> v7.0.0
system
node -> stable (-> v7.0.0) (default)
stable -> 7.0 (-> v7.0.0) (default)
iojs -> N/A (default)

And you can switch between them like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm use 7.0.0
Now using node v7.0.0 (npm v3.10.8)

Now that we have Node in place, it can supply us with the tools we use to build our solutions. We’ll first need to create a project.

 

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master›
╰─➤ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (build-process)
version: (1.0.0)
description: Example build process
entry point: (index.js)
test command:
git repository: (https://github.com/timmeeuwissen/build-process.git)
keywords:
author: Tim Meeuwissen
license: (ISC)
About to write to /home/tim/Git/build-process/package.json:

{
 "name": "build-process",
 "version": "1.0.0",
 "description": "Example build process",
 "main": "index.js",
 "scripts": {
 "test": "echo \"Error: no test specified\" && exit 1"
 },
 "repository": {
 "type": "git",
 "url": "git+https://github.com/timmeeuwissen/build-process.git"
 },
 "author": "Tim Meeuwissen",
 "license": "ISC",
 "bugs": {
 "url": "https://github.com/timmeeuwissen/build-process/issues"
 },
 "homepage": "https://github.com/timmeeuwissen/build-process#readme"
}


Is this ok? (yes)

Node is basically a JavaScript engine running in a Linux environment (e.g. on a server). There are lots of great tools written in Node to interpret and mutate your project’s files to make them client-friendly.

Bower

Bower is one of these tools. It enables you to get dependencies for your front-end architecture. Whenever you feel like using jQuery, lodash, react, material-design or whatever you prefer to use, you will always need to get the dependencies, structure them in a certain way and keep track of their versions. Bower is NPM for front-end related components and does exactly that.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i bower --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└── bower@1.7.9

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/bower init
? name build-process
? description Example build process
? main file index.js
? keywords
? authors Tim Meeuwissen
? license ISC
? homepage https://github.com/timmeeuwissen/build-process
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

{
 name: 'build-process',
 description: 'Example build process',
 main: 'index.js',
 authors: [
 'Tim Meeuwissen'
 ],
 license: 'ISC',
 homepage: 'https://github.com/timmeeuwissen/build-process',
 private: true,
 ignore: [
 '**/.*',
 'node_modules',
 'bower_components',
 'test',
 'tests'
 ]
}

? Looks good? Yes

TypeScript

I’d rather use TypeScript than plain JS. TypeScript is a superset of JS that needs to be pulled through a compiler in order to run on a client. There are always reasons not to do stuff, but I’d like to share my reasons for doing it anyhow.

  • It is developed and maintained by Microsoft. This isn’t the smallest guy in town, and they do an excellent job at it.
  • It is a superset, and depending on your settings you can choose whether or not to use typing wherever you want (may I kindly request that you enforce every variable to be typed strictly, for reasons to follow)
  • Code-completing gets a hell of a lot more fun for your IDE (I personally really like Visual Studio Code on Linux from Microsoft. It’s free, try it!)
  • You can always work in the latest standards. Depending on your compiler arguments the code will be transpiled to any given ECMAScript standard you require for your company. The TypeScript compiler nicely polyfills whatever isn’t available in that ES version, and as time moves on, browsers get better and your code will become outdated less quickly (you can switch to emitting ES6 on any given day of the week, for example).
  • You can more easily apply your backend architecture skills when you work with the latest ES version.
  • Your fellow developers will know exactly the input and output of each and every function without knowing the intricate details of your application (see the sketch below).
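A tiny sketch of that last point (the function itself is made up for the example): the signature alone already documents the contract.

interface Discount {
  percentage: number;
  reason: string;
}

// Without reading the body, a colleague knows this takes a price in cents and
// an optional list of discounts, and always returns a rounded amount in cents.
function applyDiscounts(priceInCents: number, discounts: Discount[] = []): number {
  const factor = discounts.reduce((acc, d) => acc * (1 - d.percentage / 100), 1);
  return Math.round(priceInCents * factor);
}

applyDiscounts(10000, [{ percentage: 10, reason: "loyal customer" }]); // 9000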

Packing and transpiling

It’s a safe bet that we will create lots of documents that all depend on each other. Browserify is able to follow these dependencies and combine them into one file. There is a plugin called tsify, which basically runs the TypeScript compiler on the files as part of the bundling.
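For instance, given a made-up pair of modules, tsify compiles both files and browserify follows the import so that the output is one bundle:

// greeter.ts (made-up module)
export default function greet(name: string): string {
  return `Hello ${name}`;
}

// entry.ts - the file you hand to browserify; the import below is what
// browserify follows and inlines into the single output file
import greet from "./greeter";

console.log(greet("build-process"));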

Let’s add it to our project:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i browserify tsify typescript --save-dev
build-process@1.0.0 /home/tim/Git/build-process
├─┬ browserify@13.1.1
│ ├── assert@1.3.0
…
…

Get the Google Closure Compiler. We will pipe the output of browserify through it.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i google-closure-compiler --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ google-closure-compiler@20161024.1.0

Typings

Now, not every project is written in TypeScript, but we would still like to be able to rely on the behavior of external dependencies within our files. E.g. jQuery might return some structure after invoking it, and we want to be able to recognize that output and work with it as such. Typings is a registry filled by lots of wonderful people with interface definitions for these external dependencies, so you don’t have to write them yourself! (A small sketch of what such a definition buys you follows right after the install.)
Let’s get it first:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i typings --save-dev 1 ↵
build-process@1.0.0 /home/tim/Git/build-process
└─┬ typings@1.5.0
├── archy@1.0.0
…
…
╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/typings init
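To give an idea of what such a definition buys you, here is a hypothetical snippet. It assumes you have also pulled in the jquery definition through typings, which this example project doesn’t actually do:

/// <reference path="../typings/index.d.ts" />

// Because the jQuery interface is known, the compiler can verify this chain:
// $('.menu') is typed as a JQuery object, and .addClass() exists on it and
// returns JQuery again.
$(".menu").addClass("is-open");

// A typo like .adClass(), or passing a number where a selector string is
// expected, now fails at compile time instead of at runtime in the browser.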

Sass

It’s important that every part of our application is structured in such a way that someone else understands what he or she is looking at. This includes the style. Style documents are easily overlooked and deemed less important, but in my experience they are often responsible for the biggest part of the technical debt. Files with thousands of lines of style that interact with the HTML, with no way to understand or properly refactor them, aren’t an uncommon sight.

In a separate post I plan to go deeper into how you can structure your styles in such a way that you won’t build your own little jungle of CSS. Here, I will keep the focus on the build steps and high-level reasoning.

Sharing of configuration and building

It often happens that some variables are shared across the front-end architecture. Examples vary from a path to the CDN to a maximum amount of items within a carousel.

Assuming that you already have Sass installed, I encourage you to also install sass-json-vars.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ sudo gem install sass-json-vars

Because we always want to compile our files this way, it’s handy to create a helper that picks up all scss files that match a pattern and converts them to normal css files.

I created a directory helpers with the file build_css.sh that basically contains this:

#!/bin/bash

# $1 = directory to scan for documents
# $2 = directory to put finished css documents in

find "$1" -name "[^_]*.scss" 2> /dev/null | while read input
do
  # mirror the input path into the output dir and swap the .scss extension for .css
  output=$(echo "$input" | sed "s@^$1@$2@" | sed 's@\.scss$@.css@')
  outputdir=${output%/*}
  mkdir -p "$outputdir"
  sass -r sass-json-vars -t compressed "$input" "$output"
done

Call it with an input and an output dir as first and second argument. Nice helper right?

p.s. don’t forget to make it executable (chmod +x helpers/build_css.sh).

Basic file structure

Now that we have our external dependencies in, our directory structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── helpers
│   └── build_css.sh
├── package.json
└── typings.json

 

(I’ve omitted some documents from the tree view; you can do this yourself by creating this alias in your .bashrc: alias tree="tree -C -I 'vendor|node_modules|bower_components'")

Let’s add some extra folders to give some direction as to where stuff should go.

First all public stuff.

Here’s where all assets that can be visited by a browser will reside. All compiled files will be written to these directories. Why not dist? Since these projects are intended to be a dependency of other projects, other parts like the sass or typescript files are also part of what’s distributed, but those should never go ‘public’, so there you have it :-).

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p public/api public/css public/js

 

Now the source stuff.

Situations will occur in which you want to have a special interface for an external dependency. Typings will create its own dir, but here we have a space in which we can put our own custom ones.

Sass is split into multiple files which don’t have to be converted individually (partials) and functions that cover some visual logic (mixins).

The conf dir will hold ‘constant’ variables that become fixed in the code at compile time.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p sass/partials sass/mixins ts helpers typings_local conf

By now our structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── conf
├── helpers
│   └── build_css.sh
├── package.json
├── public
│   ├── api
│   ├── css
│   └── js
├── sass
│   ├── mixins
│   └── partials
├── ts
├── typings.json
└── typings_local

11 directories, 4 files

Create a .gitignore. It makes it so much nicer if you don’t have to store all external dependencies in your own repo. Notice that all files in public are checked in as normal. Optionally you could run a post-commit hook that builds the project, so you always know that the built files represent the state of the original source files.

node_modules
bower_components
typings
.sass-cache
ts/**/*js

Tying it together

There are so so so many ways to run tasks. You can use Grunt, Gulp or all kinds of fancy task-runners. Using task-runners can bring many advantages. But for me, a build process is something that should be intuitive whether or not you are familiar with a project or even a programming language. Linux has a common make process, and in my opinion whatever you want to expose as build steps should go through that mechanism. Makefile.

So whether you decide to go with plain bash scripts, Grunt, Gulp or whatever you like, always make sure that your endpoints are also mapped in your Makefile. This way you can reliably build all your projects – but also detect issues on all your projects – in the same way, no matter what’s running under the hood.

Since there isn’t any really exciting stuff going on in this project and we already have to write a Makefile anyway, I see no reason to start implementing one of these task-runners yet.

Let’s make a Makefile:

.PHONY: all clean get-deps build build-js build-css serve

all: get-deps build

clean:
    -rm public/css/*.css
    -rm public/js/*.js
    -find ts/ -name "*.js" -type f -delete

get-deps:
    nvm install 7.0.0
    nvm use 7.0.0
    npm i
    node_modules/.bin/bower i
    node_modules/.bin/typings i
    sudo gem install sass-json-vars

build: build-js build-css
build-js:
    node_modules/.bin/browserify -p [ tsify --target es3 ] ts/app.ts \
        | java -jar node_modules/google-closure-compiler/compiler.jar \
        --create_source_map public/js/app.map --source_map_format=V3 \
        --js_output_file public/js/app.js
build-css:
    helpers/build_css.sh sass public/css

serve:
    node_modules/.bin/static-server -i index.htm public

I’ll explain briefly what happens

Make clean clears the public folders, since they can be regenerated. It also removes .js files in the ts folder. Some IDEs create these files to test their validity.

Make get-deps gets the dependencies for the project. This can be run on your test and merge servers every time before building.

Make build builds the JS from TypeScript, drags it through browserify to create one file and drags it again through the Google Closure Compiler to minify and optimize it. Once done, it creates the CSS from Sass.

Make serve starts a super-simple server that enables you to test this front-end application visually on-screen.

This small server that helps you during development can be very handy. For now we don’t require any fancy rewriting, proxying or server-side processing, so statically serving assets should suffice. Install the package by running:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i static-server --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ static-server@2.0.3
├─┬ chalk@0.5.1
…
…

Seeing it work

Let’s populate the project with some base values in order to test the build process.

Configuration of the app that’s shared between css and js.

conf/app.json

{
  "color": "blue"
}

Entrypoint for css

sass/app.scss

@import '../conf/app.json';
@import 'partials/_example';

Set the background color to the color in the variable, and center the color name as text on the page

sass/partials/_example.scss

html,
body {
 width: 100%;
 height: 100%;
 line-height: 100%;
 background-color: $color;
 text-align: center;
 font-size: 40vw;
}

An interface to define what can be expected from the configuration.

ts/i_config.ts

interface IConfig {
 color: string
}

export default IConfig;

A basic TypeScript file that replaces the content of the body element with the value of the config variable “color”.

ts/app.ts

/// <reference path="../typings_local/require.d.ts" />

import IConfig from "./i_config"

let config = <IConfig>require("../conf/app.json");


window.onload = () => {
 document.body.innerHTML = config.color;
}

In order to load the external JSON file:

typings_local/require.d.ts

declare var require: {
 <T>(path: string): T;
 (paths: string[], callback: (...modules: any[]) => void): void;
 ensure: (paths: string[], callback: (require: <T>(path: string) => T) => void) => void;
};

The html file that combines it all together

public/index.htm

<!DOCTYPE html>
<html>
<head>
 <meta charset="UTF-8">
 <title>Build Process</title>
 <link rel="stylesheet" href="css/app.css">
 <script src="js/app.js"></script>
</head>

<body>
 JS not loaded
</body>

</html>

now run:

make clean build serve and see what happens!

What happens afterwards

This sets a base, but it’s far from done. And since it’s my first blogpost, I’ll first need to assess how this will go.

I plan on writing about lots of topics, but as a sequel to and in relation to this post I’m considering stories about:

  • ServiceWorkers, PWA.
    • TypeScript
    • Caching
    • Cache manipulation
  • TDD, BDD automated testing
    • karma
    • browserstack
    • jasmine
    • chimp

Let me know what you think!

p.s. you can find this code at https://github.com/timmeeuwissen/build-process