A personal annual ‘retro’ and ‘grooming’

It’s easy to see the relevance of knowing where you are in space, but sometimes it’s good to reflect on where you are in time. When the clock of the year changes and sheds its last hours to rise again like a phoenix from its ashes, it’s a good moment to look around and see what time lies behind you, but also to think about what you might expect in front of you. This is my attempt to do so.

2016 was a year of hard work, dedication, determination, ups and downs, love, life, contemplation, vision, communication, valuable insights and new takes on old concepts. I had the opportunity to meet people that influenced my career and personal life in a big and positive way.

In Q4 2016 I decided to start this blog. Why? I want to challenge myself to formulate knowledge in a more tangible way. If I can explain myself in clear and easy-to-understand terms when I write things down, I create a foundation I can work from when I start communicating them verbally. It’s training in being concise, being valuable, conveying knowledge and establishing opinions.

I started with two series, which reflect my current lines of interest within my field of work.

  • Full-Stack
    In order to understand how things work, one must go back to the essence of how, but even more importantly why, something is constructed. In this series I try to touch most parts by building a small application from the ground up.
  • Leadership
    Proper leadership is the craft of helping others develop themselves and their knowledge, and inspiring them to bring out the best in themselves.

So, what’s coming up?

I’m currently working on a couple of posts at the same time. I expect the posting frequency to go down a bit, because of moving to a new house and a third child on the way, but nevertheless I’m determined to keep bringing practical and theoretical material to your screens.

Some things to expect in 2017:

  • I have had (very good) training from Isabelle Orlando on presentation skills and influencing skills. Of course I can never capture the effect of that training in some words on a screen, but there were absolutely some interesting key points that everybody can use and think about to develop their skills. I would like to give a short summary of these.
  • MoD has moved to a VPS. During that migration there were a couple of default steps that had to be done: setting up nginx, implementing a Let’s Encrypt certificate to be able to run on https, migrating from wordpress.com to a VPS-hosted domain, migrating statistics, setting up WordPress and all the packages needed for it to run (PHP and its modules), etc. I can imagine that there are a lot of people that run into these issues, are a little bit tech savvy, but don’t dare to take that step. An overview of how you do these things might help, not only to get that stuff set up, but also to stimulate familiarizing yourself with more parts than just one (hence full-stack).
  • I currently have the next two steps of the Full-Stack series in concept. I’m slightly struggling to remove complexity, e.g. doing TDD on a self-made controller in an MVC setup for the todo list we’ll be working on.
  • General progression on the series
    • also some more posts about topics like monitoring, log processing and scalability. Even though I want to keep progressing steadily towards a full product, I also want to divert every now and then and skip some steps (which should fit in again later when the series has progressed some more)
    • Full-Stack is a hands-on series; I want to create more of an equilibrium with the Leadership series, where we philosophize somewhat more
  • Lots of book reviews, summaries, references to good videos, audio fragments and websites. I intend to expand the library significantly.

 

For now, the books are resting in a crate for moving. They’ve found their position in space and time.

It’s now time for our families and friends.

 

I wish you all a magnificent 2017 that is filled with passion, energy, balance, health, knowledge and wisdom!

Tim

Behavioral tests / BDD with TypeScript

For: developers, architects and team leads that want to incorporate behavioral testing in their TypeScript projects

A couple of blog posts ago we set up a basic build line, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing BDD and/or behavioral testing on our builds.

note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read this if this has your interest.

Setting up the environment for Behavioral Testing

Let’s start with setting up a test suite.

We usually need stuff like:

  • something that connects to a browser
  • something that runs the tests
  • something that interprets the tests
  • something that compiles all needed files

There is this wonderful package called Chimp (chimpjs) that already helps us out on most of these facets. It does so by integrating and sprinkling magic over underlying tools such as Cucumber and WebdriverIO / Selenium.

Let’s install it and see from there.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD› 
╰─➤ npm i chimp ts-node --save-dev
╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ ./node_modules/.bin/typings i cucumber chai --save-dev --source=dt

Configuring Chimp

Let’s set up Chimp. Chimp is primarily a wrapper around, and a seamless integration of, multiple test frameworks, so it might not come as a surprise that we can pass config options to those individual frameworks. By default the configuration options are as follows:

https://github.com/xolvio/chimp/blob/master/src/bin/default.js

These options can be overridden in our own file, and we have to override them, because Chimp isn’t set up to use TypeScript by default.

Create a file chimp.conf.js.

module.exports = {

  // - - - - CUCUMBER - - - -
  path: './feature',
  compiler: 'ts:ts-node/register'

};

Extending the Makefile

Add your test-routine to the makefile

.PHONY: [what was already in there] test bdd

and add the rules (extend test if you’ve also done the TDD post). Note that Makefile recipes must be indented with a tab:

 

test:
    node_modules/.bin/chimp chimp.conf.js --chrome

bdd:
    node_modules/.bin/chimp chimp.conf.js --watch

Let’s also create the proper directories

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ mkdir -p feature/step_definitions

Create some tests

In order for us to know if we’ve properly set up the test framework, we want to create some tests. Since we’ve already created some nonsense during the creation of the generic build process, we’ll continue on that.

First create the .feature file

The feature file should tell in plain English what feature we expect, and how it behaves in different scenarios.

in: feature/config.feature

@watch @feature

Feature: Seeing the effect of the config on the screen
  In order to know if the config was correctly applied,
  As a Developer
  I want to test some of the aspects of the config on the screen

  Scenario: Check if background color is correct
    Given the config has the color set to blue
    When we look at the website
    Then I should be having a blue background

Then we write the implementation for this feature

The feature test as written cannot be interpreted directly by our test framework. Our script just doesn’t know what ‘background color’ means, or which element it is supposed to check. That’s why we create support for these steps. The nice thing is that you might notice some punch holes in the sentences: ‘blue’ might be switched for another color, and ‘background’ might become ‘font-color’ or something along those lines. If you cleverly analyse your scenarios, you will be able to recognise standard patterns that you can re-use.

Be careful! A common pitfall is that you end up writing a language processor (see the made-up counter-example after this list). Don’t do it! Tests should:

  • be straightforward
  • be easy to understand
  • have no deep connections with other tests 
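To make that concrete, here is a made-up counter-example of where this goes wrong: a single catch-all step definition that tries to parse arbitrary sentences, following the same step-definition pattern used below. Every new scenario would force you to extend this little home-grown language.

export default function() {

  // Don't do this: one step definition that interprets whole sentences.
  this.Then(/^(.*)$/, function (sentence: string) {
    const words = sentence.split(" ");
    // ...now interpret verbs, colors and element names from `words`,
    // with ever-growing branching logic. That is a language processor,
    // not a test.
  });

}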

Here’s the example implementation of the feature scenario. Put it in feature/step_definitions/config.ts

/// <reference path="../../typings_local/require.d.ts" />
import IConfig from "../../ts/i_config"

let config = <IConfig>require("../../conf/app.json");

export default function() {

  this.Given(/^the config has the color set to ([^\s]+)$/, function (color: string) {
    if (config.color !== color) {
      throw "Color in config mismatches the test scenario";
    }
  });

  this.When(/^we look at the website$/, function () {
    this.browser.url('http://localhost:9080');
    return this.browser.waitForExist('body', 5000);
  });

  this.Then(/^I should be having a ([^\s]+) background$/, function (color: string) {
    let browserResponse = this.browser.executeAsync(function(color: string , done: (response: boolean) => void) {
 
      let compareElem = document.createElement("div");
      compareElem.style.backgroundColor = color;
      document.body.appendChild(compareElem);
 
      let bodyElem = document.querySelector('body');

      done(
        window.getComputedStyle(compareElem).backgroundColor == window.getComputedStyle(bodyElem).backgroundColor
      );
    }, color);

    if (!browserResponse.value) {
      throw "BackgroundColor didn't match!";
    }

  });

}

Running the tests

By now we have set up TDD and BDD tests with TypeScript.  A simple

╭─tim@The-Incredible-Machine ~/Git/build-process ‹BDD*› 
╰─➤ make test

should give you something like this:

Conclusion and notes

We are now fully able to write our tests – feature as well as function – in TypeScript, and have integrated them into our example build process. We can run these tests on our own machine to verify our project locally. BDD and TDD are set up separately, so that we have more grip on each of the testing solutions and prevent coupling where it’s not needed.

We are however not completely done yet.

  • We will have to set up some CI / CD make-tasks that can be run on a headless server, since we currently leverage the browser in our own OS.
  • We will need to make sure our watchers and compilers are set up properly, so that BDD and TDD can run nicely in the background while we develop our code.

We will go more in-depth on those aspects when we start hooking our project up to nginx and really start developing an application.

Changes applied in this blog post can be found at github.

Suggestions, comments or requests for topics? Please let me know what you think and leave a comment or contact me directly.

The vast and free source of power that we often overlook

I am convinced that a compliment I have for someone else isn’t mine to keep.

We often keep our praise and compliments for someone else to ourselves. Sometimes they are shallow, like seeing that a female co-worker has made an extra effort on her hair today or that someone bought new shoes. But sometimes they run deeper, like appreciating the time someone has taken to educate you on something or to be thoughtful of your private situation.

It is key to express yourself. Speak up! Inner whispers cannot be heard.

Fear

We often fear that we miscommunicate, so we refrain from communicating at all. Thoughts like:

  • Does he think I’m trying to curry favor?
  • I’m afraid she thinks I’m hitting on her
  • This person always performs this well; giving a compliment now is silly
  • I think it’s inappropriate for me to say this since he or she is so much higher up the chain

quickly come to mind. There are thousands of reasons not to do it, and it takes courage to do it anyway.

Why should you do it anyhow?

Even though it’s hard, there are some good reasons to do it anyway. You will see that when you practice this more often, it becomes much easier after a while.

  • Positivity is infectious. A single positive word from a person you respect dearly can make you go home and tell your friends and family about it. You did well, made effort and somebody saw and valued it. Your day can be filled with an empowering feeling that makes you perform better than ever.
  • Positive reinforcement steers better than punishment. Instead of telling people what shouldn’t be done, praise what should be done and people will follow that route.
  • It’s okay to give people compliments even when you feel insecure about whether they are appropriate; just make sure you also mention your doubts (without too much detail) about giving them.
  • Telling people that things are bad will make them lose their faith and their will to work. The inverse is also true: telling people they are doing well will make them feel reinforced and want to become better.
  • Treat others the way you want to be treated yourself.
  • By being thoughtful, you can create a bond of trust. This bond (especially for leaders) is necessary when times are rougher and you need your influencing skills to steer the ship with its nose into the waves.

For leaders

To put some more emphasis on the last point, remember that trust is:
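Most likely this refers to the well-known trust equation from The Trusted Advisor, which has exactly three factors in its numerator:

Trust = (Credibility + Reliability + Intimacy) / Self-orientation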

Of the three factors you can influence as a leader, at least one third is controlled by intimacy, which is built by honesty, respect and a watchful eye for the person and their situation and needs. A kind, sincere recognition can go a long way!

Conclusion

Take the feeling that you have when you feel recognized and verbally rewarded. That power is also within you, to give to someone else. Be careful about when to apply it, and don’t overdo it. But be confident that when you think it’s time to do it, it should be done without reconsidering. Bite your tongue and wait for the response. Observe and learn from what you’ve just done.

Be conscious of the power you have unleashed to shape the day of someone else

 

 

Why and When to do Behavioral or Test Driven Development (B/T)DD

For: team leads, architects, entrepreneurs and QA members that are searching for a path to higher quality.

Generally, testing is perceived as boring, time-consuming and ‘expensive’. Cost is also the first thing business people will ask about when you propose it: ‘Fine, but how much does it cost?’. This article should give you some sort of handhold to determine what it might bring you and whether the time is right.

Knowing how to test your stuff isn’t as valuable if you don’t have a solid understanding of the why and when. I have written a post on how to implement TDD in a TypeScript build process, which covers the how. Now it’s time for the reasons behind it.

When to start with automated testing

In my opinion it’s foolish to immediately start writing tests when you start creating a new product. Often you really don’t know what the product will become (you might think you do, but usually you don’t; read about this in The Lean Startup by Eric Ries). There will be many pivoting iterations that send your application in a completely different direction while you build an MVP (Minimum Viable Product).

But once the product has found its way into the market and the goals become more and more long-term, your focus will start to shift from creating new stuff to making sure you create the good stuff. Code gets refactored all the time to be more performant and more readable. And stuff will break all the time.

At this moment your code will – and should – be tested to maintain a certain level of quality. This is the moment Automated Testing steps in.

Why would I do automated testing?

I mean, developers know what they’ve done, right? They can check what they’ve created?

That’s the general over-simplification we hear. It’s true in some sense, and manual checking will always happen in order to reach your Acceptance Criteria, but it won’t suffice. With automated tests you can:

  • Run automated tests before merging to stable so you know you are safe, and automate rollout to staging or even production (Continuous Integration / Deployment)
  • Test against hundreds of browsers and their versions, on different devices and operating systems
  • Prevent regression
  • Let the tests define functionality; they are the place you can go to find an example of integration
  • Do code coverage checks that give you information on how much of your code is covered by tests
  • Cut time and reduce the risks you take when going to production. It builds in a certain amount of certainty. Whenever a bug does slip through, you can write a new test that checks for that issue, which makes sure the exact same bug doesn’t re-occur (hence regression tests)

What does (T/B)DD stand for?

TDD means Test Driven Development

When you read it carefully, you see “Driven Development” trailing the first word “Test”.  This basically means: write your tests before you start writing any code at all. So, what are the benefits of doing this?

  • By writing your test first, you’ll have to think about the thing you are creating. Think about the outlines, the how and the what. Since you are not really hands-on yet, you don’t have to improvise or be burdened by hacking stuff in to make it work. You just think about the feature or the unit that you want to add to the system. This makes your code more atomically correct once you start writing, and you’ll spot issues before they arise.
  • Now that we know what we can expect from the thing we want to create, we run the test. The test will fail, since the code for your new test isn’t there yet.
  • You iterate on your code to make your tests green, add other tests if the functionality isn’t sufficient yet, and start this process all over again.
  • You’ll deliver code that is restricted to what’s asked of it, not to what future questions might be. You’ll deliver code that works and that others can rely on. All features are documented.

TDD is often used as a synonym for unit testing. Unit testing means that each unit of your code should be testable as a separate black box, apart from the complete system. You’ll see that your tests will mimic the file structure of the original project. Don’t merge the two together, even though this might be tempting! Tests should run separately from your production code. Your code should never rely on your tests; your tests should only rely on units of your code.
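As a minimal sketch of that test-first loop (using the Jasmine-style syntax from the TDD post on this blog; the prepend function and its spec are made-up examples): write the spec first, watch it fail, then implement just enough to make it pass.

// spec/prepend_spec.ts -- step 1: write the expectation first.
// Running this now fails, because ts/prepend.ts does not exist yet.
import prepend from "../ts/prepend";

describe("prepend", () => {
  it("puts the prefix in front of the subject", () => {
    expect(prepend("foo", "bar")).toBe("foobar");
  });
});

// ts/prepend.ts -- step 2: implement only what the spec asks for, nothing more.
export default function prepend(prefix: string, subject: string): string {
  return prefix + subject;
}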

BDD stands for Behavior Driven Development

So the ‘Driven Development’ part is exactly the same as for TDD, but in this case we don’t test units, we test the sum of their outcomes. These units combined create an experience for the user, and your scrum stories rarely contain ACs (Acceptance Criteria) with the granularity of a unit. To properly test the code, we’ll have to do integration tests that measure whether all the ACs are met.

With BDD we:

  • click through the website
  • expect functionality to be there
  • finish operations, like creating a basket and ordering
  • test if the outcome is as desired.
  • if the outcome is not correct, take a screenshot, log errors and raise the alarm.

This means:

  • have all ACs written in a structured way
  • test ACs in human-readable text against our production and test environments
  • develop a feature base (since we already keep track of ACs) that informs the next person about the what, why and how of all features in our application.

I can recommend ChimpJS to do this for you, while writing your tests in Cucumber syntax. This is friendly for Business as well as Tech. Here’s a great example of what that would look like!

 

What does testing cost?

So, to finish with the first thing that will be asked: investing in automated testing does cost money, but a lot less than having humans do the same thing.

The question really is, how much can you afford to screw up?

If the answer to that is: I don’t mind, then don’t do it. Because you wouldn’t hire a human to test either.

If the answer is: I do mind a bit but I don’t want to invest too much, then make sure that all software that is tightly related to KPIs is automatically tested. You will see that over time it will give you more than it costs you.

If testing is already an integral respected part of your deployment routine but not digitized yet, I would say: read this article again and draw your own conclusions.

 

Testing doesn’t give you 100% assurance. Nothing does. But you can always try to become better at what you do, and with that idea in mind, make sure you structurally test whenever changes are made. I once spoke with a CTO who had a complete division of testers who wrote tests apart from the development teams. To my recollection, both teams were about equal in size. What we should learn from this is: when there is much at stake, you must do more to make sure things go right.

How much is at stake for you?

Let me know what you think in the comments! Want more of this? Use the Poll on the right of the screen, comment or contact me!

Unit-tests / TDD with TypeScript

For: developers, architects and team leads that want to incorporate unit testing in their TypeScript projects

A couple of blog posts ago we set up a basic build line, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing TDD and/or unit testing on our builds.

note: this post only deals with the ‘how‘, not the ‘why‘ and ‘when‘. Read this if this has your interest.

Setting up the environment for unit testing

So what do we need:

Some testing framework (we go with Jasmine)

There are lots of unit-test tools out there (Mocha, Chai, Sinon, etc.) that are really good. At this moment I prefer Jasmine. It’s well documented, stays relevant, serves an atomic purpose, is configurable and has its plugins separated out through the NPM repo.

Some orchestrator / runner

We need an orchestrator to launch our tests in browsers. We use Karma for this.

Some browsers

There are so many browsers you can use, but also should use. Karma lets you hook up your own browser to test with (open multiple browsers on multiple machines if you want). If that’s too manual, you can go with solutions like PhantomJS, automated Chrome testing with Selenium / WebDriver, or going through BrowserStack and having it tested on multiple versions of multiple browsers on multiple operating systems and their versions. Lucky you: the runner we chose (Karma) supports interfacing with all of these as part of your test line.

Some reporters

What do we need to get a bit of grip on, and feeling for, our test process?

  • spec – show the entire spec of the units
  • coverage – we want to know if we’ve actually covered most of our logic (again, why you would like to do this will be described in another article)

 

You convinced me, two thumbs up, let’s do this.

So our lovely NPM can help us quite a bit with this. Do as follows:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ npm i jasmine-core karma karma-browserify karma-browserstack-launcher karma-jasmine karma-phantomjs-launcher karma-coverage karma-typescript-preprocessor phantomjs-prebuilt watchify karma-spec-reporter --save-dev

Next chapter.. ;-).

Instructing the runner

Karma needs to know a bit about what it should do when we’ve asked for a test.

karma.conf.js

module.exports = function (config) {
 config.set({
   basePath: '',
   frameworks: ['browserify', 'jasmine'],
   files: [
     'spec/**/*_spec.ts'
   ],
   exclude: [],
   preprocessors: {
     'spec/**/*.ts': ['browserify','coverage']
   },
   browserify: {
     debug: true,
     plugin: [['tsify', {target: 'es3'}]]
   },
   reporters: ['spec', 'coverage'],
   port: 9876,
   colors: true,
   logLevel: config.LOG_INFO,
   autoWatch: true,
   browserDisconnectTimeout: 1000,
   browserDisconnectTolerance: 0,
   browserNoActivityTimeout: 3000,
   captureTimeout: 3000,
   browserStack: {
     username: "",
     accessKey: "",
     project: "build-process",
     name: "Build-process test runner",
     build: "test",
     pollingTimeout: 5000,
     timeout: 3000
   },
   coverageReporter: {
     type: 'text'
   },
   customLaunchers: {
     ie10: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "7",
       browser: "ie",
       browser_version: "10"
     },
     chrome: {
       base: "BrowserStack",
       os: "Windows",
       os_version: "10",
       browser: "chrome",
       browser_version: "latest"
     },
   },
   browsers: ['PhantomJS'],
   singleRun: false
})}

don’t forget to create the directory that is scanned for your _spec.ts files (spec/, as configured above)

 

Extending the Makefile

Add your test-routine to the makefile

.PHONY: [what was already in there] test tdd

and add the rules:

test:
    node_modules/.bin/karma start --single-run

tdd:
    node_modules/.bin/karma start

 

Getting definitions of Jasmine

Since your code is written in TypeScript, your tests are preferably also written in TypeScript. You’ll need some definitions of Jasmine’s capabilities in order to use it properly. Luckily the people of typings are geniuses and have supplied such a definition for us!

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
 ╰─➤ node_modules/.bin/typings i jasmine --source="dt" --global
 jasmine
 └── (No dependencies)

 

Test if we can test

Oh boy that is a nice title :-). Let’s write some nonsense first, so we can write tests for it later.

The nonsense

Now create some simple example module like ts/example_module.ts:

type someCallback = (someString: string) => string;

export default class example_module {

  constructor(private someVar: string, private callback: someCallback) {

  }

  public some_method(){
    console.log('some method ran!');
  }

  public get_string(): string {
    this.some_method();
    return this.callback(this.someVar);
  }

}

 

There’s a range of nonsense that could be applied in even more bizarre ways, which I don’t intend to pursue if you don’t mind. This should suffice 🙂

Let’s test this nonsense

Create this test file in spec/example_module_spec.ts

Generally it’s a good idea to separate the tests from the project, since they otherwise clutter the area you’re working in. But do try to mimic the structure of your normal ts folder; this allows you to find your files efficiently. We append _spec to the filename because, when your project grows, it’s not uncommon to create a helper or two, which shouldn’t be picked up automatically.

/// <reference path="../typings/index.d.ts" />

import ExampleModule from "../ts/example_module"

describe('A random example module', () => {

  var RANDOM_STRING: string = 'Some String',
      RANDOM_APPENDED_STRING: string = ' ran with callback',

      callback = (someString: string): string => {
        return someString + RANDOM_APPENDED_STRING;
      },
      exampleModule: ExampleModule;

   /**
    * Reset for each testcase the module, this enables that results
    * won't get mixed up.
    */
   beforeEach(() => {
     exampleModule = new ExampleModule(RANDOM_STRING, callback);
     spyOn(exampleModule, 'some_method');
   });

   /**
    * testing the outcome of a module
    *
    * Should be doable for almost all methods of a module
    */
   it('should respond with a callback processed result', () => {
     let response = exampleModule.get_string();

     expect(response).toBe(RANDOM_STRING + RANDOM_APPENDED_STRING);
   });

   /**
    * testing that specific functionality is called
    *
    * You could make use of this, when you expect a module to call
    * another module, and you want to make sure this happens.
    */
  it('should have called a specific method each time the string is retrieved', () => {
    // notice that, because of the beforeEach statement, the spy is reset
    expect(exampleModule.some_method).toHaveBeenCalledTimes(0);

    // execute logic twice
    exampleModule.get_string();
    exampleModule.get_string();

    // expect that the function is called twice.
    expect(exampleModule.some_method).toHaveBeenCalledTimes(2);
  });
});

The result:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ make test
node_modules/.bin/karma start --single-run
08 12 2016 22:58:35.109:INFO [framework.browserify]: bundle built
08 12 2016 22:58:35.115:INFO [karma]: Karma v1.3.0 server started at http://localhost:9876/
08 12 2016 22:58:35.116:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
08 12 2016 22:58:35.130:INFO [launcher]: Starting browser PhantomJS
08 12 2016 22:58:35.380:INFO [PhantomJS 2.1.1 (Linux 0.0.0)]: Connected on socket /#RxSPDX6Lu-LvxyP2AAAA with id 76673218
PhantomJS 2.1.1 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0.04 secs / 0.001 secs)
--------------|----------|----------|----------|----------|----------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
--------------|----------|----------|----------|----------|----------------|
 spec/        |      100 |      100 |      100 |      100 |                |
 app_spec.ts  |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|
All files     |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤

Or as my lovely console likes to tell me in more color:

(screenshot: build-process-tdd.png)

 

Check the PR for the actual code.

Want more? Any ideas for the next one? Let me know or use the poll on the right side of the screen!

Library update Dec 2016

I’d like to share some good resources I use to educate myself. There are so many good books, tutorials and talks out there, and I think it’s good to start a reference. Our little technical world has grown well past ‘little’, and hunting for new information can be quite a challenge. Why should we all dig for the same gems?

If you think I’m missing some important stuff (no worries, I ab-so-lutely will, as I’m just starting this), PLEASE send me a message so I can add it.

I will publish a post every once in a while, when I’ve gathered enough new material in the library section. Please, by all means, send me links to

  • articles
  • books
  • images
  • movies / videos
  • websites
  • tutorials
  • or whatever you think fits

that can enrich your peer developers’ technical skill set.

Current topics in the library

  • Browser Performance
    • General auditing (Links to website)
    • Jank (Links to website)
  • Databases
    • CQRS and EventSourcing (video)
  • Entrepreneurship
    • Lean Startup (bookreference)
  • Enterprise Stack
    • Uber (Links to website)
  • Machine Learning
    • A good place to start (Links to learning platform)
    • Humans and cognitive bias (Image)
    • MIT Open Course Ware (Playlist of videos)
  • Microservices
    • Definition (Links to website)
    • Applications (links to some relevant applications)
    • Databases in the cloud
      • How Netflix does it (links to website)
      • How Uber does it (links to website)

Why use Story points or Time for resource tracking

For: team leads and Entrepreneurs

Running a service-oriented business isn’t the same as running a product-oriented business. There’s a major difference, and over the course of time I’ve learnt where the differences lie when it comes to resource tracking, and how that may or may not affect your business.

Resource Tracking based on Time (more service oriented)

Pros:

  • enables specific billing: which feature costs how much money, exactly
  • prospects and invoices can be compared to see how budgets are met
  • you can track individual progress and troubleshoot at a really fine level when things aren’t going well
  • works exceptionally well for occasional small ad-hoc services

But the cons weigh heavier:

  • time tracking is a pitfall for managers to start micro-managing
  • it kills creativity
  • it drives quality down (you assess on time, not on result)
  • there are large amounts of overhead and overthinking for the developer:
    was I effective these 15 minutes? And I had to google a lot for this feature, should the customer pay for that?
  • employee satisfaction, but also effectiveness, is strengthened when the employee feels comfortable. The best ideas come to you when you’re not actively trying to solve something. Room to relax is in that sense just as important for the employer as for the employee (mind you, there should be a good balance here, part of which can be obtained by having a good, thorough intake). When the employee has to account for the set 8 hours of the job, he’ll feel uncomfortable about having sat down and stared out of the window for half an hour, even though that time might have solved lots of other things. At the end of the day, he or she will resort to creative bookkeeping, which results in lots of negative energy that could have been used for positivity.

Resource tracking with story points (more product oriented)

When you start resource tracking with story points:

  • all points will be relative to one another.
  • developers don’t have to think about the client; they just have to think about what a task costs relative to other tasks
  • tasks become easier to separate, since they can now be defined without having to be phrased in business terms
  • story points have relative value, which keeps the speed of the individual developer out of the estimation
  • more focus on quality

But how do you sell this:

  • Measure in Complexity and Uncertainty, not Effort
    • Complexity captures how hard it is to get the job done. E.g. it touches lots of repositories, we have to align with lots of people and the subject is very delicate: that would be high complexity.
    • Uncertainty gets weighed in because it is key that this factor gets reduced to a minimum during grooming sessions. The more certain something is, the smaller the task can be and the better it can be estimated and delivered for the allocated number of story points. So if either your PO or you don’t have high confidence and feel uncertain about how to solve the task, start splitting what is clear from what isn’t, to keep the stories deliverable. External dependencies are uncertainties as well.
    • Effort gets pulled out. It’s silly to do simple things lots of times, and your customer shouldn’t have to pay for silliness. This is where you have to play it smart and say: changing all the files by hand would take me 10 hours, so I will write a converter that does exactly this, this and that, which only drives up the complexity, and thus the investment in getting this topic solved.
      By doing this, you get rid of your legacy topics. Programmers should be lazy and automate everything they can. This should be part of the routine, because your PO ‘pays’ for your routine. Just be cautious not to overdo it.
  • Each team will establish a baseline of story points that it can process in a time slot: this is your so-called ‘velocity’. You can easily divide the cost of a time slot over the story points and see what a story would cost (see the sketch after this list). Because your team is focusing on relative cost, instead of fitting delivery of functionality into a time slot (which is doomed to fail), you can calculate the cost to analyse ROI versus the expected delivery date. You could then also decide to buy from external sources, since you have a good idea of what it would cost to do it in-house.
  • Run through the story and write down all the steps that need to be done, so that everybody has a ground-level understanding of how to solve it. Buy a set of scrum poker cards, count down and let everybody throw down a card. No significant differences? Quickly reach consensus and take the average if no one objects. Someone super high or super low? Let them explain and let the team learn from that perception.
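As a back-of-the-envelope sketch of that velocity-based costing (all numbers below are hypothetical, purely for illustration):

// Hypothetical numbers, purely for illustration.
const sprintCostInEuro = 40000;   // fully loaded team cost per sprint (assumed)
const velocity = 50;              // story points the team historically finishes per sprint (assumed)

const costPerStoryPoint = sprintCostInEuro / velocity;   // 800 euro per point

// An 8-point story can now be priced and weighed against its expected ROI,
// or against the price of buying the functionality externally.
const storyEstimateInPoints = 8;
const storyCost = storyEstimateInPoints * costPerStoryPoint;   // 6400 euro

console.log(`Estimated cost of this story: ${storyCost} euro`);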

Conclusion

As you might see, I’m hugely in favor of working in a product-oriented environment. This is not always possible given your business model and current list of clients. If you would like to go with story points, but your clients are not ready yet, try this:

  • find a good-sized client, preferably one that favors on-time delivery over super-detailed invoices. You need one or two good-sized ones, because this works best when you allocate one (or preferably multiple) full sprints with a complete team to it.
  • build a business case, show them your intent to deliver more consistently and be willing to invest a bit yourself. You can decrease your own investment over the course of time, but all process changes suffer from inertia, so give it some of your own momentum.
  • learn from your first couple of attempts. It is more important to persist in the process than to immediately have good numbers.
  • make sure you deliver, so make sure you have small stories with almost no uncertainties.
  • Once you have a basic idea of how much the team can do, commit slightly under it and use that room for delivering quality and optimizing your process. This eventually allows you to move the baseline up.

You’ve now done what a developer would do: apply an abstraction layer over business metrics so that the team can work with its own currency. You’ll have a more productive, more motivated and more reliable team as a result.

Any thoughts? Let me know!

Turtles all the way down

A story about the risk of over-abstraction and false assumptions on technical debt.

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

The first time I came into contact with this story was when I read the book Gödel, Escher, Bach. Although it seems hilarious, it also points out that there is no cure for human stubbornness and a lack of logical thinking. It is an exceptional example of how little credit we give to the inertia of mankind’s cumulative factual cognition, or to the perseverance of human-made stories one could believe in.

Not long after, I started recognizing some of these silly patterns in my own behavior. Of course regarding personal and behavioral matters, but what concerned me more was that these patterns can easily be found in day-to-day technical tasks. A recurring theme in my software seemed to be serious over-engineering, with abstractions on top of abstractions, all to separate concerns wherever possible and isolate whatever could be isolated. The layers of abstraction grew so thick that they became very hard to follow for anyone else, including my future me, and I realized I needed some serious re-prioritization of what I perceived as good practice in software development.

The rule of three

It is so, so hard to leave technical debt alone when you’ve had a history full of it. Removing any tech debt up front, before it bites you in the ass later, becomes second nature for a developer. But there’s a risk in this.

We tend to optimize our code prematurely. The risk is that we optimize without knowing the full set of features that are required. This is when I introduced the rule of three (which is sometimes ignored, and sometimes becomes the rule of two, but don’t tell anybody, okay?).

Only when you’ve seen a similar functional demand occur three times should you start clustering the functionality and isolating the individual concerns.

By the time you’ve implemented a third similar functionality (which usually needs some adaptations to work in its specific situation), you can tell something about the environment the component should work in.
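A minimal sketch of the rule in practice (the price-formatting helpers are made-up examples): only after the third copy appears do the real variation points become visible, and the extraction becomes obvious.

// Three similar pieces of functionality, written as they were needed:
const cartTotal    = (amount: number) => `€ ${amount.toFixed(2)}`;
const invoiceTotal = (amount: number) => `EUR ${amount.toFixed(2)}`;
const refundTotal  = (amount: number) => `€ -${amount.toFixed(2)}`;

// By the third occurrence the real variation points are visible
// (currency label, sign), so the cluster can now be isolated safely:
function formatPrice(amount: number, label = "€", negate = false): string {
  const value = (negate ? -amount : amount).toFixed(2);
  return `${label} ${value}`;
}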

Set your KPIs

(For those who don’t know: Key Performance Indicators, the things that tell you whether you are doing the right thing or should pivot your efforts.)

This might seem strange, but make sure you get your definition of done straight before you start developing new features. The definition should only encompass the creation of functionality. Not the how, just the what. Don’t create elaborate structures; try to get to your goal the fastest way possible.

Honoring:

  • transparency
    Read your code, and let someone else read it (peer reviews). If it’s not clear what it does or how it works, it’s not good enough
  • Usage of other modules (DRY (Don’t Repeat Yourself))
    Don’t do work that’s already done
  • Don’t implement features you don’t directly need (KISS, Keep It Simple, Stupid)
    I guarantee you that the functions you consider nice to have but unused, will be the first to bring your code to a grinding halt.

You’ll need these KPIs, because odds are that you won’t feel good – at all – about the product you’ve just delivered. There’s ALWAYS a better way to do things, and that shouldn’t drag down the real-life value you’ve just created. Satisfy your KPIs and feel satisfied. But stay watchful.

Observe

Take notes along the path of delivery. Mark the project as concept or MVP (although functionally you might feel you’re there, you can sometimes treat functionalities as separate products) and keep track of it. Observe everything that will be needed in the future and check whether your suspicions about lacking features, abstractions and re-use of code are right. If so, don’t be shy: become your own PO and create a story that removes the tech debt. If your relationship with your usual PO has trust as its foundation, he should respect this story as much as any other feature request and allocate time to remove this technical debt.

Apply validated learning

By waiting to apply all these abstractions, you enable validated learning (beautifully described by Eric Ries in The Lean Startup) to more or less scientifically confirm the future of the feature (the standard definition used in validated learning), but also the need for and the focus of the future optimization.

Bottom line: you’ll spend less time on stuff that gets thrown away.

It’s not turtles all the way down anymore. It’s just a bunch of oddly stacked turtles on a ridge in some water on a planet.

What follows after this.

I’d still like to write a blog post about testing code. This article about levels of abstraction relates to that future testing blog post in so many ways.

If you’d like me to put some focus on that, let me know by using the poll on the right side of the screen!