Criticality Mode and Circuit Breakers

(This is a crosspost from an article I wrote @ https://medium.com/jumbo-tech-campus)

Ideally our frontend solution is a unicorn that eats rainbows and poops butterflies. In reality it’s often a piece of software reliant on other pieces of software that all have the potential to break for whatever reason, relaying those problems to our customers.

We tend to be feature driven and primarily look at the happy flow of our software. This has everything to do with the Pareto Principle:

It was … discovered that in general 80% of a certain piece of software can be written in 20% of the total allocated time. Conversely, the hardest 20% of the code takes 80% of the time.

And at the same time:

… Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated.

Now, it’s not a given that the 20% from the former statement fully overlaps with that of the latter, but there’s at least a correlation here. The 20% that ‘needs to be done right’ takes 80% of our time, and is prone to shortcuts.

One of these shortcuts is often skipping designing for failure. We tend to forget about what should happen in case our solution doesn’t work as intended. That’s a problem, because, for example, a full outage costs actual money (conversions) and can become a major detractor for your public image and retention rate.

Criticality Mode

Failure can manifest itself in many ways, and for many reasons. This means you’ll need to be able to pull the plug on a macro level. In the case of a webshop, you might want to implement three levels:

Criticality Green

No general level of criticality assigned. All features function the way they should (implementing circuit breakers, as described in the next chapter).

Criticality Orange

All critical functionalities (like finding and showing items, adding items to a basket and placing orders) are operational as expected; for all other (non-mission-critical) functionalities we actively trigger their circuit breakers (again, more on that in the next chapter). It’s important to inform your customers that you are running your shop with reduced functionality.

Once you go orange, you’ll be able to process orders, make money and have the ‘best’ experience given the circumstances, whilst reducing strain on the backend so you can fix what’s broken or run major updates.

Criticality Red

When you go ‘red’, you basically disable ALL backend traffic. This essentially means you’ll have to serve a static website without interactive functionalities.

This is the first and the easiest implementation of criticality mode. What you’ll need to create in order to do this is:

  • pick a cloud provider other than the one you host your operations on (you’ll want this in case you e.g. accidentally run a Terraform job that deletes vital pieces of your operation)
  • schedule a daily recursive wget of your homepage that writes its output to a storage bucket
  • make sure the wget sends a header when it crawls, like x-criticality-mode: red.
  • do a first pass over your solution and condition interactive logic to not show (no add-to-basket CTA’s, no basket at all, stuff like that).
  • condition a banner to show on top of your page to inform the user in case the header is provided.
  • Adjust your load balancer to route — based on the preferred criticality mode — all external traffic towards the static bucket. Each response should be fitted with a no-cache header, which allows you to quickly come back from this mode when needed.
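The banner condition from these steps could be sketched as an Express-style middleware. Only the header name comes from the steps above; the function and flag names are made up for the example:

```javascript
// Flags a criticality banner for the template layer when the request
// carries the header the wget crawl sends (x-criticality-mode: red).
function criticalityBanner(req, res, next) {
  res.locals.showCriticalityBanner =
    req.headers['x-criticality-mode'] === 'red';
  next();
}
```

The template then conditions the banner on `res.locals.showCriticalityBanner`, so the crawled static copy carries the notice while normal traffic does not.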

Going red enables the client to still see your products but postpone their order. Since your site basically progressively enhances once you go back to orange or even green, the conversion loss will be minimised and the perceived quality of the digital solution will remain relatively high.

Circuit Breakers

The Circuit Breaker Design Pattern comes down to this:

If you know a backend is under pressure, trying to make more connections to it or start waiting for it makes no sense.

So instead of making more and more connections that cannot be resolved, bringing down your entire stack (because of timeouts and dog-piling), you start to inform the user beforehand that the functionality isn’t as expected.

This comes with some implications:

As a developer, you should make your stakeholders aware of the components your solution is relying upon and how they can fail.

As business, you should figure out a way — together with your developers — to ensure the best behaviour when that situation occurs.

This circuit breaker should evidently be tested and verified in an automated fashion to ensure the longevity of the solution. That inherently means you’ll have to be able to trigger the circuit breaker yourself. A good way to model this into your landscape is incorporating the CB in a feature-flagging system.
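A minimal sketch of the pattern, including a manual trigger so a feature flag can trip it. The class, states and thresholds are illustrative, not a specific library’s API:

```javascript
// Minimal circuit breaker: after repeated failures it 'opens' and serves
// the fallback immediately, instead of piling more connections onto a
// backend that is already under pressure.
class CircuitBreaker {
  constructor({ failureThreshold = 3, resetTimeoutMs = 30000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.state = 'closed'; // closed = traffic flows normally
    this.openedAt = 0;
  }

  call(fn, fallback) {
    if (this.state === 'open') {
      // After the reset timeout, allow one trial request (half-open).
      if (Date.now() - this.openedAt >= this.resetTimeoutMs) {
        this.state = 'half-open';
      } else {
        return fallback(); // inform the user instead of making them wait
      }
    }
    try {
      const result = fn();
      this.failures = 0;
      this.state = 'closed';
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold || this.state === 'half-open') {
        this.state = 'open';
        this.openedAt = Date.now();
      }
      return fallback();
    }
  }

  trip() { // manual trigger, e.g. from the feature-flagging system
    this.state = 'open';
    this.openedAt = Date.now();
  }
}
```

The `trip()` method is what the feature-flag integration below hooks into: flipping a flag opens the breaker without waiting for real failures, which also makes the breaker testable in an automated fashion.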

Feature Flagging is a way to enable or disable features for certain groups, or percentages of traffic on your website. You can go wild with user segmentation and A/B testing and such, but for the goal of this article I’d like to highlight a mode that you can relatively easily implement that resembles criticality mode.

  • Each feature should be listed
  • At which criticality level should this feature be shown? (Green = only when fully operational, Orange = mission-critical feature, Red = can be crawled and has no interactive functionality (or is stripped when crawled))
  • What is the target state of the feature? (Green = feature is operating nominally, Orange = trigger its circuit breaker, Red = remove feature from external traffic).

This setup allows you:

  • to hide features from external traffic, but still test them on production for a specific target audience
  • to reduce stress on a particular backend function, showing reduced functionality to a user while not degrading the rest of the digital solution
  • to manage which features are shown in which criticality state
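Concretely, such a registry might be sketched as follows. Feature names and the exact flag shape are invented for the example:

```javascript
// Each feature is listed with the level it may show at and its target state.
const LEVELS = { green: 0, orange: 1, red: 2 };

const features = {
  productSearch:   { showAt: 'red',    target: 'green' }, // crawlable, no interaction needed
  addToBasket:     { showAt: 'orange', target: 'green' }, // mission critical
  recommendations: { showAt: 'green',  target: 'green' }, // only when fully operational
};

// A feature is shown when the current criticality level does not exceed
// the level it is allowed to show at, and its own target state allows it.
function isVisible(name, currentLevel) {
  const f = features[name];
  if (!f || f.target === 'red') return false; // removed from external traffic
  return LEVELS[currentLevel] <= LEVELS[f.showAt];
}

// Setting a feature's target to 'orange' trips its circuit breaker:
// downstream code renders the fallback instead of calling the backend.
function isCircuitBroken(name) {
  return features[name].target === 'orange';
}
```

At green everything renders, at orange only mission-critical features do, and at red only what the static crawl can carry.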

Scale fast effect (Conway’s law)

(This is a crosspost from an article I wrote @ https://medium.com/jumbo-tech-campus)

At Jumbo, we’ve started to scale fast within a short amount of time. E-Commerce is not something we do on the side anymore. It’s part of our core. It’s who we are.

We’ve started scaling the digital landscape hard. And like children, when you grow fast you might experience some growing pains every now and then. In this article I describe the chronological sequence of things that happen when you scale your development effort fast.

When you know you are in a fast-scaling organisation and feel lost every now and then, this article might give you perspective on where you are situated and what steps will follow on the road to absolution ;-).

Where it starts

Given that this article is about scaling fast, the origin of your digital adventure is somewhere along these lines:

  • It’s a completely new endeavour
  • The potential is clear, but there was simply not enough money
  • The potential was unclear, so it wasn’t a priority

Whatever the cause might be, the scale is small. This inherently means that you’ll have a small number of people, steering a small number of developers (internal or external), working on a small number of products.

Digital is part of who we are

At some point in time you manage to prove that this is what the company should do. This is what will push your company into the next era. But in order to get more features and attract a bigger audience (or gain the ability to serve your current audience), your company will need to make a bet. They will need to invest real money over a period of time before they get a return on that investment.

The first thing that will happen is that the company starts hiring people that can help it move forward. The issue is that — in the current market climate — the ratio of developers to jobs is in the developer’s favour. Meaning that in order to attract the best developers, you’ll have to compete with the best tech campuses out there.

This is a struggle, because you’ll have to adjust your expectations. Unfortunately allocating a lot of money doesn’t inherently give you all these intrinsically motivated people you hope for.

One way to scale quickly here is partnering with external companies to start setting up your new organisational structure.

You’ve managed to internalise some development teams. With these teams, you’ll become able to create a culture that attracts the people you search for. The scale feels big(ger). You’ll still have one stream of business demand, but you’ll have multiple teams working on features. These features still flow into one application, but life is good, for a while.

Features features features

You now have the workforce to work on many things at the same time. This means that your business becomes able to put themselves close to the fire to make sure you are building the things that pay the bills. Their POs will take their place in your teams and you’ll set up a process that helps in prioritising the demand. The moment you create these teams is also the moment Conway’s law starts to bite you. It states:

Organisations which design systems … are constrained to produce designs which are copies of the communication structures of these organisations

Lots of new features will be implemented. You’ll learn as a business that it’s sometimes best to apply validated learning. Set smaller goals, define how to measure them, validate the success, continue your path or improve and adjust course.

Your tech department will however learn that not all progress is measurable in terms of pageviews, turnover, performance and similar metrics. Some effort is made because of an ideology.

And Jumbo is big on that. We believe. We believe in Service with a smile. We believe in being every day low price. We believe in a pleasant shopping experience. We believe in a winning mentality, a positive attitude and an ability to overcome whatever it is you need to overcome.

Unfortunately, pushing all these features usually leads to a drain on performance, lots of bugs, dissatisfaction in working on the product and lots of trouble keeping the boat afloat. This is a hard, but good spot to be in. You’ve proved that there is a huge demand for the course you’ve charted; you’re just not yet able to cope with that demand.

Ownership

Because you are working on one application, it becomes increasingly hard to take ownership of your product. You’ll see product owners focusing on the new features they want to get in, but not on the quality of the product. They’ve essentially become Problem Owners.

It’s a logical thing though. Since all functions are entangled in this one monolith, it’s impossible for them to take ownership even if they wanted to. What we need is to crumble the monolith into pieces that can be owned, so that business can adopt them and take ownership of them.

What you need here is a push from Development as well as from Business in unlocking business Capabilities. In order to determine the business capability, you have to ask yourself: if I had a business and I spent money on this, what would it enable me to do? Concrete examples would be:

  • the ability to process payments
  • the ability to send push messages to my customers
  • the ability to know where the order now physically is

Each ability is atomically defined. This inherently means that when I develop the functionality ‘send a message 15 minutes before the order reaches the customer’, I unlock building blocks (Business Capabilities) that my other processes also benefit from. The more capabilities you unlock, the easier it becomes to combine them and service future business demands.

Development should dissect new incoming projects into business capabilities. This takes maturity in the sense that you will have to understand the challenges from the perspective of your customer, rather than your own technical perspective. Each piece of the puzzle has to be allocated around a business domain and serviced accordingly.

Business should start looking at their feature requests a bit differently as well. Their opportunity-versus-cost analysis should deepen a bit, taking the unlocking of capabilities into account.

Let’s say we have three epics:

  • 8 value, 8 cost : Notify people whenever their basket offers expire
  • 5 value, 5 cost : Send messages when we are 15 min away with the order
  • 3 value, 3 cost : Show average delivery time for the current order

None of these are ‘low hanging fruit’ you’d say. But what if I’d say:

  • if the 5/5 effort has been made whilst unlocking its true business capabilities (know where the order is, format a personalised message and send push messages to customers),
  • the 8/8 becomes an 8/3 (because we already can send push messages and personalise them)
  • and the 3/3 becomes a 3/1 (because we already collect metrics on delivery times)?

What we evidently miss is a factor that multiplies the opportunity value due to its unlocked capabilities.

You’ll get valuable things cheaper, and the entire room starts knowing about the capabilities, which enables you to allocate the capabilities to their respective owners.

You now have a service-oriented architecture that people feel (and are) responsible for. New features won’t be accepted if they impact one of these services in a negative way. Bugs will be hunted, performance will be top of mind. A project manager will have to talk with Product Owners to be able to integrate new functionality into their systems. A natural guard has been created.

Dev and Ops

If you create it, and you are responsible, you should run it. If you can’t run it yourself, you can’t be held fully responsible.

You should prevent a blame culture at all times. If someone can be blamed (rightfully or not), it will set a negative context. It poisons the atmosphere. It’s a constant excuse to underperform. And it might not be evident, but some take real comfort in this situation; it gives them power and personal validation to be the hero when trouble arises. Therefore they might not be inclined to actually solve the problem at hand.

If you want to be the best, you should be in control. So if you need to roll an update now, nobody should be between you and the deployment. If you temporarily need more resources, you should be able to pull them. Of course you need to operate within boundaries, but for the simple stuff you should be empowered to ‘do it yourself’. And when something breaks, you should be held responsible. Only then do you have the speed (agility) to improve continuously. Someone else cannot be held accountable for the software you wrote. And even more relevant, you won’t release bad code because it’s your head that rolls when it goes wrong.

DevOps doesn’t mean you do everything yourself though; Ops becomes facilitating rather than steering. Steering Ops makes a lot of sense as long as your landscape primarily revolves around external applications, but when you build your own application it becomes a major blocker unless you can make the transition to facilitating the teams.

The good thing is that you, like no other, know how to prevent these problems. Becoming DevOps means taking care of Quality Assurance within the design of your code and deployment as well.

Conclusion

Whenever you find yourself lost in a transition due to scaling up rapidly, know that you just haven’t reached the operational optimum yet, and the turmoil you experience is needed to get to the next stage of maturity. Conway’s law is not a law because it has to be followed. It became a law because it’s a cascade of logical steps that will happen when you are in a certain situation.

If you can identify with one of the steps in this document, find peace in the knowledge that your situation (with the willingness of all involved) will eventually resolve into a well-oiled machine. Transitions just take some time.

The obvious BFF, an old friend

When you are making big changes to your current stack, or even start developing an entirely new stack, you’ll immediately hit a familiar discussion: what technology will we use?

Some choose the ostrich approach, sticking their head in the sand and pretending this isn’t a relevant topic. Usually this means that the first one to rush to the scene will either start in his or her favourite language, or the most promising exotic new language they know. We repeat what we know, and therefore act like sheep.

Is this a bad thing? Not necessarily, because the prime goal is getting the job done. But with every choice you take the good and the bad, and it’s important that your choices are as much in line with your goals as possible.

This post aims to give some considerations regarding valuable technologies for a web-based Backend For Frontend (BFF, not to be confused with Best Friends Forever 😉).

What is the bare minimum?

At the moment of writing, we are simply bound to some technology standards.

  • HTML as a structured data source
  • CSS to style that data
  • JavaScript to provide interaction

And then we need:

  • A browser (let’s call it client), able to combine these three to convey an ‘experience’ to the visitor
  • Something that generates or relays the content towards the client (let’s call this server)

Whatever we generate in our backend will (almost) always be converted to these technologies and transferred from a server to a client. Since we have a minimal dependency on this, we should take it into consideration when we start making choices down the line for the tiers that provide this output.

User expectations

Web Applications tend to be more and more interactive nowadays. Where we once navigated between pages and expected a load-time in between, we now expect a seamless integration of each feature or information source without constantly reloading. More and more interactivity is demanded. From adding an item to your basket to playing a song while navigating for the next one directly in the browser, it’s all a demand on interaction, and thus our old friend, JavaScript.

On the other hand we need to be able to provide at least the content through the conventional model of server sided rendering. That is important for e.g. crawlers but equally relevant for clients that are less enabled to receive the intended experience (think of accessibility).

With the rapidly increasing movement towards Progressive Web Apps (PWA) we start enabling even more features. Think of accessing the camera, working offline, sending push messages, doing payments, obtaining GPS data, storing data locally — all in the browser with app-like behaviour. Mastering JavaScript is more than a nice-to-have skill for your next colleague; it’s a necessary skill.

Front or Back-end, who cares, really..

Because we wish to enrich and invest in all that is happening in the client that directly interfaces with the user, we basically would like the server side of things to follow suit. All effort spent on keeping these experiences in sync is a waste of time, energy and money. Since we are bound to JavaScript in the client, but can choose anything in the backend, why not just choose JavaScript for the backend and leverage a framework that enables the code to be written only once?

The bigger your application grows, the more repetition of elements you’ll find. We’ll need to address that as well. Ideally we want to isolate these interface elements and all that belongs to them. It’s important to parameterise them and isolate all style, data and interaction for them, so they are easy to comprehend and adjust.
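As a sketch of such an isolated, parameterised element, a component in Vue’s options shape might look like this. The component name, props and class are made up for the example:

```javascript
// An isolated interface element: everything it needs (data in via props,
// presentation via its own template and class) lives in one place.
const PriceTag = {
  name: 'PriceTag',
  props: {
    amount:   { type: Number, required: true },
    currency: { type: String, default: '€' },
  },
  computed: {
    label() {
      return `${this.currency} ${this.amount.toFixed(2)}`;
    },
  },
  template: `<span class="price-tag">{{ label }}</span>`,
};
```

Because the element is parameterised, every repetition of a price on the page reuses this one definition instead of copy-pasted markup.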

We want to be concise. We can do this by reducing the number of characters we have on our screens and giving the ones we do see more value. This means more bang for buck. Think of things like:

  • utilise the whitespace, don’t make it a preference, make it meaningful (what e.g. Python is praised for)
  • remove as many special characters as possible and make the written words more descriptive
  • abstract away all compatibility with older versions
  • close knowledge gaps as well as possible
  • isolate as much as possible to the subject at hand
  • and when all the above is done, there should be no need for type definitions. They provide a false sense of safety and often add more confusion and fluffiness than they yield benefit. Remove as many characters from the solution as possible.

Conclusion

So when all the above would be translated to languages or technologies, the following would be my preference:

  • Write your data structures in Pug
  • Write your style in Sass
  • Write your code in CoffeeScript

Because all of these adhere exactly to the previously stated requirements.

Then we can leverage:

  • Vue for component isolation
  • and Nuxt on Node.js for front / back end rendering.

which will take care of the isolation and of utilising a single source of code for client- as well as server-side rendering of our solution.

At Jumbo we are already transitioning to a JavaScript Backend For Frontend (BFF) running Nuxt with Vue. Even though we don’t have a policy on the higher-level languages we use to compile towards JS, CSS and HTML, we are well on our way in our revolution towards a JS BFF separated from any data sources with a Service Oriented Architecture.

And maybe that’s the beauty as well. As long as you accept that JS as a server-side language and isolated components make the most sense, you have the correct outlines. The higher-level programming languages used can take all the time they need to prove or disprove themselves. But whatever you do, don’t act like sheep and do what’s been done so many times before. Reconsider the options whenever you start something new.

This article is a cross-post from my original post @ https://medium.com/jumbo-tech-campus/the-obvious-bff-an-old-friend-4fdc1ad1e6d for Jumbo Supermarkets NL.

A personal annual ‘retro’ and ‘grooming’

It’s easy to see the relevance of knowing where you are in space, but sometimes it’s good to reflect on where you are in time. When the clock of the year sheds its last hours to rise again like a phoenix from its ashes, it’s a good moment to look around and see what time lies behind you, but also to think about what you might expect in front of you. This is my attempt to do so.

2016 was a year of hard work, dedication, determination, ups and downs, love, life, contemplation, vision, communication, valuable insights and new takes on old concepts. I had the opportunity to meet people that influenced my career and personal life in a big and positive way.

In Q4-16 I’ve decided to start this blog. Why? I want to challenge myself to formulate knowledge in a more tangible way. If I can explain myself in clear and easy to understand terms when I write it down, I create a foundation from which I can work when I start communicating things verbally. It’s a training to be concise, be valuable, convey knowledge and establish opinions.

I started with two series, which resemble my current lines of interest within my work field.

  • Full-Stack
    In order to understand how things work, one must go back to the essence of how – but even more importantly, why – it is constructed. In this series I try to touch most parts by making a small application from the ground up.
  • Leadership
    Proper leadership is the craft of helping others develop themselves and their knowledge, and inspiring them to bring out the best in themselves.

So, what’s coming up?

I’m currently working on a couple of posts at the same time. I expect that the update interval will go down a bit, because of moving to a new house and a third child on the way, but nevertheless I’m determined to keep bringing some practical and theoretical stuff to your screens.

Some to expect in 2017:

  • I have had a (very good) training by Isabelle Orlando on presentation and influencing skills. Of course I can never equate the effect of that training in some words on a screen, but there were absolutely some interesting key points that everybody can use and think about to develop their skills. I would like to give a short summary of these.
  • MoD has moved to a VPS. During that migration there were a couple of default steps that had to be done: setting up nginx, implementing a Let’s Encrypt certificate to be able to run on https, migrating from wordpress.com to a VPS-hosted domain, migrating statistics, setting up WordPress and all packages needed for it to run (php and its modules), etc. I can imagine that there are a lot of people that have these issues, are a little bit tech-savvy, but don’t dare to take that step. An overview of how you do these things might help, not only to get that stuff set up, but also to stimulate familiarising with more parts than just one (hence full-stack)
  • I currently have the next two steps of the Full-Stack series in concept. I’m slightly struggling to remove complexity, e.g. doing TDD on a self-made controller in an MVC setup for the todo list we’ll be working on
  • General progression on the series
    • also some more posts about stuff like monitoring, log processing and scalability. Even though I want to keep progressing steadily towards a full product, I also want to divert every now and then and skip some steps (which should fit in again later when the series has progressed some more)
    • full-stack is a hands-on series; I want to create more of an equilibrium with the leadership series, where we philosophise somewhat more
  • Lots of book reviews, summaries, references to good videos, audio fragments and websites. I intend to expand the library significantly.


For now, the books are resting in a crate for moving. They’ve found their position in space and time.

It’s now time for our families and friends.


I wish you all a magnificent 2017 that is filled with passion, energy, balance, health, knowledge and wisdom!

Tim

The vast and free source of power that we often overlook

I am convinced that a compliment I have for someone else isn’t mine to keep.

We often keep the praises and compliments we have for someone else to ourselves. Sometimes they are shallow, like seeing that a female co-worker has made an extra effort on her hair today or that someone bought new shoes. But sometimes they run deeper, like appreciating the time someone has taken to educate you on something or to be thoughtful of your private situation.

It is key to express yourself. Speak up! Inner whispers cannot be heard.

Fear

We often fear that we miscommunicate, so we refrain from communicating at all. Thoughts like:

  • Does he think I’m trying to curry favour?
  • I’m afraid she’ll think I’m trying to hit on her
  • This person always works this well; giving a compliment now is silly
  • I think it’s inappropriate for me to say this since he or she is so much higher up the chain

quickly come to mind. There are thousands of reasons not to do it, and it takes courage to do it.

Why should you do it anyhow?

Even though it’s hard, there are some good reasons to do it anyhow. You will see that when you practice this more often, the process will become much easier after a while.

  • Positivity is infectious. A single positive word from a person you respect dearly can make you go home and tell your friends and family about it. You did well, made effort and somebody saw and valued it. Your day can be filled with an empowering feeling that makes you perform better than ever.
  • Positive reinforcement steers better than punishment. Instead of telling people what shouldn’t be done, praise what should be done and people will follow that route.
  • It’s okay to give people compliments when you feel insecure about whether they are appropriate; just make sure you also tell them your doubts (in not too much detail) about giving them.
  • Telling people that things are bad will make them lose their faith and their lust for work. The inverse is also true: telling people they are doing well will make them feel reinforced and want to become better.
  • Treat others the way you want to be treated yourself.
  • By being thoughtful, you can create a bond of trust. This bond (especially for leaders) is necessary when times are rougher and you need your influencing skills to steer the ship with its nose into the waves.

For leaders

To put some more emphasis on the last point, remember the trust equation: trust = (credibility + reliability + intimacy) / self-orientation.

Of the three factors you can influence as a leader, at least one third is controlled by intimacy, which is built by honesty, respect and a watchful eye for the person and their situation and needs. A kind, sincere recognition can go a long way!

Conclusion

Take the feeling you have when you feel recognised and verbally rewarded. That power is also within you, to give to someone else. Be careful when to apply it, and don’t overdo it. But be confident that when you think it’s time to do it, it should be done without reconsidering. Bite your tongue and wait for the response. Observe and learn from what you’ve just done.

Be conscious of the power you have unleashed to shape the day of someone else


Why and When to do Behavioral or Test Driven Development (B/T)DD

For: Teamleads, Architects, Entrepreneurs and QA members that are searching for a path to higher quality.

Generally, testing is perceived as boring, time-consuming and ‘expensive’. ‘Fine, but how much does it cost?’ is also the first question business people will ask when you propose it. This article should give you some sort of hand-hold to determine what it might bring you and whether the time is right.

Knowing how to test your stuff isn’t as valuable if you don’t have a solid understanding of the why and when. I have written a post on how to implement TDD in a TypeScript build process, about the how. Now it’s time for the reasons behind it.

When to start with automated testing

In my opinion it’s foolish to immediately start writing tests when you start creating a new product. Often you really don’t know what the product will become (you might think you do, but really, usually you don’t; read about this in The Lean Startup by Eric Ries). There will be many pivots that will send your application in a completely different direction while you build an MVP (Minimum Viable Product).

But once the product has found its way into the market, and the goals become more and more long-term, your focus will start to shift from creating new stuff to making sure you create the good stuff. Code gets refactored all the time to be more performant and more readable. But stuff will break all the time.

At this moment your code will – and should – be tested to maintain a certain level of quality. This is the moment Automated Testing steps in.

Why would I do automated testing

I mean, developers know what they’ve done right? They can check what they’ve created?

That’s the general over-simplification we hear. It’s true in some sense, and it always happens in order to reach your Acceptance Criteria, but it won’t suffice. With automated tests you can:

  • Run automated tests before merging to stable, so you know you are safe, and automate rollout to staging or even production (Continuous Integration / Deployment)
  • Test against hundreds of browsers and their versions, on different devices and operating systems
  • Prevent regression
  • Let the tests define functionality; they are the place you can go to find an example of integration
  • Do code-coverage checks that give you information on how much of your code is covered by tests
  • Cut time and reduce the risks of going to production. It builds in a certain amount of certainty. Whenever a bug does slip in, you can write new tests to check for that issue. This makes sure that the exact bug doesn’t re-occur (hence regression tests)

What does the T/B DD stand for?

TDD means Test Driven Development

When you read it carefully, you see “Driven Development” trailing the first word “Test”. This basically means: write your tests before you write any code at all. So, what are the benefits of doing this?

  • By writing your test first, you have to think about the thing you are creating: the outlines, the how and the what. Since you are not yet hands-on, you don’t have to improvise or be burdened by hacking stuff in to make it work. You just think about the feature or the unit that you want to add to the system. This makes your code more atomically correct once you start writing, and you’ll spot issues before they arise.
  • Now that we know what to expect from the thing we want to create, we run the test. The test will fail, since the code for your new test isn’t there yet.
  • You iterate over your code to make your tests green. If the functionality isn’t sufficient yet, you add more tests and start this process all over again.
  • You’ll deliver code that is restricted to what’s asked of it, not what future questions might be. You’ll deliver code that works, and that others can rely on. All features are documented.
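The red-green cycle above can be sketched in a few lines (a hypothetical `slugify` unit; in a real project the checks would be a Jasmine spec):

```typescript
// Step 1 (red): these expectations were written before any implementation
// existed, forcing us to decide up-front what a slug should look like.
//   slugify("Hello World")  -> "hello-world"
//   slugify("  TDD rocks ") -> "tdd-rocks"

// Step 2 (green): the smallest implementation that satisfies the tests.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/\s+/g, "-");
}

// Step 3: the checks go green; a new requirement starts the cycle again.
console.assert(slugify("Hello World") === "hello-world");
console.assert(slugify("  TDD rocks ") === "tdd-rocks");
```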

TDD is often used as a synonym for Unit Testing. Unit testing means that each unit of your code should be testable as a separate black box, apart from the complete system. You’ll see that you’ll be mimicking the file structure of the original project. Don’t merge the two together, although this might be tempting! Tests should run separately from your production code. Your code should never rely on your tests; your tests should only rely on units of your code.

BDD stands for Behavior Driven Development

So the ‘Driven Development’ part is exactly the same as for TDD, but in this case we don’t test units; we test the sum of their outcomes. These units combined create an experience for the user, and your scrum stories rarely contain ACs with the granularity of a unit. To properly test the code, we’ll have to do integration tests that measure whether all criteria of the ACs are met.

With BDD we:

  • click through the website
  • expect functionality to be there
  • finish operations, like creating a basket and placing an order
  • test if the outcome is as desired
  • if not, take a screenshot, log the errors and raise an alarm

This means:

  • have all ACs written in a structured way
  • test ACs in human-readable text against our production and test environments
  • develop a feature base (since we already keep track of ACs) that informs the next person about the what, why and how of all features in our application.

I can advise ChimpJS to do this for you, while writing your tests in Cucumber syntax, which is friendly for Business as well as Tech. Here’s an example of how that could look!
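A minimal sketch of what such a Cucumber feature could look like (the shop, product and steps are invented for illustration):

```gherkin
Feature: Adding a product to the basket
  As a customer
  I want to add products to my basket
  So that I can order them later

  Scenario: A product can be added from its detail page
    Given I am on the detail page of "Organic Bananas"
    When I click the "Add to basket" button
    Then the basket counter shows "1"
    And the basket contains "Organic Bananas"
```

Each Given/When/Then line maps to a step definition in code, so Business reads the feature file while Tech maintains the implementation behind it.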

 

What does testing cost?

So, to finish, the first thing that will be asked: investing in automated testing does cost money, but a lot less than having humans do the same thing.

The question really is, how much can you afford to screw up?

If the answer to that is: I don’t mind, then don’t do it. Because you wouldn’t hire a human to test either.

If the answer is: I do mind a bit, but I don’t want to invest too much, then make sure that all software that is tightly related to your KPIs is automatically tested. You will see that over time it gives you more than it costs.

If testing is already an integral, respected part of your deployment routine but not automated yet, I would say: read this article again and draw your own conclusions.

 

Testing doesn’t give you 100% assurance. Nothing does. But you can always try to become better at what you do, and with that idea in mind, be sure that you structurally test whenever changes are made. I once spoke with a CTO who had a complete division of testers, who wrote tests apart from the development teams. To my recollection both teams were about equal in size. What we should learn from this is: when there is much at stake, you must do more to make sure things go right.

How much is at stake for you?

Let me know what you think in the comments! Want more of this? Use the Poll on the right of the screen, comment or contact me!

Unit-tests / TDD with TypeScript

For: developers, architects and team leads that want to incorporate unit testing in their TypeScript projects

A couple of blog posts ago we set up a basic build line, in particular for TypeScript. In this post we’ll get hands-on again and apply some automagic stuff for doing TDD and/or unit testing on our builds.

note: this post only deals with the ‘how’, not the ‘why’ and ‘when’. Read this if that has your interest.

Setting up the environment for unit testing

So what do we need:

Some testing framework (we go with Jasmine)

There are lots of really good unit-test tools out there (Mocha, Chai, Sinon, etc.). At this moment I prefer Jasmine: it’s well documented, stays relevant, serves an atomic purpose, is configurable, and has its plugins separated through the NPM repo.

Some orchestrator / runner

We need an orchestrator to launch our tests in browsers. We use Karma for this.

Some browsers

There are so many you can use, and also should use. Karma lets you hook up your own browser (open multiple on multiple machines if you want) to test with. If that’s too manual, you can go with solutions like PhantomJS, automated Chrome testing with Selenium / WebDriver, or doing it through BrowserStack to have it tested on multiple versions of multiple browsers on multiple operating systems and their versions. Lucky you: the runner we chose (Karma) supports interfacing with all of these as part of your test line.

Some reporters

What would we need to get a bit of grip on, and feeling for, our test process?

  • spec – shows the entire spec of the units
  • coverage – we want to know if we’ve actually covered most of our logic (again, why you would want this will be described in another article)

 

You convinced me, two thumbs up, let’s do this.

So our lovely NPM can help us quite a bit with this. Do as follows:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ npm i jasmine-core karma karma-browserify karma-browserstack-launcher karma-jasmine karma-phantomjs-launcher karma-coverage karma-typescript-preprocessor phantomjs-prebuilt watchify karma-spec-reporter --save-dev

Next chapter.. ;-).

Instructing the runner

Karma needs to know a bit about what it should do when we’ve asked for a test.

karma.conf.js

module.exports = function (config) {
  config.set({
    basePath: '',
    // browserify bundles the specs, jasmine supplies describe/it/expect
    frameworks: ['browserify', 'jasmine'],
    // pick up every *_spec.ts file under spec/
    files: [
      'spec/**/*_spec.ts'
    ],
    exclude: [],
    // compile the TypeScript specs and instrument them for coverage
    preprocessors: {
      'spec/**/*.ts': ['browserify', 'coverage']
    },
    browserify: {
      debug: true,
      plugin: [['tsify', {target: 'es3'}]]
    },
    reporters: ['spec', 'coverage'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browserDisconnectTimeout: 1000,
    browserDisconnectTolerance: 0,
    browserNoActivityTimeout: 3000,
    captureTimeout: 3000,
    // fill in your own credentials to run against BrowserStack
    browserStack: {
      username: "",
      accessKey: "",
      project: "build-process",
      name: "Build-process test runner",
      build: "test",
      pollingTimeout: 5000,
      timeout: 3000
    },
    coverageReporter: {
      type: 'text'
    },
    // example BrowserStack launchers; add them to `browsers` to use them
    customLaunchers: {
      ie10: {
        base: "BrowserStack",
        os: "Windows",
        os_version: "7",
        browser: "ie",
        browser_version: "10"
      },
      chrome: {
        base: "BrowserStack",
        os: "Windows",
        os_version: "10",
        browser: "chrome",
        browser_version: "latest"
      },
    },
    // PhantomJS runs headless and locally, so it's the default
    browsers: ['PhantomJS'],
    singleRun: false
})}

Don’t forget to create the directory that is scanned for your _spec.ts files.

 

Extending the Makefile

Add your test-routine to the makefile

.PHONY: [what was already in there] test tdd

and add the rules:

test:
    node_modules/.bin/karma start --single-run

tdd:
    node_modules/.bin/karma start

 

Getting definitions of Jasmine

Since your code is written in TypeScript, your tests preferably are too. You’ll need type definitions of Jasmine’s capabilities in order to use it properly. Luckily the people of typings are geniuses and supply such a definition for us!

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
 ╰─➤ node_modules/.bin/typings i jasmine --source="dt" --global
 jasmine
 └── (No dependencies)

 

Test if we can test

Oh boy that is a nice title :-). Let’s write some nonsense first, so we can write tests for it later.

The nonsense

Now create some simple example module like ts/example_module.ts:

type someCallback = (someString: string) => string;

export default class example_module {

  constructor(private someVar: string, private callback: someCallback) {

  }

  public some_method(){
    console.log('some method ran!');
  }

  public get_string(): string {
    this.some_method();
    return this.callback(this.someVar);
  }

}

 

There’s a range of nonsense that can be applied in even more bizarre ways that I don’t intend on pursuing, if you don’t mind. This should suffice 🙂

Let’s test this nonsense

Create this test file in spec/example_module_spec.ts

Generally it’s a good idea to separate the tests from the project, since they otherwise clutter the area you’re working in. But do try to mimic the structure that’s used in your normal ts folder; this allows you to find your files efficiently. We append _spec to the filename because, when your project grows, it’s not uncommon to create a helper or two, which shouldn’t be picked up automatically.

/// <reference path="../typings/index.d.ts" />

import ExampleModule from "../ts/example_module"

describe('A random example module', () => {

  var RANDOM_STRING: string = 'Some String',
      RANDOM_APPENDED_STRING: string = ' ran with callback',

      callback = (someString: string): string => {
        return someString + RANDOM_APPENDED_STRING;
      },
      exampleModule: ExampleModule;

   /**
    * Reset for each testcase the module, this enables that results
    * won't get mixed up.
    */
   beforeEach(() => {
     exampleModule = new ExampleModule(RANDOM_STRING, callback);
     spyOn(exampleModule, 'some_method');
   });

   /**
    * testing the outcome of a module
    *
    * Should be doable for almost all methods of a module
    */
   it('should respond with a callback processed result', () => {
     let response = exampleModule.get_string();

     expect(response).toBe(RANDOM_STRING + RANDOM_APPENDED_STRING);
   });

   /**
    * testing that specific functionality is called
    *
    * You could make use of this, when you expect a module to call
    * another module, and you want to make sure this happens.
    */
  it('should have called a specific method each time the string is retrieved', () => {
    // notice that, because of the beforeEach statement, the spy is reset
    expect(exampleModule.some_method).toHaveBeenCalledTimes(0);

    // execute logic twice
    exampleModule.get_string();
    exampleModule.get_string();

    // expect that the function is called twice.
    expect(exampleModule.some_method).toHaveBeenCalledTimes(2);
  });
});

The result:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤ make test
node_modules/.bin/karma start --single-run
08 12 2016 22:58:35.109:INFO [framework.browserify]: bundle built
08 12 2016 22:58:35.115:INFO [karma]: Karma v1.3.0 server started at http://localhost:9876/
08 12 2016 22:58:35.116:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
08 12 2016 22:58:35.130:INFO [launcher]: Starting browser PhantomJS
08 12 2016 22:58:35.380:INFO [PhantomJS 2.1.1 (Linux 0.0.0)]: Connected on socket /#RxSPDX6Lu-LvxyP2AAAA with id 76673218
PhantomJS 2.1.1 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0.04 secs / 0.001 secs)
--------------|----------|----------|----------|----------|----------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
--------------|----------|----------|----------|----------|----------------|
 spec/        |      100 |      100 |      100 |      100 |                |
 app_spec.ts  |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|
All files     |      100 |      100 |      100 |      100 |                |
--------------|----------|----------|----------|----------|----------------|

╭─tim@The-Incredible-Machine ~/Git/build-process ‹unit-tests*›
╰─➤

Or as my lovely console likes to tell me in more color:

[screenshot: build-process-tdd.png – colored spec reporter output]

 

Check the PR for the actual code.

Want more? Any ideas for the next one? Let me know or use the poll on the right side of the screen!

Library update Dec 2016

I’d like to share some good resources I use to educate myself. There are so many good books, tutorials and talks out there, and I think it’s good to start a reference. Our technical little world has grown well past ‘little’, and hunting for new information can be quite the challenge. Why should we all dig for the same gems?

If you think I’m missing some important stuff (no worries, I ab-so-lutely will, as I’m just starting this), PLEASE send me a message so I can add it.

Every once in a while (when I’ve gathered enough new stuff in the library section) I will publicize the additions in a blog post. Please, by all means, send me links to

  • articles
  • books
  • images
  • movies / videos
  • websites
  • tutorials
  • or whatever you think fits

that can enrich your peer developers’ technical skill set.

Current topics in the library

  • Browser Performance
    • General auditing (Links to website)
    • Jank (Links to website)
  • Databases
    • CQRS and EventSourcing (video)
  • Entrepreneurship
    • Lean Startup (bookreference)
  • Enterprise Stack
    • Uber (Links to website)
  • Machine Learning
    • A good place to start (Links to learning platform)
    • Humans and cognitive bias (Image)
    • MIT Open Course Ware (Playlist of videos)
  • Microservices
    • Definition (Links to website)
    • Applications (links to some relevant applications)
    • Databases in the cloud
      • How Netflix does it (links to website)
      • How Uber does it (links to website)

Why use Story points or Time for resource tracking

For: team leads and Entrepreneurs

Running a service-oriented business isn’t the same as running a product-oriented business. There’s a major difference, and over the course of time I’ve learnt where the differences lie when it comes to resource tracking, and how that may or may not affect your business.

Resource Tracking based on Time (more service oriented)

Pros:

  • enables specific billing: which feature costs how much money, exactly
  • prospects and invoices can be compared to see how budgets are met
  • you can track individual progress and troubleshoot at a really fine level when things aren’t going well
  • works exceptionally well for occasional small ad-hoc services

But the cons weigh heavier:

  • time tracking is a pitfall for managers to start micro-managing
  • it kills creativity
  • it drives quality down (you assess on time, not on result)
  • there are large amounts of overhead and overthinking for the developer:
    was I effective these 15 minutes? And: I had to google a lot for this feature, should the customer pay for that?
  • employee satisfaction, but also effectiveness, is strengthened when the employee feels comfortable. The best ideas come to you when you’re not actively trying to solve something, so opportunity to relax is just as important to the employer as to the employee (mind you, there should be a good balance here, part of which can be obtained by a good, thorough intake). When the employee has to account for the set 8 hours of his job, he’ll feel discomfort about having sat down and stared out of the window for half an hour, even though that time might have solved lots of other stuff. At the end of the day he or she will resort to creative bookkeeping, which results in lots of negative energy that could have been used for positivity.

Resource tracking with story points (more product oriented)

When you start resource tracking with story points:

  • all points are relative to one another
  • the developer doesn’t have to think about the client; they just have to think about what a task costs relative to another task
  • tasks become easier to separate, since they can now be defined without having to speak in understandable business terms
  • story points have relative value, which eliminates the speed of the individual developer from the estimation
  • there’s more focus on quality

But how do you sell this:

  • Measure in Complexity and Uncertainty, not Effort
    • Complexity consists of how hard it is to clear the job. E.g. it touches lots of repositories, we have to align with lots of people and the subject is very delicate. This would be a high complexity.
    • Uncertainty gets weighed in, because it is key that during grooming sessions this factor gets reduced to a minimum. The more certain something is, the smaller the task can be, and the better it is estimable and deliverable for the allocated number of story points. So if either your PO or you don’t have high confidence and feel uncertain about how to solve the task, you should start splitting what is clear from what isn’t, to maintain deliverable stories. External dependencies are uncertainties as well.
    • Effort gets pulled out. It’s silly to do simple stuff lots of times, and your customer shouldn’t have to pay for silliness. This is where you have to play smart and say: changing all files by hand would take me 10 hours, so I will have to write a converter that does exactly this, this and that, which will only drive up the complexity, and thus the investment in getting this topic solved.
      By doing this, you get rid of your legacy topics. Programmers should be lazy and automate everything they can. This should be part of the routine, because your PO ‘pays’ for your routine. Just be cautious not to over-do it.
  • Each team will establish a baseline of story points that they can process in a time slot: your so-called ‘velocity’. You can easily divide the time over the story points and see what the cost would be. Because your team is focusing on what it would cost in relative terms, instead of fitting delivery of functionality into a timeslot (which is doomed to fail), you can calculate the cost to analyse ROI versus the expected delivery date. You could then also decide to buy from external sources, since you have a good idea what it would cost doing it in-house.
  • Run through the story and write down all steps that need to be done. Now everybody should have a ground-level understanding of how to solve this story. Buy a set of scrum poker cards, count down and let everybody throw down a card. No significant differences? Quickly reach consensus and take the average if no one objects. Someone super high or super low? Let them explain, and let the team learn from this perception.
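The velocity arithmetic above can be sketched with some hypothetical numbers (team cost and velocity are invented for illustration):

```typescript
// Hypothetical numbers: a team costing 40,000 per two-week sprint,
// with an established velocity of 50 story points per sprint.
const sprintCost = 40000; // total team cost per sprint
const velocity = 50;      // story points the team completes per sprint

// Cost of a single story point: divide the sprint cost over the points.
function costPerPoint(cost: number, points: number): number {
  return cost / points;
}

// An 8-point story then has a rough price tag, which can be weighed
// against its expected ROI, or against a quote from an external source.
const storyEstimate = 8 * costPerPoint(sprintCost, velocity);
console.assert(costPerPoint(sprintCost, velocity) === 800);
console.assert(storyEstimate === 6400);
```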

Conclusion

As you might see I’m in huge favor of working in a product oriented environment. This is not always a possibility given your business model and current list of clients. If you would like to go with story points, but your clients are not ready yet, try to do this:

  • find a good-size client, preferably one that favors on-time delivery more than super-detailed invoices. You need one or two good-sized ones, because this works best when you allocate one (or preferably multiple) full sprints with a complete team to this.
  • build a business case, show them your intent to deliver more consistently and be willing to invest a bit yourself. You can decrease your own investment over the course of time, but all process changes suffer from inertia, so give it some of your own momentum.
  • learn from your first couple of attempts. It is more important to persist in the process than to immediately have good numbers.
  • make sure you deliver, so make sure you have small stories with almost no uncertainties.
  • once you have a basic idea of how much the team can do, commit slightly under it and use that room for delivering quality and optimizing your process. This eventually allows you to move the baseline up.

You’ve now done what a developer would do: apply an abstraction layer over business metrics so that the team can work with its own currency. You’ll have a more productive, more motivated and more reliable team as a result.

Any thoughts? Let me know!

Turtles all the way down

A story about the risk of over-abstraction and false assumptions on technical debt.

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

The first time I came into contact with this story was when I read the book Gödel, Escher, Bach. Although it seems hilarious, it also points out that there is no cure for human stubbornness and a lack of logical thinking. It is an exceptional example of how stories people invented can persevere against mankind’s accumulated factual knowledge.

Not long after, I started recognizing some of these silly patterns in my own behavior. Personal and behavioral stuff, of course, but it concerned me more that these patterns can easily be found in day-to-day technical tasks. A recurring theme in my software seemed to be some serious over-engineering, with abstractions over abstractions, all to separate concerns wherever possible and isolate whatever could be isolated. The layers of abstraction grew so thick that they became very hard to follow for anyone else, including my future me, and I realized I needed some serious re-prioritization of what I perceived as good practice in software development.

The rule of three

It is so, so hard to leave technical debt alone when you’ve had a history full of it. It becomes second nature to a developer: remove any tech debt up-front, before it bites you in the ass afterwards. But there’s a risk in this.

We tend to prematurely optimize our code. The risk is that we optimize without knowing the full set of features that are required. This is why I introduced the rule of three (which is sometimes ignored, and sometimes becomes the rule of two, but don’t tell anybody, okay?).

Only when you’ve seen similar functional demand occur three times, start clustering the functionality and isolate the individual concerns.

By the time you’ve implemented a third similar functionality (which usually needs some adaptations to work in its specific situation), you can tell something about the environment the component should work in.
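A minimal sketch of the rule in practice (the validators are invented for illustration): only at the third occurrence is the shared shape, and its variation point, clear enough to extract.

```typescript
// First and second occurrence: near-duplicates, deliberately left alone.
//   const validName = name.trim().length > 0;
//   const validCity = city.trim().length > 0;
// Third occurrence needs a minimum length, revealing the variation point.
// Now we cluster the functionality and isolate the concern:
function isFilled(value: string, minLength: number = 1): boolean {
  return value.trim().length >= minLength;
}

console.assert(isFilled("Amsterdam") === true);
console.assert(isFilled("   ") === false);     // whitespace-only fails
console.assert(isFilled("ab", 3) === false);   // too short for minimum
```

Extracting earlier would have meant guessing at the `minLength` parameter before any caller needed it.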

Set your KPIs

(For those who don’t know: Key Performance Indicators, the stuff that tells you whether you are doing the right thing or should pivot your efforts.)

This might seem strange, but make sure you set your definition of done straight before you start the development of new features. The definition should only encompass the creation of functionality: not the how, just the what. Don’t create elaborate structures, but try to get to your goal the fastest way possible.

Honoring:

  • Transparency
    Read your code, and let someone else read it (peer reviews). If it’s not clear what it does or how it works, it’s not good enough.
  • Usage of other modules (DRY: Don’t Repeat Yourself)
    Don’t do work that’s already done.
  • Don’t implement features you don’t directly need (KISS: Keep It Simple, Stupid)
    I guarantee you that the functions you consider nice-to-have but leave unused will be the first to bring your code to a grinding halt.

You’ll need these KPIs! Because odds are that you won’t feel good – at all – about the product you’ve just delivered. There’s ALWAYS a better way to do things, and that shouldn’t drag down the real-life value you’ve just created. Satisfy your KPIs and feel satisfied. But stay watchful.

Observe

Take notes along the path of delivery. Mark the project as Concept or MVP (although functionally you might feel you’re there, you can sometimes treat functionalities as separate products) and keep track of it. Observe all the stuff needed in the future, and check whether your suspicions about lacking features, abstractions and re-use of code are right. If so, don’t be shy to become your own PO and create a story that removes the tech debt. If your relationship with your usual PO has trust in its fundament, he should respect this story as much as any other feature request and allocate time to remove this technical debt.

Apply validated learning

By waiting to apply all these abstractions, you enable validated learning (beautifully described by Eric Ries in The Lean Startup) to more or less scientifically confirm the future of the feature (the standard definition used in validated learning), but also the need for, and the focus of, the future optimization.

Bottom line: you’ll spend less time on stuff that gets thrown away.

It’s not turtles all the way down anymore. It’s just a bunch of oddly stacked turtles on a ridge in some water on a planet.

What follows after this.

I’d still like to write a blog post about testing code. This article about levels of abstractions relates to that future testing blog post in so many ways.

If you’d like me to put some focus on that, let me know by using the poll on the right side of the screen!