Work ethics, rewards and private balance

I have seen many different work ethics. Some people work day and night, neglecting their own needs as human beings (trust me, it will bite you once you get older). I’ve also seen others who show no interest in company goals, aim for three hours of effective work a day at most, and keep well-groomed social media profiles.


I’m writing this blog post because I think that with the right arguments people can get on track and become more effective, not only for their bosses but also in their personal lives.

Believe in the company goal

It’s paramount that you believe in the company goal, believe in the people who want to get there, and believe that you’re going to achieve that goal. If you are not certain of one of these things, you should vocalize your concerns.

Why? Because your and your peers’ future success, and your joy in daily work, ride on it. You will never be able to work really hard for something you don’t believe in. Don’t forget that you spend more ‘conscious time’ at your job than you do anywhere else. So don’t waste that time, and optimize whatever can be optimized to achieve the best results.

When you join a new company, you should really investigate what its goals are and whether they are in line with what you feel is best and with where you personally want to go.

Career opportunities

Never settle for the job you’ve got; always work for the job you want. Opportunities don’t automatically come your way. It takes hard work, and if you pursue the company’s goals while not neglecting your own needs, you will achieve more.

Especially for people in tech who are willing to grow, I would suggest forcing yourself every once in a while to perform in an uncomfortable setting. Give a speech in front of 50 people, be bold (not bald, that’s another blog post) and question your POs or clients about the choices they make. Dress for your job. Anything. Just put in more effort than is minimally required. It will be uncomfortable at first (that’s a guarantee), but it will boost your communication skills, your technical skills and your morale. You’ll be more visible on the radar.

You can be the best programmer on the planet and create something that somehow saves the world from climate disaster, but if you don’t develop the skills to communicate about it, no one will know or care about it.

By setting career goals, and being willing to fall while you stumble towards them, you’ll grow.

Once you know what you want, you should fight for it. Because if you cannot fight for your own worth, how would you be able to fight for the same thing for your boss? You should always put your own goals on the scale with more weight than the company goals. That said, this is not a black-and-white world. Try to scan the horizon for every possible way to unite these two goals into one, and be vocal about it to your boss before you let the scale decide. When he or she cares, they will pursue the exact same path (a compromise or optimum that is in some way beneficial to both parties). If they don’t, the environment you’re in might not be the right one for you, and you should consider the weight of the problem and be strong enough to draw conclusions when needed.

Be worth what you are paid for

When you’ve been an entrepreneur, you’ll know – no, let me rephrase that – you’ll feel the real value of the money that you receive each month. Money doesn’t come for free from a magic tree. It’s a hard-earned currency that you and your colleagues have worked hard for. There’s no guarantee that it’s there next month or the month after. That’s the stone your boss might carry in the pit of his or her stomach: the risk they take, and lie awake over at least a couple of nights a year.

Employers don’t want to make decisions you don’t like, but they need to make them anyway, or they might draw the short straw on the vow they made at the beginning of your employment: I shall provide money each month so my employees are rewarded for their efforts and their families’ mouths are fed.

When you negotiate with your employer, don’t negotiate beyond what you are really worth. Make sure you can look them in the eye when you convince them of your worth, and believe in your own words. And again, if you can’t fight for your own (fair) salary, how can you fight for the income of the company? When you feel you’ve reached your capacity, know when to stop. You don’t want to work a year with the constant feeling that you’re being less productive than you’re paid for. There are more ways to be rewarded than with money: think of growth opportunities, flexibility in where and when you work, or other perks.

Once you are aware of what you cost the company, you can (and should) use that knowledge while you work. Assess every once in a while whether you are worth your cost. Good bosses do the same.

Work to live, not the other way around

Work hard, play hard is a good thing to keep in mind. Recharge, enjoy your life and reflect on what’s happening, because time flies. Nurture your home situation with the same care and awareness that you apply to work. Strive for an equilibrium. You will only succeed in one or the other if you can find a balance. Life doesn’t work in sprints; it’s a marathon.

Bottom line

It’s all about balance, honesty and positivity. One could almost say that the term ‘self-fulfilling prophecy’ is a derivative of Newton’s third law:

When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body

So find your optimum. Find peace and positivity, or act on the thoughts and actions that enable you to do so. Put in a lot of effort and positive energy, and the reward will eventually be just as good, for a long time to come.

Tips on how to conduct a retro

For: scrum masters and team leads

The most important thing about retros is that you have them. But when you do, there are a couple of things you can do to optimize them and get the most out of them.

When to do the retro

Try to schedule the retro right after the sprint, around the moment you do your demo. Since the team is already looking back on the past two weeks to see what happened and how to demo their achievements, this is an ideal moment.

Use sticky notes

Let everybody write down their own positive and negative points on sticky notes. Some members may come prepared with sticky notes they have gathered during the sprint (if you see that there isn’t enough quality feedback, you can suggest this to the team). Each note can get a + or a – at the top.

Create a board with two or maybe three lanes: a lane for things that didn’t go so well and a lane for positive things. When someone is done, let them put their notes on the board. Each member can see whether one of their points is already on the board; if so, let them stack. Big stacks usually indicate big concerns regarding that point.

Some benefits of doing sticky notes instead of going sequentially through all members:

  • everybody can write down their own thoughts, and doesn’t have to hold on to them while others are talking
  • you can monitor actual participation in the retro, instead of getting a ‘what he / she said’ or ‘I have nothing’ answer
  • you get a sense of how big a concern actually is
  • you can take the notes with you to process them later and not have everyone waiting for you

What to put on the notes

It’s important not to narrow the scope of these points. Let people tell that they had a good barbecue, a nice birthday, or a dog that died. Influences from our personal lives on our professional lives are as real as the other way around. Being able to briefly share your joy or sadness about something makes this meeting something to look forward to, and it’s an easy way to share things with your team and fortify your future collaboration. When team members feel heard and seen, they tend to stick around longer.

Run through the notes

Start with the points of concern and end with the positives, for two reasons: you don’t want to spoil a good vibe, and people seem to cling to the vibe they leave a meeting with. This way, the momentum you’ve built while discussing the positive things will carry on after the meeting. Create piles of all + and – notes.

Guiding the meeting

Rotate who reads the cards (a different member each retro). This trains them in speaking and conducting meetings (even though this is a very small and safe meeting), which will help them in the future. It also prevents biasing the meetings with the tone of one person who always reads the cards.

Let someone take notes, but only on the action points. So when the team discusses the cards, only write down how they think they can do better. When the meeting is over, create a page with a retro template in Confluence (or any other tool you use). Take the stack of + cards and write them in the column ‘what we did well’, take the stack of – cards and write them in ‘what we should have done better’, and take the notes of the discussions and put them under ‘actions’.

At the end of each meeting, revisit the action points and compare them with the list of actions from the previous time. Don’t add unsolved actions to the new action list: when an action doesn’t recur, it has fallen out of grace and might not have warranted an action point after all. Do ask which of the actions have been done, and mark your previous actions as done. Over time you can see whether actions keep recurring, which enables you to detect inefficiencies in your way of working.

DoD: Distribution of Development

For: tech entrepreneurs considering where to source their development power from

It’s not an uncommon question: should I outsource, should I create a distributed environment, or should I keep everything (literally) in-house? There comes a time in nearly every company when these options will be (re)considered.

This blog post should shed some light on the possibilities and some of their pros and cons.

The candidates

Local-only

This is when you hire from your own country.

Pros

  • One language that everybody understands
  • You can easily walk up to someone
  • Trust can be established by seeing someone at their spot
  • It’s easy to bond
  • Pair programming and coding dojos are easy to do
  • You can facilitate a nice entourage and a real sense of being part of something big

Cons

  • You’ll have to reimburse travel costs
  • It can be hard to find competent or top-class people in your own region
  • Because you can easily walk up to someone, you basically disturb them, and a lot of effective time can go to waste
  • Being at a desk doesn’t mean someone is working, or working efficiently
  • When you find someone very competent from another country, you’ll have to relocate them

Verdict

It’s easiest to start a company with local people. You can establish a brand, have short (but probably less efficient) communication and reporting lines, and you won’t have to take cultural differences into account. Because you will have to set up a facilitating environment anyway, it’s also a good base from which to start working with interns and juniors, so you can really supervise and even micro-manage every now and then (you shouldn’t, but we all know it happens while you are still small).


Outsourcing

This is where you completely hand over all development to an off-site company.

Pros

  • It’s well known that in some countries outsourced development is cheap as pennies.
  • You can prototype and work in parallel
  • You can hire for what you need. It’s scalable in that sense.

Cons

  • You might get the features you asked for, but who guarantees the quality and the longevity of the code?
  • You will have to meticulously write down every possible expectation of the product you wish for. Don’t assume it will be done for you.
  • There is no intrinsic motivation or personal attachment to the quality of the product
  • You will have to overcome cultural differences and stand firm when someone who works by different rules, laws and ethics disputes your requests or assumed rights.

Verdict

I’ve tried this for a part of an application, and it backfired horribly. It’s hard to gain trust and monitor quality. You could do this if the product is, for example, a tool you just wanted as a prototype to get a better grip on your value proposition. One-time, never-look-back applications (e.g. apps) get developed like this all the time with good results. Working on a SaaS? Don’t do it, I would say.


Truly Distributed

The address is a formality; this is when there is no such thing as an office. Lots of pros and cons of working distributed have been mentioned above, so I’ll try to limit these to what’s important and different when you work truly distributed.

Pros

  • There’s no overhead of physical buildings involved. This money can go directly into your company and its employees
  • you will start judging people on their performance. “Did they do the thing I asked of them?” becomes more relevant than “Did they work 6 hours or 8, and did they work in the morning or the evening?” This actually ties in better with the nature of people: attendance is less relevant than focus.
  • You can work from EVERYWHERE. This is true for the developer, but also for the entrepreneur. Want to travel the world but still get paid? This is the company to join!

Cons

  • depending on how big the range of time differences is, it can truly be a madhouse to orchestrate everybody into one virtual room.
  • it’s harder to bond with the team.
  • it’s harder to create a hierarchy or another form of ladder on which a developer can grow. Personal growth must be attended to; it is one of the prime concerns of any developer

Verdict

I have seen and heard of companies succeeding in this setup. It brings its set of challenges, but also benefits. I think that going truly distributed is not for the fainthearted. Especially the entrepreneur will have to be the link between all developers. Every part of the process has to be taken out, cleaned, oiled and put back in. Is the documentation there? Is it in proper language? Are all code standards applied? Did this person communicate nicely to their peers? Et cetera. These are normally things you pass on to your leads and managers, but (especially when you start) they should be done by the owner to ensure the company still sails the charted course. I would say it’s not a natural first choice. You have to have the experience, and maybe the easiest way to get there is by progressively hiring more externally and bleeding out locally.


Externally integrated

This is when you have the majority of your employees locally, but you also hire dedicated people abroad.

Pros

  • dedicated workers for your company mean they (could or should) feel aligned with the company’s goals
  • working with different cultures really broadens your horizon. You’ll become more creative at work, and this will reflect in your personal life. It’s an absolute joy.
  • no cost of commute
  • having this division enables you to avoid having all your eggs in one basket (e.g. the internet goes down at HQ, or everybody resents recent management changes and strikes or finds a different job)
  • When you hire through an agency, you are able to scale your external development faster.
  • The infrastructure you need to put in place (think of cameras, digital boards, good documentation, communication pipelines, etc.) is also beneficial for your on-site employees. All of a sudden they can efficiently work from home too
  • All communication should be in a language everyone understands (most commonly English). This future-proofs all documentation, code, etc., and prepares your company for being true to serving the World Wide Web (distribute for everyone to read, all over the world).
  • You still have a place to go to, a brand that can be visited

Cons

  • every so often you will have to gather, take a plane and meet each other for some time to bond. This is an expensive undertaking. I have heard from someone who actually runs a fully distributed team that the cost is roughly the same as the cost of commuting for local people (for a mid-sized company hiring in the same country).
  • Initial setup can be demanding on people and budget.
  • You will have to comply with the law of wherever you hire. Some laws are really strict and can discourage experimenting with this way of working.

Verdict

When you have the time and means to orchestrate this, I’d most definitely do it. It’s a perfect hybrid form that – even if the remote part fails – also greatly benefits your local team.

This will only succeed if you have someone on the other side who is willing to understand and work with your company. Be critical about who to hire, especially the first one. But it will also only work if the people on the local side bring this person into all relevant discussions and provide support whenever needed.

With the mindset and the infrastructure in place, you also create a nice environment for your local employees. They can work from home whenever needed, you remove ‘knowledge lock-ins’ in your team, and individual and team performance will go up.

Offshore

(in different time zones)

Extra pros

  • able to run support and development 24/7
  • the world truly is a bigger place than a specific timezone

Extra cons

  • It becomes increasingly harder to communicate the further away someone lives:
    • daily routines
    • planning deadlines
    • evaluations
    • establishing unity within the company

Nearshore

(in the same timezone)

Extra pros

  • Timing of communication is no issue
  • The locations are generally ‘near’, so the cost and time of traveling every once in a while shouldn’t become the main reason to do it or not


Conclusion

So you’ve probably read it well: the definition of how it’s properly done (DoD) resides in the distribution of development. It might not be low-hanging fruit, but the stuff that hangs higher gets more sun, you know ;-). Striving for it might already make the difference. It’s up to you to decide how to implement it for your scenario.

I’m starting an app. What technology should I use?

For: entrepreneurs who need to decide which technology to use for their future app

Mobile first. Everybody does it, wants it, needs it. Simply because your phone’s front camera probably sees your face more often than your spouse does. It’s easy, it’s fun and it’s addictive. So going for a mobile-first approach usually seems like a good thing to do.

As an entrepreneur, it’s very hard to decide which technology to use. You might be non-technical, but you still have to make a well-informed decision on this technical matter, and you want what’s best for the company. This blog post attempts to give you some insight into the options you have, and how they might affect you in the short and long run.

Progressive web apps

pros

  • The app can be developed in a well-known stack. Most developers in this world are web developers.
  • Quicker iterations in development, easier to work with each other
  • Quicker iterations on the client side: the app can pull the latest version ad hoc, instead of having to wait for store approval
  • No store denial: Apple, Google and others cannot curate your app based on its content. Not only the initial app, but also each update doesn’t have to be reconsidered against policies
  • Cross-platform: any browser that supports it, on any platform, will be able to show and serve the app (there’s a caveat here; see the cons below)
  • Fallback for non-compliant browsers: the browser will simply see a website without the offline benefits
  • It’s really (really) fast
  • The app can be visited via the browser, but also installed (becoming chromeless for a perfect UX; there is no reason your user shouldn’t be able to have the exact experience he or she would have with native technologies).
  • Not only cross-platform, but also re-usable components across UIs (reuse business logic for different experiences on mobile, tablet, desktop, television).
  • Easy to apply lean / agile methodology (determine MVP, test / experiment, pivot or accept and enhance)
  • Forcing of HTTPS
    • security against eavesdropping and modifications with proxies
    • security all-throughout the communication platform
    • enablement of HTTP/2 (more performance!)
    • more services become available (like GPS), since having them on plain HTTP would infringe on personal security
  • Progressive compliance with device interfacing rights (e.g. the user will only be asked to allow camera access, when the camera is actually used by the app)
  • All major browser vendors (Google, Microsoft, Mozilla, Samsung) are actively supporting the adoption of this technology

cons

  • Not all hardware and native functionality can be accessed as easily as through a native app, though browsers are making a real effort to mitigate this issue. Think of the recent work that’s been done on:
    • gps
    • camera
    • push notifications
    • run as a service
    • vibrate
    • multi-touch
    • etc..
  • Not all browsers on mobile fully support this yet, but it is coming (as discussed at the Google PWA Summit 2016 in Amsterdam).
    • Safari is currently not actively pursuing PWA, but recently announced they will start making an effort (no solid promises there). This probably has everything to do with losing grip on the app ecosystem and a possible loss of market share.
  • Some default look-and-feel behavior may differ from the native experience
  • Optimized for 2D web apps. WebAssembly and 3D are coming, but for now one can expect the highest 3D performance from native.

When to apply

  • Do this if you have an informative app. PWA isn’t really suitable for gaming. It’s not that you can’t; it’s just that native apps are usually better at this, or have nicer tools to quickly and effectively reach your goal.
  • Do this when all animations and visual elements are designed and accounted for.
  • Do this when your MVP has the best chance of surviving when it’s rolled out on more than one device type
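To make the installable, offline-capable part of the story concrete: the heart of a PWA is a service worker. Below is a minimal registration sketch; the file name sw.js and the log messages are my own assumptions, not part of any particular framework.

// Feature-detect first: non-compliant browsers simply skip this block
// and keep seeing the plain website, which is the fallback described above.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    // sw.js (assumed to live at the site root) decides what gets cached
    // and served offline; the registration itself is this small.
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('Service worker active on scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  });
}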

Cordova

Cordova might be a good fallback for a PWA on iOS, to get maximum coverage on all platforms. Cordova basically shows a webview in an app and loads your web app from its static cache.

Pros

  • It mitigates possible current platform issues, since the entire PWA-built app can be wrapped in a native container.
  • There are tonnes of plugins that enable direct usage of hardware or other device features
  • Extendable per operating system (e.g. when a sync adapter should be shipped with the app)

Cons

  • Webviews use software rendering and are degraded versions of browsers, so the app will never be as fast as a native app or a PWA.
  • Publishing to markets comes with all rigidity of publishing to markets e.g.
    • maintain backwards compatibility on all interfacing endpoints, expect users not to update / upgrade
    • slow iterations and bundling of features (release trains to prevent a flood of updates). This also introduces slowness in acceptance by the market
    • having accounts for all platforms, compiling against all platforms
  • mandatory compliance with permissions and updates of these policies

When to apply

  • As a fallback wrapper for PWA. To be honest, I wouldn’t apply this anymore without PWA as the foundation.

Native technologies (per platform)

Pros

  • fast when operational (though usually with slower boot speeds than a PWA), especially for high-performance apps like games
  • when standard device interaction is preferred (like how pull-downs look and how some animations behave), this is the best way to go.

Cons

  • you need intricate knowledge of the platform-specific
    • languages
    • development processes
    • device capabilities
    • market rules
    • platform updates
  • codebase is not reusable for other platforms
    • that means the effort has to be made twice
    • discrepancies in functionality between platforms will form
    • you’ll need more people in order to maintain multiple platforms / versions
    • double maintenance; updates will become cumbersome and expensive
  • mandatory installation through the app store; no ‘checking it out’ in the browser before installing
  • slow release iterations, mandatory release-train orchestration
  • mandatory compliance with permissions and updates of these policies

When to apply

  • when developing an MVP, a specific device could be targeted. This can only be done if the success and KPIs of the MVP do not depend on this factor. Do this only when you are sure enough that the other platform either doesn’t matter, or that enough resources will be available to develop for the other platform as well
  • when developing for e.g. smartwatch technologies
  • when developing a game

Cross-platform technologies (like Xamarin)

Pros

  • one language to rule all app-based platforms

Cons

  • no portability to desktop
  • unfamiliar development process; I’ve never seen many people work on this at the same time
  • harder to find people for, thus creating a strong dependency on individuals (a SPOF)
  • there’s a transpilation process. Since native technologies are moving very rapidly, it’s an open question whether these technologies can keep up with the pace of multiple OS vendors.
  • the longevity of native OSes is high, but there is no guarantee for technologies that depend on those OSes.

When to apply

  • When native is preferred over PWA, but cross-device support (excluding desktop usage through the browser) is really important
  • When a performant application on iOS is key (since the Cordova fallback webview for PWA doesn’t perform that well)
  • When you are developing a game and have found people who are dedicated to and skilled in developing for these technologies.

Where business logic isolation fails with RDBMSes

For: software architects, DBAs and backend developers

Databases get a lot of attention, and they should. In a mutual relationship, code gets more and more dependent on a database, and a database grows by demand of new functionality and code.

In order to be as resilient as possible to future changes, in RDBMS-land one should strive for the highest normal form for their data. If you’ve already decided that you need an RDBMS to do the job, take this advice: screw the good advice ‘never design for the future’. Because the future of your application will look very grim if you don’t carefully consider your data schema (I assume considerable longevity and expansion of functionality of the application here).

So... what’s this blog post about?

There is tight coupling between a database structure and an application. There’s no way around it, and that’s okay. But what I think is not okay is defining rules about the relations more than once. This is prone to discrepancies in logic between the database and your codebase. There can be countless triggers and foreign-key constraints that block, cascade or set to null, and you wouldn’t know about them unless you perform your action and analyze the result, or you mimic the database logic in your code (which will break in time due to the discrepancies).

Making the issue visible

Let’s first fire up a database:

╭─tim@The-Incredible-Machine ~
╰─➤ sudo apt-get install mysql-server-5.7

And populate it with some data. I found this example database on GitHub. If you also use it to learn from or with, please give the repo a star so people know they aren’t putting stuff out for nothing.

╭─tim@The-Incredible-Machine ~/Git
╰─➤ git clone git@github.com:datacharmer/test_db.git
Cloning into 'test_db'...
remote: Counting objects: 94, done.
remote: Total 94 (delta 0), reused 0 (delta 0), pack-reused 94
Receiving objects: 100% (94/94), 68.80 MiB | 1.71 MiB/s, done.
Resolving deltas: 100% (50/50), done.
Checking connectivity... done.

╭─tim@The-Incredible-Machine ~/Git/test_db ‹master›
╰─➤ mysql -u root -p < employees.sql
Enter password:
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
INFO
LOADING departments
INFO
LOADING employees
INFO
LOADING dept_emp
INFO
LOADING dept_manager
INFO
LOADING titles
INFO
LOADING salaries
data_load_time_diff
00:01:02

In order to see what we’ve got, I reverse-engineered the diagram from the database. This sounds harder than it actually is: open MySQL Workbench, make sure you’ve established a connection with your running MySQL service, go to the “Database” menu and use the “Reverse Engineer” feature. Workbench will create a schema diagram for you, a so-called EER (Enhanced Entity-Relationship) diagram.

[Figure: EER diagram of the example database]

We can see that basically all relations cascade when a delete occurs, and restrict when an update occurs. So when an employee gets deleted, he or she will be removed from the department, will no longer be a manager, and all history of salaries and titles will be removed.

So now let’s look up a single department manager:

mysql> select * from dept_manager limit 1;
+--------+---------+------------+------------+
| emp_no | dept_no | from_date  | to_date    |
+--------+---------+------------+------------+
| 110022 | d001    | 1985-01-01 | 1991-10-01 |
+--------+---------+------------+------------+
1 row in set (0,00 sec)

And ask the database to explain its plan for deleting the employee that we know is a manager:

mysql> explain delete from employees where emp_no = 110022;
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table     | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
|  1 | DELETE      | employees | NULL       | range | PRIMARY       | PRIMARY | 4       | const |    1 |   100.00 | Using where |
+----+-------------+-----------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
1 row in set (0,00 sec)

So what is it I miss?

I am missing an EXPLAIN-like function that doesn’t give me meta-information about the query optimizer, but actual information on what would happen – in terms of relations – if I were to remove an entity from my database with a specific query.

I would expect it to return data like:

  • which table
  • how many records will be affected
  • what would be the operation (restrict, cascade, null)

And just like an EXPLAIN can be extended, running this in an extended mode could also yield a group-concatenated list of primary keys, comma-separated with some CSV fanciness to not break the string (using string delimiters, escape characters, whatever you feel is needed). Imagine the fanciness you could unleash by actually informing your user which record is the culprit.
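You can get partway there today by querying information_schema yourself. A sketch of that workaround against the example database loaded above (it shows you the rules, but still not how many rows a concrete delete would touch):

-- List every foreign key that points at `employees`,
-- together with its ON DELETE / ON UPDATE behavior.
SELECT kcu.TABLE_NAME,
       kcu.COLUMN_NAME,
       rc.DELETE_RULE,
       rc.UPDATE_RULE
  FROM information_schema.REFERENTIAL_CONSTRAINTS rc
  JOIN information_schema.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
   AND kcu.CONSTRAINT_NAME   = rc.CONSTRAINT_NAME
 WHERE rc.CONSTRAINT_SCHEMA     = 'employees'
   AND rc.REFERENCED_TABLE_NAME = 'employees';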

Name me some examples of where I’d need this!

Let’s imagine we’ve created a CRM-like system. For now, let’s imagine we have:

  • organizations
  • addresses
  • invoices
  • contacts
  • notes

Some questions that might arise are:

  • Can I delete the organization, or is there still an unprocessed invoice attached (blocking constraint)? I’d rather hide the delete button from the user and give a reason why the action cannot be done than conclude this after the fact and roll back the action.
  • When I delete an organization, what will go with it?
    • Will processed invoices also be removed?
      (I hope not! Better to set to null in this case; the invoice should, besides the org id, also store a copy of all relevant org data)
    • Will unprocessed invoices be removed?
      (Maybe best to block in this situation, since there is still money potential or reason to assume the user doesn’t want this)
    • will notes be removed?
      (You could do this, but only if you are very verbose about it)
    • will contacts be removed?
      (Often you don’t, but since these kinds of relations are often many-to-many, the coupling records should be removed)
  • What do I actually know about this relation? What exactly is connected? It would be super nice to dynamically create graphs of how data is connected and how relevant it actually is. A sketch of how such a rule can be encoded follows this list.
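For the cases where one rule does fit all rows, such a decision can at least be encoded once, in the schema itself. A hedged sketch (the table and column names are hypothetical):

-- Notes go with the organization when it is deleted (be verbose
-- to the user about this!), declared once instead of in every app.
CREATE TABLE notes (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  org_id INT UNSIGNED NOT NULL,
  body   TEXT NOT NULL,
  FOREIGN KEY (org_id) REFERENCES organizations (id)
    ON DELETE CASCADE
    ON UPDATE RESTRICT
) ENGINE=InnoDB;

The unprocessed-invoice case shows the limits: a single foreign-key rule cannot block for some rows and nullify for others, which is exactly where the introspection I’m missing would help.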

Why solve it in the database?

I have worked on two applications that each counted in excess of 150 tables, all interconnected, and I can really say that investing significant effort in philosophizing about your database schema isn’t a luxury; it’s a must.

A database contains atomic values. It lays the foundation that your code works with. That inherently means that whatever we can do at that level to protect it, we should.

A couple of advantages of solving as much as possible in the database:

  • If multiple tools or applications connect to the database, they will all have to follow the same conventions
  • whatever already happens in the database doesn’t have to be shipped back and forth between application and database, which saves bandwidth
  • Foreign-key constraints use indexed columns by nature. Your queries will run faster, since the columns on which your tables relate to each other are already indexed
  • You don’t have to rely on super-transparent naming (you should have that anyhow, by the way): everyone who looks at your database will understand how the tables relate to each other.

What’s next?

With regard to Database-related topics, there are a couple of topics that I’d like to cover in the future like:

  • Caches
    • microcaches, denormalizing data
    • cache-renewal strategies
  • Versioning of data structures
  • Query optimization
    • dos and don’ts of writing queries
    • methods to optimize

Do you think that I’m off, that I’m missing some recent progress in this area, or do you just want to chat? Drop me a line and let me know what you think!

Creating a resilient Front-End build process

For: intermediate front-end developers and expert developers who need a quick reference.

There are lots of ways to get your front-end architecture to the client. Usually the basic concepts seem easy and quickly done, but you will soon find yourself having to:

  • integrate it in a running environment,
  • make it work for the world (browser compatibility and such),
  • handle external dependencies,
  • write code that’s understandable for your colleagues (and for yourself in 6 months).

Yep, you’ve guessed it: with every demand you put on your code, the effort goes up exponentially.

This post is not about how to manage your project. There are many good books (e.g. Eric Ries – The Lean Startup) and articles to be found about that. This post focuses on some core steps I generally take to get an optimal work environment that gives me the least amount of friction during development.

During development you work with lots of individual files, comments, tests and such, to ensure that the project is structurally sound but also understandable for humans. As soon as we deploy for our client, though, only machines will interpret the code, so we’d like to get rid of everything that isn’t strictly necessary and squeeze out the highest performance we can.

Assumptions

I have to make some assumptions about your environment; otherwise I would have to go all the way back to installing Linux. The things I assume are:

  • you are developing / deploying on Debian-based Linux machines (I use Ubuntu)
  • you have NVM (Node Version Manager) installed (https://github.com/creationix/nvm). I don’t include NVM in the build process, since it is installed with a shell script and I consider piping a curl-response to bash as a potential hazard for your organization.
  • you have basic knowledge of Linux, the client-server model over HTTP, and JavaScript
  • you have installed Sass

Versions

One issue that has been persistent over the years is versions of dependencies. Sometimes a dependency gets updated and its sub-dependencies cannot be resolved anymore. A solution resides in the combination of NVM, NPM and Bower.

You might notice that I refrain from using global modules. I do this to detach the project as much as possible from anything that is, or should be available on your machine. This way, we can also ensure that we use the correct version, and not an unknown version that is set globally.
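As a small sketch of what pinning looks like in practice (assuming NVM): a .nvmrc file in the project root lets everyone on the project pick up the same Node version without having to remember it.

# one-time, in the project root (7.0.0 is the version used in this post)
echo "7.0.0" > .nvmrc

# from then on, anyone in the project directory can simply run
nvm use    # reads .nvmrc and switches to v7.0.0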

Node

We first need to define which version of Node we’d like to use. Usually, at the time of developing we’d like to use the latest stable.

╭─tim@The-Incredible-Machine ~
╰─➤ nvm install stable
Downloading https://nodejs.org/dist/v7.0.0/node-v7.0.0-linux-x64.tar.xz...
######################################################################## 100,0%
WARNING: checksums are currently disabled for node.js v4.0 and later
Now using node v7.0.0 (npm v3.10.8)

Here you see my machine pulling in the latest version of Node, which at the time of writing is 7.0.0.

You can see which versions are available on your machine like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm ls
   v5.5.0
   v6.8.0
-> v7.0.0
system
node -> stable (-> v7.0.0) (default)
stable -> 7.0 (-> v7.0.0) (default)
iojs -> N/A (default)

And you can switch between them like this:

╭─tim@The-Incredible-Machine ~
╰─➤ nvm use 7.0.0
Now using node v7.0.0 (npm v3.10.8)

Now that we have Node in place, it can supply us with the tools we use to build our solutions. We’ll first need to create a project.


╭─tim@The-Incredible-Machine ~/Git/build-process ‹master›
╰─➤ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (build-process)
version: (1.0.0)
description: Example build process
entry point: (index.js)
test command:
git repository: (https://github.com/timmeeuwissen/build-process.git)
keywords:
author: Tim Meeuwissen
license: (ISC)
About to write to /home/tim/Git/build-process/package.json:

{
 "name": "build-process",
 "version": "1.0.0",
 "description": "Example build process",
 "main": "index.js",
 "scripts": {
 "test": "echo \"Error: no test specified\" && exit 1"
 },
 "repository": {
 "type": "git",
 "url": "git+https://github.com/timmeeuwissen/build-process.git"
 },
 "author": "Tim Meeuwissen",
 "license": "ISC",
 "bugs": {
 "url": "https://github.com/timmeeuwissen/build-process/issues"
 },
 "homepage": "https://github.com/timmeeuwissen/build-process#readme"
}


Is this ok? (yes)

Node is basically a JavaScript engine running in a Linux environment (e.g. on a server). There are lots of great tools written in Node to interpret and mutate your project’s files to make them client-friendly.

Bower

Bower is one of those tools. It enables you to fetch dependencies for your front-end architecture. Whenever you feel like using jQuery, lodash, React, Material Design or whatever you prefer, you will always need to get the dependencies, structure them in a certain way and keep track of their versions. Bower is NPM for front-end components and does exactly that.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i bower --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└── bower@1.7.9

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/bower init
? name build-process
? description Example build process
? main file index.js
? keywords
? authors Tim Meeuwissen
? license ISC
? homepage https://github.com/timmeeuwissen/build-process
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

{
 name: 'build-process',
 description: 'Example build process',
 main: 'index.js',
 authors: [
 'Tim Meeuwissen'
 ],
 license: 'ISC',
 homepage: 'https://github.com/timmeeuwissen/build-process',
 private: true,
 ignore: [
 '**/.*',
 'node_modules',
 'bower_components',
 'test',
 'tests'
 ]
}

? Looks good? Yes
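With Bower initialized, pulling in a front-end dependency and recording its version is a single command; jQuery here is just an arbitrary example:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/bower i jquery --save

The --save flag writes the dependency and its version range into bower.json, which is what keeps versions reproducible across machines.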

TypeScript

I’d rather use TypeScript than plain JS. TypeScript is a superset of JS that needs to be pulled through a compiler in order to run on a client. There are always reasons not to do things, but I’d like to share my reasons to do it anyway.

  • It is developed and maintained by Microsoft. That isn’t the smallest guy in town, and they do an excellent job at it.
  • It is a superset, and depending on your settings you can or cannot use typing wherever you want (May I kindly request you enforce every variable to be typed strictly, for reasons to follow)
  • Code completion gets a hell of a lot more fun in your IDE (I personally really like Visual Studio Code on Linux, also from Microsoft. It’s free, try it!)
  • You can always work in the latest standards. Depending on your compiler arguments, the code will be transpiled to any given ECMAScript standard your company requires. The TypeScript compiler nicely polyfills whatever isn’t available in that ES version; as time moves on and browsers get better, your code will become outdated less quickly (you can switch to emitting ES6 on any given day of the week, for example).
  • You can more easily apply your backend architecture skills when you work with the latest ES version.
  • Your fellow developers will know exactly the input and output of each and every function without knowing the intricate details of your application.
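As a sketch of the “typed strictly” and “any given ECMAScript standard” points above, a minimal tsconfig.json could look like this; the exact options are my suggestion, not a requirement of this project:

{
  "compilerOptions": {
    "target": "es3",
    "module": "commonjs",
    "noImplicitAny": true
  },
  "include": ["ts/**/*.ts"]
}

noImplicitAny is the switch that enforces explicit typing, and target is where you decide which ES version the compiler emits.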

Packing and transpiling

It’s a safe bet that we will create lots of files that all depend on each other. Browserify is able to follow these dependencies and combine them into one file. There is a plugin called tsify, which basically runs the TypeScript compiler on the files as part of that process.

Let’s add it to our project:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i browserify tsify typescript --save-dev
build-process@1.0.0 /home/tim/Git/build-process
├─┬ browserify@13.1.1
│ ├── assert@1.3.0
…
…

Get the Google Closure Compiler. We will pipe the output of Browserify through it.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i google-closure-compiler --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ google-closure-compiler@20161024.1.0

Typings

Now, not every project is written in TypeScript, but we would still like to rely on the behavior of external dependencies within our files. E.g. jQuery might return some structure after being invoked, and we want to be able to recognize that output and work with it as such. Typings is a library filled by lots of wonderful people with interfaces for these external dependencies, so you don’t have to write them yourself!
Let’s get it first:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i typings --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ typings@1.5.0
├── archy@1.0.0
…
…
╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/typings init
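Once initialized, you can pull in interface definitions per external dependency. As an example (the dt~ prefix points at the DefinitelyTyped source; jQuery is an arbitrary pick):

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ node_modules/.bin/typings install dt~jquery --global --save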

Sass

It’s important that every part of our application is structured in such a way that someone else understands what he or she is looking at. This includes the styling. Style documents easily get overlooked and deemed less important, but in my experience they are often responsible for the biggest part of the technical debt. Files with thousands of lines of styles that interact with the HTML, with no way to understand or properly refactor them, aren’t an uncommon sight.

In a separate document I plan to go deeper into how you can structure things so that you won’t build your own little jungle of CSS. Here, I will keep the focus on the build steps and high-level reasoning.

Sharing of configuration and building

It often happens that some variables are shared across the front-end architecture. Examples range from a path to the CDN to the maximum number of items in a carousel.

Assuming that you already have Sass installed, I would encourage you to also install sass-json-vars.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ sudo gem install sass-json-vars

Because we always want to compile our files this way, it’s handy to create a helper that picks up all .scss files matching a pattern and converts them to normal CSS files.

I created a directory called helpers with a file build_css.sh that basically contains this:

#!/bin/bash

# $1 = directory to scan for scss documents
# $2 = directory to put finished css documents in

# find all .scss files that don't start with an underscore (partials)
find "$1" -name "[^_]*.scss" 2> /dev/null | while read input
do
  # map the input path to the output dir and swap the extension
  output=$(echo "$input" | sed "s@^$1@$2@" | sed 's@\.scss$@.css@')
  outputdir=${output%/*}
  mkdir -p "$outputdir"
  # compile with sass-json-vars so JSON config files can be @import-ed
  sass -r sass-json-vars -t compressed "$input" "$output"
done

Call it with an input and an output dir as the first and second argument. Nice helper, right?

p.s. don’t forget to make it executable (chmod +x helpers/build_css.sh).

Basic file structure

Now that we have our external dependencies in, our directory structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── helpers
│   └── build_css.sh
├── package.json
└── typings.json


(I’ve omitted some documents from the tree view. You can do this yourself by adding this alias to your .bashrc: alias tree="tree -C -I 'vendor|node_modules|bower_components'")

Let’s add some extra folders to give some direction as to where stuff should go.

First, all the public stuff.

Here’s where all the scripts that can be visited by a browser will reside. All compiled files will be written to these directories. Why not dist? Because these projects are intended to be used as dependencies of other projects, other parts like Sass or TypeScript files are also part of what’s distributed, but those should never go ‘public’. So there you have it :-).

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p public/api public/css public/js


Now the source stuff.

Situations will occur in which you want a special interface for an external dependency. Typings will create its own dir, but here we have a space to put our own custom ones.

Sass is split into multiple files: files that don’t have to be converted individually (partials) and functions that cover some visual logic (mixins).

The conf dir will hold ‘constant’ variables that get baked into the code at compile time.

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ mkdir -p sass/partials sass/mixins ts helpers typings_local conf

By now our structure should look like this:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ tree
.
├── bower.json
├── conf
├── helpers
│   └── build_css.sh
├── package.json
├── public
│   ├── api
│   ├── css
│   └── js
├── sass
│   ├── mixins
│   └── partials
├── ts
├── typings.json
└── typings_local

11 directories, 4 files

Create a .gitignore. It makes things so much nicer if you don’t have to store all external dependencies in your own repo. Notice that all files in public are checked in as normal. Optionally you could run a post-commit hook that builds the project, so you always know that the built files represent the state of the original source files.

node_modules
bower_components
typings
.sass-cache
ts/**/*.js

Tying it together

There are so, so many ways to run tasks. You can use Grunt, Gulp or all kinds of fancy task runners, and they can bring many advantages. But for me, a build process is something that should be intuitive whether or not you are familiar with a project, or even with a programming language. Linux has a common make process, and in my opinion whatever you want to expose as build steps should go through that mechanism: a Makefile.

So whether you decide on plain bash scripts, Grunt, Gulp or whatever you like, always make sure that your endpoints are also mapped in your Makefile. This way you can reliably build all your projects – and detect issues on all of them – in the same way, no matter what’s running under the hood of each project.

Since there isn’t any really exciting stuff going on in this project, and we need a Makefile anyway, I see no reason to start implementing one of these task runners yet.

Let’s make a Makefile:

.PHONY: all clean get-deps build build-js build-css serve

all: get-deps build

clean:
    -rm public/css/*.css
    -rm public/js/*.js
    -find ts/ -name "*.js" -type f -delete

get-deps:
    # nvm is a shell function and every recipe line runs in its own shell,
    # so source it and chain the nvm/npm commands on one line
    . "$$NVM_DIR/nvm.sh" && nvm install 7.0.0 && nvm use 7.0.0 && npm i
    node_modules/.bin/bower i
    node_modules/.bin/typings i
    sudo gem install sass-json-vars

build: build-js build-css
build-js:
    node_modules/.bin/browserify -p [ tsify --target es3 ] ts/app.ts \
        | java -jar node_modules/google-closure-compiler/compiler.jar \
        --create_source_map public/js/app.map --source_map_format=V3 \
        --js_output_file public/js/app.js
build-css:
    helpers/build_css.sh sass public/css

serve:
    node_modules/.bin/static-server -i index.htm public

I’ll briefly explain what happens.

Make clean clears the public folders, since their contents can be regenerated. It also removes .js files in the ts folder; some IDEs create these files to test validity.

Make get-deps gets the dependencies for the project. This can be run on your test and merge servers every time before building.

Make build builds the JS from TypeScript: it drags it through Browserify to create one file, and drags it again through the Google Closure Compiler to garble and optimize it. Once done, it creates the CSS from Sass.

Make serve starts a super-simple server that enables you to visually test this front-end application on-screen.

This small server can be very handy during development. For now we don’t require any fancy rewriting, proxying or server-side processing, so statically serving assets should suffice. Install the package by running:

╭─tim@The-Incredible-Machine ~/Git/build-process ‹master*›
╰─➤ npm i static-server --save-dev
build-process@1.0.0 /home/tim/Git/build-process
└─┬ static-server@2.0.3
├─┬ chalk@0.5.1
…
…

Seeing it work

Let’s populate the project with some base values in order to test the build process.

Configuration of the app that’s shared between CSS and JS:

conf/app.json

{
  "color": "blue"
}

Entrypoint for the CSS:

sass/app.scss

@import '../conf/app.json';
@import 'partials/_example';

Set the background color to the color in the variable, and center the color name as text on the page:

sass/partials/_example.scss

html,
body {
 width: 100%;
 height: 100%;
 line-height: 100%;
 background-color: $color;
 text-align: center;
 font-size: 40vw;
}

An interface to define what can be expected from the configuration.

ts/i_config.ts

interface IConfig {
 color: string
}

export default IConfig;

A basic TypeScript file that replaces the content of the body element with the value of the config variable “color”:

ts/app.ts

/// <reference path="../typings_local/require.d.ts" />

import IConfig from "./i_config"

let config = <IConfig>require("../conf/app.json");


window.onload = () => {
 document.body.innerHTML = config.color;
}

In order to load the external JSON file:

typings_local/require.d.ts

declare var require: {
 <T>(path: string): T;
 (paths: string[], callback: (...modules: any[]) => void): void;
 ensure: (paths: string[], callback: (require: <T>(path: string) => T) => void) => void;
};

The HTML file that ties it all together:

public/index.htm

<!DOCTYPE html>
<html>
<head>
 <meta charset="UTF-8">
 <title>Build Process</title>
 <link rel="stylesheet" href="css/app.css">
 <script src="js/app.js"></script>
</head>

<body>
 JS not loaded
</body>

</html>

Now run:

make clean build serve

and see what happens!

What happens afterwards

This sets a base, but it’s far from done. And since this is my first blog post, I’ll first need to assess how it goes.

I plan on writing about lots of topics, but as a sequel to and in relation to this post I’m considering stories about:

  • ServiceWorkers, PWA.
    • TypeScript
    • Caching
    • Cache manipulation
  • TDD, BDD automated testing
    • karma
    • browserstack
    • jasmine
    • chimp

Let me know what you think!

P.S. You can find this code at https://github.com/timmeeuwissen/build-process