Why you should (probably) not use submodules

Edit: made the title less misleadingly clickbaity

Spoiler alert: I do not hate submodules. I do however have an instant uh-oh response when people mention they want to solve a problem with Git submodules. I think this comes from the fact that most of my experiences with Git submodules have been ones where they were introduced for the wrong reasons.

What are submodules?

I am not going to go through a large dissertation on submodules – there is tons of good information online on how to use them. Atlassian has excellent tutorials – or you could buy my book Practical Git, which also covers submodules in brief.

In short, submodules allow you to have a folder in your repository be populated from another Git repository. You can either track a specific commit or a branch in the given submodule. This is the basic intuition.
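As a minimal sketch (the URL and path are hypothetical), adding a submodule and populating it after a fresh clone looks like this:

    # add a submodule at vendor/libfoo, tracking the main branch
    git submodule add -b main https://example.com/org/libfoo.git vendor/libfoo

    # after cloning the parent repository, populate the submodule
    git submodule update --init

The parent repository records a pointer to a specific commit of the submodule; git submodule update --remote moves that pointer along the tracked branch.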

Dependency management is hard

I believe that handling our dependencies in a way that allows us to decouple the way we work is one of the most difficult architectural challenges in software. We seem to underestimate how difficult and how valuable maintaining explicit and stable interfaces is. This leads to tons of frustration and horrible DevEx. Neither monolithic nor polylithic repository organizations solve this problem. My opinion is that most software organizations do not have the discipline to successfully maintain their implicit dependencies in a monolithic repo, and thus, for most, working in a polylithic world will be a forcing function that brings dependency pains to the surface before they become a very expensive mess to correct.

The delivery mechanism implies collaboration mode

If you hand me source code, I expect to tinker with it and have a high-bandwidth, close collaboration with you. If, on the other hand, I am consuming your service as an API, my expectations of our collaboration might be more along the lines of a help desk and a Google search. There are more levels in between. The unit that we share could typically be (in increasing order of abstraction):

  • Source Code
  • Library
  • Executable
  • Container
  • REST API
  • SaaS

All different levels are valid, I simply propose that the unit of delivery should match the collaboration mode. This is heavily inspired by Team Topologies.

Version your stuff

I don’t know if it is because I have been working so much in the heavy industries, but I favor a Bill-of-Materials (BOM) way of thinking. Both as a way to enable variance, and as a way to force ourselves to think about explicit and stable APIs. Please refer to semver.org for my preferred versioning scheme. BOMs also allow us to decouple much of our workflow from our source code, which I think is a power move that most underestimate. Versioning our sub-components also hints at a boundary that should mean decoupling.
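As a hedged, minimal sketch of what I mean (component names and versions are entirely hypothetical), a BOM can be as simple as a version-controlled manifest pinning each sub-component to an explicit semver release:

    # a hypothetical bill of materials, kept under version control
    cat > bom.txt <<'EOF'
    engine-core    2.4.1
    telemetry-lib  1.0.3
    ui-components  3.2.0
    EOF

The point is not the format; the point is that every boundary is explicit, versioned and resolvable without checking out anyone else's source.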

Cognitive Load is key

One could also say that developer productivity is the most important thing, but I like to be a bit more specific than that. The problems we are trying to solve through (software) engineering are hard enough in themselves that we can’t afford to splurge on accidental complexity when we should be spending all our effort on essential complexity. For an excellent and brief walkthrough of accidental vs essential complexity, watch this presentation by J.B. Rainsberger.

Where am I working?

When I am working with submodules, all source is equal. It is not obvious to me whether I am working in the context of my project or one of its dependencies. I know I can figure it out; it is not completely inscrutable. It just isn’t obvious. When things are not obvious, I have to spend mental capacity on them rather than just grokking them. This means lower mental capacity to spend on solving the actual problem.

Gitting Nuclear

Unlike some centralized version control systems such as ClearCase, Git has no concept of atomic commits across repositories. This can make the workflow of updating dependencies and then pulling those dependencies into your repository clunky and non-intuitive. Again, this creates friction I would rather not spend energy on, preserving those neurons for the interesting problem.
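To illustrate the two-step dance (paths and messages are hypothetical): every dependency update is two commits in two repositories, and they can never land atomically.

    # 1) change and publish the dependency itself
    cd libs/shared-utils
    git commit -am "Fix rounding bug"
    git push

    # 2) record the new submodule pointer in the parent repository
    cd ../..
    git add libs/shared-utils
    git commit -m "Bump shared-utils submodule"
    git push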

Again, this is possible to work around with Git submodules – and can be made to work, in a way – but it requires discipline and Git proficiency, and it doesn’t create the tension that reminds us we should be handling our dependencies as actual dependencies.

Using submodules at the right time

I believe submodules are a superior solution to copy pasting code between repositories, so if that is your alternative, please use submodules.

I know some programming ecosystems, such as C++, do not have a default way of handling libraries and dependencies, and in that case it might be worth looking into submodules.

But I really want to stress that this should be the last resort. I always recommend assuming that the best way to resolve these kinds of dependencies is to build libraries or similar, and then resolve them through whatever build engine you are using. Even if you are building for multiple targets, it is unlikely you will end up building the artifacts more often than if you share them as code.

Submodules as a decomposing tool

If you have a monolithic repository, and perhaps not the most modern build system, submodules can be a great way to experiment with decomposing your monolithic repository. These kinds of situations are often difficult to experiment with, because everything depends on things being in the right place on our disks, and submodules can solve that for us. Of course, we shouldn’t just stop there.

Our plan for decomposing through submodules could look like this (a sketch of the first steps follows the list):

  • Move the folder containing the component into its own repository
  • Add a submodule pointing to the component repository
  • Now the workflows and integrations of the two repositories are decoupled – a good start
  • Figure out how to build the component as a library and publish that alongside the source code
  • Start consuming the artifact
  • Deprecate and remove the submodule, completing the transition
  • Remember to profit!
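As a hedged sketch of the first steps – assuming the component lives in a folder called component/, that git subtree is available, and using a hypothetical remote URL:

    # carve the component's history out into its own branch, and publish it
    git subtree split --prefix=component -b component-only
    git push git@example.com:org/component.git component-only:main

    # replace the folder with a submodule pointing at the new repository
    git rm -r component
    git commit -m "Extract component into its own repository"
    git submodule add git@example.com:org/component.git component
    git commit -m "Consume component as a submodule"

From there, the remaining steps are about packaging and publishing the component as a versioned artifact.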

In conclusion

Please, if there is any dependency management in your chosen platform, use it. As described above, there are a bunch of caveats and concerns around using submodules.

I also believe that in order to work efficiently with Git submodules, you have to be quite Git proficient – more proficient than is commonly a good investment for an average dev team. Either invest enough in tooling around your submodules to abstract them away, or consider whether a different dependency management solution would serve you better.

If you need to work with submodules, respect the tricky workflows around updating and be diligent; then you’ll be able to do well.

If I forgot anything, reach out to me; if there is something you disagree with, please let me know – I might be wrong šŸ˜€ And finally, it is a spectrum; your mileage will vary.

Being a (virtual) track host

Today I have had the pleasure of hosting a lunch and learn session with Douglas Squirrel and Jeffrey Fredrick, who can be found at https://www.conversationaltransformation.com/. They wrote the book Agile Conversations. While I enjoyed our conversation at https://thedevopsconference.com, that is not the point of this post. I have tons of experience hosting different events, going all the way back to being the Master of Ceremonies in my student community, and through my consultancy career with plenty of public speaking and DevOpsDays. And I still screw up basic stuff, so I thought I’d try to be explicit about what I would expect the host role to contain.

As we all love maturity models, I have created this track host model so you can perhaps grab something useful for the next time you have to host an event track.

Level 0 – The keeper of time

At the most foundational level you just need to keep the wheels turning in an organized way. The most important public speaking skill is time management.

That means that there are two basic things you have to do:

  • Introduce the speaker – At the right time let the audience know who is speaking and what the title of their talk is.
  • Say thank you – At the right time, let the audience (and speaker) know the session is over. Bonus points for signaling to the speaker five minutes before their time is up. Additional bonus points for doing so in a way that is obvious to the speaker yet subtle from the perspective of the crowd. Prompt the crowd to give another round of applause.

Level 1 – Herder of cats

Now that you have graduated from level 0, we’re going to up the game in terms of setting the speaker up to succeed.

Before the talk

Before the talk, you want to make sure that you get the speaker to the right place at the right time. As a speaker it is so calming to have someone else take responsibility for that.

You can also see to anything they need: help with AV, computers, a bottle of water, or anything else. Make them feel good. That makes for better presentations.
It can be difficult to provide water if you’re both remote, but at least you can provide a link to the correct stage in the conference platform of choice.

During the talk

It is always good to coordinate with the speaker whether they want to take questions during the talk or after. At larger events it is customary to have Q&A at the end, though some events are dropping Q&As to avoid too many “This is not so much a question as it is a comment”s. A common timeframe for Q&A is 15 minutes. That likely gives time for something along the lines of two to five questions – fewer than you think. And remember: you are the keeper of the time. Embrace the awkward and give time for questions to appear, but do not force it; it is better to end a session early than to prolong it unnecessarily.

Depending on the size of the crowd you can manage logistics and have runners with microphones. If necessary, repeat the audience question in a microphone so there is no doubt as to what has been asked.

After the talk

After the talk, make sure the audience knows what the next thing to happen at this track is. Perhaps mention other tracks. Again, you are the keeper of time, so you should also inform attendees of the concrete schedule, and whether we are behind it or ahead of it. If there is something important such as lunch or a sponsor event, it is your responsibility that the attendees are certain about what they are supposed to be doing at any time.

Level 2 – The Hype-person

The first two levels are about the mechanics of managing a track – now we come to the style of hosting one. As the track host you are the glue that ties everything together and sets the tone.

Before the talk

Small talk with the speaker. Ask them if there is anything in particular they want you to mention. Confirm pronunciation of their name. Read up on the speaker. Form an opinion. Ask them if there is anything in particular they want you to advertise.

Let the speaker know basically how you are going to introduce them. Not verbatim, but the gist of it. Praise them for some of their content that you’ve consumed.

Let them know where you are sitting in the audience, and how and when you will signal time to them.

As you introduce the speaker, talk them up. Praise their accomplishments. Do not lie or exaggerate, but make sure that you have covered their credentials, so it is unnecessary for them to dump any credentials themselves.

Let the audience know not just that you are excited, but why you are excited about this talk in particular. Make the audience cheer the speaker on as you walk off the stage and give room for this fantastic presenter.

During the talk

Make sure that you pay attention to the speaker, and over-emote. Having that one person in the audience who laughs at the right time, nods, and generally just is engaged brings the speaker so much energy, which ends up being projected back to the audience.

This also feeds you great inspiration for asking a good opening question.

As the Q&A starts, I like asking the first question, or as a speaker getting the first question from the host. Usually, we get safe, interesting and understandable questions from the host, so it sets a nice scene for the rest of the Q&A.

The best questions here ask for elaboration on a specific point, or mention very briefly how something they said impacted you, if you can lead it towards a question for the speaker. It could also be a tangential point, or something you have seen in other content the speaker has created that you know they are passionate about. This is the only time where you as the host can toot your own horn. Just don’t make too big a deal of it. If there are no questions you can provide more from your bank of interesting conversation topics, but be careful not to prolong the session beyond what is useful or interesting.

After the talk

Praise the speaker again. Let the audience know where they can find the speaker on social media. If they have published a book, or have a blog mention this here as well.

Conclusion

Hosting a track is a great way to practice public speaking, and to get on bigger stages than your journey as a presenter has taken you to so far, but it is often underestimated in terms of the skill and preparation it takes. I hope this post has been useful, but please reach out and contribute your tips to the maturity model.

Go speak. You have interesting things to say.

Edit: Many small fixes due to the ever awesome and feedbacking Figaw šŸ˜€

The Difficulty of Pricing DevOps Transformations

I do not know quite what it is, but Lapa, the CMO of Eficode, has a way of triggering me in such a way that I must respond. He is almost my private /r/writingprompts.

Even though I was already immersed, a particular phrasing here stood out to me.

If you know how much a feature can extract or accelerate revenue..

– Lapa

I think this is the golden goose. If we have this capability of knowing in advance how much any given task is worth, we can reason very effectively about priority, ordering, and when we should stop throwing more money after a thing. This would allow us to do things like Weighted Shortest Job First and in general make economic decisions. I recommend The Principles of Product Development Flow for an in-depth treatment of this and many other insights.
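As a small, entirely hypothetical illustration of Weighted Shortest Job First: WSJF = cost of delay / job duration. A feature whose delay costs 30 per month and takes one month scores 30; a feature whose delay costs 40 per month but takes two months scores 20. We do the first one first, even though the second is worth more in absolute terms – exactly the kind of economic decision we cannot make without knowing what the work is worth.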

If we could do this, we would be able to reason much more clearly about what we should do, and how we should approach it.

We could reason about it as in this figure:

But, unfortunately for most organizations the view is more like so:

Which is obviously a huge problem. It enables the Highest Paid Person’s Opinion (HiPPO) decision-making strategy and moves us away from the Measurement in CALMS.

Some organizations are even further from that. They have a hard time figuring out what the cost and revenue of a value stream even are. We talk so much about being outcome oriented that we forget that many teams do not have the context and environment to be outcome aware, on a level more interesting than the number of magical story points they churn out.

I like the focus Mik Kersten brings to this in Project to Product. The flow metrics, combined with the Flow Items, really bring reasoning to a relevant level that can help us talk about productivity.

If we then add in this paper:

Evaluating well-designed and executed experiments that were designed to improve a key metric, only about one-third were successful at improving the key metric!

https://ai.stanford.edu/~ronnyk/ExPThinkWeek2009Public.pdf

It starts looking really gloomy. In my experience most teams are not even at the starting stages of scientific thinking and hypothesis-driven development. And even well-designed and executed experiments are more likely than not to fail to improve the business. What a bummer.

So, DevOps Transformations

We were talking about DevOps transformations. So far I have mostly argued for the difficulty of reasoning about the impact of a task or feature on business value.

So with regards to DevOps Transformations I have a few main points.

First off, when we try to build the business case for a DevOps transformation we almost always end up with some automation-driven cost-down analysis. And while this might be true and valid, it is only tangentially related to DevOps, and no one was ever inspired by a visionary cost-down target. In my opinion strategy should be about raising the top line, and then we should have a continuous focus on improving the bottom line, but that is not for the vision.

Second, a DevOps transformation is a huge undertaking, and trying to estimate the business case of the entire transformation would be folly. I believe we can argue that the DevOps transformation does not determine how far we will go, but it makes it more likely that we are moving in the right direction.

Third, DevOps is about iterating, and we should be able to make many smaller business cases along the way, on specific, context-dependent initiatives.
And these improvements should be done in a hypothesis-driven way. Not everything will increase productivity, speed, developer happiness or whatever the focus area is. If we are not prepared to do things we might have to undo afterwards, we are in dire straits.

Fourth, I haven’t really dug into the horrible overload of the word Transformation. We are not defining some function f: f(x) = y that transforms one state into another. DevOps is about continuous improvement and iteration, and about creating a culture where this can happen. That is not a transformation.
And fourth again, the word transformation does not distance the desired end state hard enough from the current state of affairs. This is particularly true for agile transformations. There will be blood. Someone will lose their job, or feel bullied into quitting because their work has been transformed in a way that is not comfortable for them. The word transformation glosses over some of the ugly truths of a disruptive process. I paraphrase Geoffrey Moore in Zone to Win when I say “A transformation is so disruptive it takes the full attention of the CEO. There can be only one transformation at any point in time”. I think we underestimate the disruptiveness, and as such we underprioritize it in top management, and then we have already set it up to fail.

Fifth and final, if we have such a hard time reasoning about the impact our work has on the business, it will be that much more difficult to reason about the impact of improving the way we work.

That was way too many words, but thank you Lapa for making me think.

The Inside Out of DevOps

Change is hard. Using the crew from Inside Out we’ll see how you can make a great travel companion out of anyone!
This post first appeared at https://www.praqma.com/stories/the-inside-out-of-devops/

Everybody wants DevOps! Introducing new stuff in any organization is always challenging though.

It is an adventure, and you will have a rag-tag bunch of companions on your journey. Using the crew from Inside Out we’ll look at these characters, and how you can make a great travel companion out of anyone!

The DevOps Detour

Before we can get started, we probably need to talk a bit about this DevOps thing. There is a lot of arguing about the definition of DevOps. For the purposes of this blogpost let’s say that DevOps is a set of practices that favors automation, a holistic application view, and velocity in software development and deployment. So it is about developing software, delivering it to customers, and making sure it keeps working after delivery.

We believe that DevOps and Continuous Delivery will be just as important as Agile has been for the industry. Developers will look with confused resignation at organizations that do not do Continuous Delivery and DevOps. So let’s take a look at the characters you might encounter on your journey towards modern software development.

Welcome to Headquarters

In the movie Inside Out, we’re inside the head of a girl called Riley. The primary emotions Anger, Disgust, Fear, Joy and Sadness have a shared control panel, and the combination of these different perspectives makes up the personality and behavior of Riley. While these emotions are an obvious simplification, they have helped many better understand how emotions work. So let’s try to apply the same reduction to the DevOps transition in a tech organization.

Joy

In the world of Joy everything is awesome. Joy is a first mover, always tinkering, and typically better at starting things than finishing them. It is necessary to have Joy around; we need the everything-is-awesome mentality. But we also have to remember that different isn’t good by itself.

The Joy Character

We need to continuously reflect on the choices we have made and not just keep running towards a shining goal. It is important for any change process that we at all times are aware of why we are doing what we are doing, and how that is aligned with the overall goal. We are trying to solve some problems, we are not just grabbing a bag of new tools for the sake of doing something.

People with the characteristics of Joy tend to be Technology Apologists: very impressed that a tool works or exists, but not necessarily observant of its (lack of) maturity or usability. The term Technology Apologist was coined in the book The Inmates Are Running the Asylum by Alan Cooper.

It can be hard to keep a Joy in line and working in accordance with the established processes. Joy can be used as a canary for issues to come. Make sure it is so easy to do things the right way that Joy can’t help but create that Jira issue for the improvement she just came up with.

Favourite Quotes from your Joy character:

  • ā€This is how they do it at Facebookā€
  • ā€I found this new toolā€
  • ā€I read on HackerNews ā€¦ā€
  • ā€Oh, yes, that doesn’t quite work yet, but […]ā€

Joy’s favourite tools:

  • Something she compiled herself from some repository with code written in a hip language like Rust or Go.
  • Alternatively, Vim.

Anger

Anger gets sh!t done! You can be sure that if something is broken you will hear it from Anger. If colleagues of Anger are not quite adhering to coding style, or waste time in meetings, it will not go unmentioned. Anger is very productive, but it can be a very uneven road running with Anger. Productivity tends to go down when you have an angry developer standing at your desk at regular intervals.

The Anger Character

A way to handle Anger is to make sure that you are aligned, and that Anger does not have unrealistic expectations of the maturity of what they work with. It is also very important not to let Anger drive all your decisions. In that case you will be fluttering like a firefly to extinguish the fire du jour.

You need to remember, though, that Anger does not come out of nowhere. Figure out the root cause and address it. Figure out what drives Anger and he will be a powerful ally. Anger gets sh!t done!

Favourite Quotes from your Anger character:

  • ā€The pipeline is brokenā€
  • ā€Who broke my build?ā€
  • ā€I’m not gonna wait for IT any longer, I’ll buy my own serverā€

Anger’s favourite tools:

  • The print screen key
  • Outlook
  • Walking to your desk

Disgust

Disgust serves a very important purpose: to keep us from getting poisoned. It is a healthy trait to have a certain skepticism regarding new tools or procedures. As many change agents have tendencies towards Joy, we feel that Disgust is very backwards – too hesitant for us to work with in a meaningful manner. But in many ways Disgust is the barometer for success. Remember that at one point in time what is now status quo was new and disgusting.

The Disgust Character

Until we can convince Disgust, the transition towards DevOps is not going in a delicious direction. And as DevOps is something that we sprinkle on top, it should be delicious. It should be the case that what we are moving towards is more attractive than status quo.

We need to decipher the origins of Disgust’s skepticism, and cut out the poisonous parts. Or boil them hard enough that they will become unrecognizable. Make small improvements for Disgust. Make a Git alias or a little script to take away some of the pain.

I’m sorry to say it, but this might be the point in which you end up taking screenshots and put them in a Word document or PowerPoint.

Favorite Quotes from your Disgust character:

  • ā€This looks awfully like a tool from the ā€˜90sā€
  • ā€This new setup looks very complexā€
  • ā€Don’t think that just because it works other places it works hereā€
  • ā€I’m not sure that it provides value to the company that I learn Git. It is for people like youā€

Disgust’s Favorite tools:

  • The IT-department approved IDE
  • Sharepoint

Fear

Fear is important. Fear prevents us from getting hurt. Our experience is that Fear tends to look at what is the safest path in the short term. For many companies the safest thing might be to keep working in SVN. Keep working as we usually do. But that will hurt the business in the longer term. Fear sees through the “Ops” point of view: we have something that we need to keep running. That is the most important part of everything we do, because otherwise how can we provide value to our customers? Fear likes stability above all.

The Fear Character

An important thing in dealing with Fear is to not do something unexpected. Make sure that you have a clear and transparent roadmap with a timeline. Be honest if you know that there will be bumps in the road in the near term. That way Fear can prepare for it.

If possible, involve Fear in the change process. And when you are utterly annoyed with Fear being a pedantic prick, remember that everything Fear catches and points out is a lot cheaper to fix than what went into production.

Favourite quotes from your Fear character:

  • ā€I don’t think it will workā€
  • ā€It works now, why change that?ā€
  • ā€I’m not sure picking an Open Source tool is such a good ideaā€
  • ā€How will we know that […] ?ā€

Fear’s favorite tools:

  • Excel
  • Visual Studio 2010
  • A fourteen page Word document describing an install process

Sadness

Sadness is what brings context to Joy; sadness is what brings reflection. It is not until we embrace this and make a balanced whole that we will be a successful DevOps organization.

Sadness is the necessary counterpart to the Technology Apologist Joy. She makes sure that we do not forget why the last transition failed. To Joy this will feel like a miserable naysayer. But that is not the purpose of Sadness.

Nobody puts Sadness in the corner

As Sadness says, “Crying helps me slow down and obsess over the weight of life’s problems”. And this is how it feels. But if we do not obsess a bit, if we do not go through our retrospectives honestly, remembering both pains and prides, how are we supposed to improve? How are we supposed to succeed? Sadness gives us all the important learnings we need to do better next time.

I am Joy at the core. I start things, get distracted, start new things. Sometimes I forget that there is also a reality that needs some love and care. I have to be very careful that I do not just put Sadness in the chalk circle and say “don’t disturb my progress”.

I get annoyed when I (again) have gotten this awesome idea, and I present it to somebody who turns out to be a fact-driven, realistic, level-headed guy. But it helps me figure out which ideas are important and which might be best left at that: ideas.

So respect and acknowledge Sadness; there is much truth to be found there.

Favourite quotes from your Sadness character:

  • ā€In the last transition we forgot ..ā€
  • ā€Remember that everyone needs to get onboardā€
  • ā€There was a huge hassle last time we changed tool because ….ā€
  • ā€The pain is still there with the new tool, just at a different place in the processā€

Sadness’ favourite tools:

  • Memory of what has been tried before. The combined set of pains of transitioning from the day the company started until now.
  • A coffee mug from a tool vendor long gone

Bing Bong

Bing Bong is the imaginary friend of Riley. No longer relevant, he sacrifices himself for the sake of Riley’s mental health.

As such we can see him as an example of the code base or tool stack that has been. It was important, and at a point in time it was the most critical part of the way the organization worked. We should make our decisions based on the lessons learned when Bing Bong was the king of our streets, but we should not try to artificially preserve him, or for that matter maintain restrictions imposed by the way we used to work.

The character Bing Bong

As long as we recognize that the tool stack we painstakingly cobbled together over the years is what enabled us to get where we are, we should be able to, in good conscience, say goodbye to it and move on to DevOps. This is a difficult realization to reach, especially in companies with codebases that had their first lines of code written before companies like Uber or Tesla were even founded.

We must acknowledge all the decisions, each right at the time, that led us to the mess we are in now. And we have to acknowledge that we can move much further by changing.

Going towards DevOps

If you are on the journey towards Continuous Delivery and DevOps and need some sparring, or if you are just trying to figure out how to start, please reach out. We have a lot of experience being a valuable travel companion on those journeys.

If you see yourself or one of your colleagues as one of these characters, please give us your favorite quote from them.

We wish you godspeed and great mental health.

All images in this blogpost are from the movie Inside Out. Copyright: ©2015 Disney/Pixar.

Learning to use your HEAD

I have a saying, “learning by teaching”, which is just my way of phrasing the fact that you always learn a lot by teaching others. This happened to me again a few days ago when I was asked a Git question. I have been extensively using and teaching Git for the last several years, so I feel quite confident in my abilities in this domain. I was asked why you can’t push to your feature branch when you’ve rebased your local branch on top of master. We had a good chat about force pushes (--with-lease) and protected branches.

But a topic we also covered was the git push syntax. There are many good learning opportunities in this command, even though it is composed of simple parts. My usual starting point is git push <remote> <local-branch>:<remote-branch>, which can look intimidating to many Git novices, but can be decomposed into understandable parts.

We first went into deleting a branch on the remote by pushing the “empty” branch to the remote branch you want to delete. For example, to delete the branch my-branch on the remote origin, you can run the command git push origin :my-branch. This is using the empty string as the local branch.
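Spelled out as commands (my-branch is of course a placeholder):

    # the general form: push <local-branch> to <remote-branch> on <remote>
    git push origin my-branch:my-branch

    # pushing the empty string as the local branch deletes the remote branch
    git push origin :my-branch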

But we have yet to cover my learning. I have always been a bit annoyed by the need to be verbose when no upstream branch has been set.

Running git push when no upstream branch is configured gives a fatal error.

The above example is reasonably easy, but many workflows involve feature/ and some form of issue id and task description as part of the branch name, and then it becomes a bother. But my colleague Christian, with whom I had been discussing this, simply used HEAD instead of the branch name.

Using HEAD instead of the branch name shortens the set-upstream push

This makes sense as we can dereference the HEAD pointer to the branch name, as seen by cat .git/HEAD.
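A small sketch of the trick, with a deliberately long, hypothetical branch name:

    # HEAD resolves to whatever branch is currently checked out
    git checkout -b feature/JIRA-1234-improve-push-ergonomics
    git push --set-upstream origin HEAD

    # HEAD is just a pointer to the current branch
    cat .git/HEAD
    # prints: ref: refs/heads/feature/JIRA-1234-improve-push-ergonomics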

This way of setting the upstream branch is much easier for me to grok, and I have now become more proficient in Git. Thank you!

The lemonade stand

The Lemonade Stand - A Teambuilding Exercise

I have a love/hate relationship with team building exercises. I think investing in culture and team health is essential. But when I look to find team building exercises to do, most are so far from being operational in my day to day work that it is difficult to see what the value is.

I do not remember the source of The Lemonade Stand, but now I’ve finally done the exercise with a team. Though I knew it as a co-located exercise, we did it in a distributed setting and it worked like a charm. The exercise spread a good mood and energy, and it was touching to hear the team members share.

Here comes my lemonade stand, and then I’ll walk through the parts:

Example Lemonade Stand

If it is not obvious from the above, the exercise is not about drawing skill šŸ™‚ The exercise is about what you bring to work.

You draw a lemonade stand, with a banner, a table, some posts to keep the banner up, and then of course the awesome host – you!

On top of the table you add the items that are your active skills that you bring to work. In the example above I’ve mentioned Git, Jira, DevOps and Agile.

Below the table is the secret menu: the skills that you bring but that are not immediately relevant to your work. In the example above I’ve written Public Speaking and Playing Music. While the connection is not obvious, it is potentially very nice to know that I can be approached if you need some pointers or feedback on a big presentation.

On the posts you put the things that you’d like to add to the table, the skills you would like to learn. For me that would be SAFe, Coaching techniques and building visualizations.

These would be the things in the original exercise, but I think we can add another useful layer.

On the host we put the things that motivate us in the heart, and our worries in the mind. In this example I’ve put “Teaching something” as a motivation and “Unsure if I provide value” as a worry. I’ve used them as examples because they are true.

The Agenda

We did this exercise in 90 minutes on a team of six. We had ample time and even got to discuss team names a bit.

I spent the first 5-10 minutes introducing the exercise, showing the example above.
I recommend putting a lot of emphasis on the fact that it is not a drawing exercise, and that there are no right or wrong answers.

We then spent 25 minutes drawing our lemonade stands including a five minute break.

We then took turns presenting our lemonade stands, and it was awesome! Presenting the lemonade stands was 5-10 minutes per person.

The Equipment

If you are co-located you can do just fine with some A3 sheets or flip charts and post-its. In this distributed setting I said that they could use Paint just fine; two of the participants used other drawing apps, such as draw.io.

I did not do anything particular to use a collaborative tool; the participants simply shared their screen when it was their turn. I recommend selecting the solution with the lowest barrier to entry, using whatever is already available in your ecosystem.

The Take-aways

The lemonade stand is a great exercise for getting to know each other without it becoming too kumbaya for my tastes. The feedback I got from the team was also very positive, although that might be biased because I asked for it as a vote in a public Slack channel. I remain positive though. The team shared their lemonade stands in their Slack channel, which gave some persistence to the whole exercise.

I recommend doing it with your team, and it will not be the last time I do it.

Faster React Pipelines with Github Actions Dependency Cache

Faster React builds with Github Actions Dependency Cache

I’ve been having a bunch of fun with JavaScript these days. I try to build stuff with React and serverless on AWS Lambda (side project: Cultooling).
In particular, JSX and ES6 features are making me feel quite productive. What I hate is the truckload of dependencies that I have to initialize and work with. I accept them and get by with them because of the ease of adding functionality. But what really grinds my gears is having to spend half of the compute time in my pipeline establishing dependencies. I publish my React application to S3 using Github Actions, so when they added a dependency cache I saw an option to cut some time off.

To production we go

The help documentation is very useful; I grabbed what I needed and added it to my existing Github Actions pipeline.

    - name: Cache node modules
      uses: actions/cache@v1
      with:
        # the directory to cache and restore
        path: node_modules
        # cache key: invalidated whenever package-lock.json changes
        key: ${{ runner.os }}-build-${{ hashFiles('**/package-lock.json') }}
        # fallback keys, tried in order, matched as prefixes of cached keys
        restore-keys: |
          ${{ runner.os }}-build-
          ${{ runner.os }}-

Using this action with this configuration creates a cache of the path node_modules, using a hash of package-lock.json as part of the key. This ensures cache hits as long as dependencies are not changing, and misses when they change. Just as we want. restore-keys are the fallback keys used to restore a cache when the primary key misses.
They are tried in order, and a hit occurs if a restore key is a prefix of a cached key.

If you want to use multiple cache paths, repeat this action for each.

Now drumroll and let’s look at the results:
[Screenshot: Cultooling CI pipeline run times, before and after enabling the cache]

While the pipeline is simple, half of the time spent running is on handling dependencies, so if we can get a nice speed up, simply by grabbing a tool off the Github shelf, I am not complaining.

The ease of S3 on AWS has also made me believe that I will always be hosting static files there from now on. I had some issues getting this to work due to the AWS CLI GH Action being broken. Luckily the fix is trivial, as the ubuntu-latest image that Github provides contains both npm and aws-cli.
I have shared my Github Actions file for this pipeline below. It will build and deploy to an S3 bucket, assuming correct permissions have been set up on AWS, and the secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have been set on the repo. Please tell me where I have done it wrong – otherwise enjoy!

name: CI

on:
  push:
    branches:
      - master

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - name: checkout
      uses: actions/checkout@v1

    - name: Cache node modules
      uses: actions/cache@v1
      with:
        path: node_modules
        key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
        restore-keys: |
          ${{ runner.os }}-node-
          ${{ runner.os }}-

    - name: Build and Test
      run: npm install && npm run build && CI=true npm test
    - name: Deploy to S3
      run: aws s3 sync build/ s3://cultooling.com --delete
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY:  ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: eu-west-1

I hope you found it useful – reach out on Twitter if you want to chat!

DOPIER Infrastructure

Infrastructure is all the buzz these days, and there are a lot of buzzwords around it: clusters, Kubernetes, serverless, on-premise, cloud, edge computing and tons of other more or less valuable terms. There are many different levels of progress along the axis of infrastructure, from those just becoming aware that infrastructure is something that exists, to those that are already far in their journey and have extra dope infrastructure. I have compiled a set of principles to help you make sane decisions about how you think about infrastructure. To help make your infrastructure DOPIER.

Deliberate

Too much infrastructure is ad hoc or arbitrary. We are not quite sure how Server A is spec’ed, why it has 8 GB of RAM, or whether it has an SSD or not. This is broken.

We can’t handle our uncertainty by ignoring it. It should be: this server has 2 CPUs because it is running Service X, and that is the requirement for its performance.

Our infrastructure should be as it is because we asked for it to be so, not because of some arbitrary boxing of machines, or because that is what we usually order.

Operable

Most companies have some degree of virtualization, and it has never been easier to obtain a machine somewhere.

This extremely low barrier to entry makes it much easier to forget that the work of making that machine run properly has only just begun once it is ordered. We forget things like updating the operating system, or the service that is running there. Often we do not have the skill set to do so in a professional manner.

It is important that we are able to do operations work on our infrastructure, at a reasonable level of abstraction. For some teams this means having full control of the entire hardware stack, configuring networks, firewalls and kernels. For other teams this means that a machine magically runs somewhere and appears updated once in a while.

There is no one true level of abstraction. What matters is that the infrastructure capabilities are rightsourced so you can spend your effort on what provides value to your customers. I probably have an unhealthy attraction to higher levels of abstraction, so for me serverless computing does look very interesting.

Plastic

Infrastructure is hard. It is hard to change once deployed. But this is an anti-pattern: infrastructure needs to be easy to change in sane ways. This does not mean that we should go completely anarchistic and wild west on our infrastructure. On the contrary, this requires more guidance and better tools, and is more difficult than simply having traditional IT operations. GitOps is one perspective on obtaining plastic infrastructure, although it seems a bit buzzwordy. The goal is to make infrastructure easy to change in sane ways but difficult to break.

We are in a time of change, so make sure that you are able to adapt quickly, in a safe manner.

Immutable

On the surface immutability conflicts with plasticity, but I use them on different levels. The infrastructure as a whole should be plastic; the atomic units that compose it should be immutable. In practice this means that if you are running a container somewhere, that container is uniquely versioned and traceable. There should be no doubt about exactly what code or version is running where.

Compose your plastic infrastructure from immutable atomic units!
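A hedged sketch of what uniquely versioned and traceable can look like in practice (the registry and service names are hypothetical): tag every container image with the commit it was built from.

    # build and publish an image tagged with the exact commit it came from
    docker build -t registry.example.com/service:"$(git rev-parse --short HEAD)" .
    docker push registry.example.com/service:"$(git rev-parse --short HEAD)"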

Existing

“If it is not under version control, it does not exist”

Originally this quote was targeted at source code, but in these times I think we should extend it to cover anything infrastructure as well. In our minds this should go from nice-to-have to simply a necessary fact. Not having things in version control is a risk that is both unnecessary and potentially catastrophic.

“If it is not monitored, it is not in production”

This is a corollary to the first quote, but it is the IT version of “Do you know where your child is, right now?”. Is your infrastructure happy, is it healthy? If you are unable to answer these questions easily, you will at some point be caught off guard. Personally I prefer my surprises to be of the “You did not expect this party!” kind, rather than the “You better call your spouse” kind. But if you prefer the latter, feel free not to monitor your services. The same way tests capture knowledge about your software, your monitoring configuration captures knowledge about your IT landscape.

Reproducible

At this point it feels like beating a dead horse, but the point still holds. One-offs should no longer be a part of the vocabulary of an IT professional. We need to be able to reproduce our infrastructure. In production we need to be able to reproduce our infrastructure as a means of disaster recovery. In the steps before production, readily available environments for staging and testing are key to the software delivery process. The value of having these environments, preferably scripted and automated, is hard to fathom before having lived it.

The best way to obtain reproducible infrastructure is to automate provisioning and keep the scripts under version control.
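As a minimal sketch, assuming Terraform and a version-controlled provisioning repository (the repository name and var file are hypothetical), recreating an environment then boils down to:

    # everything needed to reproduce the environment lives in version control
    git clone git@example.com:infra/provisioning.git && cd provisioning
    terraform init
    terraform apply -var-file=production.tfvars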

Making your infrastructure DOPIER

So now that we have established the properties of DOPIER infrastructure, how do we go about getting there? I think the most important thing is to start mapping out your world and being painfully honest. Enumerate the services that your team is running but not really doing operations on. Figure out how to make those parts of your infrastructure DOPIER. That should start the process, and the next steps will show themselves.