No person can function properly without building mental models to understand the world around them. Reality is simply too complex to deal with as is, so we need to abstract it.
Software is no different. The complexity of software systems only keeps growing. As a result, the ability to quickly build mental models and use them to reason about these systems is an important software engineering skill.
Good mental models matter because they let us communicate more effectively with other software engineers, explain our ideas, and understand the impact of our changes on our software.
Building mental models
Reasoning about software systems requires building mental models that represent them. The granularity of these models depends on their purpose. Often, one level of granularity is not sufficient. The most effective software engineers can quickly build a few models and switch between them depending on the situation.
For example, if your team owns a few services, you can draw a lines-and-boxes diagram where boxes represent services and lines represent dependencies. You can then zoom in and build a model for each service. These service-level models could illustrate interactions between the libraries a service consists of. If you want more details, you can create a class diagram. You can continue zooming in and focus on methods, code blocks, statements, etc.
Each of these models describes your system at a different level of granularity and has a unique set of applications. The high-level model could be useful to troubleshoot larger outages (or when talking with your director) but is unlikely to help you fix a small bug. The more detailed models are best suited for solving gnarly issues but won't be helpful when explaining your infrastructure to other teams.
Building models covering different aspects of the same system is also common. If you want to analyze your system from the security perspective, your model will include different details than when focusing on performance.
Caveats
Mental models are so natural to people that we often forget about their flaws.
All models are wrong
Models, by definition, ignore details. As they only capture certain aspects of reality, they are inaccurate. Furthermore, some relevant information is often omitted because it doesn't fit the model. Edge cases are an excellent example.
For instance, when software developers explain how their code works, they rarely mention error cases. They focus on their ifs and fors and the program flow but omit exceptions because exceptions make their model murkier. The problem is that error scenarios are important. Incorrect or missing error handling is a common cause of system outages.
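To make this concrete, here is a minimal Python sketch; the URL and function names are hypothetical and exist only for illustration. The first function matches the mental model most explanations convey; the second adds the error cases that the model usually leaves out.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical endpoint used only for illustration.
CONFIG_URL = "https://example.com/config.json"


def load_config_happy_path() -> dict:
    """The version we describe when explaining "how the code works":
    fetch the config and parse it. Errors don't exist in this model."""
    with urlopen(CONFIG_URL) as response:
        return json.loads(response.read())


def load_config_realistic() -> dict:
    """The same flow, plus the error cases the mental model tends to omit."""
    try:
        with urlopen(CONFIG_URL, timeout=5) as response:
            return json.loads(response.read())
    except (URLError, TimeoutError):
        # Network failure or timeout: fall back to safe defaults.
        return {}
    except json.JSONDecodeError:
        # The endpoint responded, but with malformed data.
        return {}
```

The realistic version is roughly twice as long even though the "what it does" summary hasn't changed, which is exactly why error handling tends to fall out of the model.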
Models get more wrong with time
The world, including our software systems, is in constant flux. But mental models don't automatically keep up with changing reality. Outdated mental models lead to misunderstandings and bad decisions.
I experienced this very problem recently. I started working on a feature that depended on a system I had never seen change. Everything was going swimmingly, and I only needed to tie up a few loose ends related to some new data requirements. While doing so, I discovered, to my dismay, that the system I depended on had recently changed: it had been updated to accommodate the very data requirements I was struggling with. Making my feature work now required additional information I didn't have, and plumbing this data through meant revisiting my implementation. Working off an outdated mental model cost me a second implementation of my feature.
No two models are identical
Building a mental model requires deciding which details matter for its purpose. However, even when that purpose is well understood, different people will consider different information relevant.
Also, mental models built at different times will naturally differ because they capture different system versions.
What is interesting about this phenomenon is that the overlap between mental models built by different people is usually significant. The differences often surface unexpectedly, e.g., when discussing small but important details.
Thanks for reading!
-Pawel