How to make better decisions with Second-Order Thinking


In the complex world of software development and management, the quality of our thinking often determines the success of our projects and teams. Mental models and cognitive biases play a crucial role in how we approach problems, make decisions, and interact with others.

This article focuses on critical mental models that enhance problem-solving and decision-making. We'll also take a deep dive into common cognitive biases that can derail even the most experienced professionals. Understanding these concepts will help you think more clearly and make better decisions.

We will cover the following mental models and biases:

  1. ➡️ Second-Order Thinking

  2. 🧠 First-Principles Thinking

  3. 🔵 Circle of Competence

  4. 🔄 Inversion

  5. 🎯 Pareto Principle (the 80/20 rule)

  6. 🚧 Sunk Cost Fallacy

  7. 👀 Confirmation Bias

  8. 🔑 Type 1 vs Type 2 decisions

  9. ✂️ Occam’s Razor

  10. 🤔 Dunning-Kruger Effect

  11. 🗺️ The Map is not the Territory

So, let’s dive in.

When reading this article on the web, you'll see a helpful “table of contents” navigation bar as short lines on the left side of the article. You can click on it to jump between sections.

Join CTOs, architects, and broader tech teams who trust Catio for their tech stack management. Gain real-time observability, 24/7 AI-driven insights, and data-driven planning capabilities for your tech architecture. Ready to architect with AI expertise and use Second-Order Thinking to excel with your tech stack? Book a demo today!


We all make decisions that we later regret. Although it’s tough to predict the future, the main issue with many of our choices is that we focus only on the immediate payoff. We might choose a company because of the high salary, only to stagnate there, discover a bad culture, or be expected to work long hours. In the long run, we conclude that we made the wrong decision. This is a classic example of first-order thinking: it happens whenever we prioritize immediate rewards over long-term growth.

The shortcomings of first-order thinking are significant.

Yet, we can use second-order thinking ➡️ to improve our decisions significantly. Howard Marks coined this term in his book The Most Important Thing, which discusses investment psychology in volatile markets. He says, “First-level thinking is simplistic and superficial, and just about everyone can do it…”, but “Second-level thinking is deep, complex, and convoluted.”
Here are some ways we can put second-order thinking into practice:

  1. 🤔 Start with self-questioning. After making a decision or judgment, ask yourself: "And then what?" This simple question can uncover the next layer of consequences.

  2. 🖼️ Visualize. Create a visual representation of your decision. Start with the primary outcome in the center, then branch out to secondary, tertiary, and further consequences.

  3. 🧐 Challenge your assumptions. Play devil’s advocate and ask, “What if this fails? What is the probability I’m right?”

  4. 📜 Review your past decisions. Based on history and trends, what unforeseen consequences could arise?

  5. 💬 Engage in group think tanks. Organize brainstorming sessions where team members explore the potential outcomes of a decision. Collective thinking often reveals consequences an individual might miss.

  6. 🗓️ Reflect. Set aside time, weekly or monthly, to reflect on your decisions. Consider what went as expected and what unforeseen consequences arose. You can ask yourself, “How will I feel about this decision 10 minutes, 10 days, or 10 months from now?”

Second-order thinking can also be applied in software engineering. Before implementing a new feature, consider how users will adapt to or misuse it, and how it will impact system performance or security.
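To make the "And then what?" exercise concrete, here is a minimal Python sketch of a small consequence tree for a decision. The decision and every consequence below are hypothetical, made up purely for illustration:

```python
# A minimal sketch of an "And then what?" consequence tree.
# The decision and every consequence below are hypothetical examples.

from dataclasses import dataclass, field


@dataclass
class Consequence:
    description: str
    then_what: list["Consequence"] = field(default_factory=list)


decision = Consequence(
    "Add aggressive caching to speed up the product page",
    then_what=[
        Consequence(
            "Page latency drops (the first-order win)",
            then_what=[
                Consequence("Users may see stale prices until the cache expires"),
                Consequence(
                    "Cache invalidation logic becomes a new source of bugs",
                    then_what=[Consequence("More on-call incidents around releases")],
                ),
            ],
        ),
    ],
)


def walk(node: Consequence, depth: int = 0) -> None:
    """Print each consequence, indented by how many 'And then what?' steps away it is."""
    print("  " * depth + f"- {node.description}")
    for nxt in node.then_what:
        walk(nxt, depth + 1)


walk(decision)
```

Writing the tree down forces us to name at least one second- and third-order consequence before committing to the first-order win.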

Making good decisions is crucial in the complex world of software engineering. Yet, pursuing "good" decisions can be misleading. Instead, we should focus on making decisions using reliable thinking methods. This is where mental models and cognitive biases come into play.

Mental models are a set of beliefs and ideas that help us understand the world based on our experiences. These models can be powerful tools for problem-solving, design, and project management in software engineering.

Cognitive biases, on the other hand, are systematic patterns of deviation from rational judgment. These biases affect how we perceive, process, and interpret information. They often result from mental shortcuts (heuristics) our brain uses to make quick decisions.

Mental models help us think more effectively, whereas cognitive biases can distort our thinking, leading to flawed decisions and misjudgments. Understanding both helps us improve decision-making and avoid common pitfalls in reasoning.
I’m a big fan of mental models and biases. Below are some of my favorites, in addition to the previously mentioned second-order thinking, that I regularly use in my daily work.

First-Principles Thinking

First-principles thinking involves breaking down complex problems into their fundamental components. Instead of relying on assumptions or analogies, this model encourages engineers to analyze the essential truths of a problem. In software engineering, this approach can lead to innovative solutions by allowing developers to rethink existing frameworks and designs. For instance, when faced with a performance issue, an engineer might deconstruct the system to identify the root cause instead of applying conventional fixes. This leads to more effective and original solutions.
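As a hedged illustration of measuring from first principles rather than guessing, the sketch below times each stage of a hypothetical request pipeline (the stage functions and their durations are invented) to locate the real bottleneck before choosing a fix:

```python
# A minimal sketch: time each stage of a (hypothetical) request pipeline
# before deciding what to optimize, instead of assuming the usual suspects.

import time


def timed(label, fn):
    """Run fn, print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result


# Hypothetical stages; in a real system these would be your actual code paths.
def parse_request():
    time.sleep(0.005)


def query_database():
    time.sleep(0.120)  # the real bottleneck in this made-up example


def render_response():
    time.sleep(0.010)


timed("parse request", parse_request)
timed("query database", query_database)   # measurement, not intuition, points here
timed("render response", render_response)
```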

Read more about First-principles thinking.

The Circle of Competence

The Circle of Competence is a mental model that encourages us to work in areas where we have deep expertise, make good decisions, and avoid costly mistakes. Everyone has a "circle" where they are competent, and their knowledge is broad and deep. Beyond that circle, you're venturing into areas where mistakes are more likely.

It consists of several concentric circles: the things we know well, the things we know we don’t know, and the things we don’t even know exist.

However, it’s not about staying in the circle where we know it all; growing our circle is critical to proper career and personal development. By expanding our knowledge through continuous learning and experience, we can gradually increase the areas in which we're competent.

In software engineering, this might involve mastering new technologies or techniques while staying grounded in your core strengths. The goal is to know where your expertise ends and how to strategically grow it over time without overextending into unknown territory.

The most significant growth happens in **the area of things we aren’t aware of and don’t know exist**, so we must seek guidance or advice from somebody who can lead us in this direction, or explore new territories ourselves by going to conferences, taking stretch roles, and more.

Inversion

Inversion involves approaching problems backward. Instead of asking, "How can we make this system work?" we might ask, "What could cause this system to fail?" This perspective can be invaluable in software engineering, particularly in error handling, security, and reliability. By anticipating potential failures, we can build more robust and resilient systems.
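One concrete way to practice inversion in code is to test the failure paths explicitly, not just the happy path. Below is a minimal sketch under assumed names (the checkout function and payment gateway are hypothetical):

```python
# A minimal sketch of inversion applied to testing: start from the question
# "What could cause this to fail?" and cover those paths explicitly.
# The checkout function and payment gateway below are hypothetical.

def checkout(charge_fn, amount):
    """Charge the customer; fall back to a queued retry if the gateway is down."""
    try:
        return charge_fn(amount)
    except TimeoutError:
        return "queued_for_retry"


def test_checkout_survives_payment_timeout():
    def failing_gateway(_amount):
        raise TimeoutError("payment gateway unreachable")

    assert checkout(failing_gateway, 42) == "queued_for_retry"


test_checkout_survives_payment_timeout()
print("failure-path test passed")
```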

Many famous thinkers used this method, including Charlie Munger. Read more about this in the bonus section.

The Pareto Principle


The Pareto Principle suggests that roughly 80% of effects come from 20% of causes. This principle can guide our focus and resource allocation. For instance, most of a system's performance issues often stem from 20% of its components. By identifying and optimizing these critical areas, we can achieve significant improvements with minimal effort.
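As a small illustration, a quick profile usually exposes the handful of functions where most of the runtime is spent. The workload below is hypothetical, but `cProfile` and `pstats` are part of Python's standard library:

```python
# A minimal sketch: profile a (hypothetical) workload to find the ~20% of
# functions responsible for most of the runtime, then optimize only those.

import cProfile
import pstats


def cheap_step():
    return sum(range(1_000))


def expensive_step():
    return sum(i * i for i in range(200_000))


def workload():
    for _ in range(50):
        cheap_step()
        expensive_step()


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Show the few functions that dominate total time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```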

The same principle applies well beyond engineering, to almost every part of our lives.

Sunk Cost Fallacy

The sunk cost fallacy refers to the tendency to keep investing in a project because of the resources already poured into it, even when continuing no longer makes sense. For example, if a project is failing but significant resources have already been allocated, it may be tempting to push forward anyway. Understanding the sunk cost fallacy allows teams to pivot or abandon projects that no longer align with their goals.

Confirmation Bias

Confirmation bias is the tendency to search for, interpret, favor, and recall information that confirms or supports one's prior beliefs or values. This phenomenon often leads us to favor information supporting our views while disregarding or minimizing evidence contradicting them.

In software engineering, confirmation bias can manifest in several ways.

To mitigate this bias, we should look at problems from different perspectives, rely on data-driven decision-making, work in iterative processes with continuous feedback and adaptation, and apply critical thinking when solving problems.

Type 1 vs Type 2 decisions

This model, popularized by Jeff Bezos, distinguishes between:

  1. 🚀 **Type 1: irreversible, consequential decisions.** Once made, these decisions are difficult or impossible to undo. They require careful deliberation and a deep understanding of potential risks and rewards, as a wrong move can significantly impact the business or organization. Examples include choosing a tech stack or deciding between a monolith and a microservices architecture.

  2. 🔙 **Type 2: reversible, less impactful decisions.** These decisions allow for more agility and experimentation because the consequences are less significant, and corrections can be made quickly. Examples include refactoring a specific function or A/B testing a feature.

Recognizing the difference can help us allocate our decision-making resources more effectively.

Occam's Razor

The simplest explanation is usually the correct one. This principle favors simpler solutions over complex ones when possible. In software engineering, this idea translates to favoring minimalism and simplicity over complexity, whether in system design, code structure, or problem-solving.

Occam’s Razor can be applied in many software engineering scenarios, for example when choosing between a simple design and an over-engineered one, as in the sketch below.
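As a small, hedged sketch (the discount requirement is invented), here is the same lookup expressed as an over-engineered class hierarchy and as the simpler mapping Occam’s Razor would point us toward:

```python
# A minimal sketch of Occam's Razor in code: prefer the simplest design that
# meets the actual requirement. The discount rules below are hypothetical.

# Over-engineered version: a class hierarchy for what is really a lookup table.
class DiscountPolicy:
    def rate(self, tier: str) -> float:
        raise NotImplementedError


class GoldPolicy(DiscountPolicy):
    def rate(self, tier: str) -> float:
        return 0.20


class SilverPolicy(DiscountPolicy):
    def rate(self, tier: str) -> float:
        return 0.10


# Simpler version: a plain mapping covers the same requirement with less code
# to read, test, and maintain.
DISCOUNTS = {"gold": 0.20, "silver": 0.10, "bronze": 0.05}


def discount_rate(tier: str) -> float:
    return DISCOUNTS.get(tier, 0.0)


print(discount_rate("gold"))     # 0.2
print(discount_rate("unknown"))  # 0.0
```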

The Dunning-Kruger Effect


The Dunning-Kruger Effect describes how people with limited knowledge in a domain may overestimate their competence (an illusion of superiority), while experts tend to underestimate theirs. Awareness of this cognitive bias in software engineering can foster humility and continuous learning. It encourages us to seek peer reviews, share knowledge, and approach complex problems with an open mind.

Parkinson's Law

Parkinson's law states that work expands to fill the time available for its completion. In software projects, this can manifest as scope creep or inefficient use of development time. Recognizing this tendency can help us set more realistic deadlines, break work into smaller, time-boxed tasks, and maintain focus on delivering value.

Deliberate constraints like these, realistic deadlines and time-boxed tasks, are the main ways to fight Parkinson's Law.

The Map is not the Territory

I learned this powerful mental model during my NLP Practitioner training, and it keeps revealing new perspectives. The phrase was coined by Alfred Korzybski in 1931, and it means that a map (our perception) is only a representation, an interpretation, of reality (the territory).

We should always remember that our views of things are only maps, not the actual territory, because maps cannot fully capture the complexity of the real world. We still need maps (models) for analysis and planning; they help us navigate uncertainty and complexity.

For example, in software engineering, design diagrams and system architectures are just abstractions: they help guide development, but they don't account for all the dynamic variables that emerge during implementation. Recognizing this helps us stay adaptable, question assumptions, and continuously adjust based on actual results rather than rigidly enforcing initial plans.

“All models are wrong, but some are useful.” - George Box

Map of all cognitive biases
