Intro to Computational Thinking

The foundations of computational thinking that are presented here are derived from the four pillars that were defined by Davidson and Murphy in their UPenn/Coursera computational thinking course. Other foundational models are out there, mostly overlapping but with their own nuances and foci.

There are many definitions of computational thinking out there. The one we will use is: Computational Thinking is the mental discipline of thinking about a problem using concepts from computer science, with the ultimate goal of solving the problem with the help of a computer.

The UPenn course uses this definition: “Computational Thinking is an approach to solving problems using concepts and ideas from computer science, and expressing solutions to those problems so that they can be run on a computer.”

Five Foundations of Computational Thinking

The UPenn four pillars are:

  1. Decomposition
  2. Pattern Recognition
  3. Data Representation and Abstraction
  4. Algorithms

We are going to adjust those to be five foundations:

  1. Abstraction
  2. Decomposition
  3. Pattern Recognition
  4. Data Representation
  5. Algorithms

Perhaps we can remember them as: ADPDA

Abstraction

Merriam-Webster defines abstraction as “the act or process of abstracting”, “the state of being abstracted”, or “an abstract idea or term”. But that just uses the same root word abstract.

Abstract has many definitions; the ones relevant for us are:

  • “n1a: relating to or involving general ideas or qualities rather than an actual object, person, etc.”
  • “n1c: insufficiently factual”
  • “n2: naming a quality apart from an object” (Naming!)
  • “n3a: dealing with a subject in its theoretical aspects”
  • “v2: to draw away the attention of”
  • “v5: to think about or understand by separating general qualities, concepts, etc. from specific objects or instances”

Wikipedia:Abstraction says “Thinking in abstractions is considered by anthropologists, archaeologists, and sociologists to be one of the key traits in modern human behaviour”.

Abstraction involves ignoring details that are unnecessary for the current issue at hand, and being able to talk about and reason about a thing apart from those details.

Abstraction is everywhere in our lives, so why treat it as a foundational idea of computational thinking? Because at the very bottom, computers are electronic devices operating on two signal levels we call bits (0 and 1), and so we must build abstractions on top and away from this baseline in order to do anything useful with computers.
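To make this concrete, here is a small sketch (in Python, with illustrative values) of abstraction layered on top of bits: the very same eight bits can be read as a number, and that number can in turn be read as a character. Each layer ignores the details of the one below it.

```python
# Raw signal levels: just 0s and 1s.
bits = "01000001"

# One abstraction up: interpret the bit pattern as an integer.
as_number = int(bits, 2)

# A higher abstraction: interpret that integer as a text character.
as_letter = chr(as_number)

print(as_number)  # 65
print(as_letter)  # A
```

Nothing about the bits themselves says "number" or "letter"; those are abstractions we impose so we can reason about the data at a useful level.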

A critical part of abstraction is naming. Naming things is fundamental to humans being able to think and reason about things. Indeed, some say that language is one of the critical aspects of being human.

When we come up with a new abstraction, we usually have to give it a name so that we can talk about it as a single thing. If you think about many of the “modern” words that you know – words that were invented recently – many of these words are names of new abstractions that have been devised. Social media is a name of an abstract class of computer applications that allow people to interact in some way. Post is the word (name) we use to talk about many kinds of messages we put out on social media.

Decomposition

We might think of decomposition as food rotting, but in computer science, decomposition is the act of breaking down a large, complex thing into smaller, more manageable, and understandable parts.

Wikipedia:Decomposition says “In computer science, decomposition is the process of identifying and organising a complex system into smaller components or layers of abstraction.”

That’s true, but decomposition applies not only to systems but also to problems and concepts. When we are faced with something that is overwhelming to think about, we use decomposition to deal with its sub-parts one at a time.

There are two ways to apply decomposition: top down and bottom up. Usually we end up doing some of both. In top down thinking, we start with the whole, break it into parts, and then if needed break those down further, until we have pieces that we can tackle. In bottom up thinking, we don’t yet have a good idea of how all the parts will fit together, but we start thinking about the smallest pieces that we know will be part of the whole thing.

Decomposition is another place where naming is critical: when we break a complex thing into subparts, we need names for those subparts. Decomposition and abstraction often work together, because once we decompose something, we need to think about each subpart as an abstraction, and then give it a name. When people started imagining the modern computer GUI, they decomposed the interaction and came up with many new words: window, icon, mouse, pointer (those four are known as WIMP), drag, drop, selection, cut, paste, etc.
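As a sketch of top-down decomposition in code (the task and all names here are hypothetical), consider producing an average-score report: the whole is broken into three named subparts, each simple enough to understand on its own.

```python
def load_scores(raw):
    """Subpart 1: turn raw text like '90,85,78' into numbers."""
    return [int(s) for s in raw.split(",")]

def average(scores):
    """Subpart 2: compute the mean of the scores."""
    return sum(scores) / len(scores)

def format_report(avg):
    """Subpart 3: present the result as readable text."""
    return f"Average score: {avg:.1f}"

def score_report(raw):
    """The whole task, expressed in terms of its named subparts."""
    return format_report(average(load_scores(raw)))

print(score_report("90,85,78"))  # Average score: 84.3
```

Notice that naming each subpart is what lets the top-level function read almost like a sentence.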

Pattern Recognition

Pattern recognition sounds pretty obvious, but it is worthwhile to treat it as a foundation of computational thinking because it is fundamental to computing. We need to be consciously looking for patterns such as:

  • When this happens, that should follow
  • This happens repeatedly, based on these constraints
  • If this condition is true, then this action should occur, or else that action should occur
  • How do separate steps fit together, and do they need to be ordered?

One aspect of pattern recognition is figuring out how the pattern both generalizes and specializes, and what exceptions there might be. Thinking about the most general case possible lets us define a pattern that is useful over the widest range of things. But then we need to know exactly how to make it specific for each thing; this involves parameterization. Finally, some things are so different that they must be exceptions to the pattern; we like to avoid exceptions, but sometimes they are unavoidable.
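The interplay of generalization, parameterization, and exceptions can be sketched with a toy example (the rules and word list here are illustrative, not a complete English pluralizer):

```python
# Truly irregular cases that break the pattern entirely.
IRREGULAR = {"mouse": "mice", "child": "children"}

def plural(word):
    """General pattern: add 's'. The word parameter specializes the
    pattern; a recognized sub-pattern and an exception table handle
    the cases the general rule gets wrong."""
    if word in IRREGULAR:        # exceptions to the pattern
        return IRREGULAR[word]
    if word.endswith("s"):       # a recognized sub-pattern
        return word + "es"
    return word + "s"            # the general rule

print(plural("cat"))    # cats
print(plural("bus"))    # buses
print(plural("mouse"))  # mice
```

The general rule covers the most words; the parameter makes it specific to one word; the exception table holds the cases that simply refuse to fit.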

Data Representation

This foundation and the next are more focused on the computing part of computational thinking.

Things in the real world have all sorts of information about them: people can be short, or tall; everything is of some color; things can be sharp, dull, hard, soft, brittle, flexible, and on and on. These are all attributes, or data about the things in our world.

If we are going to use computers to help us solve problems that involve concepts about real world things, we have to think about representing this data. This involves two aspects:

  1. What data needs to be represented?
  2. How should we represent it?

Remember, on computers, in the end everything is just bits. We need to use, make sense of, and decide what those bits mean to us, and how they represent our data. This heavily involves abstraction, but also involves knowing the constraints that bits and computers place on us.
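A small sketch of the two questions (the problem and attribute names are hypothetical): if our task is sorting people by height, the *what* question tells us to keep only name and height, ignoring color, sharpness, and every other attribute; the *how* question leads us to store height as an integer number of centimetres, which at bottom is just bits.

```python
# WHAT to represent: only the attributes the problem needs.
person = {"name": "Ada", "height_cm": 160}

# HOW it is represented: beneath the abstraction, the height is bits.
print(format(person["height_cm"], "08b"))  # 10100000
```

Choosing a representation (integer centimetres rather than, say, a free-text string like "about five foot three") is itself a constraint-driven decision that makes later computation possible.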

Algorithms

At the heart of computing is telling a computer to do something for us. This is where algorithms come in. Computational thinking must result in the creation or application of algorithms if it is to make a computer useful to us.

Wikipedia:Algorithm says “An algorithm is an unambiguous method of solving a specific problem,” but then goes on to say “In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.”

So an algorithm is a written set of instructions that, when followed, solves a problem or accomplishes a task. Humans have been using algorithms, in the sense of sets of instructions, essentially forever. A recipe is an algorithm. A step-by-step guide to fixing something or putting together furniture is an algorithm. However, these are usually very simple algorithms, being only a sequence of steps for one particular thing.

In computational thinking, we are more interested in algorithms that solve a general class of problems rather than one individual case. So we have to think hard about how our problem generalizes, and what parameters the algorithm needs in order to operate on a specific case of the problem.
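The difference between a one-off sequence of steps and a general algorithm can be sketched like this (a simple linear search, with illustrative inputs): because the list and the target are parameters rather than fixed values, one set of instructions solves a whole class of search problems.

```python
def find_index(items, target):
    """Return the position of target in items, or -1 if absent.
    The parameters are what make this general: it works for any
    list and any target, not one particular case."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

print(find_index([3, 1, 4, 1, 5], 4))   # 2
print(find_index(["a", "b"], "z"))      # -1
```

A recipe, by contrast, is written for one dish; it has no parameters to generalize over.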


My door poster on computational thinking.