Introduction to Natural Computation
Lecture 3: Cellular Automata
Peter Lewis

Overview of the Lecture
The aim of this lecture is to appreciate how highly complex global behaviour can emerge from very simple local interaction rules. Cellular automata are a great example of this. We will look at:
- What cellular automata are and how their rules are defined.
- Interesting global patterns with "simple" CAs.
- The effect of changing the rules in one and two dimensions.
- The relationship between the complexity of the rules and the global behaviour.
- Why CAs are interesting to Computer Scientists.

Taking Inspiration from Nature
[videos]

A Brief History of CAs
- Cellular automata and complexity are relatively new sciences.
- Von Neumann and Ulam created the CA concept in the 1940s. Their aim was to build self-replicating patterns, and hence self-reproducing robots.
- In the 1960s, Zuse even proposed that the universe is a cellular automaton (CA).
- Many others contributed: Moore, Toffoli, Margolus, etc.
- But CAs didn't take off until computers made their simulation easy.
- They were studied in detail and championed by Stephen Wolfram in the 1980s.
- Now CAs are studied in many different disciplines, with research relating to everything from glaciers to bird behaviour to riots.

So What is a Cellular Automaton?
A grid of cells, with each cell updating its value based on the values of its neighbouring cells. Typically cells are simply on or off (0 or 1).

Defining Cellular Automata
Suppose one has an infinite regular lattice of points, each capable of existing in various states S1, ..., Sk. Each lattice point has a well-defined system of m neighbours. The state of each point at time t + 1 is uniquely determined by the states of itself and all its neighbours at time t. Assuming that at time t only a finite set of points is active, one wants to know how the activation will spread.
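The definition above translates almost directly into code. Here is a minimal Python sketch (not from the lecture), assuming a two-state one-dimensional CA whose neighbourhood is the two adjacent cells, with wrap-around boundaries standing in for the infinite lattice; the names `step` and `spread` are illustrative only:

```python
def step(cells, rule):
    """Advance a 1D two-state CA by one time step.

    cells: list of 0/1 states.
    rule: function mapping (left, centre, right) at time t
          to the centre cell's state at time t + 1.
    Wrap-around boundaries are assumed.
    """
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

# Example rule: an activated cell stays activated and
# also activates both of its neighbours.
spread = lambda left, centre, right: 1 if (left or centre or right) else 0

cells = [0, 0, 0, 1, 0, 0, 0]   # a single activated cell
cells = step(cells, spread)     # the activation spreads outwards
print(cells)                    # → [0, 0, 1, 1, 1, 0, 0]
```

Because the rule is a plain function of the local neighbourhood, swapping in a different rule changes the global behaviour without touching the update machinery.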
Since the state of a cell in the lattice depends only on the state of itself and its neighbouring cells, all interactions are local. A set of defined rules determines how to compute the state of a cell at time t + 1, given the states of the neighbours at time t. But the best way to learn about CAs is to play with them!

A Very Simple CA
A very simple CA might work in the following way:
- Consider a line of cells, where each cell can take two possible states (e.g. on or off, 0 or 1).
- We begin by activating one of the cells (turning it on).
- The state of the cell influences the state of its neighbours and vice versa.
- In a simple CA, the influence (rules) might be quite straightforward. Suppose that if a cell is activated, then at the next time step it stays activated and also activates its neighbours.
The consequences of this are quite clear...

A Simple CA
We will use the following convention to express the rules by which the system evolves:
[Rule diagram: Current → Next]
In this one-dimensional CA, the neighbourhood is the two adjacent cells.

Another Simple CA
Here's another set of simple rules:
[Rule diagram: Current → Next]
A cell becomes blue if either of its neighbours was blue, and becomes white if both its neighbours were white.

Some Observations
- Our simple rule gives rise to a checkerboard pattern.
- This is somewhat similar to the waves we saw in the BZ chemical reaction: an activated site triggers its neighbours, and inhibits itself.
- Our current sets of simple rules give rise to some simple patterns. We wouldn't really expect anything complex to emerge from something so simple... right?
So the following simple rules should give rise to just another simple pattern:
[Rule diagram: Current → Next]
A cell turns blue when either (but not both) of its neighbours is blue; otherwise it turns white. This is independent of the cell's current state.

More Complex CA Behaviour
[Rule diagram: Current → Next]
Gives rise to: [figure]
Something more complex is happening here!
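The either-but-not-both rule above is simply XOR of the two neighbours (in Wolfram's numbering scheme this happens to be rule 90, though the slides do not name it). A short sketch, assuming cells beyond the ends are treated as white; printing successive generations draws the triangle pattern discussed next:

```python
def xor_step(cells):
    """Next generation: a cell is blue (1) iff exactly one of
    its two neighbours is blue (left XOR right), regardless of
    the cell's own state. Cells off the ends count as white."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ padded[i + 1]
            for i in range(1, len(padded) - 1)]

# Start from a single blue cell and print a few generations.
width = 16
cells = [0] * width
cells[width // 2] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = xor_step(cells)
```

Each row of `#` characters is one time step; stacking the rows vertically is what produces the self-similar triangular structure.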
The Sierpinski Gasket
As soon as every other cell is blue, the central region is completely deactivated, leaving only the end points. In the long term, this has an interesting effect: [figure]

Complex Global Behaviour: the Sierpinski Gasket
- The Sierpinski gasket is a common example of a fractal.
- The pattern is self-similar on different length scales: the smaller triangles show the same structure as the main triangle.

Emergence
- This is an example of emergence: an intricate pattern emerges from some very simple rules.
- Nonetheless, the pattern is still regular and predictable, even if it is somewhat surprising.
- We would naturally expect that a regular set of rules would give rise to regular (although possibly complex) patterns.
- However, at this point our intuition has already broken down: it turns out that simple, regular rules can give rise to very complex, irregular outcomes.

Self-Similarity
An object is said to be self-similar if it looks "roughly" the same on any scale. E.g. fractals, coastlines, ferns. [Videos]

Complex Behaviour from Simple Rules
Here's another simple rule:
[Rule diagram: Current → Next]
If the cell and its right-hand neighbour are both white, then the cell takes the colour of its left-hand neighbour. Otherwise, the cell takes the opposite colour to its left-hand neighbour.
What pattern do we expect from this?

Complex Behaviour from Simple Rules
[Rule diagram: Current → Next]
The first few time steps are: [figure]
Interesting...

Asymmetric Rules
- From this it is clear that the final pattern will be non-symmetric. This is not surprising, since the rules are not symmetric.
- But we still expect the pattern to be regular, since the rules are so simple.
After 100 generations, we get: [figure]

Chaos Rules!
After 500 generations, we get: [figure]
There is no pattern! Even after 500 iterations there is no discernible pattern. Workers in the field have run for many thousands of iterations and found no repeating patterns, even under strict statistical analysis.
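The asymmetric rule above can be written as a one-liner: the new state is the left neighbour if centre and right are both white, otherwise the opposite of the left neighbour, i.e. left XOR (centre OR right). Working through the eight neighbourhoods, this appears to be Wolfram's rule 30, though the slides do not say so. A sketch, again assuming white cells beyond the ends:

```python
def chaotic_step(cells):
    """One step of the asymmetric rule: if the cell and its
    right-hand neighbour are both white, the cell takes the
    colour of its left-hand neighbour; otherwise it takes the
    opposite colour. Equivalently: left XOR (centre OR right).
    Cells beyond the ends count as white (0)."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Evolve a single blue cell and watch regularity fail to appear.
width = 32
cells = [0] * width
cells[width // 2] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = chaotic_step(cells)
```

Unlike the XOR rule, the printed rows quickly lose any obvious symmetry or repetition, which is exactly the behaviour described in the slides.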
For a fascinating and detailed study of these kinds of CA, see Wolfram's book A New Kind of Science. The key point is that, even given some very simple rules, a system can still develop into something incredibly complex and detailed.

Conway's Game of Life
This is a 2D cellular automaton, invented in 1970 by the mathematician John Conway. Each cell is either populated or unpopulated.

The Rules
For a cell that is populated:
- Each cell with one or no neighbours dies, as if by loneliness.
- Each cell with four or more neighbours dies, as if by overpopulation.
- Each cell with two or three neighbours survives.
For a cell that is unpopulated:
- Each cell with three neighbours becomes populated.
[Simulation]

Wolfram's Rule Number 110
The following is Wolfram's rule number 110:
[Rule diagram: Current → Next]
This incredibly simple one-dimensional two-state rule has been proven capable of universal computation. It's Turing complete! In other words, it is capable of calculating anything that can be computed by the world's most powerful supercomputers. Life is also Turing complete.

Conclusions
- We began by looking at some patterns and interactions in natural systems.
- We defined the concept of a CA, and we've studied a few CAs governed by simple rules.
- We looked at one-dimensional CAs and the two-dimensional Conway's Game of Life.
- We've learnt that though CAs are easy to describe, they can give rise to some extremely complex and unpredictable behaviour.
- Furthermore, CAs are interesting to Computer Scientists, since they are actually capable of computation!
- So, when taking inspiration from natural systems in designing computational systems, even complex behaviours can often be achieved with very simple interactions. But finding the local rules is not easy.

Further Reading
Wolfram S. Chapter 1: The Foundations for a New Kind of Science. In: A New Kind of Science. Wolfram Media; 2002. Available from: http://www.wolframscience.com/nksonline/toc.html.
Conway's Game of Life.
Available from: http://www.bitstorm.org/gameoflife/.
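As a closing aside, the four Game of Life rules listed earlier translate directly into a short function. This is an illustrative sketch, not Conway's or the lecturer's code, assuming a finite grid of 0/1 values where cells outside the grid count as unpopulated:

```python
def life_step(grid):
    """One generation of Conway's Game of Life.
    grid: list of lists of 0/1; cells outside the grid
    are treated as unpopulated."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        # Count populated cells among the up-to-8 surrounding cells.
        return sum(grid[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = neighbours(r, c)
            if grid[r][c]:
                # Survives with 2 or 3 neighbours; otherwise dies
                # of loneliness (<= 1) or overpopulation (>= 4).
                new[r][c] = 1 if n in (2, 3) else 0
            else:
                # An unpopulated cell with exactly 3 neighbours
                # becomes populated.
                new[r][c] = 1 if n == 3 else 0
    return new

# A "blinker": three cells in a row oscillate with period 2.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Applying `life_step` twice returns the blinker to its starting configuration, which is the simplest way to check the implementation against a known pattern.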