Today’s Daily (pseudo)Science Update from the Institute for Creation Research’s Brian Thomas is called IBM Attempts to Build Computer ‘Brain’, and waxes lyrical about how IBM – the folk who brought us the hard drive, the ATM, Deep Blue, Watson and appreciable portions of the Linux kernel, among much else – are taking inspiration from the human brain. Here’s his conclusion:
Some of the best and brightest engineering brains are involved in seeing this project to completion. If and when they succeed, they will also have succeeded in proving that the human brain they used as their model could only have been created through intelligently and purposefully directed power. Something that intricately designed could never have “just happened.”
This post has long been my go-to whenever what natural selection can achieve is brought up in the context of what we as humans can design. Now is the time, I think, to expand on that.
In that post I mentioned Adrian Thompson’s experiments with evolving a circuit to differentiate between two input frequencies and produce set outputs. He used a square array of logic gates, each able to pass a signal on to its neighbours until it reached the output. After a little over five thousand generations of mutation and selection, he got the following circuit:
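For a flavour of how this sort of thing works, here’s a toy Python sketch of the mutate-and-select loop. It is nothing like Thompson’s actual FPGA experiment – the gates, the wiring scheme and the target behaviour here are all invented for illustration – but the principle is the same: mutate a genome that wires up a small network of logic gates, and keep any change that performs at least as well as what came before.

```python
import random

# Toy sketch (NOT Thompson's setup): a genome of (gate, input_a, input_b)
# triples wires up a small feed-forward network of two-input logic gates.
GATES = [
    lambda a, b: a & b,        # AND
    lambda a, b: a | b,        # OR
    lambda a, b: 1 - (a & b),  # NAND
    lambda a, b: a ^ b,        # XOR
]

def target(bits):
    return sum(bits) % 2       # the behaviour we select for: odd parity

def build(genome):
    """Turn a genome into a circuit function."""
    def circuit(inputs):
        values = list(inputs)
        for gate, a, b in genome:
            values.append(GATES[gate](values[a % len(values)],
                                      values[b % len(values)]))
        return values[-1]      # the last gate is the circuit's output
    return circuit

def fitness(genome, cases):
    circuit = build(genome)
    return sum(circuit(c) == target(c) for c in cases)

def mutate(genome):
    """Randomly change one gate type or one wire in one node."""
    new = [list(node) for node in genome]
    i, j = random.randrange(len(new)), random.randrange(3)
    new[i][j] = random.randrange(len(GATES) if j == 0 else 8)
    return [tuple(node) for node in new]

random.seed(1)
cases = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(16)]
genome = [(random.randrange(len(GATES)), random.randrange(8),
           random.randrange(8)) for _ in range(6)]

start = fitness(genome, cases)
for _ in range(5000):                        # ~5000 generations, as above
    child = mutate(genome)
    if fitness(child, cases) >= fitness(genome, cases):
        genome = child                       # selection: keep what works
best = fitness(genome, cases)
print(f"fitness: {start} -> {best} out of {len(cases)}")
```

No designer specifies the final wiring; the loop simply keeps whatever works, which is exactly why the end result is “not something any human would have designed.”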
As I said at the time, it’s “not something any human would have designed.”
But it is, to a certain extent, analogous to the brain. We have individual ‘neurones’ – the cells on the circuit – which do simple things to their inputs and then pass on their outputs to nearby cells. An input goes in, the circuit recognises a pattern (traditionally a task that computers do very poorly) and an output is produced. What goes on in between is rather complex, but we don’t really care about it – so long as we can mould it by mutating the ‘genetic code’ that produces a specific iteration and select between the results, or use some other kind of learning process.
The team at IBM are not going quite as far as that. They are making chips with ‘neurons’ on them, with an architecture inspired somewhat by the macro-structure of the brain and with components like relays already built in. Here’s one of the many videos from the article about the project that Mr Thomas cites:
“We try to draw as much from the brain-like architecture as we can, and it’s really because the brain does a good job, and its architecture seems to be efficient. It’s not that we know it’s optimal – it’s just that it exists and it works… We can’t find anything better than the brain.” – John Arthur
That quote that I’ve pulled (it’s near the end) should really be enough to demolish much of Brian Thomas’ article, which is why I’m not going to concentrate on it too much. Notably, he also quotes it, but only the latter part.
So as we can see, the idea is to build artificial ‘neural nets,’ modelled on the networks of neurons in a brain. These can be used to overcome the limitations of the present paradigm in computing, but they can also be used to show how you could get a complex ‘computer’ like the brain – one that is actually good at pattern recognition – basically from nothing and entirely without supernatural intervention.
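As a very rough illustration of that point – real neuromorphic chips and serious neural-net training are far more sophisticated, and everything here (the network shape, the mutation size, the XOR task) is an assumption made purely for the sketch – here’s a tiny ‘neural network’ whose weights are shaped by nothing more than blind mutation and selection:

```python
import math
import random

# A tiny 2-input -> 2-hidden -> 1-output network. w is a flat list of
# 9 weights: two hidden units (2 weights + 1 bias each), then the
# output unit (2 weights + 1 bias).
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(w, x):
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

def error(w):
    """Total squared error over all four XOR cases."""
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(9)]
start = error(w)
for _ in range(20000):
    child = [wi + random.gauss(0, 0.2) for wi in w]  # mutate every weight
    if error(child) <= error(w):                     # keep the better net
        w = child
print(f"error: {start:.3f} -> {error(w):.3f}")
```

There is no explicit training algorithm here at all – just variation and selection – and yet the error only ever goes down. That is the whole point: ‘design’ can accumulate without a designer.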
I’ve been talking about eyes a lot lately, and how they see. I’ll continue with this as it is arguably the best way to show the power of the concept, and I mentioned at one point that they can explain the processing that goes on after the signals from the retina are received.
Take the Euglena. It’s an interesting single-celled protist, with characteristics that we associate, in more complex life, with both plants and animals. It’s also a common feature in one of the papers in Level 2 Biology, the practice exam for which is on Monday for me – which means that this counts as study 🙂
The Euglena can propel itself through the water using its flagellum. It can absorb some of the nutrients it needs through the cell membrane, but you can also see that it has chloroplasts, which it can use as a plant does. It also has an ‘eyespot’ (orange, or somewhere near there) that can detect the level of ambient light in its surroundings, helping it find the best spot for photosynthesis.
But what if we take a larger, multicellular organism with a large number of cells all dedicated to sensing light? How do we take an input from a few thousand individual cells that was formerly used just to detect light levels (which it did very well), and get as much usable information out of it as we can – all while using only the processes of natural selection?
One way we can do this is to implement some mechanism of edge detection. For example, we might imagine that a simple patch of light-sensitive cells has grown into a cup shape – an early step on a path that could eventually lead to the configuration we see in the human eye. The edges of this cup shade part of the eye’s surface when light comes in at certain angles. This can be used to find the direction of the light – provided that we can find that edge.
There is a kind of logic gate called the XOR gate. It takes two binary (on or off) inputs and outputs ‘on’ only when the two inputs differ. If the input data is simply that of light-receiving cells either firing or not, depending on light levels, we can see how a simple ‘neuron’ cell exhibiting XOR-like behaviour could be hooked up to two receptor cells; when it fires, it tells the brain that there is an edge somewhere nearby. This tells the organism that if it moves its ‘eyes’ it will get a different reading for light levels, which gives it more information about its surroundings. That is a competitive advantage over other organisms, and not a difficult one to acquire.
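The XOR-neuron idea can be sketched in a few lines of Python (a deliberately simplistic model – real neurons are not clean logic gates, and the one-dimensional ‘strip’ of receptors is just a convenient stand-in for the cup’s surface):

```python
def xor_neuron(a, b):
    """Fires (True) only when its two receptor inputs disagree."""
    return a != b

def find_edges(receptors):
    """Positions where adjacent receptors disagree, i.e. edges."""
    return [i for i in range(len(receptors) - 1)
            if xor_neuron(receptors[i], receptors[i + 1])]

# A 1-D strip of receptors: lit (1) on the left, shaded (0) on the
# right by the cup's rim.
strip = [1, 1, 1, 1, 0, 0, 0]
print(find_edges(strip))  # → [3], the single light/shadow boundary
```

Each ‘neuron’ only ever looks at two neighbouring cells, yet the population of them, taken together, locates the shadow’s edge – and hence the direction of the light.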
This can be taken further quite easily. Not many more nodes are required to set up a system where only the cells that are near an edge have their signal relayed back to the brain. This is used in modern eyes to help with compression for data transmission on the optic nerve – much of this simple stuff can be done in the eye itself (which is, after all, literally an extension of the brain in vertebrates). Your brain fills in the centre of objects from the information it gets about their edges, one of the ways that the blind spot can be hidden so readily.
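Here is a sketch of that compression idea – again purely illustrative, not a model of the actual retinal circuitry: send only the first reading and the positions where the signal changes, and let the ‘brain’ fill in everything in between.

```python
def compress(receptors):
    """Keep only the first value and the positions of the edges."""
    edges = [i for i in range(1, len(receptors))
             if receptors[i] != receptors[i - 1]]
    return receptors[0], edges, len(receptors)

def reconstruct(first, edges, length):
    """The 'brain' fills in the flat regions between edges."""
    out, value = [], first
    for i in range(length):
        if i in edges:
            value = 1 - value  # the signal flips at each edge
        out.append(value)
    return out

row = [0, 0, 1, 1, 1, 0, 0, 0, 1]
first, edges, n = compress(row)
print(edges)                                 # → [2, 5, 8]
assert reconstruct(first, edges, n) == row   # nothing is lost
```

Nine receptor readings become one starting value plus three edge positions, and the original is recovered exactly – the same trade-off that lets the optic nerve carry far less data than the retina collects, and that lets the brain paper over the blind spot.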
It’s getting late – part two will arrive… Tomorrow? After exams? Who knows!
According to my amazing advanced warning system – this page – a whole bunch of new articles have been posted at the ICR but have not appeared anywhere yet. I’m guessing that they make up September’s Acts and ‘Facts’ magazine – here’s my post for August – which is due to appear on the front page of the ICR’s website any day now. This is good – their appearance generally coincides with a week that is thin on the ground when it comes to DpSUs, which would be perfect if that is next week, as I probably won’t have much time to post on them due to the ‘Mock’ exams I have Monday to Thursday. If I disappear completely, don’t call the police until after then. 😀