Neural Nets, Part One

Today’s Daily (pseudo)Science Update from the Institute for Creation Research’s Brian Thomas is called IBM Attempts to Build Computer ‘Brain’, and waxes lyrical about how IBM – the folk who brought us the hard drive, the ATM, Deep Blue, Watson and appreciable portions of the Linux kernel, among much else – are taking the human brain as their inspiration. Here’s his conclusion:

Some of the best and brightest engineering brains are involved in seeing this project to completion. If and when they succeed, they will also have succeeded in proving that the human brain they used as their model could only have been created through intelligently and purposefully directed power. Something that intricately designed could never have “just happened.”

One of my older posts has long been my go-to whenever what Natural Selection can achieve is brought up against what we as humans can design. Now, I think, is the time to expand on it.

Neurons

In that post I mentioned the experiments of Adrian Thompson, who evolved a circuit to differentiate between two input frequencies and produce set outputs. He used a square array of logic gates, each able to pass a signal on to its neighbours until it reached the output. After a little over five thousand generations of mutation and selection, he got the following circuit:

The 'Winner' - the best circuit after 5000 generations

As I said at the time, it’s “not something any human would have designed.”

But it is, to a certain extent, analogous to the brain. We have individual ‘neurones’ – the cells of the array – which do simple things to their inputs and then pass their outputs on to nearby cells. An input goes in, the circuit recognises a pattern (traditionally a task that computers do very poorly) and an output is produced. What goes on in between is rather complex, but we don’t really care about it – so long as we can mould it by mutating the ‘genetic code’ that produces a specific iteration and selecting between the results, or by using some other kind of learning process.
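To make that concrete, here’s a rough Python sketch of the mutate-and-select loop. It isn’t Thompson’s actual setup – his ‘genome’ configured a real FPGA, and fitness was scored from the physical circuit’s behaviour – so the bitstring genome, the placeholder fitness function and the parameters below are all invented for illustration:

```python
import random

# Toy mutate-and-select loop in the spirit of Thompson's experiment.
# The 'genome' is a flat bitstring standing in for a circuit configuration.
GENOME_LENGTH = 64
TARGET = [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder score: Thompson measured a real circuit's output;
    # here we just count bits matching an arbitrary target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(50)]

for generation in range(5000):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # mutated offspring

print(fitness(population[0]), "out of", GENOME_LENGTH)
```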

The team at IBM are not going quite as far as that. They are making chips with ‘neurons’ on them, with an architecture inspired somewhat by the macro-structure of the brain, and with relays and the like built in from the start. Here’s a quote from one of the many videos accompanying the article about the project that Mr Thomas cites:

“We try to draw as much from the brain-like architecture as we can, and it’s really because the brain does a good job, and its architecture seems to be efficient. It’s not that we know it’s optimal – it’s just that it exists and it works… We can’t find anything better than the brain.” – John Arthur

That quote (it comes near the end of the video) should really be enough to demolish much of Brian Thomas’ article, which is why I’m not going to concentrate on it too much. Notably, he also quotes it – but only the latter part.

So, as we can see, the idea is to build artificial ‘neural nets’, modelled on the networks of neurons in a brain. These can be used to overcome the limitations of the present paradigm in computing, but they can also be used to show how a complex ‘computer’ like the brain – one that is actually good at pattern recognition – could arise basically from nothing, entirely without supernatural intervention.
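For the unfamiliar: the basic unit of such a net is very simple. Here’s a minimal Python sketch of a single artificial neuron in the classic threshold style – the inputs, weights and threshold are arbitrary numbers chosen for illustration:

```python
def neuron(inputs, weights, threshold):
    # A McCulloch-Pitts-style unit: fire (1) if the weighted sum
    # of the inputs reaches the threshold, otherwise stay silent (0).
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two of three inputs active, each weighted 0.5, crosses a threshold of 1.0:
print(neuron([1, 0, 1], [0.5, 0.5, 0.5], threshold=1.0))  # -> 1
```

Wire enough of these together, with the weights set by learning or by selection, and you have a neural net.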

I’ve been talking a lot about eyes lately, and how they see. I’ll continue with that, as it is arguably the best way to show the power of the concept – I mentioned at one point that neural nets can explain the processing that goes on after the signals from the retina are received.

Take the Euglena. It’s an interesting single-celled protist, with characteristics that we recognise in more complex life in both plants and animals. It’s also a common feature in one of the papers in Level 2 Biology, the practice exam for which is on Monday for me – which means that this counts as study 🙂

Euglena schematic

The Euglena can propel itself through the water using its flagellum. It can absorb some of the nutrients it needs through the cell membrane, but you can also see that it has chloroplasts, which it can use as a plant does. It also has an ‘eyespot’ (the orange structure in the schematic, or thereabouts) that detects the level of ambient light in its surroundings, letting it find the best spot for photosynthesis.

But what if we take a larger, multicellular organism with a large number of cells all dedicated to sensing light? How do we take the input from a few thousand individual cells – formerly used just to detect light levels, and to do so very well – and get as much usable information out of it as we can? All while using only the processes of natural selection.

One way to do this is to implement some mechanism of edge detection. For example, imagine that a simple patch of light-sensitive cells has grown into a cup shape – a first step on a path that could eventually arrive at the configuration of the present human eye. The rim of this cup shades part of the eye’s surface when light comes in at certain angles. This can be used to find the direction of the light – provided that we can find the edge of the shadow.

There is a kind of logic gate called the XOR gate. It takes in two binary (on or off) inputs and outputs ‘on’ only when the two inputs differ. If the input data is simply that of light-receiving cells either firing or not, depending on light levels, we can see how a simple ‘neuron’ cell with XOR-like behaviour could be hooked up to two receptor cells: when it fires, it tells the brain that there is an edge around there. This tells the organism that if it moves its ‘eyes’ it will get a different reading for light levels, which gives it more information about its surroundings. That is a competitive advantage over other organisms, and not a difficult one to acquire.
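Here’s how trivially small that machinery is – a Python sketch, with the receptor values made up for illustration:

```python
def xor_neuron(left_receptor, right_receptor):
    # Fires only when its two photoreceptor inputs disagree --
    # i.e. when a light/dark boundary falls between them.
    return left_receptor ^ right_receptor

print(xor_neuron(1, 1))  # uniform light    -> 0: no edge
print(xor_neuron(1, 0))  # light meets dark -> 1: edge here
```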

This can be taken further quite easily. Not many more nodes are required to set up a system where only the cells near an edge have their signal relayed back to the brain. This is used in modern eyes to compress the data transmitted down the optic nerve – much of this simple processing can be done in the eye itself (which is, after all, literally an extension of the brain in vertebrates). Your brain fills in the centres of objects from the information it gets about their edges, which is one of the ways the blind spot can be hidden so readily.
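Extending the sketch above: a row of such XOR-like units turns a raw receptor array into an edges-only signal. Uniform regions go quiet, which is the compression I mean (the retina values here are invented for the example):

```python
def edges_only(receptors):
    # Relay a signal only from cells sitting next to a light/dark
    # boundary; uniformly lit or dark regions stay silent.
    return [receptors[i] ^ receptors[i + 1]
            for i in range(len(receptors) - 1)]

retina = [0, 0, 0, 1, 1, 1, 1, 0, 0]
print(edges_only(retina))  # -> [0, 0, 1, 0, 0, 0, 1, 0]
```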


It’s getting late – part two will arrive… Tomorrow? After exams? Who knows!

According to my amazing advance warning system – this page – a whole bunch of new articles have been posted at the ICR but have not appeared anywhere yet. I’m guessing that they make up September’s Acts and ‘Facts’ magazine – here’s my post for August – which is due to appear on the front page of the ICR’s website any day now. This is good: their appearance generally coincides with a week that is thin on the ground when it comes to DpSUs – perfect if that week is next week, as I probably won’t have much time to post on them, due to the ‘mock’ exams I have Monday to Thursday. If I disappear completely, don’t call the police until after then. 😀


2 thoughts on “Neural Nets, Part One”

  1. Awesome post!

    Brian Thomas is a bit of a dimwit. The reason it’s so freaking difficult to model and simulate brains of any kind is their complexity, but also their inefficiency.

    What I mean is, a brain works exceedingly well for pattern recognition, but for simple mathematics it’s flat-out terrible. Computers are exceedingly good at simple mathematics but comparatively terrible at pattern recognition; computers are really terrible at fuzzy operations. This is no surprise, since a computer is fundamentally a black-and-white system: true or false, one or zero. When you create an artificial neural network you are essentially building horrifying inefficiency into a computer, adding millions of small variable neurons and synapses that model variable ‘resistors’. Using the neural network then takes hundreds of millions or billions of calculations, a great many of them redundant and pointless.

    Here’s the bit that Brian Thomas apparently doesn’t get. For simple pattern recognition, we would never, ever use neural networks to perform the job, because we have DESIGNED better, more efficient solutions. He doesn’t seem to understand that a DESIGNED solution wouldn’t be so tragically inefficient, both computationally and energetically.

    Brian Thomas suffers from a common problem: he thinks something he doesn’t understand must have been designed by something superior to him (else, presumably, he would be able to ‘design’ it himself). What he doesn’t realise is that those people who make it their business to study these fantastically complex biological systems can see that there was zero design work put into them.

    Just like the Bible isn’t the work of a super being, neither is the brain or any other biological thing; studying either makes that abundantly clear.

  2. Pingback: Computers « Eye on the ICR
