Tim Lee, a PhD student from Princeton, argues that Robin Hanson is wrong; we'll never be able to copy our brains into a computer.
In a huge oversimplification, here's what happens: a neuron receives a signal. If the incoming signal is weak, it does not trigger a cascade in the neuron. But once a neuron receives a signal intense enough to cause an ion cascade, the signal is amplified and propagated down the neuron to the next one. Repeat. You have memories because neurons have developed specific, strong connections with each other. When you want to recall something, the neuron chain fires a strong cascade down this pre-ordained path, and your brain "rethinks" the memory. Basically.
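That all-or-nothing behavior is easy to sketch in code. Here's a minimal illustration (mine, not Lee's; the threshold and amplification values are arbitrary): sub-threshold inputs do nothing, while anything at or above the threshold fires an amplified spike.

```python
# Toy model of all-or-nothing neuron firing. The numbers are illustrative;
# a real neuron's firing threshold is a membrane voltage of roughly -55 mV.

FIRING_THRESHOLD = 1.0  # arbitrary units

def receive(signal_strength: float) -> float:
    """Return the propagated signal: 0 if sub-threshold, amplified otherwise."""
    if signal_strength < FIRING_THRESHOLD:
        return 0.0   # weak signal: no cascade
    return 1.5       # threshold crossed: amplified spike propagates onward

print(receive(0.4))  # 0.0 -- too weak, nothing happens
print(receive(1.2))  # 1.5 -- cascade fires
```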
In any case, what I think Lee is not addressing is that neurons needn't be modeled as hardware interconnected via software. Instead, I think we should imagine the brain as a collection of 100 billion software programs, all connected by other software programs. Whereas logic gates are 1 or 0, software has no such digital restriction. PID controllers are a good example of this.
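To make that concrete, here's a toy PID controller (my example, with made-up gains): unlike a logic gate's 0/1, its output varies continuously with the error, which is exactly the non-digital behavior I mean.

```python
# A minimal PID controller: output is a smooth function of the error,
# not a binary level. Gains (kp, ki, kd) are arbitrary for illustration.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05)
print(pid.step(setpoint=10.0, measured=7.0, dt=0.1))  # a smooth, non-binary value
```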
If we could write software that accurately mimicked the aforementioned breakdown voltage curves, the propagation speed, and the ability to communicate with other such programs, I can see us building a sufficiently accurate model of a neuron. Then other software could (as brain-simulating systems already do) mimic the interconnectedness of the nerves. A software-based emulation of the brain seems plausible to me.
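Here's a hypothetical sketch of that architecture (my illustration; all names and numbers are invented): each neuron is its own small program, and the "axon" links are just software connections that carry a spike downstream, scaled by a weight.

```python
# Software-neurons connected by software-axons. A spike at or above a
# neuron's threshold fires it and propagates, weighted, to its targets.

class SoftwareNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.targets = []   # downstream (neuron, weight) pairs -- the "axon software"
        self.fired = False

    def connect(self, other, weight=1.0):
        self.targets.append((other, weight))

    def receive(self, signal):
        if signal >= self.threshold:
            self.fired = True
            for neuron, weight in self.targets:
                neuron.receive(signal * weight)

# A three-neuron chain: a -> b -> c, with amplifying connections.
a, b, c = SoftwareNeuron(), SoftwareNeuron(), SoftwareNeuron(threshold=2.0)
a.connect(b, weight=1.5)
b.connect(c, weight=1.5)
a.receive(1.0)
print(a.fired, b.fired, c.fired)  # the cascade reaches all three
```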
Though not without hardware issues. Namely, 100 billion software-neurons with several trillion axon-programs would potentially require an individual processor and memory for each program. That's a lot of hardware. On the other hand, as CPUs and memory get faster, it's possible a single CPU (running at gigahertz) could handle many software-neurons, which operate far more slowly.
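A back-of-envelope calculation shows why time-slicing helps (my numbers, all assumptions): neurons fire at most a few hundred times per second, while a CPU core runs at billions of cycles per second.

```python
# Rough estimate of how many software-neurons one core could time-slice.
# All three figures are assumptions for illustration.

cpu_hz = 3e9              # a 3 GHz core
neuron_hz = 200           # a generous peak firing rate, spikes per second
cycles_per_update = 1000  # assumed cost of one software-neuron update

updates_per_second = cpu_hz / cycles_per_update
neurons_per_core = updates_per_second / neuron_hz
print(int(neurons_per_core))  # 15000 software-neurons per core, under these assumptions
```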
As for Timothy B. Lee's weather prediction analogy: it is true that a massive amount of computing is required for weather forecasting, and it is also true that these forecasts decrease in accuracy the further into the future they predict. This, Lee writes, is because little imperfections will "snowball" into larger inaccuracies.
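You can watch that "snowball" effect in miniature with a classic chaotic system, the logistic map (my illustration, not Lee's): two runs starting a hair apart diverge until they bear no resemblance, the analogue of a forecast degrading with lead time.

```python
# Two runs of the logistic map (r = 4, a chaotic regime) from nearly
# identical starting points. The tiny initial gap roughly doubles each
# step until the trajectories fully decorrelate.

def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2000001   # initial conditions differ by one part in two million
max_gap = 0.0
for step in range(40):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # the gap snowballs from 1e-7 up to order one
```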
But here's the problem: he's saying global weather prediction is accurate in the present but not in the future. And he's saying this would apply to a model of the brain because it is also a complex system. But why do we need to predict what a brain will do in the future? We don't. What we need is a highly-accurate model of the brain in the present. And he admits a highly accurate model of weather can be created for the present.
Who cares what a brain does in the future? It's impossible to know what it will think next! Creating a highly accurate map of a human brain, including the interconnections between each and every neuron, would be an incredibly difficult feat...but not an impossible one. Creating a customized software-neuron that accurately represented the behavior of the real neuron it was intended to mimic would be an incredibly difficult feat...but not an impossible one. Building a computer capable of running all these neuron-programs at the same time would be an incredibly difficult feat...but once again, not an impossible one.
Further, it's pretty hard to model weather for a small area of the world, given that the inputs into that system come from outside of it. But with a brain, you can start small scale quite easily. Research into modeling a cat brain is ongoing. As I've argued before: why not start small, with an insect brain or a rodent brain? The neuron-software will still be as complicated as ever, but the total number of neuron-programs and axon-programs needed will be greatly diminished. If we could build a bee-brain first, it would help determine the accuracy of the system...help catch the "snowballs" before they rolled down the hill, before we scaled up to your brain and my brain.
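The scale difference is striking. Using approximate figures from the neuroscience literature (a honeybee brain has on the order of a million neurons; a human brain roughly 86 billion), a bee-scale emulation is a vastly smaller first target:

```python
# Rough scale comparison; both counts are order-of-magnitude estimates.

bee_neurons = 1e6       # ~10^6 neurons in a honeybee brain
human_neurons = 8.6e10  # ~8.6 x 10^10 neurons in a human brain

print(f"A human brain has ~{human_neurons / bee_neurons:,.0f}x more neurons than a bee's")
```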
I agree that copying our brains into a computer will not be possible in the next couple decades. But as Arthur C. Clarke famously wrote: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right; when he states that something is impossible, he is probably wrong."