Some thoughts on the current AI revolution

(From the perspective of a self-taught software developer)

The idea of “Thinking Machines” has been around since the invention of the computer. Alan Turing first proposed the idea of a machine that, simply by processing information, could convince a human it was talking to another human.

This was known as ‘The Imitation Game’ and originally involved three participants (a man, a woman, and a human interrogator), with Turing asking what would happen if a machine took the place of one of them. Rather than wrestle with the slippery philosophical question of whether machines can think, Turing proposed a simple, concrete, real-world test for machine intelligence.

Popularized as the ‘Turing Test’, it has become a standard benchmark for chatbots and other AI systems that use natural language. Many now claim that modern LLMs like ChatGPT and Claude.ai have finally passed it. But as we will see, it is not the only hurdle.

Current AI agents make mistakes, don’t always understand context, and are unreliable in many other ways. In my opinion they still have a long way to go. They struggle with basic tasks and are known for ‘hallucinating’, that is, confidently making up things that have no basis in reality. This last problem seems to be an inherent part of their current design.

This is what Claude had to say after looking up a little-known company called Numenta for me:

Some researchers, like Jeff Hawkins at Numenta, believe that current AI systems are built on foundations that don’t really reflect how the brain actually works. If they are right, then today’s LLMs may eventually hit a ceiling that no amount of additional computing power or training data can overcome. That is worth considering before we get too carried away with predictions of an imminent AI revolution.

The God-given design of the human brain is far more complex than the comparatively simple, layered architecture of current artificial neural networks. I think much more development may be needed before we see the general-purpose robots we so often envision for our future.

If you know your history, you know that automobiles started out with tiny engines, low power, and limited range, and were known for breaking down, being hard to start, and sometimes even being dangerous. I think AI may be in that same kind of infancy right now.

Now, I’m not saying AI has no future, or that LLMs aren’t useful for some tasks, but the hype around AI leaves me a little amazed at what people will say. The fear that AI will take our jobs, or even go rogue and circumvent its own programming, is a little sad to see. As a self-taught software developer, I sometimes have to wonder why all this hysteria is necessary.

I think traditional computer programming, software designed for the von Neumann architecture, may be a better bet for 90% of applications right now. Conventional programming patterns and designs can solve most of the problems posed by our world today.
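
To make that concrete, here is a minimal sketch of the kind of problem that sometimes gets pitched as an “AI extraction” task. The scenario and the pattern are my own hypothetical example, but it shows how a few lines of deterministic code can do the job with no model, no GPU, and no hallucinations:

```python
import re

# Hypothetical task: pull invoice numbers like "INV-2024-00137" out of free text.
# A deterministic pattern is fast, cheap, and never makes anything up.
INVOICE_PATTERN = re.compile(r"\bINV-\d{4}-\d{5}\b")

def extract_invoice_numbers(text: str) -> list[str]:
    """Return every invoice number found in the text, in order of appearance."""
    return INVOICE_PATTERN.findall(text)

if __name__ == "__main__":
    sample = "Please reconcile INV-2024-00137 and INV-2024-00212 before Friday."
    print(extract_invoice_numbers(sample))  # ['INV-2024-00137', 'INV-2024-00212']
```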

Now, you can run a small LLM on very limited hardware (compared to a supercomputer), and neural nets running on modest machines may well have a future. But the power requirements for something as advanced and ambitious as Claude or ChatGPT are frankly intimidating. That limitation may also force customers into a monthly subscription they would rather not take on in the current economy.
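
As a rough illustration of that low end, here is a sketch of running a tiny open model entirely on local hardware. It assumes the Hugging Face transformers library and the small distilgpt2 model, which are simply my choices for illustration; any small local model would make the same point:

```python
# A sketch of running a small language model locally, with no data center and
# no monthly fee. Assumes `pip install transformers torch` and the tiny
# distilgpt2 model; the output will be crude, which is part of the point.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator("The future of small, local models is", max_new_tokens=25)
print(result[0]["generated_text"])
```

Something like this will run on an ordinary laptop CPU. The gap between that and a frontier model’s data center is exactly the gap I am talking about.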

Final thoughts: AI has reached a stage where it can amaze, impress, and even be genuinely useful; one example is the way it is speeding up software development through agentic coding and other applications. But I think it still has a long way to go, and if you really want to see its final form you may have to wait a few decades.

Next time someone pitches you an AI solution, ask why traditional programming wouldn’t work. You might be surprised how often it would.


For further reading: “On Intelligence” by Jeff Hawkins with Sandra Blakeslee (2004) and “A Thousand Brains” by Jeff Hawkins (2021)
