Why AI will never rule the world

Call it the Skynet hypothesis, the advent of Artificial General Intelligence, or the Singularity: for years, AI experts and non-experts alike have entertained (and, in a small group of cases, celebrated) the idea that artificial intelligence might one day become smarter than humans and replace us.

According to the theory, advances in AI, particularly in the type of machine learning capable of taking in new information and rewriting its code accordingly, will eventually catch up with the wetware of the biological brain. In this interpretation of events, every advance in AI, from Jeopardy-winning IBM machines to the giant language model GPT-3, takes humanity one step closer to an existential threat. We are literally building our own successors.

Except that it will never happen. At least, that is the argument of the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear.

University at Buffalo philosophy professor Barry Smith and co-author Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence will not be overtaken by "an immortal dictator" anytime soon, or ever. They told Digital Trends why.


Digital Trends (DT): How did this topic get onto your radar?

Jobst Landgrebe (JL): I am a physician and biochemist by training. When I started my career, I did experiments that generated a lot of data. I began to study mathematics in order to be able to interpret these data, and I saw how difficult it is to model biological systems using mathematics. There was always this misfit between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working on artificial intelligence software systems. I was trying to build AI systems to mimic what a human can do, and I realized that I was running into the same problem I had encountered in biology years earlier.

Customers said to me, "Why don't you build a chatbot?" I said, "Because it won't work; we cannot model this type of system properly." That is what inspired me to write this book.

Professor Barry Smith (BS): I thought this was a very interesting problem. I already had a sense of similar problems with AI, but I had never thought them through. Initially, we wrote a paper called "Making Artificial Intelligence Meaningful Again." (This was in the Trump era.) It was about why neural networks fail at language modeling. We then decided to expand the paper into a book exploring the topic in more depth.

DT: Your book raises doubts about whether neural networks, which are vital to modern deep learning, actually emulate the human brain. They are approximations rather than exact models of how the biological brain works. But do you accept the basic premise that, were we to understand the brain in sufficient detail, it could be artificially replicated, and that this would give rise to intelligence or emotion?

JL: The name "neural network" is a complete misnomer. The neural networks that we have right now, even the most sophisticated ones, have nothing to do with the way the brain works. The idea that the brain is a set of nodes interconnected in the way that neural networks are constructed is completely naive.

If you look at even the most primitive bacterial cell, we still don't understand how it works. We understand some aspects of it, but we have no model of how it works, let alone of a single neuron, which is much more complex, or of billions of neurons interconnected. I believe it is scientifically impossible to understand how the brain works. We can only understand certain aspects of it and deal with those aspects. We do not have a complete understanding of how the brain works, and we will not get one.

If we had a more complete understanding of how each molecule in the brain works, we might be able to replicate it. That would mean putting everything into mathematical equations. You can then replicate this using a computer. The problem is that we are unable to write and construct those equations.


BS: Many of the most interesting things in the world are happening at levels of granularity that we cannot access. We simply don't have the imaging equipment, and we probably never will have the imaging equipment, to capture what is happening at the very fine-grained levels of the brain.

This means that we do not know, for example, what is responsible for consciousness. There are, in fact, a series of quite interesting philosophical problems that, no matter what method we follow, will always remain unsolved, and so we have to set them aside.

Another is free will. We are very strongly in favor of the idea that human beings have a will; we can have intentions, goals, and so on. But we don't know whether or not it is a free will. That is an issue that has to do with the physics of the brain. As far as the evidence available to us is concerned, a computer cannot have a will.

DT: The subtitle of the book is "Artificial Intelligence Without Fear." What specific fear are you referring to?

BS: This was fueled by the literature on the singularity, which I know you are familiar with: Nick Bostrom, David Chalmers, Elon Musk, and the like. When we spoke with our colleagues in the real world, it became clear to us that there was indeed a certain fear among people that AI would eventually take over the world and change it to the detriment of humans.

We have quite a bit in the book about Bostrom-type arguments. The main argument against them is that if the machine cannot have a will, then it cannot have an evil will either. Without an evil will, there is nothing to fear. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But that's because the machines are being managed by people with bad ends. In that case, however, it is not the AI that is evil; it is the people who build and program the AI.

DT: Why does this notion of singularity or artificial general intelligence interest people so much? Whether they are intimidated by it or fascinated by it, there is something about this idea that resonates with a wide range of people.

JL: There is this idea, which arose in the early 19th century and was then proclaimed by Nietzsche at the end of that century, that God is dead. Since the elites of our society are no longer Christians, they needed a replacement. Max Stirner, who, like Karl Marx, was a disciple of Hegel, wrote a book about this in which he said, "I am my own God."

If you are God, then you also want to be a creator. If you can create a superintelligence, then you are like God. I think this has to do with the hyper-narcissistic tendencies in our culture. We don't talk about this in the book, but it does explain to me why this idea is so appealing in our time, which no longer has a transcendent entity to look up to.


DT: Interesting. So, to follow up on that: the idea is that building AI, or the goal of creating AI, is a narcissistic act. In that case, the notion that these creations will somehow become more powerful than us is a nightmare version of that, the child killing the parent.

JL: Something like that, yes.

DT: What would be the ideal outcome of your book, as far as you're concerned, if everyone were convinced by your arguments? What would that mean for the future of AI development?

JL: Very good question. I can tell you what I think will happen. I think that in the medium term people will accept our arguments, and this will lead to better applied mathematics.

Something that all the great mathematicians and physicists were fully aware of was what they could achieve with mathematics. Because they were aware of it, they focused only on certain problems. If you are well aware of the limitations, then you can go through the world looking for these problems and solving them. That is how Einstein found the equations of Brownian motion; how he came up with his theories of relativity; and how Planck solved black-body radiation and thereby started the quantum theory of matter. They had a good instinct for which problems can be solved with mathematics and which cannot.
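As a point of reference, the two results named here are remarkably compact once written out. Their standard textbook forms (well-known physics results, not formulas quoted from the book) are:

```latex
% Einstein (1905): the mean squared displacement of a Brownian
% particle grows linearly in time, with diffusion coefficient D
% set by temperature T, fluid viscosity \eta, and particle radius r.
\langle x^2 \rangle = 2 D t, \qquad D = \frac{k_B T}{6 \pi \eta r}

% Planck (1900): spectral radiance of a black body at temperature T,
% the result that launched the quantum theory of matter.
B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h \nu / k_B T} - 1}
```

Each equation deliberately singles out one narrow, mathematically tractable slice of reality, which is exactly the instinct Landgrebe describes.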

If people take in the message of our book, we believe they will be able to build better systems, because they will focus on what is actually feasible, and stop wasting money and effort on things that cannot be achieved.

BS: I think some of this is already happening, not because of what we say, but because of the experiences people have when they give huge amounts of money to AI projects, and then those AI projects fail. I assume you know about the Joint Artificial Intelligence Center. I don't remember the exact amount, but I think it was something like $10 billion that they gave to a well-known contractor. In the end, they got nothing out of it, and they canceled the contract.

(Editor's note: A subdivision of the United States Armed Forces, JAIC was intended to accelerate the "delivery and adoption of AI to achieve mission impact at scale." In June of this year, it was merged with two other offices, and JAIC ceased to exist as its own entity.)

DT: What do you think, in high-level terms, is the single most compelling argument you’ve made in the book?

BS: Every AI system is mathematical in nature. Because we cannot model consciousness, will, or intelligence mathematically, these cannot be emulated by machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brain allows only limited models of nature. In physics, we choose a subset of reality that best suits our mathematical modeling abilities. This is how Newton, Maxwell, Einstein or Schrödinger obtained their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are the ones we use to engineer the technology. We are unable to build a complete mathematical model of conscious nature.

This interview has been edited for length and clarity.
