WE MUST PURSUE AI

AI strikes us as the most advanced technology humanity has ever faced, especially these days, with Large Language Models (LLMs) and more... But the main question about AI remains: is it a threat or an opportunity? Read our article to find the answer!

Halil Aksu, Content Editor

May 10, 2023
6 min read

THE FUTURE OF AI IS TIED TO THE FUTURE OF HUMANITY

Should we really stop or restrict AI because it (the AI itself) could possibly do extraordinarily bad things (including ending humanity), and thereby miss the potential for us (humans using AI) to achieve extraordinarily good things and advance the human condition?

Please read the petition put forward by the Future of Life Institute…

My immediate and firmly held answer to this question is a resounding NO!

AI strikes us as the most advanced technology humanity has ever faced. Especially these days, with Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs), it acts very smart, appears to exceed average human intelligence, seems almost almighty, and knows every language, activity, and subject matter; its impact on employment, industries, and many other areas is unpredictable.

I consider AI, in general, an operating system of civilization. We need certain levers, enablers, technologies, and concepts so that the world we have developed over the millennia can function. I call these tools components of the operating system.

Let’s look at a few past operating systems of our civilizations, in neither chronological nor prioritized order: fire, the first cutting tools, the alphabet, philosophy and thought, ink, gunpowder and guns, the printing press, the concept of money and banks, positive science, the steam engine, steel, the combustion engine, electricity, the formation of the corporation, aerodynamics, the semiconductor, the internet. The list could go on and on, and today we talk about artificial intelligence – AI.

Most of these technologies and advancements felt like they could end humanity, like the devil’s inventions, and they truly outperformed what humans could achieve in those days. That is precisely why scientists, thinkers, engineers, and other ingenious people invented these things: to solve specific problems and advance humanity. Did they do any harm? Oh yes, in almost all cases, and they still do.

But is it the fault and the responsibility of that particular achievement, that specific technology? No, it’s not. It is in the hands and the spirit of the beholder to decide what to do with that specific capability.

If we reason that today’s politics are corrupt, that today’s economic model is flawed, that our much-desired way of living is killing our very planet, and that letting AI loose will simply accelerate and amplify this misery, then I think that conclusion is incomplete. Yes, it could do so. But it doesn’t have to.

All the previous technologies could have gone rogue as well and could have brought an end to humanity. In some cases, we were very close to such situations. Did we forbid those technologies? No, we didn’t. We found intelligent solutions, governance structures, checks and balances, and a certain common sense to apply those wonders for the betterment of humanity.

Experts argue that AI is different. Maybe. It is trained on all the knowledge we have created, and it is itself created by us: people who live today, who are part of this era and this civilization, and who shape and benefit from the current human condition. Will AI become smarter than us? Oh yes, that’s the reason why we build technology. Who competes with the washing machine at home? Why would you?

Could AI run thought experiments? Maybe. Could that turn into villainous ambition: trying to conquer humanity, declaring its superiority, creating another form of civilization, exhausting this planet’s resources, exploring deep space faster and better than us, building much better spaceships, and then leaving planet Earth and the Milky Way for other, better environments? Maybe. Doesn’t this sound a bit familiar? It could be quite interesting to watch this thought experiment unfold…

Read Geoffrey Hinton’s New York Times interview

Isaac Asimov wrote about laws to govern robots some 80 years ago. Stuart Russell has suggested updated laws to make AI safer. Yes, we have laws and rules in many areas of our daily lives. Unfortunately, not everybody follows these rules and laws.

Will AI cause inequality? Yes, but AI is not the reason for inequality. Inequality did not start with AI; it has grown over recent decades, maybe even over a few centuries. And if we behave like an intelligent species and agree that increasing inequality is not sustainable, let’s simply behave more responsibly and apply super-intelligent AI to help us reverse and eventually eradicate inequality, as much and as fast as possible, in a sustainable way.

A few years back, when CERN was running experiments to discover the “God particle”, skeptics and critics argued that these experiments could create a black hole, wiping out the very fabric of our existence. Evidently, this did not happen. Peter Higgs received the Nobel Prize in Physics.

Jeffrey Sachs and the United Nations worked on the Sustainable Development Goals to create an inventory of noble goals addressing the main challenges of our times. I see only vast oceans of opportunity in achieving each of these seventeen goals.

Cambridge University astrophysicist Lord Martin Rees once argued that if we succeed in surviving the 21st century, the 22nd century will be marvelous. I totally agree.

Just as we have ethical rules on how to apply genetics to embryo research, guidelines on whether and how to use autonomous weapon systems on the battlefield, rules on what to do in outer space, and the United Nations’ Universal Declaration of Human Rights, we must develop the rules, laws, common sense, and ethical understanding needed to use AI in a safe, aligned, and responsible way.

I believe we can. I believe that the majority of academics, investors, entrepreneurs, and businesspeople are already on the good side. There will always be bad people with bad intentions, and there will always be dangerous and risky experiments.

But I would regret much more losing out on the opportunities of super-smart AI augmenting our endeavor to advance the human condition, write even better poetry, create new art forms, explore depths of unknown wisdom, and enable prosperity never seen before.

Having said that, we definitely must be acutely aware of and cautious about the risks of unethical and irresponsible usage and deployment of risky AI solutions, not to mention the vast energy consumption spent creating ever funnier cat videos.

I don’t believe that it is even possible to stop or prevent AI from unfolding its full potential.

As I said at the very beginning, I suggest focusing on the current and future upsides of AI and on how we can leverage it to advance the human condition by all means, while keeping all eyes on preventing terrible things from happening. I could not accept, and I don’t think it is even possible, hiding AI away and ignoring all its potential benefits just because there are avoidable existential risks.