A warning issued by Elon Musk, supported by top tech leaders from across the industry, has been the subject of an enormous amount of internet buzz in the last couple of weeks. Musk is urging us to pause AI development because of its threat to humanity: all companies fervently working on AI, he says, should cease development for six months and reflect on AI’s potential consequences and dangers.

Should we heed this warning? And would such a “pause” even be possible?

Bill Gates also recently stated his views on artificial intelligence, outlining its possibilities and practical direction. Gates says that the age of AI has begun. Gates states, “Artificial intelligence is as revolutionary as mobile phones or the internet.”

I would take that statement one step further and assert that AI is even more revolutionary. In the 1970s, when Gates co-founded Microsoft, only a handful of minds could influence the computer industry. Today, this group is gigantic. Five years ago, I gave a speech at DePaul University in which I stated that 24 million programmers were contributing to GitHub’s open-source repositories; today there are over 100 million. Additionally, millions of people are currently finishing high school and heading for science and technology careers. So the amount of brainpower being applied to computing, and therefore to AI’s development, is vastly greater than it was 40 years ago.

There is one other important change in the industry: not only are these millions of people programming, but because of our global interconnection, innovation no longer takes place in isolated labs; it is collaborated on by many.

Two Different Models of AI

According to Gates, there are two separate models of artificial intelligence. First, the technical term “artificial intelligence” refers to the model created to solve a specific problem or provide a particular service, such as the AI powering ChatGPT.

AGI—Artificial General Intelligence—is the second model. This is AI capable of learning any subject or task. There is currently a debate in the industry about whether AGI is even feasible, because it points toward computing becoming more powerful than the humans who create it.

Is Musk’s Proposed Pause Actually Possible?

As Elon Musk urges a cessation of AI development for six months, this question should be posed: could such a pause even be carried out? Let us say that the United States, Australia, New Zealand, Europe and allied countries all agree to stop AI development. Will China and Russia then agree? It is doubtful.

Aside from nations, there could also be companies that agree to pause AI development, but then carry on in secret to gain development advantage.

The anthropological view underlying the Austrian School of Economics tells us that, in general, humans operate mainly in their own self-interest. There would, therefore, be some who cooperate with this “pause” and many more who would not. While Elon Musk’s urging is understandable, it cannot be practically executed.

Would Pausing Be Correct?

There is a second question to pose: would it actually be wise for us to engage in this pause? I compare today’s scenario to when I was young, in 1967, when Dr. Christiaan Barnard successfully performed the first heart transplant in South Africa. He faced considerable opposition—the press in Europe and elsewhere in the world cried out that such an operation was unethical and immoral. Today, a heart transplant is relatively common; while it is still miraculous, hundreds of surgeons can now perform it. Fortunately, we didn’t listen to the naysayers and carried on with innovation.

Pausing AI development could very well mean we never find out what it can actually accomplish. OpenAI’s GPT-4, the most advanced model to date, was introduced in March of 2023. Solutions are being developed at an incredible rate, and it is a race to see who will win this market. There is a tremendous amount of investment and competition. It might therefore be very worthwhile to carry on.

Potential Beneficence of AI

“Beneficence” can be defined as “a charitable, kind, merciful act characterized by doing good to others with potential moral obligation.” Instead of pausing AI development, we should examine the core principles underlying AI development.

There are those who advocate throwing out core principles we’ve been utilizing for thousands of years and replacing them with new ones. For example, in America, some claim that our constitution is outmoded and should be done away with and a new one drafted.

But these age-old core principles hold as much truth today as they always have, and perhaps we should proceed with AI development under them. How about utilizing AI to provide better lives to the poor and to heal the sick? How about making it possible for deaf people to hear and for people who are blind to see? How about utilizing AI to feed the hungry? Although the overall scene is better today than it was, say, 20 years ago, 45 percent of all child deaths worldwide are from causes related to undernutrition. That’s 3.1 million children per year. Let’s utilize artificial intelligence to figure out how to distribute the tons of food wasted daily, from supermarkets to restaurants, and even from more affluent homes. Much of the time, the obstacles are logistical—AI could undoubtedly help solve them.
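To make the logistics point concrete, here is a minimal sketch of how surplus food might be matched to sites that need it. All donors, recipients, and quantities are hypothetical illustrations; a real system would also handle routing, perishability, and transport capacity.

```python
# Toy sketch: greedily match surplus food from donors to recipient sites.
# All names and figures below are hypothetical, for illustration only.

def match_surplus(donors, recipients):
    """Assign each donor's surplus (kg) to the recipient with the
    largest remaining need, until supply or need runs out."""
    need = dict(recipients)  # site -> kg still needed
    plan = []                # (donor, site, kg) assignments
    for donor, surplus in donors:
        while surplus > 0 and any(v > 0 for v in need.values()):
            # Pick the site with the greatest unmet need.
            site = max(need, key=need.get)
            kg = min(surplus, need[site])
            plan.append((donor, site, kg))
            surplus -= kg
            need[site] -= kg
    return plan

donors = [("Supermarket A", 120), ("Restaurant B", 40)]
recipients = [("Shelter X", 100), ("Food bank Y", 80)]
print(match_surplus(donors, recipients))
```

A greedy pass like this is only a starting point; the role for AI would be in predicting where surplus and need will appear and in planning the actual deliveries.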

Another field is healthcare. AI could be utilized to select the proper medication for a disorder or illness by instantly comparing it to others.

AI will profoundly impact education, as it can provide in an instant all possible data covering a whole topic. This is, of course, assuming the data has been verified as correct.

Employment is another vital area that AI could be utilized to address. People need to be able to work, and AI could be used to figure out where people could be best employed.

We Do Need Regulations

At this stage, it has become clear that regulations are required for the AI industry. Recently, ChatGPT was banned in Italy over privacy concerns. It has also been reported that Samsung workers unwittingly leaked top-secret data while using ChatGPT. Elsewhere, ChatGPT has been restricted by Amazon and other companies because workers pasted confidential information into it.

We can compare AI to another industry. Seventy-five years ago, if you wanted to fly a plane somewhere, you simply started up your aircraft and took off. Today, because of the sheer number of aircraft and many other reasons, there are countless regulations on the airline industry. An aircraft cannot even leave the ground without a fully qualified flight crew, full mechanical clearance to prevent air disasters, a filed flight plan, and clearance from the control tower. Travelers are security-checked and heavily regulated. There are many other regulations within that industry.

Elon Musk is urging a pause in AI innovation. When aircraft were first being pioneered, it was a similar scene—they were considered dangerous and people were warned away from them. But think of what it would be like if we had paused or ceased the development of aircraft. How many weeks would it take to go from New York to Europe by train and ship? Or to send a package from San Francisco to Tokyo? Today’s world is as fast-paced as it is because we can fly.

Think about, also, what the airline industry would look like without regulations. It would be a chaotic nightmare to travel anywhere.

Italy’s ChatGPT ban may be somewhat of an overreaction, I feel, and as I stated earlier, Elon Musk’s suggested pause may be mistaken. But the issues these events bring to light are real. There are legitimate security concerns, and the answer is to regulate the technology so that it is safe—just like the airline industry. We need to protect the human race and prevent anyone from committing reckless acts with the technology.

Nations at odds in many other areas have, over time, always agreed on airline industry regulations. It is therefore clear they can cooperate and coordinate on vital issues. Many of these same nations are currently working on AI development, so they should do the same with AI as they have with air travel regulations.

Who should develop such regulations and ensure they are enforced? I’m not here to suggest that I, or we at Pipeliner, know how to do this. But it is vital that it be done.

Pipeliner AI Uses

I have pointed out many times that AI should only be used in a supportive role—as our “wingperson.” This is how it is being used here at Pipeliner.

B2B salespeople can never be replaced by AI. Without human control and intervention, B2B sales are too variable and complex to move forward. An interesting article by George Bronten, one of our competitors, makes a very important point about AI being used in complex sales. Bronten writes, “the more complex the problem, the more difficult it is for AI to understand intent. For instance, imagine you have multiple people in the car for a road trip, and they each have different priorities: One wants a scenic route, one needs to take frequent breaks, another wants to drive past their old high school, and the driver has anxiety about road construction. A future version of an AI maps app might be able to incorporate all of these user intents as data points and create an optimal route—but only if it understands everyone’s intent.”

Bronten points out that complex B2B sales is also not a simple journey. It involves multiple stakeholders, differing priorities, complex problems, and often undefined risks and needs. These can only be addressed if the intent from everyone is fully understood. That is why I believe that AI is and will continue to be a strong supportive agent, like a good “wingperson,” not any kind of replacement for B2B salespeople.
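Bronten’s road-trip analogy can be illustrated as a simple multi-intent scoring exercise. This is only a toy sketch with hypothetical routes, features, and weights, but it shows why every rider’s intent must first be captured as data before an “optimal” choice means anything.

```python
# Toy sketch: score candidate routes against several riders' intents.
# Routes, features, and weights are hypothetical illustrations.

def score_route(route, intents):
    """Sum satisfaction: each rider's weight counts if the route
    offers the feature that rider cares about."""
    return sum(w for feature, w in intents if feature in route["features"])

routes = [
    {"name": "highway", "features": {"fast"}},
    {"name": "coastal", "features": {"scenic", "rest_stops"}},
]
# Each rider's intent: (desired feature, importance weight)
intents = [("scenic", 3), ("rest_stops", 2), ("fast", 1)]

best = max(routes, key=lambda r: score_route(r, intents))
print(best["name"])  # prints "coastal": it scores 3 + 2 = 5 vs. the highway's 1
```

If any rider’s intent is missing from the data, the score silently optimizes for the wrong thing—which is exactly Bronten’s point about complex sales.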

At Pipeliner we are using 14 AI tools just within our marketing, after having examined 20 different AI applications. We use AI-generated music for our videos, so we no longer have to pay royalties. The voiceover in videos is also generated by AI, as well as the video transcript. We even have a human-appearing agent in our videos.

Don’t Use Fear as Your Guide

I’m of the opinion that one should never use fear as counsel, which is why I hesitate to agree with Elon Musk’s warning and suggestion. Respect is a much better counsel. Let us respect life and utilize AI to help us bring life to all new heights.

What are the core principles behind our AI development? We should look them over. As Austrian economist Ludwig von Mises said, the better idea will always prevail.

At Pipeliner, we want to use those better ideas for innovation. That is my approach.

Because we as a human race have AI in our hands, we must take responsibility for it. Everyone wants freedom to develop AI as they wish—but as Thomas Mann said, responsibility is the other side of the freedom coin. Responsibility should be taken because we can, with this technology, do incredible things for ourselves and the planet.