AI will only reach the bounds of human intelligence, imagination, and institutions

Liebig’s Law of the Minimum, often simply called Liebig’s Law or the Law of the Minimum, is a principle developed in agricultural science by Carl Sprengel (1828) and later popularized by Justus von Liebig.

It states that growth is controlled not by the total amount of resources available, but by the scarcest resource (limiting factor). Or, to put it more plainly, “A chain is only as strong as its weakest link.”

Based on this view, I think that the limiting factors in artificial intelligence (AI) advancement will be human intelligence, imagination, and institutions. The interaction of these three dynamics will define the scope of AI.
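
To make the analogy concrete, here is a minimal sketch in Python of how the Law of the Minimum frames my argument. The factor names and scores are purely hypothetical, chosen only to illustrate that progress tracks the weakest factor rather than the sum of all of them.

    # The Law of the Minimum applied to the three I's.
    # Illustrative sketch only: the scores below are hypothetical.

    def ai_progress_bound(factors):
        """The bound on progress is the weakest factor, not the total."""
        return min(factors.values())

    factors = {
        "intelligence": 0.8,   # how well humans can direct and evaluate AI
        "imagination": 0.6,    # the range of uses humans can conceive
        "institutions": 0.3,   # how readily legacy systems accept change
    }

    print(ai_progress_bound(factors))  # 0.3 -- institutions limit progress here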

Power of the three I’s

Let me take these on one by one:

  1. Intelligence: Many people say that AI might exceed human intelligence, but I am not so sure. While AI might exceed humans at automating specific use cases, as AlphaGo did, human intelligence will still set the scope of AI in the near to mid term.
  2. Imagination: One of the biggest differences between humans and machines lies in imagination. Machines do not have the capacity to imagine by themselves. Therefore, the scope of human imagination will determine that of AI.
  3. Institutions: I don’t think that simply being innovative or efficient is enough to make people accept something new. We cannot ignore the power of legacy systems wherever we live.

AI will only reach the bounds of human intelligence, imagination, and institutions. Take Uber, for example.

Uber got off to a good start with an innovative, well-executed idea, but the service is now being banned in many countries and cities across the world, including London, with rumours of a clampdown in Singapore.

Though Uber has created tens of thousands of jobs, something everyone wants, it still faces an existential crisis that threatens to shut down the service in key markets around the world.

This shows the power of traditional institutions and existing stakeholders in an environment where they have something to lose from innovation. The same rule, I think, will in time apply to AI’s advancement.

The AI revolution may look different than we imagine

There may be no AI revolution in the way we are expecting it.


For a new technology to turn into a revolution, three things should be considered: information, magnitude, and time. Let’s go through these one by one.

  1. Information: The impact of a change depends on how well it is anticipated. The reason earthquakes have such a huge impact is that they happen without warning.
  2. Magnitude: Sometimes we have information but the magnitude is so huge that it leaves us with little or no way to respond. AI will be adopted partially and incrementally. There will be no rush to apply AI to every area of our lives.
  3. Time: In other cases, we have information but the change comes so fast that it leaves us no time to respond. This is similar to the point on magnitude above. If our jobs were to be replaced by AI within the next 2-3 years, it would be too big a shock for most people to cope with. Just imagine the resistance from trade unions!

But if the gradual adoption of AI happens within the next 5-10 years, surely we will find a way to mitigate the negative consequences.

This means the effects will not be as catastrophic as an earthquake, to use the earlier example. It also rules out a revolution, at least by traditional definitions.

In the case of AI, fortunately we have information. We have time to prepare. And we can respond to the magnitude of the coming changes by preparing today.

As a result, I don’t think AI’s impact will be too formidable for us to manage and control.

Finally, we benefit from the experience of drastic change in recent times thanks to new technologies, including the Internet, smartphones, and the advent of electric vehicles, which car manufacturers and governments have finally accepted en masse.

We can apply these learnings to how we handle the advent of AI.

