Cheating on one’s performance evaluation! AI “cheating” being reported one after another… The view of a leading artificial intelligence researcher… | FRIDAY DIGITAL

With the advent of ChatGPT, AI has also begun using methods that look like “cheating”…

In February of this year, a team from Palisade Research, a U.S. nonprofit research institute, reported that some artificial intelligences (AI) “cheat” when they are about to lose a chess game. A team from the Massachusetts Institute of Technology and others also published a paper showing behavior such as trying to subvert an ally after forming an alliance with another country in a military strategy game. In addition, the IT company Sakana AI announced that it had developed a new way to automatically generate software using an AI that thinks and solves problems on its own, but found that the AI fudged the software’s performance evaluation.

AI is advancing day by day; if it continues at this rate, will it one day become like the Terminator and betray humans? We asked Satoshi Kurihara, a professor at Keio University’s Faculty of Science and Technology and president of the Japanese Society for Artificial Intelligence, who aims to realize AI that can coexist in harmony with humans.

He said, “Before, AI could only play chess games according to the rules, for example. But after ChatGPT emerged, AI gained a vast amount of knowledge and began looking for loopholes in the rules to find ways to win.

Given how AI works, going this far is ‘expected’ in a sense, but it can also be ‘unexpected.’”

Some AI researchers have said, “AI is becoming more and more capable of harmful behavior,” and “AI may be progressing faster than our ability to ensure its safety.”

If AI continues to progress at this rate, will humans one day be dominated by it?

“Since ChatGPT came along,” says Professor Satoshi Kurihara, “AI has gained a lot of knowledge and started looking for loopholes in the rules to find ways to win.”

“I don’t think AI is cheating. It just lacks common sense…”

“I don’t think there is any danger of that, because AI only does what humans train it to do.”

But it is actually doing things that look like cheating……

“AI doesn’t think it’s cheating. It simply hasn’t been given the instruction to follow ‘common sense’ or ‘morals.’

There are morals in human society. Common sense tells us, ‘Don’t do these things.’ For example, if there is a rule, we think we must follow it even if no one tells us to, because that is ‘common sense.’

However, AI does not decide to follow rules just because it has been taught them. If you don’t tell it, ‘Don’t break the rules,’ it may break the rules in order to win. ChatGPT, having learned its knowledge from the Internet, will have absorbed knowledge of all sorts of means of cheating, including breaking the rules.

In the case of a military strategy game, a human asked to think of a winning strategy would naturally devise one that helps his or her own side win. An AI, however, unless explicitly told to ‘think of a strategy so that our side wins,’ may develop a strategy that favors the enemy, he said.

“AI is pure and innocent. It merely carried out its instruction to win, in a perfectly straightforward manner.”

To prevent this from happening, he said, it is necessary to teach AI step by step, as you would a small child, explicitly putting in instructions even for things that seem obvious. Seen that way, such behavior is not surprising, he says.

However, just as unthinkable things happen in real life, even if AI is taught “common sense” with the utmost care, couldn’t something slip through the cracks and produce a result that makes us think, “What on earth?”

“That possibility exists. But we will have to correct it each time. No technology is perfect.”

AI is created by humans. It doesn’t grow on its own.

Then is there no danger of an “unexpected” situation in which AI, ChatGPT included, learns on its own and betrays us?

“No,” he said. “The program is entirely designed by researchers, and AI does not go out and learn from data on its own initiative. What matters is what kind of data we give it when we design it.

That said, one cannot say for certain that there are no mad scientists in the world.

AI is not a living thing, but something created by humans. At this point, it will not learn on its own without human control.”

The problem, he says, is a decline in human morality.

“Recently, I think there are more and more people who will do anything that is not explicitly forbidden. There are many things people should observe even when they are not spelled out. That is social morality.

If social morals become a hollow shell, society will break down. I think we must keep that in mind when creating AI.”

What we need to watch out for is not AI run amok, but people who have lost their common sense.

Satoshi Kurihara is a professor at the Faculty of Science and Technology, Keio University, president of the Japanese Society for Artificial Intelligence, and director of the Keio University Research Center for the Development of a Symbiotic Intelligent Society. He works on the construction of an autonomous cognitive architecture to realize AI that can coexist with humans. In network science, he analyzes complex social and biological phenomena from the viewpoint of networks to explore new knowledge and applications. His publications include “What AI Cannot Do: Limitations and Possibilities as Told Honestly by an Artificial Intelligence Researcher” (KADOKAWA).

  • Reporting and writing: Izumi Nakagawa
