
On July 23, 2025, the White House issued its policy document Winning the Race: America’s AI Action Plan. The Action Plan provides a thorough analysis of strategies, measures, and potential concerns to guide the future development of AI. It focuses on policy measures organized under three “pillars”: innovation, infrastructure, and international diplomacy and security.
However, one important issue is not addressed: the urgent need to understand the path toward artificial general intelligence (“AGI”), superintelligence, and alternative intelligence, each of which will intensify the black box problem.
Within the section titled “Pillar I: Accelerate AI Innovation,” the Action Plan does identify some aspects of the black box problem. Under the heading “Invest in AI Interpretability, Control, and Robustness Breakthroughs,” the Action Plan expresses the following concern:
Today, the inner workings of frontier AI systems are poorly understood. Technologists know at a high level how LLMs work, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake. Fundamental breakthroughs on these research problems would enable the United States to harness AI systems to their full potential in national security domains.
The Action Plan recommends that the government (1) launch a technology development program to advance AI interpretability, AI control systems, and adversarial robustness; (2) prioritize fundamental advances in AI interpretability, control, and robustness; and (3) recruit top talent to stress-test AI systems for vulnerabilities.
As far as it goes, this is commendable, but it is not enough. The Action Plan does not specifically highlight the critical need to understand the effects and consequences of the ongoing race to create systems that operate at ever-higher levels of intelligence.
First, nation-states and private developers are racing toward AGI. They give the term different meanings, but most common definitions use it to refer to advanced AI systems that can perform a wide range of tasks as well as or better than humans, or AI that is at least as capable as humans at most cognitive tasks. Google and others have warned that agentic AGI could empower AI systems to act and carry out plans independently. They warn that this increases the risk of real-world consequences from misalignment, namely, an AI system pursuing goals and taking steps that its developer did not intend. It also increases other risks such as misuse, mistakes, and structural risks (described as harms arising from multi-agent dynamics and conflicting incentives). Google set this out in detail in its April 2025 paper titled An Approach to Technical AGI Safety and Security.
Second, there is a parallel race to develop AI systems with capabilities beyond human intelligence, sometimes called “superintelligence.” As progress is made, it will greatly magnify the dangers posed by developers who fail to understand how their creations operate and behave.
Finally, the world has already seen the emergence of what can be described as “alternative intelligence”: AI that approaches problems and tasks in ways that humans do not. For example, consider the development of AlphaGo Zero. Go, a traditional Asian board game, is thought to be among the most complex in the world. An AI system called AlphaGo was trained by analyzing millions of moves from games played by humans, and it soon managed to defeat human champions. But then AlphaGo Zero was created. That system was trained without any data about prior human play. It was given only the rules of the game and trained solely through games against itself. It soon surpassed the systems trained on human data, even deploying highly sophisticated strategies that humans had never used, or had considered only rarely. That is, it approached the problems presented by the game through analysis entirely untethered from human thought processes, and that analysis proved superior. This happened in 2017, so this is not some future scenario.
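The self-play idea behind AlphaGo Zero can be illustrated at toy scale. The sketch below is not DeepMind’s method (which used deep neural networks and Monte Carlo tree search on Go); it is a minimal stand-in using tabular Q-learning on tic-tac-toe, where an agent given only the rules improves purely by playing against itself, with no human game data. All names and parameters here are illustrative.

```python
import random
from collections import defaultdict

# Winning lines on a 3x3 board (indices 0..8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Legal moves: indices of empty squares."""
    return [i for i, c in enumerate(board) if c == ' ']

def self_play_train(episodes=5000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn tic-tac-toe from the rules alone, via self-play.

    Q maps (board_string, move) to a value from the mover's perspective.
    No human games are used: both sides are the same learning agent.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        board, player, history = ' ' * 9, 'X', []
        while True:
            legal = moves(board)
            # Epsilon-greedy: explore sometimes, otherwise take best known move.
            m = rng.choice(legal) if rng.random() < eps else \
                max(legal, key=lambda a: Q[(board, a)])
            history.append((board, m))
            board = board[:m] + player + board[m + 1:]
            if winner(board) or not moves(board):
                # Propagate the final result backward; the sign flips each ply
                # because consecutive moves belong to opposing players.
                reward = 1.0 if winner(board) else 0.0
                for state, move in reversed(history):
                    Q[(state, move)] += alpha * (reward - Q[(state, move)])
                    reward = -gamma * reward
                break
            player = 'O' if player == 'X' else 'X'
    return Q

Q = self_play_train()
```

The point of the toy is the training signal, not the game: nothing in the loop references human play, only the rules (`winner`, `moves`) and the outcomes of the agent’s own games.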
All of this suggests that the world of AI science fiction may be closer to reality than we think. AI researchers need to analyze, urgently, just how close it is.
The AI Action Plan pays no attention to the need to understand what has happened, and what could happen, as AI systems approach AGI, superintelligence, or alternative intelligence. That essential research is, in the near term, less profitable than the race to build ever more capable systems, the very systems that will need to be controlled. Funding for this research should therefore be provided through a full-throttle government initiative.