News World Ai

    Crypto & Blockchain

    AI Models Lack Reasoning Capability Needed For AGI

By newsworldai | June 9, 2025 | 3 Mins Read

According to Apple researchers, the race to achieve Artificial General Intelligence (AGI) still has a long way to go, after their study found that the industry's leading AI models still struggle with reasoning.

Recent updates to large language models (LLMs) such as OpenAI's ChatGPT and Anthropic's Claude include large reasoning models (LRMs), but their fundamental capabilities, scaling properties and limitations "remain insufficiently understood," according to the Apple paper, titled "The Illusion of Thinking."

The researchers noted that current evaluations primarily focus on established math and coding benchmarks, "emphasizing final answer accuracy."

However, these evaluations provide little insight into the reasoning capabilities of the AI models.

The research runs counter to expectations that artificial general intelligence is only a few years away.

Apple researchers test AI models' "thinking"

The researchers devised various puzzle games to test "thinking" and "non-thinking" variants of Claude Sonnet, OpenAI's o3-mini and o1, and the DeepSeek-R1 and V3 chatbots, going beyond standard mathematical benchmarks.

They found that "frontier LRMs face a complete accuracy collapse beyond certain complexities," that the models fail to reason effectively, and that, contrary to expectations of AGI-level capability, their advantage disappears as complexity increases.
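
The complexity-driven collapse described here can be pictured as a simple evaluation sweep over puzzle size. The following is a hypothetical sketch, not Apple's actual test harness; the stub solver and all names are illustrative assumptions that mimic a model whose accuracy holds up below a complexity threshold and then collapses:

```python
def evaluate(solver, sizes, trials=10):
    """Measure solve accuracy at each puzzle complexity level (e.g., number of disks)."""
    results = {}
    for n in sizes:
        correct = sum(1 for _ in range(trials) if solver(n))
        results[n] = correct / trials
    return results

def stub_solver(n, threshold=7):
    """Toy stand-in for an LRM: reliable below a complexity threshold, then collapses."""
    return n <= threshold

accuracy = evaluate(stub_solver, range(1, 11))
```

Plotting `accuracy` against `n` would reproduce the shape of the reported result: flat near-perfect accuracy at low complexity, then a sharp drop to zero.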

"We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles," the paper said.

Accuracy of final answers and intermediate reasoning traces (top charts), alongside a chart showing that non-thinking models are more accurate at lower complexity (bottom chart). Source: Apple Machine Learning Research

Researchers say AI chatbots were overthinking

Observing the models' inconsistent and shallow reasoning, the researchers found the AI chatbots would generate correct answers early on and then wander off into incorrect reasoning.


The researchers concluded that the LRMs were imitating reasoning patterns without truly internalizing or generalizing them, falling short of AGI-level reasoning.

"These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning," they wrote.

Illustrations of the four puzzle environments. Source: Apple

The race to develop AGI

AGI is the holy grail of AI development: a state in which a machine can think and reason like a human, on par with human intelligence.

In January, OpenAI CEO Sam Altman said the firm was closer than ever to building AGI. "We are now confident we know how to build AGI as we have traditionally understood it," he said at the time.

In November, Anthropic CEO Dario Amodei said that AI would exceed human capabilities within the next two years. "If you just eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027," he said.
