Interesting. Does this apply to all subjects? From what I understand, a major cause of hallucination is that training inadvertently discourages models from saying "I don't know." So it sounds like encouraging a model to express uncertainty could improve that situation.
That's not a major issue. Any newer model with reasoning or web search has to be able to tell when it doesn't know something; otherwise it wouldn't know when to search for it.