Unreflective people really seem to have a "the AI told me it, so it must be true" attitude, when in fact the exact opposite must be our prior: "the AI told me it, so it is presumptively false until verified through other sources." These systems have no ground-truthing mechanism; we _have_ to provide it ourselves.
And if you're of a certain age, you may remember that this is what older people told students of my generation about Wikipedia. The difference is that Wikipedia, despite its many well-documented issues, had more of a ground-truthing mechanism even then than LLMs do now, and only more so since.