@emilymbender "... so you can feel more informed and prepared ..."
Not BE more informed, FEEL more informed. This is dark and ghoulish and gets at a core evilness of AI - that obsequious, overconfident presentation of unvetted information, the casual amoral lying, and the greasy faux apology when called on it.

And to be clear, I'm not anthropomorphizing the software - the system is a proxy for its owners and developers. There is a set of humans responsible for creating and deploying and selling these systems, a set who profits from them - the hand inside the puppet head.

Regardless of the elaborate statistical text generation mechanism inside the puppet, someone controls the presentation of results, built that fake human-like interface, added that fake apology, etc. That's not the LLM; that's deterministic UI programming, designed and built by humans to let you feel the system is more intelligent and feeling than it is. Remove the raw LLM results from the total response and what you have left is that overconfident, obsequious, amoral framing, which is an intentional choice by the system designers. However obscured, there's still a human hand inside that puppet head, and that human's presence is what transforms the LLM's unvetted extruded text into fraud and lies.