<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA["AI outperforms doctors in Harvard trial of emergency triage diagnoses | AI (artificial intelligence) | The Guardian"]]></title><description><![CDATA[<p>"AI outperforms doctors in Harvard trial of emergency triage diagnoses | AI (artificial intelligence) | The Guardian"</p><p><a href="https://www.theguardian.com/technology/2026/apr/30/ai-outperforms-doctors-in-harvard-trial-of-emergency-triage-diagnoses" rel="nofollow noopener"><span>https://www.</span><span>theguardian.com/technology/202</span><span>6/apr/30/ai-outperforms-doctors-in-harvard-trial-of-emergency-triage-diagnoses</span></a></p><p><a href="https://toot.cat/tags/ai" rel="tag">#<span>ai</span></a> <a href="https://toot.cat/tags/medicine" rel="tag">#<span>medicine</span></a> <a href="https://toot.cat/tags/diagnosis" rel="tag">#<span>diagnosis</span></a></p>]]></description><link>https://postcall.pub/topic/6a085aa8-7b2b-4533-9a65-d23ae109b623/ai-outperforms-doctors-in-harvard-trial-of-emergency-triage-diagnoses-ai-artificial-intelligence-the-guardian</link><generator>RSS for Node</generator><lastBuildDate>Fri, 01 May 2026 12:10:56 GMT</lastBuildDate><atom:link href="https://postcall.pub/topic/6a085aa8-7b2b-4533-9a65-d23ae109b623.rss" rel="self" type="application/rss+xml"/><pubDate>Thu, 30 Apr 2026 22:27:06 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to "AI outperforms doctors in Harvard trial of emergency triage diagnoses | AI (artificial intelligence) | The Guardian" on Thu, 30 Apr 2026 22:45:01 GMT]]></title><description><![CDATA[<p>This surprises me not at all. And it probably surprises no one except perhaps doctors.</p><p>Back in the '90s, while working on my PhD in statistics, I came across some research on how physicians make diagnostic decisions. 
The most common paradigm was to pick one hypothesis and ride it until it was definitively proven wrong, then pick the next hypothesis and repeat.</p><p>It would be much better to list all possible hypotheses, or at least the top k, and keep them all in play, gathering data to discriminate among them and continually refining the Bayesian posteriors of the competing hypotheses.</p><p>No one looking carefully at diagnostic decision-making would ever have said that physicians generally perform optimally.</p><p>We could have developed diagnostic reasoning tools in the '90s that would have beaten physicians (though without natural-language input). But no one would have seriously entertained using them. We weren't allowed to. Too many questions about liability.</p><p>Now AI is a 500-pound gorilla, and it can do what it wants. That's the biggest change. Medical diagnosis isn't really that hard a problem to solve. Robust handwriting recognition is probably harder.</p>]]></description><link>https://postcall.pub/post/https://toot.cat/users/JimG/statuses/116496079348281365</link><guid isPermaLink="true">https://postcall.pub/post/https://toot.cat/users/JimG/statuses/116496079348281365</guid><dc:creator><![CDATA[jimg@toot.cat]]></dc:creator><pubDate>Thu, 30 Apr 2026 22:45:01 GMT</pubDate></item></channel></rss>