A Meager Mea Culpa
A.I. has passed the US Medical Exam. What if that A.I. system was based on the same sad policies that nearly destroyed our society during COVID?
Should medical professionals be assisted by A.I.? Would that have helped during COVID? A medical student issued a mea culpa on behalf of the medical and scientific community in a Newsweek essay. At first brush you get the impression that the author, Kevin Bass, is issuing a genuine mea culpa. Here is the start:
As a medical student and researcher, I staunchly supported the efforts of the public health authorities when it came to COVID-19.
I believed that the authorities responded to the largest public health crisis of our lives with compassion, diligence, and scientific expertise. I was with them when they called for lockdowns, vaccines, and boosters.
I was wrong. We in the scientific community were wrong. And it cost lives.
Bass continues by naming the culprits. We know them: the CDC, the FDA, Fauci, The Science, the Lancet, and so on. He states that in each case they were wrong, that all the mandates, studies, and obfuscation cost lives, and that the public health sector somehow misled us about its own views. This happened because policy was made by people who favored their own preferences. Bass concludes with:
But perhaps more important than any individual error was how inherently flawed the overall approach of the scientific community was, and continues to be. It was flawed in a way that undermined its efficacy and resulted in thousands if not millions of preventable deaths.
I’m not sure what that really means. If a policy is fatally flawed, it has no efficacy; destroyed would be the appropriate word. But maybe not. How is something so WRONG in each and every decision, contradicting basic immunology and basic medicine, not to mention common sense, not done on purpose?
Bass goes on to state:
We created policy based on our preferences, then justified it using data. And then we portrayed those opposing our efforts as misguided, ignorant, selfish, and evil.
Not quite. Others very rightly pointed out that the data sets were skewed and that the collection process was abhorrent, as it included populations with comorbidities that should have disqualified the data from use. We had county health departments declare that one verified infection implied ten probable infections, then mix all those confirmed and probable counts together as “cases.” We had deaths recorded as COVID with no death certificate, and we had COVID listed as merely one of the conditions, yet named as the cause of death, for people who died of cardiac arrest.
This is mildly accurate at best. We indeed had a lexicon; we had science on our side; we had decades of medicine on our side that had determined how respiratory viruses propagate; and we had earlier experimental results showing that mRNA was too unstable to use against respiratory viruses.
As a medical student, Bass doesn’t recognize this? I question his training. I question his ability to think. I suspect he may be part of the problem if he didn’t see these glaring things right away. That’s the type of doctor I would want to have.
We have a problem in our society: an overwhelming love of credentialism. By that I mean we tend to believe “experts” with titles who can recite what they have been taught, and we take it as fact. We were once told that all science begins with observation, but now how many times are we told that “correlation does not mean causation”? Sure, but that phrase becomes a dismissal when your instincts are ignored or, worse, when contrary facts are waved away by those who utter it. Our experts are unaccustomed to being questioned. “Every educated person knows …” is the pejorative that is supposed to put you in your place.
We also live under the myth that possessing a credential means someone has come from the Imminently Qualified factory, with no one questioning their true IQ. Spend more time in the IQ Factory and you must be smarter. We’re told that advanced degrees guarantee it. At least the health officials tell us that. They did when I worked with them during the COVID crisis.
Let’s look at this from a different angle. What does Artificial Intelligence say about the capabilities of the scientific community? Does it diminish their worth? There are two headlines we should ponder. The first is “Would Chat GPT Get a Wharton MBA? New White Paper By Christian Terwiesch.”
The second headline, featured in the Tweet to the right, is that the A.I. Chat GPT platform has passed the US Medical Licensing Exam. Yes, a computer application has performed at a level that merits a license to practice medicine. So does that say more about the A.I. platform, or does it say more about the medical profession? My aim here is not to insult those in the medical profession; there are fine physicians out there who fought the COVID policy nonsense and lost their jobs over it. But I am holding up a rather large mirror to those who cling to the claim that their training guarantees the inerrant skill needed to help others heal. Does it? Did Bass receive training different from the A.I. application’s that would have helped protect patients? Clearly he was on board with the protocols issued by the CDC, since he didn’t question them at the beginning as he should have. Perhaps he was fooled into thinking the CDC was inerrant, but what about the common voices of reason from other sectors of society who recited the very knowledge that Bass should have applied immediately?
Now consider the A.I. knowledge base. What if the A.I. drew its conclusions from the faulty data gathered during COVID? If the application can pass the medical licensing exam, it can clearly perform well enough to earn a passing grade, but you couldn’t conclude that it would deliver a different set of recommendations.
Humans can be lazy, frail beings, subject to passions that derail rational thought. The intellectual elite are no different; in fact, their hubris fuels the condition that blinds them. And that’s being kind, because the public health sector failed so miserably during COVID that it begs the question of whether the failure was intentional. That’s another topic for another day. But Professor Terwiesch, who ran the MBA experiment, very rightly pointed out:
The skill of looking at a suggested alternative that is well presented and looks totally plausible and then being able to critically evaluate if the suggested alternative is fundamentally flawed or absolutely brilliant is thus among the most important skills in a management career. With Chat GPT3 playing the role of that smart consultant (who always has an elegant answer, but oftentimes is wrong) we thus have a perfect training ground for developing that skill. Just think back to Answer 3 (the Cranberry process). It was well presented and the numbers looked coherent and plausible – but, it was wrong nevertheless.
With COVID that. Just. Didn’t. Happen. Period.
People died, needlessly. All because medical professionals just recited “the protocols tell us we should not treat, we should vaccinate” like some magical spell uttered by Gandalf in The Lord of the Rings. How is that different from relying on A.I. to just spit out answers for us? If the answers come from the same poor knowledge base, are somehow deemed correct, and the disastrous results are ignored, what are we achieving? Perhaps a further shuttering of the mind.