An analogy came up last month that I think bears discussion, and I wonder what you think. Everyone knows we have to watch out for AI’s dangers, but the same was true when cars were new. And electricity. And machinery. How do we learn to move forward – to enjoy new power – without horrible risks??
In the past year I’ve been involved in several projects all revolving around this constant tension. Most recently, at the @DIA2024 conference in San Diego, I was the patient voice on a high-powered panel about how to move forward in a way that leverages AI’s power without foolish risk. It was moderated by the excellent Martin Hodosi of consulting firm Kearney; fellow panelists were Karla Childers, head of Bioethics-Based Science and Technology Policy for J&J, and Junaid Bajwa, Chief Medical Scientist at Microsoft Research and a practicing NHS physician.
The panel was as sharp as you’d expect, but the most energizing insight arose in Q&A. The DIA audience is highly concerned about safety, regulatory issues, and responsibility to patients. As we tossed this tension back and forth I eventually burst out with this (to the best of my recollection):
Think about when cars were new. We’d never have gotten anywhere if we’d banned them until we had turn signals, windshield wipers, paved roads, speed limits, stoplights, anti-lock brakes, automatic transmissions, guard rails … We could only make progress by gaining experience and learning what was needed.
It was a risky thing to say – there was no time to think it out – but it stopped things in their tracks and people came back to it several times, including in the ideating session that followed.
Is it a valid analogy? I think so: with every breakthrough technology there’s enormous potential and the real risk of deaths. Consider electricity: truly world-changing – but it killed people in ways that had never happened before, and it took decades of improving regulations and building codes before we got to where we are today. (And it still kills some, every day.)
What do you think? Is the car analogy valid? Can anyone out there fill in important details that are missing from the question – details we can learn from?
Jan Oldenburg says
Hi Dave,
I think it is a reasonable analogy, BUT–think about how you would feel if you had a relative who was a victim of the early (lack of) traffic laws. I think this episode of the Criminal Podcast has some instructive components and analogies: https://t.co/DAhGMD7lhL So, while there’s certainly a balance that needs to be found, I don’t think we should assume that “charge forward and experiment with no guardrails” is exactly the right direction. Progress, yes, but with care. And it’s important, of course, that we don’t leave it up to the corporate folks to create the guardrails. Patients and consumers need to be involved as well!
e-Patient Dave says
Excellent reply, as always!
I think it’s safe to say that the whole reason we talk about guard rails today is because of what happened to people when there were no guard rails!
The point we arrived at in the room is not stated in this post, and perhaps I should update it: we Cannot wait until everything has been figured out before we start moving forward. But obviously, as you say, we also can’t just put on blinders and race ahead.
And we absolutely need patient voices involved in the decisions. To learn that lesson all we need to do is look at what Detroit kept doing until Ralph Nader came along and properly embarrassed them for their heartlessness.
Bill Reenstra says
There is a major difference between cars, planes, and computer hardware on the one hand and AI on the other: only AI can be used to deceive someone else. AI can also deceive the user: because the input information and the process of analysis are often obscured from the user, the validity of AI output can be hard to assess.
e-Patient Dave says
I fully agree, Bill. I need to get off my butt and blog about an excellent TED Talk by Eric Topol on AI in medicine that was published last December. I say that because, as you probably know, there are two distinctly different classes of AI: predictive, which is the sort that does diagnosis, and generative, which is ChatGPT and its ilk, which generate expressions of ideas. I believe the latter is what you are talking about, and IMO it is indeed an entirely new class of threat.
Having said that, one could explore the assertion that cars and steam engines presented unprecedented dangers because for the first time ever we had things more powerful than we had ever imagined… More powerful than the strongest horse, to tie it back to the image above.
In any case I think we still find ourselves on a frontier, trying to figure out how to move forward without killing ourselves and others. And yes, the risks of intentionally fraudulent use are really bad, because detecting it requires an astute mentality, which is not something we can count on around the world or even in our own neighborhoods.
Bill Reenstra says
I am of a school where even correct answers to test questions were not sufficient without showing your work – how you arrived at the answer – and were graded accordingly.
My 30,000 ft view of generative AI is that it generates correct answers but the process by which it achieved its answer is unknowable to both the end user and the programmer. As such it has limited appeal to me.
A fundamental and unanswered question of biochemistry is ‘How do proteins, composed of long linear chains of amino acids (200 to 2000 amino acids long), fold into their compact 3-dimensional shapes?’
Much work has gone into crystallizing individual proteins and using a labor-intensive process called x-ray crystallography to determine their structure. This provides structures but does not address the question of how they folded into their final form.
Because of the labor and time required to solve individual structures, there have been theoretical studies that attempted to predict the structures of proteins whose structures were unknown. I would characterize 70 years of work on this problem as a total failure.
About 3 years ago an AI program was developed, trained on a database of 10,000+ known structures, and asked to predict the structures of proteins with unknown structures. The structures it generated were amazingly good. They then ran predictions for all proteins without a known structure and made them public. These have proved to be a valuable biomedical resource, especially for drug design, as you can now use other programs to search for molecules that might interact with proteins that previously had no known structure.
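(As an aside, those public predictions are easy to pull down yourself. Here is a minimal sketch of fetching one from the AlphaFold Protein Structure Database; the endpoint, the response field names, and the example UniProt accession are my assumptions about that public service, not anything from the work described above.)

```python
import requests

# Illustrative UniProt accession (P69905 = human hemoglobin alpha subunit).
accession = "P69905"

# Assumed public endpoint of the AlphaFold Protein Structure Database.
resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30)
resp.raise_for_status()

entry = resp.json()[0]        # the service returns a list of model entries
pdb_url = entry["pdbUrl"]     # assumed field name for the predicted-structure file
structure = requests.get(pdb_url, timeout=30).text
print(structure.splitlines()[0])   # first record of the PDB file
```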
The problem was that many claimed that this answered the problem of how proteins folded. It did not. It provided structures, but only because it had a huge library of known structures. It didn’t solve protein structure de novo. It didn’t even provide information on how it developed the structures.
In medicine I don’t want cures without knowing how they work. I fear that using generative AI to look for cures will lead to this.
e-Patient Dave says
Bill, you’re spot-on … at least as far as I know, given that I’m no technical expert about AI.
I would not for an instant assume the AI is right about anything. The trick is to use it to help us think.
What you’re talking about seems to fall under “explainable AI.” Here’s the Wikipedia article on the subject, and here’s a quote from a paper on explainable AI in healthcare:
“Publications on artificial intelligence (AI) and machine learning (ML) in medicine have quintupled in the last decade [1,2]. However, implementation of these systems into clinical practice lags behind due to a lack of trust and system explainability [2,3]. Solving the explainability conundrum in AI/ML (XAI) [4,5] is considered the number one requirement for enabling trustful human-AI teaming in medicine [2,3,6] …”
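To make “explainability” a bit more concrete, here’s a minimal illustration (mine, not from that paper) of one common technique, permutation feature importance, which asks how much a model’s accuracy drops when each input feature is scrambled:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for a clinical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```

That kind of “which inputs mattered most” readout is a far cry from a full explanation, but it’s the flavor of thing the XAI literature is chasing.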
EVERYONE I know who’s a non-loony thinker in medical AI agrees with you. (Note, to me “loony” includes any commercial interest that wants people to buy their s^@+; I’m only talking about people whose principal priority is to advance the field scientifically … to advance the field while being responsible and cautious.)
An important prompt-writing technique is to include “show your work” or “explain your reasoning.” For instance, here’s a prompt I used recently to explore a weird symptom someone was having.
“Thinking as a neurologist, please provide a differential diagnosis, explaining your reasoning. Here is the case. …” – then I added as much info as I had about the person and the case.
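If you’d rather see what that looks like in code than in a chat window, here’s a minimal sketch using the OpenAI Python client; the model name is just illustrative, and this is not literally what I typed into the chat box:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_details = "…"  # the de-identified case description would go here

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Thinking as a neurologist, please provide a differential diagnosis, "
            "explaining your reasoning. Here is the case: " + case_details
        ),
    }],
)
print(response.choices[0].message.content)
```

The point isn’t the plumbing; it’s that “explaining your reasoning” rides along with the question wherever you ask it.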
If you want to do a little mutual experiment, propose a question and we can play with it offline then “publish” the results.
N.B. I am not hawking any particular POV here. I’m exploring and sharing, and your collaboration would be valuable.
Note also that I don’t think I’m a doctor! The example above was just an illustration of learning to interact with these tools.
Dustin Cotliar says
You make some great points here about needing to embrace new technology and learn from failures in order to regulate better. We are definitely on the same page with this. One thing to note in this discussion is that AI innovation is a different beast in that its progression is exponential as opposed to linear. The rate of change and advancement will present regulatory and safety challenges that are hard both to keep up with and to anticipate. However, the potential benefits are so great that we need to take the plunge regardless.