Guest Contributor – ArthurinCali – 14APR2023 – Scientific Dark Ages

AI medical imaging potential should not be stifled.

 

“Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.”

-Marie Curie (1867-1934)

 

The de facto position concerning new medical discoveries cannot be that they will inevitably be used for sinister purposes. This anti-science viewpoint stifles innovation and progress for humanity, stunting potential breakthroughs in medicine. Tools available to researchers and health professionals are just that: tools. Their usage and application depend on who wields these instruments. It is not hard to imagine that the first Neanderthals who discovered fire might have been dubious about its use once one of them tried to touch it.

Recently it was found that Artificial Intelligence (AI) used to screen medical images can detect, with near-perfect accuracy, a patient’s racial identity without any prior knowledge or outside inputs. This ability covered a broad range of image types, from CT scans to chest X-rays, and persisted across all anatomical regions. The finding was replicated even with images that were cropped, corrupted, or far from ideally clear. Even more surprising, human experts trained to analyze these scans cannot determine the patient’s race, yet AI trained with deep learning developed this ability by itself.

The AI relies on deep learning, a machine learning approach that imitates the human learning process to a degree, combining predictive analytics with complex, layered learning algorithms.

A good analogy describes the process as:

“To understand deep learning, imagine a toddler whose first word is dog. The toddler learns what a dog is — and is not — by pointing to objects and saying the word dog. The parent says, “Yes, that is a dog,” or, “No, that is not a dog.” As the toddler continues to point to objects, he becomes more aware of the features that all dogs possess. What the toddler does, without knowing it, is clarify a complex abstraction — the concept of dog — by building a hierarchy in which each level of abstraction is created with knowledge that was gained from the preceding layer of the hierarchy.”

In this fashion, deep learning differs from traditional algorithms whose rules are dictated by the computer programmer. A deep learning model independently sifts through massive data sets to build complex statistical models from large amounts of unstructured data. This represents a new frontier in AI technology: prior to the 21st century, access to big data sets and cloud computing was out of reach for most computer programmers.
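For the curious, a minimal sketch of what such a deep learning image classifier looks like in code is shown below, assuming the PyTorch library. The network, layer sizes, class count, and random stand-in “scans” are placeholders chosen purely for illustration; this is not the model used in the Lancet study.

```python
# Illustrative sketch only: a tiny convolutional network, assuming PyTorch.
# Each stacked layer learns progressively more abstract features, much like
# the "toddler and dog" analogy above. None of this reflects the actual
# architecture or data from the studies linked below.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level edges and textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_classes),         # assumes 224x224 grayscale inputs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One illustrative training step on random tensors standing in for scans.
model = TinyImageClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 224, 224)   # batch of fake grayscale "scans"
labels = torch.randint(0, 4, (8,))     # fake class labels

logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

The point of the sketch is simply that the programmer specifies the structure and the data, while the model itself works out which features matter, which is exactly why its learned abilities can come as a surprise.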

Not everyone in the medical field sees the potential in AI recognition. This article from MIT discounts any benefit to the computer model’s ability to detect race.

“The fact that algorithms ‘see’ race, as the authors convincingly document, can be dangerous. But an important and related fact is that, when used carefully, algorithms can also work to counter bias,” says Ziad Obermeyer, associate professor at the University of California at Berkeley, whose research focuses on AI applied to health.

This understandable caution about bias in AI is the prevailing theme for a significant part of the medical community. Numerous papers and articles sound the alarm about potential abuses of AI racial recognition. However, not one study or opinion piece expanded on the potential uses of AI’s ability to discern race in medical imaging. Remember, no human expert can do this, regardless of experience or training. What if this could unlock new, highly effective therapies tailored to different races depending on the disease? Maybe it leads to distinct correlations that allow for earlier detection of cancer and other diseases. AI using multivariate analysis would give all patients an edge in their treatment plans. An entirely new realm of possibilities could open up for science, but only if the courage is there to look for it.

This may turn out to be of no value in the field of medicine, and it may even be a feature of AI that needs to be eliminated entirely. Yet to dismiss out of hand AI’s ability to detect racial identity in medical imaging, without more research into potential uses, would be folly and a discredit to the goal of furthering both medical science and artificial intelligence.

Links:

AI recognition of patient race in medical imaging: a modelling study

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00063-2/fulltext

Artificial intelligence predicts patients’ race from their medical images

https://news.mit.edu/2022/artificial-intelligence-predicts-patients-race-from-medical-images-0520

AI systems can detect patient race, creating new opportunities to perpetuate health disparities

https://news.emory.edu/stories/2022/05/hs_ai_systems_detect_patient_race_27-05-2022/story.html

Risks of AI Race Detection in the Medical System

https://hai.stanford.edu/policy-brief-risks-ai-race-detection-medical-system

AI Can Detect Race When Clinicians Cannot, Increasing Risk of Bias

https://healthitanalytics.com/news/ai-can-detect-race-when-clinicians-cannot-increasing-risk-of-bias

AI programs can tell race from X-rays, but scientists don’t know how. Here’s why that’s bad.

https://www.boston.com/news/health/2022/05/18/scientists-create-ai-race-from-x-rays-dont-know-how-it-works-harvard-mit/

Hidden in Plain Sight: If AI Can Detect Race, What About Bias?

https://www.medscape.com/viewarticle/977619

Deep Learning

https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network

22OCT2022 – OCF Update – Sony Photography

 

Politics has been so all-consuming lately that I haven’t had a chance to write about anything else.  But in the last month or so there’s been a lot of buzz about Sony’s imminent release of their latest update to their high-resolution camera, the A7R V.  When I’m interested in breathless reporting I go to sonyalpharumors.com and listen as Andrea tells us confusing and sometimes inconsistent things about the future.

The Sony A7R cameras are very nice pieces of kit and using them for macro is a very attractive proposition with their 61-megapixel sensors and other high-resolution accoutrements.  But 61 megapixels is a little bit more than I think I need.  Plus the price tag is now coming in above $4,000.  And for a man of my limited means that’s beginning to seem high.  Plus I do a sort of mixed landscape, macro, walk-around photography that seems to play to the Sony A7 IV all-around camera sweet spot.  So let’s just say that my interest in the Sony A7R V hullabaloo is more on the academic side.

But what did intrigue me was the talk about AI based autofocus.  And here’s why.  I’ve been hearing and reading from various sides that phone cameras are catching up with dedicated professional cameras.  And the reason given for this is that phone cameras have highly intelligent algorithms that provide very precise autofocus and excellent sharpening and color representation.  And that as we reach the limits of what lenses and mechanical devices can do for optical focusing and image stabilization it will be this advanced artificial intelligence that makes phones the future of photography.

Now, currently I don’t think things are really all that simple.  In fact I’ve spoken to some photographers who have very good phone cameras and they say that although they get very nice shots from their phones, they wouldn’t put one of these files up against a full frame landscape shot as a source for a large print or even as a basis for a cropped photo.  Apparently there is a bit of surface magic going on that doesn’t stand up to close scrutiny.

But that being said, I am positive that adding a strong algorithm to a very good camera like any of the various Sony A7, A9 or A1 cameras would be a very useful and fruitful step.  For things like birds in flight and sports tracking it would be a step in the right direction.  Just like the eye-AF function was a game changer for focus accuracy, a program working at many times the speed of human reflexes, continuously evaluating the autofocus, recognizing changes in the image, and anticipating where the focus will need to be next, would improve results greatly.  Even for something like fast-moving insect macro photography, having the latest algorithm to optimize the changes in focus point would provide a much higher rate of success in something that has a very short time window.  I could imagine that specialized algorithms might be available for individual subjects like birds in flight or hovering hummingbirds or even butterflies on flowers.
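To make the idea of “anticipating the focus” concrete, here’s a toy sketch of the concept.  This isn’t Sony’s algorithm (that’s a trade secret as far as I know), just a simple constant-velocity guess, written in Python with made-up numbers, of where a moving subject will be on the next frame so the lens can start moving early.

```python
# Toy sketch of predictive focus tracking -- NOT any real camera's algorithm.
# The idea: estimate how fast the subject distance is changing, then pre-position
# the focus for where the subject will probably be on the next frame.
from dataclasses import dataclass

@dataclass
class FocusEstimate:
    distance_m: float    # estimated subject distance in meters
    velocity_mps: float  # estimated rate of change between frames

def update(prev: FocusEstimate, measured_m: float, dt: float,
           smoothing: float = 0.5) -> FocusEstimate:
    """Blend the new distance reading with the old estimate and track
    how fast the subject is approaching or receding."""
    velocity = (measured_m - prev.distance_m) / dt
    blended_velocity = smoothing * velocity + (1 - smoothing) * prev.velocity_mps
    return FocusEstimate(measured_m, blended_velocity)

def predict(est: FocusEstimate, dt: float) -> float:
    """Where the subject will probably be one frame from now."""
    return est.distance_m + est.velocity_mps * dt

# A bird flying toward the camera, with a distance reading every 1/30 of a second.
est = FocusEstimate(distance_m=20.0, velocity_mps=0.0)
for measured in [19.5, 19.0, 18.4, 17.9]:
    est = update(est, measured, dt=1 / 30)
    print(f"measured {measured:4.1f} m -> pre-focus at {predict(est, dt=1 / 30):5.2f} m")
```

The real systems presumably do something far more sophisticated, with subject recognition layered on top, but even this crude version shows why anticipating the motion beats simply reacting to it.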

As you can see I’ve let my imagination run away with me but I’m sure there are sports photographers salivating over the idea of an algorithm specifically formulated for football wide receivers running for catches down the side line.  Or how about one specially formulated for NBA stars pretending to be fouled to obtain a penalty shot?

Anyway, I’m intrigued by just what AI may have in store for us in the future.  One thing I hope is that a lot of it might be available as a firmware update to our already very capable cameras.  But honestly I’m not very aware of exactly what capability my camera’s BIONZ XR image processor has.  It may be a genius or a dolt.  So I’ll have to wait and see.

Meanwhile my present camera has been doing a splendid job of taking all the pictures I ask of it.  At this point I have no need of an upgrade.  All I ever watch nowadays is the new lens info.  And even that is sort of a reflex.

So I’ll just see where everything goes and enjoy photography for what it is.  Fun.