Could Artificial Intelligence Create Outcomes Beyond Humans’ Comprehension?
Anyone interested in a good roundtable discussion, in book form, of the subject of artificial intelligence (AI) and machine learning might want to turn to the 2019 book Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, whom the volume’s book jacket describes as “a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. The founder and publisher of the online salon Edge (edge.org), he is the editor of the Edge Question book series, which includes This Idea Is Brilliant, Know This, This Idea Must Die, This Explains Everything, This Will Make You Smarter, and other volumes.” Perhaps it took a polymath and gadabout like Brockman to assemble such a kaleidoscope of diverse perspectives on this important subject.
And Brockman has indeed gathered together a roundtable of diverse contributors—yes, a full 25 of them, including the expected computer scientists and entrepreneurs, but also professors of electrical engineering, architecture, physics, developmental psychology, philosophy, and the history of science, among other thinkers. Each of the 25 has unique perspectives to share.
Of course, my eye was drawn to the perspective shared by Venki Ramakrishnan, a Nobel Prize-winning biologist at the Medical Research Council Laboratory of Molecular Biology in Cambridge in the United Kingdom, because of the title of his chapter: “Will Computers Become Our Overlords?” I mean, who could resist dipping into that chapter?
Ramakrishnan writes, “A former colleague of mine, Gérard Bricogne, used to joke that carbon-based intelligence was simply a catalyst for the evolution of silicon-based intelligence. For quite a long time, both Hollywood movies and scientific Jeremiahs have been predicting our eventual capitulation to our computer overlords. We all await the singularity, which always seems to be just over the horizon.”
Meanwhile, after exploring some of the potential devastation to people’s jobs and careers that AI might bring, Ramakrishnan states, “As a scientist, what bothers me is our potential loss of understanding. We are now accumulating data at an incredible rate.” And there is great potential for problems down the road, he says, as AI’s responses to challenges could come to outstrip humans’ ability to make use of them. He adds, “If AI is to become more humanlike in its abilities, the machine-learning and neuroscience communities need to interact closely, something that is happening already.” And as mysterious as AI is in general, how it will evolve in healthcare remains even more inscrutable at the moment.
In other words, how all this will turn out in healthcare remains mysterious, even to those at the forefront of AI development in our industry; we will only know over time where this train is headed. The same could be said of the future of health information exchanges (HIEs), the organizations that facilitate the sharing of key clinical and related data among appropriate parties, including physician practices, hospitals and health systems, long-term care and rehab facilities, public health agencies, and other organizations. So much has happened in the healthcare policy world of late, including changes in the controlling party in Congress and the White House, as well as the COVID-19 pandemic, that has affected the trajectory of HIE evolution. There are numerous potential futures here, and it will be important for the leaders of patient care organizations to follow developments closely. Our cover story in this issue (see p. 6) shares the perspectives of several leaders in the HIE sector and provides fuel for thought.
As with the future of AI and machine learning, there are innumerable potential outcomes—and ultimately, more than 25 perspectives on any specific topic in either area of endeavor.