Photo: The IBM computer system known as Watson at IBM’s T.J. Watson research center in Yorktown Heights, N.Y., in 2011. Watson is being tapped by one of the nation’s largest health insurers, WellPoint Inc., to help diagnose medical problems and authorize treatments.

During the Democratic Party presidential infomercials, otherwise known by the euphemism “debates,” not much time (if any) was dedicated to the impact Artificial Intelligence (AI) will have on job loss.

Each time technological improvements have entered the marketplace, they have been accompanied by a certain amount of job reduction that corresponds with job creation. But will America have a workforce prepared for the new jobs?

We’ve progressed from HAL 9000 refusing to open the pod bay doors in the movie 2001: A Space Odyssey to self-driving cars.

Over the past decade, IBM’s Watson defeated two of the quiz show Jeopardy!’s best players; Google’s AlphaGo machine won a game of Go against China’s Ke Jie, considered to be the world’s best player; and scientists at Carnegie Mellon created a system that won $2 million in a poker tournament from top players. It will only be a matter of time before AI writes a weekly social/political column, hosts a radio talk show and writes a bestselling novel.

Fewer humans are needed to provide customer service, check out groceries, provide basic banking services, order fast food and assemble automobiles. Amazon is testing technology in its warehouses that can package orders five times faster than humans. In short, increased productivity displaces human labor, reduces wages and increases profits.

None of this reflects a Neanderthal call to halt progress, as some sort of 20th century protectionist plan to momentarily save jobs. Progress is inevitable, and with it comes a certain amount of pain.

As AI progresses, some believe it will engulf large sectors of the workforce, causing mass-scale unemployment and social unrest. A 2013 Oxford study suggested that as much as 47% of current U.S. jobs are at risk of automation. Whether this statistic is accurate remains to be seen. But is doing nothing a viable option?

Is there not a moral imperative to think through the implications of foreordained change? Do we want to wait until jobs are lost and communities decimated to enact reactionary job-training programs? That’s a difficult sell to a country yet to be fully weaned from immediate gratification.

The only way to address the inescapable reality is to be proactive, lest politicians be forced to offer empty campaign promises such as bringing back the coal industry or longing for the “good old days” when baristas at Starbucks were human.

The problem rests not with having a definitive answer, but rather the lack of courage required to ask the right questions. Once AI is fully integrated, what jobs will be eliminated, what jobs will be created and what should our education system look like in response?

It may be assumed that education will place an even higher value on science, technology, engineering and mathematics (STEM), but the counterintuitive reality suggests there will be a premium on the critical thinking and problem solving not always found in those disciplines.

Technology moves the culture forward, but the human condition is cyclical. Across recorded history we have learned, over time, to do things faster, cheaper and more efficiently, but we still get angry, jealous and elated.

Without a visionary plan, technological advances will inevitably lead to job losses that are answered with cheap reactionary responses that do just enough to elicit emotion — ineffectual solutions for complex problems. How many times do we need to watch the same movie before we realize the ending is not going to change?

This reflects badly on our culture and politics. Either the impact of AI doesn’t poll well enough among the electorate to warrant concern, or those who seek the office do not yet know how to talk about it. But political leadership means charting a course before it polls well.

As President John F. Kennedy stated in his famous “Moon Speech” at Rice University, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.”

Is it not the doing of “hard” things, rather than retreating into the silos of a nostalgic past that probably never existed, that has made America great?

But something that is 20, 40 or maybe 50 years down the road is not widely embraced in a culture that is primarily transactional. If we’re hamstrung by the inability to have judicious conversations about guns and climate change, I suspect it is overly ambitious to hope at least one individual running for president has his or her eye on the future — the not-so-distant future.


The Rev. Byron Williams (byron@publicmorality.org), a writer and the host of “The Public Morality” on WSNC 90.5, lives in Winston-Salem.
