Artificial intelligence rules more of your life. Who rules AI?

March 14, 2018

Technology companies are racing to get ahead of regulators to shape the future of artificial intelligence as it moves deeper into our daily lives.

Companies are already working artificial intelligence, or AI, into their business models, but the technology remains controversial. So IBM, Intel, and associations representing Apple, Facebook, and Alphabet’s Google unit are seeking to set ethical standards, or a sort of code of conduct, through alliances with futurists, civil-rights activists, and social scientists.

Critics, however, see it as an effort to blunt outside regulation by cities, states, or the federal government, and they question whether tech companies are best suited to shape the rules of the road. For the corporations, the algorithms will be proprietary tools to assess your loan-worthiness, your job application, and your risk of stroke. Many balk at the costs of developing systems that not only learn to make decisions, but also explain those decisions to outsiders.

When New York City proposed a law in August requiring that companies publish source code for algorithms used by city agencies, tech firms pushed back, saying they needed to protect proprietary algorithms. The city passed a scaled-back version in December without the source-code requirement.

AI, broadly speaking, refers to computers mimicking intelligent behavior, crunching big data to make judgments on anything from avoiding car accidents to where the next crime might happen.

Yet computer algorithms aren’t always clear on their logic. If a computer consistently denies a loan to members of a certain sex or race, is that discrimination? Will regulators have the right to examine the algorithm that made the decision? What if the algorithm doesn’t know how to explain itself?

The Obama administration sought to address these issues. Under Mr. Obama, the Office of Science and Technology Policy issued white papers on the ethical implications of AI. Under Mr. Trump, the office still doesn’t have a director, and its staff is down to about 45, from about 130.

For now, the Trump administration has signaled it wants business to take the lead. The administration is worried that overarching regulation could constrain innovation and make the U.S. less competitive, Michael Kratsios, the deputy in charge of tech policy at the Office of Science and Technology Policy, said at a conference in February. He noted China’s push into artificial intelligence, which it is pursuing without much ethical quibbling.

In the past six months, Intel, IBM, Workday Inc. and the Washington, D.C.-based Information Technology Industry Council—whose members include Facebook, Apple, and Google—all issued principles on the ethical use of artificial intelligence. In January, Microsoft Corp. put out an entire book on “Artificial Intelligence and its Role in Society.”

Proposed rules have ranged from specific guidelines for government use, to requirements that any algorithm be able to explain its process to consumers. Many would apply existing regulations to AI case-by-case, adapting aviation rules to drones or applying privacy protections to personal data in algorithms.

Setting rules is complicated because companies often can’t explain how their more complex systems, called “deep neural networks,” arrive at answers. Last year, University of Washington researchers reported that an algorithm that had learned—by itself—to distinguish between wolves and husky dogs appeared to be doing so by noticing snow on the ground in the wolf pictures, not because of any insight into the animal.

Some in Congress are taking up the ethical debate. Washington Sen. Maria Cantwell and Maryland Rep. John Delaney, both Democrats, led a bipartisan bill introduced in December to establish a federal advisory council on the technology’s potential impact. There is also an AI caucus on the Hill, started in May, and committees have been holding hearings on AI, algorithms, and autonomous vehicles. The European Union has already set regulations for AI algorithms, which are slated to take effect in May.

The Wall Street Journal has the full story.
