With the National Defense Authorization Act for fiscal year 2021 finally becoming law, Congress has taken a much-needed step forward in artificial intelligence policy. A terrific outcome of a nearly two-year process — and the start of great things to come — the NDAA directs the National Institute of Standards and Technology to launch a trustworthy AI framework and lay the necessary foundation to continue responsible innovation in this growing area of technology.
AI and machine learning technology can give people better predictions on which to base more informed decisions and can help organizations operating across all sectors of the economy become more efficient, productive, and innovative. Of particular significance, given the economic shifts we’ve seen related to the COVID-19 pandemic, AI and ML are facilitating increased workforce agility through a skills-based approach to talent. While these technologies hold tremendous potential to improve our lives, they have also raised concerns about perpetuating or exacerbating improper bias in decision making and producing unintended consequences.
Policymakers on both sides of the Atlantic have taken notice. Overseas, the European Commission is moving ahead swiftly with specific concepts related to AI regulation. In 2019, the commission’s High-Level Expert Group finalized its Ethics Guidelines for Trustworthy AI, a set of principles intended to guide the development of AI technologies for use in Europe. Similarly, with AI policy a priority for European Commission President Ursula von der Leyen, the commission completed a consultation process on a white paper focused on AI regulation this past spring, and legislation addressing AI risk and transparency is promised for 2021.
In the United States, while the Trump administration took a largely nonregulatory approach to AI via the recently finalized Office of Management and Budget guidance to agencies, Congress has expressed growing interest in and appetite for developing AI- and ML-related policy — a development all the more important given Europe’s head start. As we saw with the privacy-related General Data Protection Regulation, Europe is no stranger to adopting thoughtful technology policy that effectively regulates the global marketplace beyond its borders. If we are to see a globally harmonized approach to AI regulation, the United States has some catching up to do.
At Workday, in addition to taking steps to ensure an ethical approach to our own use of ML, we advocate for public policies that address concerns with AI and ML and build trust in these technologies, and we support Congress in developing AI policy that can lead to a harmonized approach.
With these objectives in mind, the task of developing tools, best practices, and guidelines that can eventually form a consensus-driven foundation for a U.S. regulatory approach to AI is a natural fit for NIST. NIST’s groundbreaking work on its Cybersecurity Framework is widely considered the gold standard for collaborative processes. That effort facilitated widespread adoption of cybersecurity best practices, especially among small- and medium-sized enterprises. In addition, NIST recently rolled out a similar framework addressing digital privacy that was met with widespread praise. Given its strong track record of effectiveness, no federal agency is better positioned than NIST to convene a collaborative process focused on developing a trustworthy AI framework.
What started as a congressional letter asking NIST to convene a framework process grew into a successful bipartisan legislative push with broad stakeholder support. Standalone AI-related bills with framework references were introduced in both the U.S. House of Representatives and the U.S. Senate, with support from both Democrats and Republicans. In addition, the Bipartisan Policy Center’s AI Initiative led to the House adopting an AI strategy resolution that referenced a framework, and the NIST AI framework concept drew support from both the House and Senate appropriations committees. These efforts all led to the inclusion of language directing NIST to launch a trustworthy AI framework in both the House-passed NDAA bill and in the final NDAA conference report that ultimately became law.
Congress’ timing couldn’t be better. The process that yielded the NDAA AI language catalyzed a recognition in the tech sector, on both sides of the Capitol, on both sides of the aisle, and beyond that trust is key to ensuring that the benefits of AI are realized and embraced by all. In the coming weeks, President-elect Joe Biden will be inaugurated and his administration will set about the task of addressing the pressing issues facing the country and advancing its policy agenda, including with respect to tech policy. Congress’ recent efforts have provided a ready-made first step for the incoming administration when it comes to advancing a timely U.S. approach to AI and ML given the evolving process in Europe.
On Capitol Hill, as anywhere, many hands make light work, and Sens. Roger Wicker (R-Miss.), Maria Cantwell (D-Wash.), Cory Gardner (R-Colo.), Gary Peters (D-Mich.), and Jerry Moran (R-Kan.), as well as Reps. Eddie Bernice Johnson (D-Texas), Frank Lucas (R-Okla.), Brenda Lawrence (D-Mich.), José Serrano (D-N.Y.), Robert Aderholt (R-Ala.), Kendra Horn (D-Okla.), Will Hurd (R-Texas), Robin Kelly (D-Ill.), Pete Olson (R-Texas), Suzan DelBene (D-Wash.), Darren Soto (D-Fla.) and Anthony Gonzalez (R-Ohio) all lent a hand in this bipartisan push for trust in AI.
Workday looks forward to joining lawmakers, industry leaders and stakeholders, as well as the Biden administration and NIST officials in launching the trustworthy AI framework and ensuring its success.
Jim Shaughnessy is executive vice president for corporate affairs at Workday, responsible for the company’s public policy, advocacy, and tech for good initiatives.
Correction: A previous version of this piece misstated the name of the National Institute of Standards and Technology.