The AI Buzz in D.C.

By Nicole Fuller, Policy Manager
10-minute read

A snapshot:

  • Biden-Harris Executive Order: The October 30, 2023, executive order underscores the federal commitment to AI’s positive impact while addressing unintended bias, particularly in critical sectors like healthcare and education.
  • Education Focus in the EO: The EO instructs the U.S. Secretary of Education to create policies within a year for responsible AI use in education, emphasizing non-discrimination and considering impacts on vulnerable communities.
  • Disability Concerns in AI: Recognizing potential bias, the EO highlights the risk of AI technologies disproportionately affecting people with disabilities, especially in applications like virtual test proctoring.
  • Collaborative Civil Rights Initiatives: Joint efforts, including NCLD’s role in the EDSAFE AI Alliance and The Leadership Conference on Civil and Human Rights’ Center for Civil Rights and Technology, aim to ensure a safer, more trusted AI education ecosystem while countering bias and discrimination.

It is hard to believe, but ChatGPT was first released just one year ago, on November 30, 2022. The reality is that artificial intelligence (AI), especially in education, is not brand new. Personally, I remember the spring of 2016, when I was teaching 8th grade math and told my students that their annual state assessment (the Math 8 SOL in Virginia) was going to be a computer adaptive test for the first time. I explained that their upcoming test, powered by AI algorithms, would adjust the difficulty of the questions in real time as they answered them.

Fast forward to 2023: “predictive” and “generative” AI are now part of my lexicon in a way they weren’t in 2016. I’ve used, and seen others in the LD community use, AI notetakers on Zoom calls. I’m dabbling with AI chatbots, including an interesting one I just discovered on Universal Design for Learning.

There’s been a lot of buzz in our nation’s capital as policymakers grapple with AI’s sheer power to transform and innovate across so many sectors, along with their responsibility to ensure safety and protection. Here’s a recap of some recent policy news on AI.

What are the key messages and actions on AI from policymakers?

The Biden-Harris Administration

It’s been widely recognized that the federal government must respond to this emerging technology. AI touches nearly every function of the federal government, and President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (unveiled on October 30, 2023) describes the action steps that this Administration plans to take, recognizing that “harness[ing] AI for good and realizing its myriad benefits requires mitigating significant risks” in all sectors.

Here are a few ways in which the executive order touches upon the issues that the learning disability community cares about:

1. It recognizes the need to enact appropriate safeguards against unintended bias and discrimination.

The EO notes that “such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation.”

2. It directs the U.S. Secretary of Education to develop resources, policies, and guidance regarding AI within one year.

The EO states that the resources “shall address safe, responsible, and non-discriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate.” 

We’ve noted that the U.S. Department of Education has already started to develop support for AI-enabled education technology, including a new report on AI and the Future of Teaching & Learning, which also offers recommendations for education leaders in the field.

3. It recognizes that people with disabilities might experience unequal treatment from AI technologies that collect biometric data (e.g., gaze direction, eye tracking, hand motions).

While this is mentioned in the EO in the context of “Civil Rights in the Broader Economy,” our team is concerned that some AI-powered software (e.g., virtual test proctoring) might disproportionately flag students with disabilities based on disability-specific movement, speech, and cognitive processing (learn more from the Center for Democracy and Technology).

The EO serves as a directive with lofty but necessary goals for AI safeguards and guidance. These are important steps forward, but much remains to be seen about the specific actions that the Administration and Congress will take.

What now? Who are the movers and shakers at this moment? 

There is still a lot to learn, but there are also opportunities to react, ask questions, and advocate alongside the right people. Here are some ways that NCLD and others in the civil rights and disability community are working together to learn from one another and take collective action:

  1. NCLD is a steering committee member in the EDSAFE AI Alliance, an initiative coordinated by InnovateEDU to develop a safer, more secure, and more trusted AI education ecosystem.
  2. NCLD’s CEO, Dr. Jacqueline Rodriguez, is on the Advisory Board of the Center for Innovation, Design, and Digital Learning (CIDDL), a national center working to improve faculty capacity to use educational technology in special education.
  3. The Leadership Conference on Civil and Human Rights recently launched the Center for Civil Rights and Technology to harness AI’s power for positive change while safeguarding against bias, discrimination, and disinformation.

All of these are evolving and growing initiatives, so we encourage you to stay tuned to learn from the resources available to inform conversations in your local community.