Bonamici Announces Human-Centered Framework to Address AI Education and Workforce Readiness
Today, Congresswoman Suzanne Bonamici issued the following statement on her upcoming legislation to create a comprehensive framework to improve AI education and workforce readiness:
Artificial intelligence is shaping how Americans learn, teach, work, and compete, but our education and workforce systems have not kept pace. Advocates for a moratorium on state AI laws have failed to present a national, comprehensive proposal, and current federal policy remains fragmented, reactive, and focused more on technology deployment than human readiness.
Students use AI tools daily without clear standards. Educators navigate new technologies without adequate guidance or professional development. Workers face rapid change with few protections or pathways to adapt. According to the Kenan Institute, nearly 80 percent of U.S. workers see at least 10 percent of their tasks affected by AI. The same study found that 19 percent of workers will see major job disruption caused by AI. If we fail to meet this moment and chart a path for responsible AI use, Americans will lose opportunities.
Building AI-ready education and workforce systems requires more than developing new models and powerful processors; it means preparing people and strengthening the critical thinking skills they need to recognize benefits and avoid risks. Ethics, creativity, humanities, civics, and project-based learning are tools that can help mitigate AI harms and seize its opportunities, and they must be included in forward-looking AI policy. Responsibly deploying AI technology demands scrutinizing all of its effects on American life.
As a senior member of the House Education and Workforce Committee and the Committee on Science, Space, and Technology, I’m developing a comprehensive, human-centered framework that will strengthen teaching and learning, prepare workers, and address security and privacy so Americans can thrive in a world being rapidly changed by AI.
My framework establishes a coordinated, human-centered approach to AI in education and the workforce. It emphasizes evidence-based policy, educator and worker readiness, student protections, and clear guardrails that support innovation without sacrificing trust or equity.
We cannot lead the world in artificial intelligence if students, educators, and workers are not adequately prepared to interact with this new technology. Human-centered AI policy prepares people, sets guardrails, and measures success by the opportunities it creates, not the gaps it widens.
My framework reflects a simple principle: AI policy should prepare people, not just deploy technology. By focusing on coordination, readiness, protections, and evidence, it lays the groundwork for responsible innovation that strengthens education, supports workers, and sustains American leadership in an AI-enabled economy.
This framework will address these urgent issues:
Federal Coordination on AI, Education, and Workforce Readiness
The Problem: Today, responsibility for AI in education and workforce development sprawls across federal agencies with little coordination. Education, labor, science, and civil rights systems operate in parallel, creating gaps, duplication, and confusion for states, institutions, and workers. The World Economic Forum’s Future of Jobs Report forecasts that AI literacy, creative thinking, and curiosity will be core skills in 2030. Investments in reskilling, upskilling, and public education ranked as the top three priorities among employers. Without cross-government coordination and clear strategies, policy chases technology rather than shaping outcomes.
The Solution: My framework establishes a coordinated federal approach to align AI policy with education and workforce readiness. It looks beyond the tech industry alone to support economic sectors affected and disrupted by AI. It affirms the boundaries that restrict federally directed curriculum and directs agencies to coordinate in support of students, educators, and workers, rather than leaving institutions to navigate conflicting signals or fend for themselves. Coordination strengthens innovation by creating clarity, consistency, and accountability in digital learning and workforce policy.
Preparing Educators and Institutions for Responsible AI Use
The Problem: Educators and institutions face rapid AI adoption without adequate guidance, training, or support. A Digital Education Council survey found that only 6 percent of faculty are satisfied with AI literacy resources. Faculty and instructors shoulder responsibility for academic integrity, learning quality, and ethical AI use, with nearly 80 percent reporting they receive insufficient professional development or evidence-based standards. This uneven preparation risks widening inequities and undermining trust.
The Solution: My framework supports educator and institutional readiness by promoting professional development, evidence-based practices, shared resources for responsible AI use, and model curricula backed by research on AI’s effects on teaching and learning. It recognizes educators as essential to a thriving workforce, and provides resources for institutions to adapt thoughtfully without federal mandates on curriculum or pedagogy.
Strengthening the AI-Ready Workforce
The Problem: AI is reshaping the future of work, and workers in every sector – including agriculture, manufacturing, health care, education, the creative arts, and more – are navigating rapidly changing careers. A recent Stanford Digital Economy Lab study found that early-career workers in AI-affected roles have seen a 13 percent decline in employment. Without dedicated resources and strategies, including those that support learners in non-traditional education pathways, Americans risk being left behind by job displacement and employment bias.
The Solution: My framework focuses on community colleges, apprenticeship programs, small businesses, and underserved workers to provide inclusive, worker-centered opportunities. This will include expanding regional partnerships, hands-on and project-based learning, industry-recognized credentials, and ethical AI literacy skills to elevate training, upskilling, and career pathways for AI-affected occupations.
Expanding Access to AI Education and Careers
The Problem: According to the National Education Association, at least 16 million students live without access to the resources necessary for technology education. Underrepresented communities are disproportionately at risk of job displacement, algorithmic bias, and exclusion from programs that prepare people for the changing workforce. Rapidly changing uses of AI present opportunities to address these disparities, but they also risk widening the digital divide and deepening unjust educational and employment gaps. Rural and underserved communities need access to resources to compete in a changing economy.
The Solution: My framework focuses on empowering community-rooted institutions to lead by making AI learning opportunities available to all learners. It provides resources to build AI literacy capacity at Minority Serving Institutions, including Historically Black Colleges and Universities, Tribal Colleges and Universities, and Hispanic-Serving Institutions. It establishes benchmarks for workforce participation and AI education access, and provides resources for rural and low-income communities to support AI and data literacy and workforce development initiatives.
Building Partnerships for Long-Term Success
The Problem: People in every career sector and every state are navigating the changes caused by AI differently, and industry needs vary across regions. The World Economic Forum reports that 86 percent of employers expect AI to transform their business by 2030. Workers, employers, workforce development programs, and federal and state agencies need clear lines of communication and common venues to share best practices that prepare individuals in AI-affected careers for a changing economy.
The Solution: My framework proposes public-private collaboration among schools, community colleges, nonprofits, and workforce boards to identify sector-specific skills gaps and develop solutions to fill them. It establishes Innovation Hubs to evaluate how AI is affecting the workforce and share open-access model curricula. The framework includes robust accountability and reviews to prevent corporate capture and technology exclusivity, and establishes an AI Workforce and Industry Advisory Council to develop ethics and transparency safeguards.
Trustworthy Oversight and Transparency
The Problem: AI leadership requires more than fast deployment; it requires trust, preparation, and policies that implement these tools responsibly. According to a Pew Research Center study, 62 percent of Americans report having little confidence in AI regulation without clear guidelines, ethical frameworks, and public input. Without a robust federal regulatory system to address the risks posed by AI, students, workers, and the public are at greater risk of bias, job displacement, and exploitation. This is particularly true as many advocate for a federal moratorium on state AI laws. Any federal proposal to address AI and the future of work needs transparent reporting, clear metrics, and the ability to respond to the evolving technology landscape.
The Solution: My framework requires a unified report across all federal programs for AI education and workforce readiness. It directs interagency data sharing to track and respond to program outcomes, and establishes an AI Workforce and Education Readiness Advisory Committee to provide consistent recommendations to Congress so future AI policy is proactive, not impulsive. It requires reports to measure what works and what doesn’t, supporting policy that evolves with evidence and aligns innovation with public benefit rather than speculation.
Protecting Students and Creating Trust in Digital Learning
The Problem: Students increasingly rely on AI tools with little transparency about how those systems work, what data they collect, or how they affect learning outcomes. In 2023, 80 percent of pre-K-12 providers and 79 percent of higher education institutions experienced ransomware attacks. Inconsistent safeguards expose students to privacy risks, biased systems, and unclear academic expectations, undermining trust in technology and institutions.
The Solution: My framework advances clear, student-centered protections that promote transparency, accountability, and responsible use of AI in learning environments. It creates safeguards against commercial use of student data, establishes security standards, and elevates parental involvement to protect learners from exploitation. The framework supports innovation that enhances education without sacrificing student rights, equity, or confidence in educational systems.
Bonamici was a member of the bipartisan Task Force on Artificial Intelligence.