Chancellor Foreword
As the Chancellor of NYC Public Schools, as a longtime educator, and as an NYCPS parent, I believe our students deserve safe, academically rigorous, and truly integrated schools. So when it comes to technology in the classroom, I always consider how AI and other digital tools align to these priorities. I ask myself: Does this technology affect students' safety? How can we use it to increase academic rigor? Will it maintain culturally responsive instruction?
That's what this guidance is all about—ensuring that when AI is leveraged in our schools, it is done safely, thoughtfully, ethically, and responsibly, enhancing learning and supporting our students' growth. This guidance establishes a foundational vision for how to use AI going forward, creating a base we can continue to build upon in the coming months and years. This guidance also sets clear boundaries that help to protect our students, support our educators, and empower our families.
By releasing this guidance, we aim to set a clear standard for innovative, education-focused, equity-centered AI adoption across our entire school system. I'm proud that this has been a community-driven effort, including input from over 1,000 stakeholders. I appreciate the contributions of all those involved, and I ask for your continued collaboration as we further develop and strengthen our protocols around AI use. Specifically, we will have a 45-day window for public comment and feedback on this guidance—please visit
Thank you for utilizing this resource and for being our partners in this work.
— Kamar H. Samuels, Chancellor
Our Starting Point: Shared Value of Public Education
This Guidance is issued by New York City Public Schools (NYCPS) and reflects commitments made to our students, families, educators, leaders, communities, and partners.
As the collective community of NYC Public Schools, we are the reason nearly one million students walk into New York City public schools each day and encounter possibilities. We build relationships no algorithm can replicate. We see growth before it appears in data. We see promise where numbers fall short. This Guidance on Artificial Intelligence (AI) begins from the truth that teaching and learning are human endeavors served by technology, not replaced by it.
NYCPS is advancing a shared vision for learning that ensures every student experiences rigorous instruction and meaningful preparation for the future. NYC Reads, NYC Solves, and Future Ready NYC reflect that commitment across literacy, math, and postsecondary readiness. Lasting, large-scale change is evidence-based, educator-led, and community-informed.
This Guidance carries that same responsibility forward, defining how AI can support teaching and learning in ways that are intentional, equitable, and grounded in professional judgment.
Our students do not need technology for its own sake. They need accurate instruction, meaningful practice, and adults who know them well enough to decide when AI belongs in their learning, and when it does not.
Our students are already encountering AI beyond school walls. The question is whether they are equipped with critical thinking, ethical grounding, and creative agency, or whether they are left to navigate AI alone. When public systems fail to lead, the cost is not shared equally. It falls most heavily on the students who rely on public schools for access, opportunity, and possibility.
This Guidance is an affirmation that NYCPS will lead thoughtfully, responsibly, and in service to every student.
This Guidance exists because of the students we are committed to serving:
- The fourth grader whose reading score lags behind her curiosity and insight;
- The multilingual learner navigating two languages in a system that too often overlooks what he already knows;
- The student with a disability whose need is clear, but whose classroom still lacks the right tools to meet it.
Why This Matters Now
AI is advancing rapidly across society, including in workplaces and classrooms. The pace of change can feel fast and uncertain. What matters most is clarity about what has not changed.
Students still grow and learn in deeply human ways. They learn best when they feel safe and connected, when they build language and understanding, when they learn to read, write, and solve problems, when they ask questions and make sense of the world, and when they learn how to both work with others and develop independence. Those foundations endure regardless of how technology evolves.
What has changed is the speed at which information, tools, and decisions move. When change accelerates, a clear instructional focus matters more, not less. For NYCPS, that focus remains unwavering: ensuring that every student experiences high-quality instruction and meaningful preparation for the future.
Across neighborhoods, cultures, and perspectives, there is shared agreement that all students deserve an educational experience that supports them to become thoughtful problem-solvers and confident, independent learners, and that prepares them with strong reading and math skills for college, careers, and life. While the world around us changes, schools must remain safe, caring, and equitable places where opportunity is not determined by circumstance, even as they prepare students for the modern world.
NYCPS is committed to what will always come first: students, learning, and strong instructional foundations.
- Educators, relationships, and professional judgment remain central.
- Learning remains about thinking and understanding, not the latest technology.
- Reading and mathematics remain essential.
- Technology, including AI, must serve this work, not define it.
The future of learning is built by people, and AI must be governed and used in ways that strengthen and amplify high-quality teaching and learning.
Our Core Commitments
Every child in New York City deserves a free and appropriate public education delivered by educators who know them, supported by families who trust the system, and protected by policies that prioritize them.
Foundational literacy and math, meaningful learning experiences, social connection, and preparation for life beyond school are the building blocks of student opportunity.
Meeting these commitments requires that we work in partnership, internally and externally, to understand and apply existing policies that establish what we can and cannot do with AI. In partnership, we will determine the focus areas where we explore AI safely, ethically, and responsibly, always in service of student learning, student agency, student opportunity, and our collective good.
Equity is the standard we build toward in every decision. The students who depend most on public schools are the same students who benefit most when systems integrate technology responsibly, and who are harmed first when systems do not.
Our Instructional Stance
In NYCPS, the use of technologies such as AI is intended to augment but not replace high-quality, culturally responsive instruction and the professional expertise of the adults who serve students. All technology use must be aligned to our core commitments and a clearly defined instructional purpose. AI does not replace professional expertise. Educators and leaders remain responsible for instructional decisions, ethical judgment, and the relationship-based work that technology cannot replicate.
Technology may support instruction, but it does not define pedagogy or replace educator judgment. Technology is not a shortcut to learning.
“Equity is not an abstract idea. It’s a set of choices we make together in policy. What matters is not just what we do, it’s how we do it: by listening to educators, by respecting families, by seeing students as whole people with enormous potential.”
— Kamar H. Samuels, Chancellor
This Guidance is one of those choices the Chancellor describes: a choice to approach technology as an equity issue.
We're All Learning Together
The use of AI in K-12 education is an evolving field. The long-term effects on how children learn, think, and develop in the era of AI are not fully understood. No school system in the world has accounted for all the implications.
NYCPS will not pretend to have answers we do not have. We will not wait for certainty that may never come. And we will not let uncertainty become an excuse for inaction while our students navigate this technology.
What we can do is build an approach designed to support human connection and learning. An approach that creates space for diverse perspectives, that treats disagreement as a resource rather than a problem, and that holds itself accountable to evidence as it emerges. That is the commitment this Guidance makes.
What This Guidance Establishes
This Guidance establishes a framework for emerging technologies in NYCPS schools, starting with AI. It applies to all AI and AI-powered tools used for educational purposes, such as instruction, assessment, support for diverse learners, and student services, across all grade bands and school types.
While beyond the scope of this Guidance, certain non-educational applications of AI in schools, such as surveillance, raise significant concerns. As we work toward a comprehensive Playbook, this Guidance also does not address AI's broader societal impacts, including environmental effects, mental health, and non-educational uses of AI outside of school.
Clear Boundaries
The NYCPS Traffic Light Approach detailed below establishes when AI use is prohibited, when AI use requires careful consideration and judgment, and when AI is approved for use. These categorizations are based on evaluation conducted by the NYCPS Central AI Task Force; informed by our external AI Advisory Council; and grounded in student privacy laws and regulations, civil rights protections, and educational standards.
AI tools are currently evaluated at the central level for educational purpose and for data privacy and security compliance using the NYCPS Data Privacy and Security Compliance Process. Every AI tool must meet these standards to be approved for use in any NYCPS school.
This Guidance commits to further strengthen AI tool evaluation practices, addressing algorithmic bias, instructional alignment, cultural responsiveness, and developmental appropriateness across grade bands. This process will be developed through robust governance structures, through the sustained work of the Central AI Task Force, the Data Privacy Working Group, and the AI Advisory Council, and through democratic partnerships with the communities we serve.
This work begins this spring and will be reflected in the comprehensive Playbook planned for June 2026.
What AI Is (and Isn't)
A shared understanding of what AI is and is not supports evidence-based decision-making that serves teaching and learning.
What AI Is
There is no single, universally agreed-upon definition of artificial intelligence (AI). The term is applied broadly to a wide range of tools, models, and systems with different capabilities and risks. For the purposes of this Guidance, AI refers to computer systems that can perform tasks that usually require human thinking, like finding patterns, sorting through information, making predictions, or creating content.
What AI Isn't
AI is not a thinking, reasoning, or conscious entity. It does not understand meaning, possess values, or exercise judgment the way humans do. AI systems process patterns in data; they do not comprehend context.
AI tools cannot and should not replace:
- The relationship between students and teachers
- The professional expertise of educators and school leaders
- The trust and partnership with families and communities that are central to teaching and learning
- Human-based instructional services and educational programs
Generative AI
Generative AI (GenAI) is a type of AI that creates something new based on what a user instructs it to do. A person provides an input, such as a question or a prompt, and the GenAI tool generates an output, such as text, images, code, or audio. GenAI tools do this by predicting what comes next based on patterns learned from large amounts of data.
Not All AI Is Generative AI
Many AI tools analyze, predict, or classify information without producing new content. The distinction matters because GenAI raises concerns about accuracy, originality, intellectual property, and whether students are doing the thinking and the work that learning requires.
Why Human Judgment Matters Most
AI tools and systems can produce inaccurate, unfair, or misleading information. GenAI in particular can produce responses that sound confident but are factually incorrect or entirely fabricated, a phenomenon sometimes called “hallucination” in the AI industry. This is especially important in an educational setting, where students may not yet have the background knowledge to recognize when AI gets something wrong.
AI can also reflect biases present in its training data, producing outputs that may misrepresent, stereotype, or exclude certain perspectives, experiences, and identities. Identifying and addressing bias in AI tools is an active area of development and focus for NYCPS.
How Tools Are Evaluated
Before using student data with an AI tool, educators and leaders must confirm that it has been approved through the NYCPS Data Privacy and Security Compliance Process.
ERMA (Enterprise Request Management Application) is NYCPS’s established process for data privacy and security compliance. Staff can confirm a tool’s ERMA compliance by visiting the NYCPS Ed Tech Portal and logging in with their NYCPS credentials.
ERMA is a privacy and security compliance review. It evaluates tools for compliance with FERPA, NYS Education Law §2-d, and Chancellor’s Regulation A-820. Detailed ERMA information is available at Data Privacy and Security Policies.
In December 2024, NYCPS implemented additional AI-specific standards requiring vendors to disclose AI capabilities, prohibit AI model training on student data, and meet transparency requirements. This process is managed by NYCPS in accordance with existing policies.
All AI tools must serve a clearly defined educational purpose. Any AI tools processing student data must complete an ERMA compliance review, regardless of who provides, funds, or markets the tool.
While the ERMA process currently reviews AI tools for data privacy and security, it does not yet evaluate algorithmic bias, equity impact, or instructional effectiveness. NYCPS is committed to building an expanded evaluation capacity rooted in evidence and clear criteria.
Tools that have not been assessed by the ERMA compliance review are NOT approved for use with any student or staff data in any NYCPS school. Concerns about students’ privacy or safety raised through established governance channels will receive timely, substantive responses. NYCPS is committed to transparency and accountability in AI governance.
Enterprise Request Management Application (ERMA)
- All Tools: All tools that process PII must go through ERMA.
- Who Submits: Only school, district, or central executive leaders can submit.
- What's Required: Vendors must show how they protect PII.
- How to Check or Submit for Approval: Visit the Ed Tech Portal and ERMA InfoHub.
What is PII?
Personally identifiable information (PII) is any data that can identify an individual directly or indirectly, either on its own or when combined with other information. Examples of PII include names, dates of birth, student ID numbers, grades, special education or multilingual learner status, and photos and videos.
ERMA Approval and Safe Use Considerations
ERMA approval confirms compliance with data privacy and security standards, but it is not the only requirement for tool use. Once a tool is approved through ERMA, the following requirements apply:
- Human judgment required: All AI tools must be used with human judgment, oversight, and review. AI supports—never replaces—educator and leader decision-making.
- No PII in AI tools: Never input personally identifiable information (PII) into AI tools that have not completed ERMA review. Only ERMA-approved tools that meet data privacy and security standards may be used with student data.
- Age restrictions apply: Know and apply tool-specific age restrictions. Teach students and families where to find age requirement information.
- Critical review of AI outputs: Educators and school leaders must critically evaluate all AI-generated output for accuracy, appropriateness, and potential bias. AI responses should never be accepted at face value—they must always be reviewed, assessed, and validated against reliable sources. Educators should also teach students, as age-appropriate, to approach AI output with the same critical lens.
ERMA approval does not mean a tool is appropriate for every context. Before using an ERMA-approved tool, consider:
- Is this approved for student use, staff use, or both?
- What is the age-appropriate use of the tool?
- Does this tool align with my students’ learning goals?
- Is it appropriate for my students’ grade level and needs?
- Does it support student and educator thinking, or replace it?
- How will I introduce the tool and monitor its use?
The 10-Step ERMA Process
The ERMA (Enterprise Request Management Application) review is NYC Public Schools’ 10-step compliance process for approving any third-party software or AI tool before it can be used with staff or students.
1. Identify the Need: Schools must first explain the problem they want to solve and why the AI tool is the best solution. They must show expected benefits and measurable outcomes.
2. Submit an ERMA Request: Only authorized leaders (such as principals or central executives) can initiate the request in the ERMA portal. Teachers cannot submit directly.
3. Vendor Agreement: The vendor signs a Data Processing Agreement (DPA) that meets strict privacy and security laws and provides a detailed data protection plan.
4. Security Check: The vendor completes a security questionnaire covering encryption, access controls, and compliance standards. The NYCPS security team reviews and approves.
5. Cloud Review: If the tool is cloud-based, the city’s Office of Technology and Innovation (OTI) checks data storage, security architecture, and compliance with citywide, state, and federal policies.
6. Legal and Compliance Review: NYCPS teams review legal terms, privacy protections, security measures, instructional value, and AI-specific issues like bias and transparency.
7. Fix Issues: If problems are found (e.g., weak security), the vendor must correct them and resubmit.
8. Final Decision: The tool is designated as Approved (added to the official list), In Progress (cannot be used), or Denied (cannot be used).
9. Implementation: Schools may only begin using a tool after ERMA approval. Tools cannot be used during the review process, and schools must never bypass ERMA.
10. Ongoing Monitoring: NYCPS audits tools regularly. Vendors must report changes and maintain compliance. Approval can be revoked for violations, and vendors can be penalized in accordance with applicable laws.
🚦NYCPS Traffic Light Approach for AI Use
AI tools and systems, especially GenAI, differ from traditional educational technologies. Rather than following fixed instructions, AI generates outputs by identifying patterns, making inferences, and adapting over time. Its outputs may be incomplete, probabilistic, or influenced by design choices not visible to users. Because of this, NYCPS uses a risk-based approach to guide every decision about AI.
AI-related risks take many forms, including but not limited to:
- Risks to students include bias, rights violations, loss of agency, loss of privacy, developmental and mental health harms, exposure to exclusionary discipline, or erosion of the thinking, creativity, and problem-solving skills students must develop themselves.
- Risks to staff include over-reliance on automated outputs or unclear accountability for AI-assisted decisions.
- Risks to society include reinforcing inequity, reducing human judgment in civic institutions, exacerbating the climate crisis, straining limited natural resources and the electric grid, and compounding mental health challenges.
Some risks emerge even when AI tools are used with good intent.
NYCPS requires that all AI tools be explainable to every stakeholder: what the tool does, why it produces a given output, and how human judgment can intervene.
The Traffic Light Approach translates these risks into clear, actionable guidance. It is organized by the level of risk an AI use or application presents and the safeguards required.
The NYC Department of Education’s Chancellor’s Regulations (CRs) cover a wide range of policies. Volume A Regulations, including those referenced in this Guidance, address student-related issues, from admissions to promotion. We name what AI will never be allowed to do before we name what it is allowed to do. The left column describes the use or application. The right column is a non-exhaustive list of policies or regulations that apply.
Red: Do Not Proceed
These uses or applications are prohibited. They represent the highest risk to students, families, and the integrity of the system.
What AI Will Never Be Allowed To Do in Our Schools
| AI Use or Application | Policy and/or Regulation |
|---|---|
| Decisions about Students: Placement, discipline, eligibility, promotion, graduation, and program access require qualified human decision-making. | CR A-101 (Admissions, Readiness, and Placement); CR A-443 (Student Discipline); CR A-501 (Promotion Standards) |
| IEP and 504 Plan Development: All special education documents are developed by qualified professionals. | IDEA (federal); Special Education SOPM; CR A-710 (Section 504 Policy and Procedures for Students) |
| Assessments and Grading: The educator of record determines what a student knows. AI-generated data is advisory only. | NYCPS Academic Policies; CR A-501 |
| Surveillance and Behavior: Behavioral monitoring and student surveillance are prohibited. | CR A-443 (Student Discipline); CR A-832 (Respect for All); Students’ Bill of Rights; Data Privacy and Security Policies; Citywide Behavioral Expectations to Support Student Learning (“Discipline Code”) |
| Care and Counseling: Counseling, crisis intervention, and therapeutic support are provided by qualified staff. | CR A-411 (Supporting Students in Behavioral Crisis); CR A-755 (Suicide Prevention and Intervention); Discipline Code |
| Progression Pathways: Every student has full access to advanced coursework. Any algorithmic pathway can be overridden by educators, leaders, or students. | CR A-101 (Admissions, Readiness, and Placement) |
| Protecting Data: Student data will not train AI models, be sold, or be monetized. PII may only be entered into ERMA-approved tools. | Ed Law §2-d; FERPA; COPPA; CR A-820 (Data Privacy and Security) |
Yellow: Proceed with Caution
These uses or applications require careful consideration and need to meet specific conditions before being permitted. Professional judgment and additional safeguards are essential.
Where Professional Judgment is Essential
| AI Use or Application | Policy and/or Regulation |
|---|---|
| Student and School Data: AI may surface patterns in data. Educators interpret findings with knowledge of each student. | CR A-820 (Data Privacy and Security); Ed Law §2-d; FERPA |
| Critical Communications: AI-generated translations must not be used as final content for critical communications. All translations must be reviewed, edited, and approved by a qualified linguist prior to distribution to ensure accuracy, clarity, and compliance. Only ERMA-approved tools may be used to support translation of student or other sensitive information. | CR A-663 (Language Access for Parents); CR A-820 (Data Privacy and Security); IAUSP |
| Diverse Learners: AI may generate translations and transadaptations of bilingual instructional material, as well as accommodations and scaffolds to support student learning. All outputs must be reviewed by qualified staff, including certified bilingual and ENL teachers, and IEP team members as appropriate. | CR A-443 (Student Discipline); IDEA; 8 NYCRR Part 154 |
| Student Use of AI: Students may use AI for research, exploration, and creative projects. Educator guidance, critical evaluation of outputs, and age-appropriate context are required. | Internet Acceptable Use and Safety Policy (IAUSP); CIPA |
Green: Proceed with Confidence
These uses or applications are approved for use with ERMA-approved tools, when accompanied by professional judgment.
Approved, Encouraged, and Supported
| AI Use or Application | Policy and/or Regulation |
|---|---|
| Brainstorming and Organizing: Educators use AI to explore lesson ideas, approaches, and unit planning, aligned with intellectual property guidance. | IAUSP Data Privacy and Security Compliance Process (Intellectual Property) |
| Drafting and Refining Communications: Educators may use AI to draft or refine materials on any topic. Human review and ownership are required before distribution of both sensitive and non-sensitive materials, with heightened attention to tone, accuracy, and impact. | IAUSP |
| Simplifying and Streamlining: Educators and leaders use AI for scheduling, formatting, and summarizing non-sensitive information. | |
| Operational Data Synthesis: Leaders use AI to synthesize operational data and support resource planning. | NYCPS Fiscal Policy |
| Translation: AI supports the translation of non-critical school communications for families and communities who prefer a language other than English. All AI-generated translations should be reviewed, edited, and approved by a qualified human reviewer prior to distribution. If that is not possible, a disclaimer must be included indicating that the translation was generated using AI, along with guidance on how recipients can request clarification or language support if needed. | CR A-663 (Language Access for Parents); CR A-820 (Data Privacy and Security); IAUSP; NYC Public Schools Speak Your Language |
| Accessibility: AI supports the creation of accessible materials for families and communities. | Applicable federal and city accessibility standards (Section 504, ADA, IDEA) |
| Professional Learning: Educators and leaders use AI to support their own professional development, preparation, and research. | IAUSP |
Pillars for Responsible AI Integration
These pillars define how NYCPS governs, evaluates, and uses artificial intelligence in the service of students, families, and educators. Together, they establish the shared commitments that guide decision-making, protect student rights, build knowledge and capacity, support educators, and ensure AI strengthens, rather than replaces, teaching and learning. Every decision about AI is grounded in learning, equity, and human responsibility.
Pillar 1: Share Decision-Making
AI decisions in NYCPS are made through transparent evaluation and partnership. Stakeholders have a voice through Community Education Councils (CECs), Citywide Councils, the Panel for Educational Policy (PEP), School Leadership Teams (SLTs), and district-specific events, ensuring accountability and public trust.
Pillar 2: Ensure Equitable Access to Rigorous Learning
AI is used to expand access to high-quality, grade-level, and intellectually rigorous learning for all students, including students with disabilities, multilingual learners, and students from historically underserved communities. AI must support learning and productive struggle without narrowing opportunities or lowering expectations.
Pillar 3: Build AI Knowledge and Capacity (Foundations to Pathways)
NYCPS builds AI literacy through shared foundations and role-specific pathways for educators, leaders, students, and families. Learning is developmental and ongoing. Critical thinking, ethical reasoning, and human judgment are embedded across all contexts.
Pillar 4: Protect Student Data and Be Clear About AI
Student data belongs to students and families and is protected by law. All AI tools must meet NYCPS standards for data protection, transparency, and explainability, with clear limits on data use and clear human accountability for decisions that affect students.
Pillar 5: Empower Educators and Leaders
Educators and leaders are trusted professionals. They receive the guidance and support needed to use AI responsibly in the service of teaching and learning, with professional judgment retained at all times.
Our Call to Action
AI represents a stress test of our ability to stay aligned to our public mission. The children who need the most support are already in classrooms where AI is part of the tools they use. Governing well is the condition under which equity is possible.
The peer-reviewed evidence base, including large-scale research syntheses, the American Academy of Pediatrics (AAP) 2026 policy statement, and the Brookings Institution’s global study (400+ studies, 50 countries), supports this approach:
- Center human relationships
- Prepare educators and families
- Protect students through governance
The evidence also identifies real challenges: algorithmic bias, unequal access, insufficient preparation for educators and students, and developmental effects that differ across age bands.
The readiness case is equally clear. The New York State Education Department’s Portrait of a Graduate identifies critical thinking, collaboration, communication, creativity, and civic readiness as the competencies every student must develop. These skills cannot be built by avoiding the technologies shaping the world students will inherit. They require direct engagement, guided by educators, grounded in evidence, and embedded in rigorous instruction.
The question is not whether AI belongs in schools, but whether we will collectively create a system that governs AI to serve every student and every stakeholder.
Partnership: How We Collaborate
NYCPS collaborates with others by holding the center: the relationships, the professional judgment, the equity commitments, and the shared expectations that matter more as technology advances. Our approach is designed to protect students as systems evolve and to ensure that every student benefits from what becomes possible.
It is built on three foundations: People, Process, and Partnership.
People
Every level of NYCPS has a role. The Central AI Task Force includes 76 members from across divisions: the Division of First Deputy Chancellor, Division of Instructional and Information Technology, Office of Policy and Evaluation, Office of the General Counsel, Office of Family and Community Engagement, and student representatives.
Process
Every AI tool that processes student data is evaluated to the same standard through the Enterprise Request Management Application (ERMA). ERMA and the Traffic Light Approach create consistent protections: the same standard in every school, in every borough, for every student.
Partnership
NYCPS builds AI guidance with the communities it serves. We partner with the Central AI Task Force, the Data Privacy Working Group, the AI Advisory Council, the Chancellor’s Advisories (Student, Family, Principal, and Superintendent), and more. Labor partners (UFT, CSA, DC37) are engaged and will be enlisted for feedback, recommendations, and alignment throughout.
The 32 Community Education Councils (CECs), Citywide Councils, the Panel for Educational Policy (PEP), the City Council, and School Leadership Teams (SLTs) have established pathways for active partnership. This Guidance reflects ongoing engagement with more than 1,000 stakeholders, 25 rounds of feedback, and input from 150 unique users.
Governance and Collaboration Structure
NYCPS is committed to creating opportunities for public input at every stage of AI adoption. This commitment reflects a core belief: inviting community voices earlier is not only better policy, it is also one of the most powerful ways to build the AI literacy and civic readiness this Guidance calls for. When students have the opportunity to participate in decisions about how AI is used in their schools, they are practicing the critical thinking, collaboration, and democratic participation that prepares them for life beyond school. Governance is not separate from learning. It is learning.
The following structure governs how AI decisions are made across NYCPS.
Governance Levels:
- Chancellor
- Chancellor’s Cabinet AI Lead
- Students, Families, Educators, Leaders, Communities, and Partners (PEP, CECs, CPAC, Presidents' Councils)
- UFT, CSA, DC37
- Central AI Task Force (Internal Stakeholders)
- AI Advisory Council (External Partners)
Central AI Task Force Subcommittees:
- Subcommittee 1: AI Governance, Systems, and Stakeholders—Manages the NYCPS Traffic Light Approach, AI data standards, and CEC/PEP engagement.
- Subcommittee 2: AI Knowledge and Capacity Building—Leads AI literacy training, co-design cohorts, and grade-band guidance.
- Subcommittee 3: Validated AI Tools, Practices, and Resources—Manages the public AI tool inventory and evaluation.
- Subcommittee 4: Communications and Engagements—Leads the Family Guide, AI Knowledge Hub, and multilingual publication.
- Data Privacy Working Group (PEP) Subcommittees: AI, Data Security, Parent Engagement and Education—Strengthens data security policies, leads privacy education for NYCPS, and engages stakeholder communities.
Gaining Clarity Together: Concerns and Considerations
Useful guidance means being transparent about what we have built and what we are building next. We have heard the public's questions and concerns about AI, and we will actively partner to examine the impacts of AI usage. Student use of AI raises complex and evolving questions related to academic integrity, equity and access, and instructional practice. We are actively developing a bank of questions and answers aimed at giving staff clarity on AI use across grade bands and supporting schools in integrating AI in ways that uphold rigorous learning, protect student agency, and promote equitable access.
Biometric and Behavioral Data
NYCPS is reviewing how existing policies apply to AI-related collection and use of biometric and behavioral data. This includes areas such as behavioral monitoring, device monitoring, emotional recognition, facial recognition, and other forms of surveillance. These technologies raise significant concerns around student privacy, consent, civil rights, and the potential for disproportionate impact on students of color, students with disabilities, and other historically marginalized groups.
Bias and Equity Review
ERMA evaluates data privacy. NYCPS is actively building centralized capacity to review for algorithmic bias, equity impact, and instructional effectiveness.
Grade-Band Guidance
The appropriate role of AI differs across K–5, 6–8, and 9–12. The AAP’s 2026 research and international assessment data on the relationship between screen exposure and learning outcomes provide the evidence base. Drawing on the AAP’s distinction between active and passive technology use, NYCPS is developing differentiated guidance for developmental appropriateness, screen time, and the balance between AI-supported and independent work.
AI Tool Inventory
NYCPS requires vendor disclosure and is building a comprehensive public inventory of all approved AI tools in use across the system.
Homework and Academic Integrity
NYCPS is developing guidance on assessment design and academic honesty in an AI-enabled environment, grounded in existing academic policies and research on effective practice.
Family Notification Process
NYCPS will work with families wishing to understand more about the use of AI tools in their child’s instruction, consistent with CR A-820 and the Parents’ Bill of Rights. Families can contact studentprivacy@schools.nyc.gov today with questions about any tool used in their child’s school.
Family Rights and AI-Related Policy
NYCPS will clarify existing policies in action and consider policy needs beyond what currently exists, ensuring families understand their rights and protections as AI tools are integrated into instruction.
Cognitive Offloading
Effective AI integration preserves the intellectual work of learning. NYCPS is developing research-informed guidance on instructional design that ensures AI supports, rather than substitutes for, student thinking.
Managed Tools vs. Personal Accounts
There are important differences between the use of AI in NYCPS-managed environments versus personal accounts. Guidance will be included in the June 2026 Playbook.
Environmental and Climate Impact of AI Infrastructure
Concerns about AI’s environmental impact may be addressed in an interagency fashion as part of a holistic AI tool evaluation process.
Beyond NYCPS Authority
Some dimensions of AI’s impact on students, families, and communities fall outside the scope of what NYCPS can govern through school policy. These include how commercial AI platforms are designed and monetized, how students and families use AI outside of school, how AI is reshaping the labor market, and how algorithmic decision-making operates in other sectors such as housing, healthcare, and criminal justice.
While NYCPS cannot set policy in these areas, we recognize that they shape the lives of the students and families we serve. This Guidance focuses on what is within our control: how AI is evaluated, governed, and used within NYCPS schools for educational purposes.
Where these broader issues intersect with student wellbeing and equity, NYCPS will continue to engage with policymakers, researchers, and advocates working at the city, state, and federal levels.
Roadmap to Responsible AI Integration
From now through June 2026, NYCPS will collaborate and partner with students, families, educators, leaders, community members, labor partners, and external advisors to build the conditions for responsible AI integration. No phase moves forward without input. No guidance is finalized without the people it affects.
Every phase includes collaboration with students, families, educators, leaders, communities, and partners (PEP, CECs).
Phase 1 (March): Set the Foundation and Boundaries
Publish initial NYCPS Guidance on Artificial Intelligence:
- Clarify existing policies
- Establish non-negotiables
- Define what is allowed and what is not
- Create opportunities for stakeholder feedback and recommendations
- High School AI Student Instructional Module
- Exploring Equity in AI Educator Fellowship
Phase 2 (March–April): Build Shared Understanding
Activate stakeholder engagements:
- Explain Guidance to all stakeholders (events and webinars)
- Clarify existing policies, protections, and rights
- Open 45-day window for stakeholder feedback and recommendations
- NYCPS landscape analysis
Phase 3 (April–May): Build Capacity Together
Educate and evaluate:
- Revision period to incorporate stakeholder feedback and recommendations
- AI Literacy role-based course development and trainings
- Targeted stakeholder training, AI tool testing and feedback
- AI Tool Evidence-Based Scorecard
Phase 4 (May–June*): Co-Design and Publish
Align and Adapt:
- Open 25-day window for stakeholder feedback and recommendations
- Publish Playbook (First Edition) and Resources
- Develop Strategic Plan and Roadmap
*Publication in June marks a milestone, not an endpoint.
Policies, Regulations, and Knowing Your Rights
The protections referenced in this Guidance are established in law and policy.
For more information visit https://schools.nyc.gov/StudentPrivacy
Student Privacy
- FERPA: Family Educational Rights and Privacy Act
- COPPA: Children’s Online Privacy Protection Rule
- NYS Education Law §2-d: Part 121 Regulations
- Chancellor’s Regulation A-820: Data privacy and security
- NYCPS IAUSP: Internet Acceptable Use and Safety Policy
Digital Citizenship
- Social Media Guidelines for Students 12 and Younger
- Social Media Guidelines for Students 13 and Older
- CIPA: Children’s Internet Protection Act
- DASA: Dignity for All Students Act (NYS)
Student and Family Rights
- NYCPS Parents’ Bill of Rights for Data Privacy and Security
- Respect for All citywide expectations
- Students’ Bill of Rights
- PPRA: Protection of Pupil Rights Amendment
- IDEA: Individuals with Disabilities Education Act
Terms and Acronyms
Terms
| AI Advisory Council | External experts informing NYCPS AI governance. |
| Algorithmic Bias | Systematic errors in AI that create unfair outcomes for particular groups. |
| Artificial Intelligence | Computer systems that can perform tasks that usually require human thinking, like finding patterns, sorting through information, making predictions, or creating content. |
| Central AI Task Force | Cross-divisional NYCPS body governing AI implementation. |
| Cognitive Offloading | When AI performs learning-related tasks that students need to do themselves. |
| Community Education Council (CEC) | 32 elected parent-majority bodies. |
| Computer Science | The foundational study of abstraction, data analysis, automation, algorithmic design, and ethical reasoning used to solve problems. |
| Data Processing Agreement (DPA) | Contract specifying how student data is collected, stored, used, and protected. |
| Enterprise Request Management Application (ERMA) | NYCPS privacy and security compliance process for all educational technology that accesses student data. |
| Explainability | The ability to describe how an AI tool reaches its outputs so users can understand, question, and where necessary override them. |
| Generative AI (GenAI) | A type of AI that creates something new based on what a user instructs it to do. |
| Panel for Educational Policy (PEP) | 23 voting members, including mandatory parent seats. |
| Personally Identifiable Information (PII) | Any data that can identify an individual directly or indirectly, either on its own or when combined with other information. May only be processed in ERMA-approved tools. |
| School Leadership Team (SLT) | Equal parent-staff representation in every school. |
Acronyms
| AAP | American Academy of Pediatrics |
| AI | Artificial Intelligence |
| CEC | Community Education Council |
| CIPA | Children’s Internet Protection Act |
| COPPA | Children’s Online Privacy Protection Rule |
| CPAC | Chancellor’s Parent Advisory Council |
| CR | Chancellor’s Regulation |
| CSA | Council of School Supervisors and Administrators |
| DASA | Dignity for All Students Act (NYS) |
| DC37 | District Council 37 |
| DFCSE | Division of Family, Community, and Student Empowerment |
| DIAL | Division of Inclusive and Accessible Learning |
| DIIT | Division of Instructional and Information Technology |
| DFDC | Division of First Deputy Chancellor |
| FERPA | Family Educational Rights and Privacy Act |
| IAUSP | Internet Acceptable Use and Safety Policy |
| IDEA | Individuals with Disabilities Education Act |
| IEP | Individualized Education Program |
| K-12 | Kindergarten through 12th grade |
| NIST | National Institute of Standards and Technology |
| NYC | New York City |
| NYCPS | New York City Public Schools |
| NYS | New York State |
| OET | Office of Educational Technology |
| OGC | Office of General Counsel |
| OPE | Office of Policy and Evaluation |
| OSP | Office of Student Pathways |
| OTI | Office of Technology and Innovation |
| PEP | Panel for Educational Policy |
| PPRA | Protection of Pupil Rights Amendment |
| SLT | School Leadership Team |
| UFT | United Federation of Teachers |
References
Research and Policy Sources
- AAP Center of Excellence on Social Media and Youth Mental Health. (2025). Screen Time at School. Q&A Portal Library
- American Academy of Pediatrics. (2026). Policy Statement: Digital Ecosystems, Children, and Adolescents
- Brookings Institution, Center for Universal Education. (2025). AI in Education: Global Study and Framework for Action
- New York State Education Department. Portrait of a Graduate
- Responsible AI and Tech Justice: A Guide for K-12 Education
Federal Laws and Regulations
- Children’s Internet Protection Act (CIPA), 47 U.S.C. § 254
- Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501–6506
- Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. § 1232g
- Individuals with Disabilities Education Act (IDEA), 20 U.S.C. §§ 1400–1482
- Protection of Pupil Rights Amendment (PPRA), 20 U.S.C. § 1232h
State Law
Security Frameworks
NYCPS Policies and Regulations
- Chancellor’s Regulation A-101 (Admissions, Readiness, and Placement)
- Chancellor’s Regulation A-443 (Student Discipline)
- Chancellor’s Regulation A-663 (Language Access for Parents)
- Chancellor’s Regulation A-755 (Suicide Intervention and Prevention)
- Chancellor’s Regulation A-820 (Data Privacy and Security)
- Chancellor’s Regulation A-832 (Respect for All)
- Data Privacy and Security Policies
- Internet Acceptable Use and Safety Policy (IAUSP)
- NYCPS Parents’ Bill of Rights for Data Privacy and Security
- NYCPS Student Bill of Rights
- Supplemental Information for Parents About DOE Agreements With Outside Entities
Additional Frameworks and Resources
Share Feedback
This Guidance is a living document. It will be continually reviewed and improved based on partnership through our governance structures, implementation data, community feedback, and the evolving evidence base. What we learn will change what we do, and we will continue to evolve together.

