Our Starting Point: What Public Education Means to Us
This Guidance is issued by New York City Public Schools (NYCPS) and reflects our commitments to students, families, educators, leaders, communities, and partners. Teaching and learning are human endeavors served by technology—not replaced by technology.
Our students do not need technology for its own sake. They need accurate instruction, meaningful practice, and adults who know them well enough to decide when AI belongs in their learning—and when it does not. Our students are already encountering AI beyond school walls. The question is whether they are equipped with critical thinking, ethical grounding, and creative agency—or left to navigate AI alone.
This Guidance exists because of the students we are committed to serving: the fourth grader whose reading score lags behind her curiosity and insight; the multilingual learner navigating two languages in a system that too often overlooks what he already knows; the student with a disability whose need is clear, but whose classroom still lacks the right tools to meet it.
NYCPS is committed to what will always come first: students, learning, and strong instructional foundations. Educators, relationships, and professional judgment remain central. Technology, including AI, must serve this work—not define it.
This Guidance is an affirmation that NYCPS will lead thoughtfully, responsibly, and in service to every student.
What Artificial Intelligence Is (and Is Not)
AI is...
- Computer systems that perform tasks usually requiring human thinking—like finding patterns, sorting information, making predictions, or creating content
- A tool that can help with research, writing, translation, and planning
- Something your child likely already encounters outside of school
- A fast-moving technology that requires clear rules and strong oversight
AI is not...
- A thinking, reasoning, or conscious being—it does not understand meaning or exercise judgment the way people do
- A replacement for your child's teacher, counselor, or school leader
- A replacement for the trust between families, educators, and communities
- Always accurate—AI can produce errors, made-up information, and biased outputs
About Generative AI
A specific type of AI called Generative AI (GenAI) creates new content—like text, images, or audio—based on a user's instructions. It does this by predicting what comes next based on patterns from large amounts of data. GenAI can produce responses that sound confident but are factually wrong or entirely made up. This is sometimes called a "hallucination." AI can also reflect biases present in its training data, producing outputs that misrepresent or exclude certain perspectives. This is why human review of AI outputs is always required.
How AI Tools Are Evaluated Before They Reach Your Child's School
Before any AI tool can be used in an NYCPS school with student data, it must go through a review process called ERMA—the Enterprise Request Management Application. ERMA is NYCPS's established process for data privacy and security compliance.
What ERMA Does
ERMA is a privacy and security review. It evaluates tools for compliance with applicable federal, state, and local student data privacy and security laws, including FERPA and New York State Education Law Section 2-d.
In December 2024, NYCPS added AI-specific standards to this process. These standards require vendors to:
- Disclose exactly what AI capabilities their tool includes
- Agree not to use student data to train AI models
- Meet transparency requirements so tools can be explained to families and students
What ERMA Does Not Yet Cover
The ERMA process currently reviews tools for data privacy and security. It does not yet evaluate algorithmic bias, equity impact, or instructional effectiveness. NYCPS is committed to building that expanded evaluation capacity. This work will be reflected in the comprehensive Playbook planned for June 2026.
What Families Need to Know About ERMA
ERMA approval confirms compliance with data privacy and security standards, but it is not the only requirement for tool use. Once a tool is approved through ERMA, these requirements still apply:
Human Judgment Required
All AI tools must be used with human oversight and review. AI supports—it never replaces—educator decision-making.
No Personal Information in Unapproved Tools
Personal information about your child (PII) may never be entered into AI tools that have not completed ERMA review.
Age Restrictions Apply
Educators must know and apply tool-specific age restrictions and teach students and families where to find that information.
Critical Review of AI Outputs
Educators must critically evaluate all AI-generated output for accuracy, appropriateness, and potential bias. AI responses should never be accepted at face value. Educators also teach students, as age-appropriate, to do the same.
The 10-Step ERMA Process
Every AI tool must complete all 10 steps before it can be used in any NYCPS school:
1. Identify the Need
Schools must explain the problem they want to solve, why the AI tool is the best solution, and what outcomes they expect.
2. Submit an ERMA Request
Only authorized leaders, such as principals or central executives, can initiate the request. Teachers cannot submit directly.
3. Vendor Agreement
The vendor signs a Data Processing Agreement (DPA) that meets strict privacy and security laws and provides a detailed data protection plan.
4. Security Check
The vendor completes a security questionnaire. NYCPS's security team reviews and approves it.
5. Cloud Review
If the tool is cloud-based, the city's Office of Technology and Innovation checks data storage, security architecture, and compliance with city, state, and federal policies.
6. Legal and Compliance Review
NYCPS teams review legal terms, privacy protections, security measures, instructional value, and AI-specific issues like bias and transparency.
7. Fix Issues
If problems are found, the vendor must correct them and resubmit.
8. Final Decision
The tool is designated as Approved (added to the official list), In Progress (cannot yet be used), or Denied (cannot be used).
9. Implementation
Schools may begin using a tool only after ERMA approval. Tools cannot be used during the review process, and schools must never bypass ERMA.
10. Ongoing Monitoring
NYCPS audits tools regularly. Vendors must report changes and maintain compliance. Approval can be revoked for violations.
The NYCPS Traffic Light Framework for AI Use: What AI Can and Cannot Do
AI tools—especially Generative AI—are different from traditional school technology. Rather than following fixed instructions, AI generates outputs by finding patterns, making inferences, and adapting over time. Its outputs may be incomplete, uncertain, or shaped by design choices not visible to users. Because of this, NYCPS uses a risk-based approach to guide every decision about AI.
AI-related risks take many forms:
- Risks to students include bias, privacy violations, loss of agency, developmental harms, exposure to unfair discipline, or erosion of the thinking, creativity, and problem-solving skills students must develop themselves
- Risks to staff include over-reliance on automated outputs or unclear accountability for AI-assisted decisions
- Risks to society include reinforcing inequity or reducing human judgment in civic institutions
The Traffic Light Framework organizes these risks into clear, actionable guidance. NYCPS names what AI will never be allowed to do before naming what it is allowed to do.
Red – Do Not Proceed: What AI Will NEVER Be Allowed to Do in Our Schools
These uses are completely off-limits. They represent the highest risk to students, families, and the fairness of our school system. No exceptions.
Decisions About Students
Placement, discipline, eligibility, promotion, graduation, and program access require qualified human decision-making.
IEP and 504 Plan Development
All special education documents are developed by qualified professionals.
Assessments and Grading
The educator of record determines what a student knows. AI-generated data is advisory only.
Surveillance and Behavior
Behavioral monitoring and student surveillance are prohibited.
Care and Counseling
Counseling, crisis intervention, and therapeutic support are provided by qualified staff.
Deciding Your Child's Course Path
Every student has full access to advanced coursework. Any algorithmic pathway can be overridden by educators, leaders, or students.
Protecting Data
Student data will never be used to train AI models, sold, or otherwise monetized. Personal information can only be entered into tools that have been reviewed and approved by NYCPS.
Yellow — Proceed with Caution: Uses That Require Careful Judgment
These uses are allowed, but only with careful thought and strong adult oversight. A trained professional must always review what AI produces before it is used with or about students.
This category covers situations where AI can be helpful, but where a mistake could have real consequences for students. Educators must be actively involved; they cannot simply accept what AI produces without reviewing it carefully.
Student and School Data
AI may surface patterns in data. Educators interpret findings with knowledge of each student.
Critical Communications
AI-generated translations must not be used as final content for critical communications. All translations must be reviewed, edited, and approved by a qualified linguist prior to distribution to ensure accuracy, clarity, and compliance. Only ERMA-approved tools may be used to support translation of student or other sensitive information.
Diverse Learners
AI may generate translations and transadaptations of bilingual instructional materials, as well as accommodations and scaffolds to support student learning. All outputs must be reviewed by qualified staff, including certified bilingual and ENL teachers and IEP team members as appropriate.
Student Use of AI
Students may use AI for research, exploration, and creative projects. Educator guidance, critical evaluation of outputs, and age-appropriate context are required.
Green — Proceed with Confidence: Approved and Encouraged Uses
These uses are approved and supported. They help teachers, school leaders, and staff do their jobs more effectively, so they can spend more time focused on students.
These are the uses NYCPS actively encourages. They must still use tools that have been reviewed and approved through ERMA, and a professional must always be in charge of the final product.
Brainstorming and Organizing
Educators use AI to explore lesson ideas, approaches, and unit planning, aligned with intellectual property guidance.
Drafting and Refining Communications
Educators may use AI to draft or refine materials on any topic. Human review and ownership are required before distribution of both sensitive and non-sensitive materials, with heightened attention to tone, accuracy, and impact.
Simplifying and Streamlining
Educators and leaders use AI for scheduling, formatting, and summarizing non-sensitive information.
Operational Data Synthesis
Leaders use AI to synthesize operational data and support resource planning.
Translation
AI supports the translation of non-critical school communications for families and communities who prefer a language other than English. All AI-generated translations should be reviewed, edited, and approved by a qualified human reviewer prior to distribution. If that is not possible, a disclaimer must be included indicating that the translation was generated using AI, along with guidance on how recipients can request clarification or language support if needed.
Accessibility
AI supports the creation of accessible materials for families and communities.
Professional Learning
Educators and leaders use AI to support their own professional development, preparation, and research.
Our Commitments to You
These commitments define how NYCPS governs and uses AI in the service of students, families, and educators:
- Share Decision-Making
AI decisions are made through open evaluation and partnership. Families have a voice through Community Education Councils (CECs), Citywide Councils, the Panel for Educational Policy (PEP), and School Leadership Teams (SLTs).
- Ensure Equitable Access to Rigorous Learning
AI is used to expand access to high-quality, grade-level learning for all students, including students with disabilities, multilingual learners, and students from historically underserved communities. AI must support learning without narrowing opportunities or lowering expectations.
- Build AI Knowledge and Capacity
NYCPS builds AI literacy for educators, leaders, students, and families. Critical thinking, ethical reasoning, and human judgment are embedded in everything we do.
- Protect Student Data and Be Clear About AI
Student data belongs to students and families and is protected by law. All AI tools must meet NYCPS standards for data protection, transparency, and explainability.
- Empower Educators and Leaders
Educators are trusted professionals who receive the guidance and support needed to use AI responsibly, with professional judgment retained at all times.
The peer-reviewed evidence base—including the American Academy of Pediatrics (AAP) 2026 policy statement and the Brookings Institution's global synthesis of more than 400 studies across 50 countries—supports centering human relationships, preparing educators and families, and protecting students through governance. The question is not whether AI belongs in schools. The question is whether we will collectively build a system that governs AI to serve every student and every stakeholder.
How We Work Together
NYCPS does not make AI decisions alone. Our approach is built on three foundations:
People
Every level of NYCPS has a role. The Central AI Task Force includes 76 members from across divisions.
Participating divisions
- Division of First Deputy Chancellor
- Division of Instructional and Information Technology
- Office of Policy and Evaluation
- Office of the General Counsel
- Office of Family and Community Engagement
- Student representatives
Process
Every AI tool that processes student data is evaluated to the same standard through ERMA. The same standard applies in every school, in every borough, for every student.
Partnership
NYCPS builds AI guidance with the communities it serves. Labor partners are engaged throughout for feedback, recommendations, and alignment.
Our partners
- Students, Families, Educators, Leaders, Communities, and Partners
- UFT, CSA, DC37
- Central AI Task Force
- AI Advisory Council
- And more
This Guidance reflects ongoing engagement with 1,000 stakeholders, 25 rounds of feedback, and 150 unique contributors.
Questions We Are Still Working to Answer
Useful guidance means being transparent about what we have built and what we are building next. We have heard the public's questions and concerns about AI, and we will actively partner to examine the impacts of AI usage. The topics below are under active development and will be included in the June 2026 Playbook.
Student Use of AI
Student use of AI raises complex and evolving questions related to academic integrity, equity and access, and instructional practice. We are developing a bank of questions and answers to give staff clarity on AI use across grade bands and to help schools integrate AI in ways that uphold rigorous learning, protect student agency, and promote equitable access.
Biometric and Behavioral Data
NYCPS is reviewing how existing policies apply to AI-related collection and use of biometric and behavioral data. This includes areas such as behavioral monitoring, device monitoring, emotional recognition, facial recognition, and other forms of surveillance. These technologies raise significant concerns around student privacy, consent, civil rights, and the potential for disproportionate impact on students of color, students with disabilities, and other historically marginalized groups.
Bias and Equity Review
ERMA currently evaluates data privacy and security. NYCPS is actively building the capacity to also review for algorithmic bias, equity impact, and instructional effectiveness. This expanded review will be reflected in the June 2026 Playbook.
Grade-Band Guidance
The appropriate role of AI differs across K-5, 6-8, and 9-12 grade levels. NYCPS is developing differentiated guidance for developmental appropriateness, screen time, and the balance between AI-supported and independent work, drawing on the AAP's 2026 research.
AI Tool Inventory
NYCPS requires vendor disclosure and is building a comprehensive public inventory of all approved AI tools in use across the system. This will be published as part of the June 2026 Playbook.
Homework and Academic Integrity
NYCPS is developing guidance on assessment design and academic honesty in an AI-enabled environment, grounded in existing academic policies and research on effective practice.
Family Notification Process
NYCPS will work with families who want to know more about the use of AI tools in their child's instruction. Families can contact StudentPrivacy@schools.nyc.gov today with questions about any tool used in their child's school.
Family Rights and AI-Related Policy
NYCPS will clarify existing policies and consider new policy needs to ensure families fully understand their rights and protections as AI tools are integrated into instruction.
Cognitive Offloading
Effective AI integration preserves the intellectual work of learning. NYCPS is developing research-informed guidance on instructional design that ensures AI supports—rather than substitutes for—student thinking.
Managed Tools vs. Personal Accounts
There are important differences between AI tools used in NYCPS-managed environments and AI tools accessed through personal accounts. Guidance on this will be included in the June 2026 Playbook.
Environmental and Climate Impact
Concerns about AI's environmental impact—including energy use—may be addressed through interagency collaboration as part of a broader AI tool evaluation process.
What Is Beyond NYCPS Authority
Some dimensions of AI's impact on students, families, and communities fall outside what NYCPS can govern through school policy. These include how commercial AI platforms are designed and monetized, how students and families use AI outside of school, how AI is reshaping the labor market, and how algorithmic decision-making operates in other sectors such as housing, healthcare, and criminal justice.
While NYCPS cannot set policy in these areas, we recognize that they shape the lives of the students and families we serve. This Guidance focuses on what is within our control: how AI is evaluated, governed, and used within NYCPS schools for educational purposes.
Where these broader issues intersect with student wellbeing and equity, NYCPS will continue to engage with policymakers, researchers, and advocates working at the city, state, and federal level.
What Comes Next: Our Four-Phase Plan
NYCPS is working in four phases from now through June 2026.
Phase 1
Publish initial NYCPS Guidance on Artificial Intelligence.
- Clarify existing policies
- Establish non-negotiables
- Define what is allowed and what is not
- Create opportunities for stakeholder feedback and recommendations
- High School AI Student Instructional Module
- Exploring Equity in AI Educator Fellowship
Phase 2
Activate stakeholder engagements.
- Explain Guidance to all stakeholders (events and webinars)
- Clarify existing policies, protections, and rights
- Open 45-day window for stakeholder feedback and recommendations
- NYCPS Landscape analysis
Phase 3
Educate and evaluate.
- Revision period to incorporate stakeholder feedback and recommendations
- AI Literacy role-based course development and trainings
- Targeted stakeholder training, AI tool testing and feedback
- AI Tool Evidence-Based Scorecard
Phase 4
Co-design and publish.
- Open 25-day window for stakeholder feedback and recommendations
- Publish Playbook (First Edition) and Resources
- Develop Strategic Plan and Roadmap
No phase moves forward without input. No guidance is finalized without the people it affects. Publication in June marks a milestone, not an endpoint.
Policies, Regulations, and Knowing Your Rights
The protections in this Guidance are established in law and policy.
See Student Privacy for more information about your data privacy rights.
Student Privacy
- FERPA—Family Educational Rights and Privacy Act: Gives families the right to access and protect their child's education records
- COPPA—Children's Online Privacy Protection Rule: Protects personal information collected from children under 13 online
- New York State Education Law Section 2-d: Protects student data and limits how it can be shared or used
- Chancellor's Regulation A-820: Data privacy and security
- NYCPS IAUSP—Internet Acceptable Use and Safety Policy: Sets rules for how technology is used in schools
Digital Citizenship
- NYCPS IAUSP—Internet Acceptable Use and Safety Policy: Teaches students how to use technology responsibly and safely
- Social Media Guidelines for Students 12 and Younger
- Social Media Guidelines for Students 13 and Older
- CIPA—Children's Internet Protection Act: Requires schools to protect students from harmful online content
- DASA—Dignity for All Students Act (New York State): Protects students from harassment and discrimination
Student and Family Rights
- NYCPS Parents' Bill of Rights for Data Privacy and Security
- Respect for All citywide expectations
- Student Bill of Rights
- PPRA—Protection of Pupil Rights Amendment: Gives families rights related to student surveys and research
- IDEA—Individuals with Disabilities Education Act: Protects the rights of students with disabilities and their families
What You Can Do: Your Rights as a Family
Families have real rights and real ways to take action. Here is what you can do right now:
What AI tools are used in my child's school?
Email StudentPrivacy@schools.nyc.gov. NYCPS is committed to timely, substantive responses to concerns about student privacy and safety.
I want to learn more about data privacy rights.
Visit Student Privacy.
I have some feedback about this Guidance.
Visit Feedback NYCPS. A 45-day public comment window is open now.
I want to raise a concern about how AI is being used.
Contact your school's principal, School Leadership Team (SLT), or your Community Education Council (CEC). Concerns raised through established governance channels will receive timely, substantive responses.
I want to help shape the next phase of guidance.
Attend CEC meetings, community events, and webinars. The 32 Community Education Councils, Citywide Councils, and the Panel for Educational Policy all have established pathways for active partnership.
Let's Build It Together
The use of AI in K-12 education is an evolving field. The long-term effects on how children learn, think, and develop in the era of AI are not fully understood. No school system in the world has accounted for all the implications.
NYCPS will not pretend to have answers we do not have. We will not wait for certainty that may never come. And we will not let uncertainty become an excuse for inaction while our students navigate this technology.
What we can do is build an approach designed to support human connection and learning—one that creates space for diverse perspectives, treats disagreement as a resource rather than a problem, and holds itself accountable to evidence as it emerges.
This guidance will be continually reviewed and improved based on community feedback, implementation data, and the evolving evidence base. What we learn will change what we do, and we will continue to evolve together.
Equity is not an abstract idea. It's a set of choices we make together in policy. What matters is not just what we do, it's how we do it: by listening to educators, by respecting families, by seeing students as whole people with enormous potential.
— Kamar H. Samuels, Chancellor

