
Stanford’s newly launched initiative, AI Meets Education at Stanford (AIMES), coordinates efforts from the Office of the Vice Provost for Undergraduate Education and the Center for Teaching and Learning to help faculty integrate generative AI into classes while also defining limits on student use, according to Stanford Report.
A key goal is to empower instructors to rethink assignments, class policies, and learning outcomes in light of accessible AI tools. For example, in a writing and rhetoric class, students are prohibited from submitting AI-generated prose, though they may use AI to locate sources or check grammar, provided they cite it transparently.
In an art-practice course, the instructor allows AI to spark ideas or visual sketches but insists on student-driven research, critique, and final composition, and co-creates an AI-use agreement with students to clarify boundaries.
In a philosophy seminar on artificial intelligence, students must craft journal entries and a final paper without relying on large language models for writing, so instructors can assess each student’s voice and reasoning.
In a computer-science class on equity and governance for AI, students role-play as legislative staffers and are permitted limited AI use, such as quickly digesting dense policy material, but must disclose that use and meet high standards of integrity.
These examples reflect a broader intention to foster critical thinking, ethical awareness, and student ownership of projects, rather than relying solely on AI as a shortcut. AIMES also serves as a hub for sharing these teaching approaches and helping faculty adapt them responsibly.
In short, Stanford is navigating the AI-in-education challenge by equipping instructors with frameworks and case studies that integrate generative tools in meaningful ways without undermining the core learning mission.