Awareness of Agentic AI: Essential Knowledge for Faculty
Written in collaboration by Center for Teaching and Office of Teaching, Learning, and Technology staff
By this point, the majority of faculty are aware of generative AI tools like Copilot. According to the most recent national survey conducted by AAC&U and Elon University, 68% of faculty are using generative AI tools for teaching and learning purposes in their classes.1 Less widely known are the more recent developments in agentic AI.
What is Agentic AI?
Beyond generative AI is agentic AI. While generative AI involves back-and-forth interactions between the user and a large language model, an AI agent can take a set of instructions from the user and then independently complete actions on the user’s behalf. AI agents are goal-directed systems that can plan, decide, and take actions over time using other tools, sometimes with minimal human intervention. AI agents are being used in customer support, personal productivity, and even to support research.
Why Is Agentic AI a Critical Concern?
While agentic AI has positive uses, it also creates significant risks. In addition to the examples above, some AI agents are capable of logging into a learning management system like Canvas and completing coursework without involving the student. This is not a hypothetical science fiction story; the technology to do this is here, now.
These tools ask students to hand over their login credentials, which is a security risk, a privacy issue, and a violation of the university’s acceptable use of technology resources policy. If a student uses one of these tools, like the now-decommissioned Einstein AI, your ICON course, including grades, discussion forum posts, and the course roster, becomes accessible to the AI. If an instructor were to grant one of these tools access, FERPA-protected student records would be exposed.
How to Respond
Awareness of agentic AI is a first step, but awareness is not enough. Now is the time to look at your course assessments and ask how they measure student learning and whether they are resistant to being completed by AI.
Some guidelines and strategies to get started:
- Focus on the learning process instead of the final product. Require students to submit drafts that show the development of their thinking instead of just a finished product. Consider adding metacognitive video reflections using Panopto Assignments to your course.
- Design active learning activities and low-stakes assessments that make students’ learning more visible in the classroom. In‑class problem‑solving, group work, peer feedback, and intentionally designed checkpoints help students engage authentically, while giving instructors insight into their thinking that AI tools cannot replicate.
- Incorporate pen-and-paper assignments, uploaded to Gradescope for easier grading, to document the development of students’ thinking and establish clear evidence of authorship.
- Clearly articulate to students what they will learn, why it matters, and how the work supports their intellectual development. Emphasize the importance of the “friction” in the learning process and make explicit connections between assignments and course learning objectives.
- Set clear expectations early around what is considered acceptable AI use and what is not permitted in your classroom. Explicitly mention what is out of bounds, such as AI glasses and agentic AI tools. Make these boundaries visible in your syllabus and assignment instructions so students understand these expectations before they begin their work.
These strategies can help create a learning environment where students engage more deeply and ethically in your course. Please know that you don’t have to navigate these challenges alone; the Center for Teaching can serve as a thought partner as you refine course and assignment design or communicate your expectations around AI use. You can use their online form to request a teaching consultation. You are also welcome to explore their resources on designing assignments that address AI and communicating expectations about AI use with students.
Final takeaway
Agentic AI is a new level of computing capability that introduces challenges to academic integrity and data privacy. ITS will continue to watch for developments in this space. The rapid pace of development, such as this shift from generative to agentic capabilities, requires faculty to stay informed about these new technologies. Consider joining the AI Faculty Interest Group for a monthly meeting. Agentic AI is a challenge, but it also presents an opportunity: to examine your courses, to explain the “why” behind your assignments, and to set clear guidelines for AI use in your classroom. Take steps to ensure that technology remains a tool for education rather than a replacement for it.