As we consider our work with AI, it’s vital to address both practice and policy. I can already hear the groans as I share a second post on policy. That’s because policy is often treated as the endpoint of innovation, the place where risk is mitigated and rules are drawn. But we’re intentionally taking a different approach at Lipscomb: we see AI policy as a framework for formation, where our identity meets our imagination.
As generative AI became more prevalent, we recognized that simply policing its use would not serve our community. A surveillance stance reflects an old way of thinking: trying to catch students doing something wrong rather than teaching them to use AI in ethical, effective ways. So, as I’ve shared before, we’ve been asking ourselves a deeper question: What kind of learning community are we becoming in an age of artificial intelligence?
That question became the foundation of our AI policy journey. And so, our first move wasn’t to write restrictions, but to craft a set of guiding principles that I’ve shared with you in previous posts, grounded in our institutional Core Values. These principles, which include love for God, curiosity in learning, service to others, and a commitment to joy and collaboration, are not abstractions to us. They serve as ethical anchors that shape how we think about authorship, academic honesty, and the dignity of work in the age of automation. As a team, our AI Task Force saw this mindset as key to our approach to AI at Lipscomb.
We also realized that principles must become practice, and that’s why we created two new structures that I briefly shared in the last post, Collaboration at the Core: the Academic AI Standing Committee and the AI Super User Task Force. We didn’t conceive these groups as oversight bodies. Instead, they are learning communities in themselves, charged with supporting faculty adoption, student understanding, and ethical discernment in real time. They lead with hospitality, offering feedback, dialogue, and scaffolding rather than top-down enforcement.
This posture is deeply intentional. In our Fear to Flourishing framework, policy must evolve from static rules to adaptive guidelines that reflect the co-creative nature of learning today. AI is not a fixed force; it iterates, adapts, and evolves. Our policies and practices must do the same, and that work will be both iterative and communal. This is why we’ve prioritized ongoing conversation over one-time compliance: policy should reflect our identity and invite us into continuing, co-creative dialogue.
Through faculty training on AI and academic integrity, we’re introducing language that helps faculty discern authorship, evaluate collaboration, and engage students in ethical reflection. Faculty members who take part in the AI sessions and series we hold in our Center for Teaching and Learning stay connected with us. We’re not just tracking completion of sessions or series; we’re working with faculty to track transformation, so they continue to share what they learn with the Center for Teaching and Learning as they rewrite assignments, open dialogue with students, and imagine new models of assessment.
We’ve also updated our syllabus templates and academic integrity policy to include specific guidance on AI use. We do this for standardization, of course, but not for its own sake; we also do it for clarity. When expectations are transparent, students are more likely to learn responsibly and faculty are more confident in guiding them. Our syllabus templates also give faculty multiple options for guiding students’ AI use, depending on the assignment and the course. This builds both AI literacy and AI fluency, deepening students’ understanding of when and how to use AI ethically and effectively.
In The Courage to Teach (2007), Parker Palmer reminds us, “The model of community we seek is one that can embrace, guide, and refine the core mission of education - the mission of knowing, teaching, and learning…to teach is to create a space in which the community of truth is practiced” (p. 97). It is this model of community that we seek as we build our policy iteratively, together. Our policy process aims not to control behavior but to foster a deeper kind of trust, one that invites truth, integrity, and growth.
AI is not a problem to solve but a reality to shape. At Lipscomb, we are attempting to do that by cultivating a policy culture that listens before it legislates, one that reflects both who we are and who we’re called to become as we move together into the unmade future.
References
Palmer, P. J. (2007). The courage to teach: Exploring the inner landscape of a teacher's life. Jossey-Bass.