Generative AI Best Practices

ACCEPTABLE USE OF AI TOOLS IN TEACHING AND LEARNING

Artificial Intelligence (AI) tools can support creativity, productivity, and student engagement when used thoughtfully and ethically. At Xavier, we encourage faculty to explore AI as a complement to sound pedagogical practice while remaining mindful of its limitations and risks.

Faculty are welcome to use generative AI tools (e.g., ChatGPT, Copilot, Perplexity) to assist with course planning, resource development, and administrative tasks, provided the use aligns with the following guidelines:

  • Academic Integrity: AI should not replace critical faculty responsibilities such as grading, providing personalized feedback, or ensuring academic standards. When AI is used to support these tasks, results must be reviewed and edited to ensure accuracy and alignment with course outcomes.

  • Transparency: If AI tools are used to generate content shared with students (e.g., syllabi language, instructional materials, rubrics), it is recommended that faculty acknowledge this use to model transparency and ethical engagement with AI.

  • Student Use: Faculty should establish clear expectations around student use of AI in coursework. Guidance should be included in the syllabus and aligned with Xavier’s academic honesty policy.

  • Data Privacy: Avoid inputting confidential student information or proprietary materials into AI tools, especially those without institutional data protections. When in doubt, treat AI tools as public platforms.

  • Human Oversight: Generative AI can assist but should not be the sole author of educational decisions. Faculty expertise, context, and judgment remain essential in any AI-supported process.

To support responsible exploration of AI tools, we’ve compiled tool-specific guidance and teaching ideas on each GenAI tool page.

BEST PRACTICES FOR FACULTY USING AI TOOLS

As AI tools continue to evolve, faculty can thoughtfully integrate them to enhance teaching, streamline course design, and foster student engagement. Below are some best practices to consider when using AI in your academic work:

1. Start Small and Purposefully

Experiment with AI in low-stakes areas first—such as brainstorming discussion prompts, rewriting learning objectives, or drafting email templates. Use these tools to enhance—not replace—your voice and expertise.

2. Critically Review AI Outputs

AI-generated content may include errors, biases, or outdated information. Always fact-check and revise to ensure clarity, accuracy, and alignment with your discipline and course goals.

3. Model Ethical Use

When using AI in your teaching, consider showing students how you use it responsibly. This might include demonstrating prompt engineering, evaluating outputs, or discussing the ethical considerations involved.

4. Clarify Expectations for Students

Be explicit in your syllabus and assignments about whether and how students may use AI tools. Provide examples of acceptable use (e.g., outlining an essay) and misuse (e.g., submitting AI-generated work as their own).

5. Protect Sensitive Information

Avoid entering student data, assessment questions, or unpublished research into public AI tools. Treat these platforms as public unless they are licensed, institution-approved, or offer clear data protections.

6. Use AI to Save Time—Not Cut Corners

AI can reduce prep time for course materials, emails, or policy drafts, but it should not be a substitute for building relationships, giving feedback, or engaging students. Keep the human connection central to your teaching.

7. Stay Informed and Curious

The AI landscape is rapidly changing. Stay updated through professional development, campus conversations, and experimentation. What works this semester may evolve next term.


LEGAL & ETHICAL CONSIDERATIONS WHEN USING AI TOOLS

While AI tools can offer time-saving and creative benefits, faculty must remain vigilant about legal compliance and data security—especially when it comes to student data, copyrighted materials, and institutional responsibilities.

✅ DOS AND DON'TS OF AI USE IN HIGHER ED

✅ Do: Use AI to brainstorm, outline, or draft course content with your oversight.
❌ Don’t: Use AI to grade student work without careful review and customization.

✅ Do: Ask AI to rephrase or polish your communications or policies.
❌ Don’t: Submit confidential or proprietary university documents into AI tools.

✅ Do: Provide students with AI guidance in your syllabus.
❌ Don’t: Assume students know your expectations around AI without explicit instructions.

✅ Do: Evaluate AI output for accessibility, tone, and inclusivity.
❌ Don’t: Use AI to create final versions of materials without human editing.

✅ Do: Stay updated on institutional policies regarding AI use.
❌ Don’t: Upload student work to detect plagiarism via unvetted AI tools.

🔒 FERPA AND STUDENT DATA

  • Do not enter student names, ID numbers, grades, or assignment submissions into public AI tools like ChatGPT, Copilot, or Perplexity.

  • Do treat AI tools as public spaces unless the tool is vetted and approved by the university for secure use.

  • Remember: FERPA protects personally identifiable information (PII). Even context clues (e.g., referencing a student’s major, project, or disability accommodations) may qualify as PII.


⚠️ COPYRIGHT AND CONTENT OWNERSHIP

  • AI tools may generate or remix copyrighted content without proper attribution. Before using AI-generated media (images, text, video) in course materials, review for originality and licensing.

  • Be cautious when using AI to analyze or summarize articles or textbooks, especially if those materials are not open-access.

  • If in doubt, default to faculty-generated or licensed resources.


📚 INSTITUTIONAL POLICIES

While Xavier University continues to develop its formal AI use policies, all faculty are expected to:

  • Uphold academic integrity

  • Protect student data

  • Maintain accountability for any AI-generated content used in instruction

Check with Information Technologies for updates and guidance on secure, approved tools.