Overview
This research investigates how students and educators in India are using generative AI tools like ChatGPT in higher education — and how institutions can respond responsibly. Surveys with 73 students and 27 faculty revealed that AI is already a routine academic companion, but one whose use often remains shallow, ethically unclear, and unsupported by policy.
To address these challenges, the paper proposes a four-pillar framework for responsible educational AI:
- Pedagogical Alignment – embed AI into coursework that demands reflection, not shortcuts
- Transparency and Explainability – teach students to interrogate AI outputs
- Student-Centred Co-Design – involve learners in shaping how AI supports them
- Institutional Guardrails – establish clear, consistent policies and guidance
Rather than banning or blindly embracing AI, this framework repositions it as a structured partner in learning and a catalyst for redesigning teaching, assessment, and governance.
Research Questions
- How are students incorporating generative AI into their study practices?
- What ethical uncertainties shape perceptions of AI in academic work?
- How do educators perceive the risks and opportunities of AI adoption?
- What frameworks can institutions use to guide responsible integration?
Methodology
- Design: Mixed-methods exploratory study
- Participants: 73 students + 27 faculty from Indian higher education institutions
- Data Collection: Two surveys (23 student questions, 18 educator questions)
- Analysis:
  - Descriptive statistics for usage trends
  - Thematic coding of open-ended responses
  - Comparative analysis of student vs. educator perspectives
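The descriptive-statistics step above can be sketched as a simple frequency tabulation. The response categories and data below are hypothetical placeholders, not the study's actual survey data or instrument wording:

```python
from collections import Counter

# Hypothetical self-reported AI usage frequencies from a survey item
# (categories and values are illustrative only).
responses = [
    "daily", "weekly", "daily", "rarely", "weekly",
    "daily", "never", "weekly", "daily", "rarely",
]

counts = Counter(responses)
total = len(responses)

# Descriptive statistics: share of respondents per usage category.
usage_pct = {category: 100 * n / total for category, n in counts.items()}

for category in ("daily", "weekly", "rarely", "never"):
    print(f"{category}: {usage_pct.get(category, 0):.1f}%")
```

The same tabulation extends naturally to the comparative step by computing one distribution per group (students vs. educators) and contrasting the resulting percentages.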