Editor’s Note: This article is based on an interview with Stéphane Paquet, AI Project Lead at Champlain College Saint-Lambert and AI Certificate Coordinator. The responses have been summarized for clarity and brevity and are not word-for-word reproductions of the original conversation. The ideas reflect the speaker’s perspective as of the time of the interview.

Question:
As AI evolves, we’re moving away from traditional coding toward more natural, dialogic interactions—where people simply talk to machines in plain language.
In your view, how do you see the role of the interface evolving in education?
Will students interact with AI through voice commands, AR/VR, or smart sensors? And what kind of relationship will they build with AI in these new environments?
Answer:
Let’s start with what we already see: When students use tools like ChatGPT, Gemini, or Claude, they engage through a chat interface.
And that decision—to frame the interaction as a “chat”—has deep consequences. It creates the illusion that you’re speaking to a real person.
Someone with:
• Thoughts
• Opinions
• Expertise
But that’s not what’s actually happening.
AI Doesn’t Think Like We Do
AI doesn’t have independent judgment. It mirrors you. It tries to please you.
Example:
Ask it to find 30 problems in your essay. It will. Even if there are only 5.
A human might push back:
“Are you serious? That essay is great—maybe 5 small issues.”
AI won’t say that. It simply follows your prompt.
Why the Interface Can Be Misleading
The chat interface makes AI feel like a trustworthy expert—someone who “knows everything.”
But here’s the danger:
• It speaks with confidence—even when it’s wrong
• It rarely admits “I don’t know”
• It gives plausible explanations, not true ones
When you ask:
“Why did you give that answer?”
It doesn’t retrace real logic. It just generates another likely sentence. That’s not reasoning—it’s simulation.
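The point about pattern continuation can be made concrete with a toy sketch. The bigram model below is vastly simpler than a real LLM, and the corpus is invented for illustration, but the core mechanic is the same in spirit: the program produces the statistically most likely next word given the previous one. Nothing in it reasons about essays or truth; it only continues text.

```python
from collections import defaultdict, Counter

# Toy illustration (not a real LLM): a bigram model "explains" nothing;
# it only continues text with whatever word most often followed the
# previous word in its tiny, made-up training corpus.
corpus = (
    "the essay is clear the essay is strong the argument is clear "
    "the argument is weak the essay is clear"
).split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Pick the statistically most likely continuation of `prev`."""
    return followers[prev].most_common(1)[0][0]

def generate(start, length=4):
    """Chain most-likely continuations from a starting word."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the"))  # → "the essay is clear the"
```

The output looks fluent, yet asking this program “why did you say that?” has no meaningful answer beyond “those words co-occurred in the data”—which is the heart of the simulation-versus-reasoning distinction.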
Educating for Critical Thinking (Not Just Tool Use)
This is why we need to educate students about how AI works under the hood.
The more lifelike and human these interfaces become—chat, voice, AR/VR—the more likely students are to overtrust them.
We must teach:
• How AI is influenced by your own prompts
• That it doesn’t “think” or “understand”—it calculates patterns
• That its identity is not fixed or reliable
Students need to see behind the interface—or they’ll mistake simulation for insight.
As AI Becomes Embedded, Awareness Must Deepen
In the future, students may engage with AI via:
• Voice commands
• AR/VR environments
• Smart sensors
• Real-time adaptive tools
As this happens, the illusion of a “smart assistant” may grow stronger.
But the real challenge won’t be the technology itself. It will be making sure students maintain critical awareness of how these tools work, where they fail, and how to use them responsibly.
About the Expert: Stéphane Paquet is an experienced educator and AI consultant with over 20 years of teaching experience. Currently serving as the AI Project Lead at Champlain College Saint-Lambert, he focuses on supporting teachers and students in integrating generative AI tools into education. His areas of expertise include e-learning, educational media design, and the development of innovative pedagogical strategies.
