Author: Thomas Bedard

  • Understanding AI Hallucinations

    AI hallucinations are a well-known problem in modern artificial intelligence: models generate incorrect, fabricated, or nonsensical information. Let’s look at why this happens and why it matters. What are AI Hallucinations? AI hallucinations occur when AI systems produce content that has no basis in their training data or doesn’t align with reality. These errors aren’t random…

  • Context Windows & Tokens in AI: A Simple Explanation

    Context windows and tokens are two foundational concepts in AI, especially in Large Language Models (LLMs). Understanding them explains why AI models seem to “forget” parts of a conversation once it grows long enough, and why complex prompts can be misinterpreted by the model. What are Tokens? In AI,…
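
    The two ideas can be sketched in a few lines of Python. This is a toy illustration only: real LLM tokenizers use learned subword vocabularies (such as BPE), not regex splitting, and real context windows hold thousands of tokens, not the handful assumed here.

    ```python
    import re

    def rough_tokenize(text):
        # Illustrative only: splits into words and punctuation marks.
        # Real tokenizers break text into learned subword pieces instead.
        return re.findall(r"\w+|[^\w\s]", text)

    def visible_context(tokens, window=8):
        # A model only "sees" the most recent `window` tokens; anything
        # older falls outside the context window and is effectively forgotten.
        return tokens[-window:]

    tokens = rough_tokenize("Context windows limit how much text a model can see.")
    print(len(tokens))                      # 11 rough tokens
    print(visible_context(tokens, window=5))
    ```

    Once a conversation produces more tokens than the window holds, the earliest ones are simply no longer part of the model's input, which is what “forgetting” amounts to.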

  • Chain-of-thought reasoning (Simply Explained)

    What is Chain-of-thought reasoning (CoT)? “Chain of thought (CoT) mirrors human reasoning, facilitating systematic problem-solving through a coherent series of logical deductions”. IBM: https://www.ibm.com/think/topics/chain-of-thoughts Essentially, CoT encourages an AI model to take intermediate steps before arriving at the final answer. Instead of going straight to the solution, the model explains its thought process in…
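
    A minimal sketch of what this looks like in practice. The prompt templates below are assumptions for illustration (no model is called); the “Let's think step by step” phrase is the classic zero-shot CoT cue that nudges a model to write out intermediate steps.

    ```python
    def direct_prompt(question):
        # Asks for the answer immediately, with no intermediate reasoning.
        return f"Q: {question}\nA:"

    def cot_prompt(question):
        # Appending a step-by-step cue encourages the model to emit its
        # reasoning chain before committing to a final answer.
        return f"Q: {question}\nA: Let's think step by step."

    print(cot_prompt("A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much is the ball?"))
    ```

    The only difference is the cue in the prompt, yet it often changes whether a model reasons through a multi-step problem or jumps to a (frequently wrong) answer.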

  • Zero-Shot Learning (Simply Explained)

    What is Zero-shot learning (ZSL)? Zero-shot learning (ZSL) is a machine learning approach that enables models to recognize and categorize things they have never seen before, without needing any labeled examples of those new things during training. Unlike supervised learning, which requires large amounts of explicitly labeled data, ZSL uses…
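
    Here is a toy version of the idea, assuming an attribute-based setup (one common ZSL approach): classes are described by attributes, and an unseen class is recognized by matching an input's attributes to those descriptions, with no labeled examples of it. The classes and attributes are invented for illustration.

    ```python
    # Each class is described by binary attributes. "zebra" is described
    # but never appears in training data — it is the unseen class.
    CLASS_ATTRIBUTES = {
        "horse": {"stripes": 0, "hooves": 1, "wild": 0},
        "tiger": {"stripes": 1, "hooves": 0, "wild": 1},
        "zebra": {"stripes": 1, "hooves": 1, "wild": 1},
    }

    def classify(observed):
        # Pick the class whose attribute description best matches
        # the observed attributes — no labeled examples needed.
        def score(attrs):
            return sum(observed[k] == v for k, v in attrs.items())
        return max(CLASS_ATTRIBUTES, key=lambda c: score(CLASS_ATTRIBUTES[c]))

    print(classify({"stripes": 1, "hooves": 1, "wild": 1}))  # → zebra
    ```

    Real ZSL systems replace the hand-written attribute table with learned semantic embeddings, but the matching principle is the same.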

  • What is RAG & How Does it Work?

    Retrieval-Augmented Generation (RAG) is an advanced technique in Generative AI that blends text generation with real-time information retrieval. Simply put, RAG allows AI models to access external databases or knowledge bases to fetch accurate and up-to-date information when generating responses. This ensures that the content produced is not only coherent but also factually correct and relevant to the…
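
    The retrieve-then-generate flow can be sketched like this. It is a deliberately simplified assumption-laden toy: the documents are made up, and the retriever ranks by word overlap, whereas real RAG systems use vector similarity over embeddings before passing the retrieved context to an LLM.

    ```python
    DOCS = [
        "The Eiffel Tower is 330 metres tall.",
        "Python 3.12 was released in October 2023.",
        "RAG combines retrieval with text generation.",
    ]

    def retrieve(query, docs, k=1):
        # Toy retriever: rank documents by word overlap with the query.
        q = set(query.lower().split())
        return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

    def build_prompt(query):
        # The retrieved passage is injected into the prompt so the model
        # grounds its answer in external, up-to-date information.
        context = "\n".join(retrieve(query, DOCS))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    print(build_prompt("How tall is the Eiffel Tower?"))
    ```

    Because the answer is drawn from the retrieved context rather than only from the model's frozen training data, the response stays factual even when the underlying facts change — just update the document store.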

  • How Do LLMs Understand Words with Multiple Meanings?

    Embeddings! They are a way to convert words, sentences, or even full-fledged documents into numerical representations (vectors) that computers can understand. You can think of embeddings as translating human language into a form that machines can easily process and analyze. To better understand embeddings, let’s start with a simple analogy. Let’s say you are arranging…
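
    A tiny numeric sketch of the idea. The 3-dimensional vectors below are hand-made for illustration (real embeddings have hundreds of learned dimensions), but they show how a word like “bank” can have two different vectors depending on its sense, and how cosine similarity finds the nearest meaning.

    ```python
    import math

    # Hand-crafted toy "embeddings" on three invented axes
    # (roughly: finance, nature, size). Purely illustrative.
    EMBEDDINGS = {
        "bank_finance": [0.9, 0.1, 0.2],
        "bank_river":   [0.1, 0.9, 0.3],
        "money":        [0.95, 0.05, 0.1],
        "water":        [0.05, 0.85, 0.2],
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means same direction (similar meaning).
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    # The financial sense of "bank" sits near "money", not "water".
    print(cosine(EMBEDDINGS["bank_finance"], EMBEDDINGS["money"]))
    print(cosine(EMBEDDINGS["bank_finance"], EMBEDDINGS["water"]))
    ```

    This is also the intuition behind disambiguation: surrounding context (“deposit money at the bank” vs. “fish by the bank”) pulls the word's vector toward one sense or the other.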
