
As a student researcher, I chose the question “To what extent do predictive coding, Bayesian inference, and reinforcement learning explain how the brain updates its internal models during learning?” because it sits right at the intersection of curiosity, confusion, and—if I’m honest—a little bit of academic bravery. This topic explores how the brain, that mysterious three-pound “CEO of everything we do,” continuously updates what it believes about the world. And frankly, if I can understand even 10% of that process, I might finally explain why I still check my phone even when I know there are no new messages.
The concepts of predictive coding, Bayesian inference, and reinforcement learning are not just fancy terms meant to intimidate students before exams. They are powerful frameworks for explaining how humans learn from experience. Predictive coding suggests that the brain is constantly generating predictions about what it will perceive next and correcting itself whenever those predictions miss: basically, it's like a friend who always thinks they know what will happen next but is occasionally (and sometimes embarrassingly) wrong. Bayesian inference, on the other hand, proposes that the brain combines its prior beliefs with new evidence according to the rules of probability, almost like it's secretly doing statistics in the background while we struggle to pass our math tests. Reinforcement learning adds rewards and punishments into the mix, meaning the brain also adjusts its behavior based on which actions pay off, a bit like a child who learns quickly when snacks are involved.
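To convince myself I actually understood the difference, I put together a tiny toy sketch in Python. Every number and name in it (prior, reward, learning_rate, and so on) is made up purely for illustration, not taken from any real model of the brain; it just shows the flavor of update each framework is usually associated with: a prediction-error correction, a Bayes' rule belief update, and a reward-prediction-error update.

```python
# Toy sketch of the three update rules (all values are made-up illustrations).

# 1) Predictive coding flavor: nudge a prediction by a fraction of the error.
prediction = 0.5            # the brain's current guess about some quantity
observation = 0.8           # what actually happened
learning_rate = 0.1
error = observation - prediction
prediction += learning_rate * error        # prediction moves toward the observation

# 2) Bayesian inference flavor: update the probability of a hypothesis.
prior = 0.3                  # P(hypothesis) before seeing the evidence
likelihood = 0.9             # P(evidence | hypothesis)
likelihood_alt = 0.2         # P(evidence | not hypothesis)
evidence = likelihood * prior + likelihood_alt * (1 - prior)
posterior = likelihood * prior / evidence  # Bayes' rule

# 3) Reinforcement learning flavor: update a value estimate from reward.
value = 0.0                  # estimated value of an action
reward = 1.0                 # the snack was obtained
alpha = 0.1
value += alpha * (reward - value)          # reward prediction error drives learning

print(round(prediction, 3), round(posterior, 3), round(value, 3))
```

Even in this cartoon version, the family resemblance is obvious: all three rules move a belief or a value a little bit in the direction of whatever surprised them.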
Now, here’s where the “rumors” come in. Around campus (and by campus, I mean students whispering in libraries while pretending to study), there seems to be an ongoing rumor that the brain might actually be the most efficient machine ever created—far more advanced than any artificial intelligence. Some even joke that if the brain were a student, it would never miss a deadline, never forget a concept, and probably tutor the professor. Of course, reality quickly humbles us when we forget where we put our keys five minutes ago. This contradiction makes the topic even more fascinating: if the brain is so powerful, why does it sometimes act like it just woke up from a nap it didn’t plan?
Another “rumor” is that these three theories—predictive coding, Bayesian inference, and reinforcement learning—are competing to be the “ultimate explanation” of how learning happens. It’s almost like a scientific reality show: “Who Wants to Be the Brain’s Favorite Theory?” Predictive coding claims, “I handle expectations!” Bayesian inference responds, “I manage uncertainty!” Reinforcement learning jumps in with, “I control behavior through rewards!” Meanwhile, the brain quietly uses a bit of all three, like a chef mixing ingredients without telling us the full recipe.
I chose this question because it allows me to explore whether these theories work independently or together in explaining learning. It also challenges me to think critically about how abstract models relate to real human experiences. For instance, when a student studies for an exam, predicts questions, adjusts their understanding after mistakes, and feels rewarded by good grades—is that predictive coding, Bayesian updating, reinforcement learning, or all of them working behind the scenes like a well-coordinated (but slightly chaotic) team?
