A Predictive Theory of Spatial Remapping.

Lately, I've been quite obsessed with the hippocampus and how the different hippocampal areas and related regions all interact to enable exploration. However, I recently took a step back to consider how all of this fits into the rest of my understanding of how the brain might function. This effectively led to the following mermaid diagram (a flowchart). I've included an analogy of how to interpret it underneath, followed by a more succinct explanation of my thinking.

```mermaid
graph TD
    A["Agent enters a new room."] --> B{"Is this a new environment? (Is precision from the prior environment high?)"}
    B -- Yes --> C["Populate the environment with place fields."]
    B -- No --> D["Retrieve stored place field map."]
    D --> E["Set precision high."]
    E --> F{"As the agent explores, the current spatial location (determined by place fields) is referenced and bound features are retrieved (bound by grid-reference from grid cells). If no features are stored, similar features from other environments are retrieved from memory (patterns broken down in the DG and then filled in via pattern completion in CA3?). These features are used as priors to make predictions. This is effectively the same as asking: ''Did I correctly predict what I would find here?''"}
    F -- Yes --> Q["Increase precision."]
    F -- No --> I{"Is precision high above error threshold?"}
    I -- Yes --> P["Lower the precision and discount the error as noise."]
    I -- No --> L["Bind the incoming feature info (prediction error) to the spatial-ID."]
    L --> M{"Is precision below re-map threshold? (This is for error handling where a similar environment is mistaken for a familiar one)."}
    M -- Yes --> B
    M -- No --> F
    P --> M
    Q --> F
    C --> R["Set precision low."]
    R --> F
```
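
To make the branching in the flowchart concrete, here's a minimal Python sketch of the loop. The threshold values, increments, function names, and the plain dictionary used as the place field map are all illustrative placeholders of my own, not claims about actual hippocampal machinery; they just trace the decision logic above.

```python
# A minimal sketch of the decision loop in the diagram. All thresholds,
# increments, and data structures are illustrative placeholders.

ERROR_THRESHOLD = 0.7   # above this, a prediction error is discounted as noise
REMAP_THRESHOLD = 0.2   # below this, the environment is treated as novel again

def enter_room(env_id, stored_maps):
    """Top of the diagram: is this a new environment?"""
    if env_id in stored_maps:
        return stored_maps[env_id], 0.9   # retrieve stored place field map, set precision high
    return {}, 0.1                        # populate with fresh place fields, set precision low

def explore_step(location, observed, place_map, precision):
    """One pass through the exploration loop (node F onwards)."""
    predicted = place_map.get(location)   # features bound to this spatial ID
    if predicted == observed:             # "Did I correctly predict what I would find here?"
        precision = min(1.0, precision + 0.05)
    elif precision > ERROR_THRESHOLD:
        precision -= 0.1                  # lower precision, discount the error as noise
    else:
        place_map[location] = observed    # bind the incoming feature info to the spatial ID
    remap = precision < REMAP_THRESHOLD   # misidentified environment -> back to the top
    return place_map, precision, remap

# Example: a first visit binds a feature; a later visit would confirm it.
maps = {}
place_map, precision = enter_room("neighbours_apartment", maps)
place_map, precision, remap = explore_step("entrance", "guest bathroom on the right", place_map, precision)
maps["neighbours_apartment"] = place_map
```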

Analogy - A Neighbour's Apartment

One aspect of spatial exploration and navigation that I've been interested in is how we determine whether an environment is familiar to us, or if it is novel. Necessarily, this must involve memory retrieval. We look to our memories and determine whether the features we see are similar to what we've seen before. And so, the question becomes: "How familiar are the features within this environment?"

That's where I got stuck for a while, as it seems to beg the question: to determine whether an environment is familiar, we must decide if its features are familiar. But to determine if the features are familiar, we must know if the context that we find them in is familiar to us (i.e. the environment). For example, consider going to a neighbour's apartment. It's the same layout, the same furnishings, etc. The features must all be similar - it's the context that matters. And so, to answer whether an environment is familiar, we must first form an expectation based on the previous environment, like so:

1) I'm currently in my own apartment, and it's familiar as I've been here for a long time.

2) I'm going to move into the hallway from my apartment. I recall being in the hallway before, so I expect it to be familiar.

3) I'm then going to move from the hallway into my neighbour's apartment which I've never been in before, so I expect it to be unfamiliar.

Thus, even though the features might appear familiar, my expectation of the new apartment environment is that it's unfamiliar. In predictive processing terms, this would mean that I've entered this new environment with low precision - the confidence I have in the predictions about what I will see is low, and so I will rely more upon my incoming sensory data and less on my predictions. This effectively gets around 'begging the question', so to speak.
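
One way to read "relying more on sensory data than predictions" is as a gain on the prediction error. The snippet below is only an illustration of that reading, using the precision convention from this post (confidence in the prediction); the linear weighting is a toy choice, not a fully specified predictive coding model.

```python
def update_belief(prediction, sensory_input, precision):
    """Precision here means confidence in the prediction (as used in this post):
    high precision -> the error barely moves the belief;
    low precision -> the senses dominate. The linear weighting is illustrative."""
    prediction_error = sensory_input - prediction
    return prediction + (1.0 - precision) * prediction_error

# Entering the neighbour's apartment with low precision: the update is large.
print(update_belief(prediction=5.0, sensory_input=8.0, precision=0.1))  # ~7.7, senses dominate
# A high-precision environment: the same error barely registers.
print(update_belief(prediction=5.0, sensory_input=8.0, precision=0.9))  # ~5.3, prediction dominates
```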

Next, we need a way to collect information about this novel environment so that we can begin to make accurate predictions about it in the future. In this case, our incoming sensory data will match our knowledge of our own apartment, since some of the features are similar. We'll begin to make predictions - for example, that when we initially walk in, the guest bathroom will be on the right. When this is confirmed, we notice that there's a lack of prediction error, and precision is increased, meaning that we're more likely to rely upon future predictions rather than sensory data. However, when we notice a difference - maybe the sofa in the living room is orientated differently - this results in a cascade of prediction error. Since our precision is still relatively low, we treat this prediction error as valid, and use it to update our knowledge of the current environment. Then, the next time a prediction is made regarding the living room or sofa, no prediction error occurs and precision is therefore increased. Eventually, we end up with a valid collection of spatially bound features that constitutes a cognitive map of the neighbour's apartment.

Let's say that you leave the neighbour's apartment after having a great time, and they invite you around the next day for a party. Having spent a number of hours there the previous day, you have a good idea of what to expect of the environment - it's familiar to you. So, before you knock on the door, your brain is expecting to see the same layout as yesterday, and precision is therefore high - you're going to rely more on your predictions than your sensory data. However, when you enter, you notice that the layout of the living room has changed. A board game and various foods lie on the coffee table, and the sofa has been moved to make space. Your predictions were wrong once again, causing a cascade of prediction error - however, your precision was initially high. Thus, your brain initially discounts the discrepancies as noise in the sensory data, but gradually lowers its confidence in the predictions (the precision) with each error, until the error finally propagates far enough to update your model of the neighbour's apartment.
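
If it helps, the party scenario can be sketched as a loop in which each surprising glance is first discounted but also chips away at precision. The decay rate and update threshold below are arbitrary placeholders chosen for illustration, not measured quantities.

```python
# Illustrative dynamics: under high precision, surprising observations are
# first absorbed as noise, but repeated errors erode precision until the
# error finally updates the stored map.
precision = 0.9
UPDATE_THRESHOLD = 0.6   # below this, the error is allowed to update the map

for glance in range(1, 5):
    surprising = True    # the living room keeps failing to match the prediction
    if surprising and precision >= UPDATE_THRESHOLD:
        precision -= 0.12   # discounted as noise, but confidence takes a hit
        print(f"glance {glance}: discounted as noise, precision now {precision:.2f}")
    elif surprising:
        print(f"glance {glance}: error propagates, the map of the living room is updated")
        break
```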

Functionally, this isn't noticeable to us - it happens so quickly that it's negligible. It does, though, explain why we sometimes have to do a double-take to reconfirm something surprising that we see, or why we might jump at shadows in the evening, mistakenly seeing movement where there is none - it's our precision either discounting noise, or putting too much emphasis on our senses and therefore allowing too much noise to reach our awareness.

Succinct Explanation

This is my theory on how predictive processing could interpret the hippocampus's role in updating spatial memory in new and familiar environments. The diagram above demonstrates the decision-making process an agent goes through when entering a new room, with a focus on the concepts of precision, place fields, and spatial memory.

  1. Entering New Environments: The diagram starts with an agent entering a new room, immediately questioning if the environment is new or familiar. This initial distinction is critical, as it determines the subsequent cognitive process.
  2. New Environment Processing: If it's a new environment, the agent populates it with place fields and sets precision low, which results in an openness to new information and less reliance on prior knowledge.
  3. Familiar Environment Processing: Conversely, in a familiar environment, the agent retrieves a stored place field map, setting precision high. This results in a greater reliance on existing memory and less openness to new information.
  4. Predictive Processing in Spatial Navigation: As the agent explores, it continuously checks whether its predictions about the environment match the actual features encountered. This is a dynamic process where the precision is adjusted based on prediction accuracy.
  5. Error Handling and Precision Adjustment: The diagram attempts to describe how the system handles errors – by adjusting precision levels and possibly remapping following repeated prediction error due to the environment being initially misidentified as familiar.
  6. Complex Interplay of Memory and Perception: Overall, I've tried to capture a complex interplay between memory (both of similar and specific environments) and current perceptual input, where the hippocampus plays a central role in navigating this interplay. Needless to say, this is an oversimplification of a complex set of processes. However, it's a useful starting point for investigation, and one I hope to use in future research.

A Potential to Explain Partial and Rate Remapping

When coming up with this theory, I only considered how to explain when global remapping occurs vs the loading of a previously defined mapping of place fields. I had not considered partial remapping nor rate remapping. However, I think the theory does have the explanatory power to handle these circumstances, though the following is just off the top of my head and needs considerably more thought put into it before I'd be happy asserting its merit. In any case, here's the thought:

When the decision point "Is precision high above error threshold?" is reached, there is the option either to discount a conflicting observation as noise or to update the existing map. Either of these could influence the remapping of specific place fields or their firing rates. For instance, if a feature must be updated, it stands to reason that a partial remapping must occur in order to facilitate this update. If a conflicting observation is discounted as noise and the precision is lowered, we would need a way to confirm that the observation was truly in error.

This might be achieved via a dampening or sharpening effect whereby expected incoming signals from that group of cells are inhibited (or the expected error is sharpened by being brought closer to threshold), prioritising detection of the same prediction error as was previously received. If this error recurs, it's a strong indication that the precision (the confidence in that particular prediction and its respective model) is still too high, and a greater reduction is necessary. A consequence of this effect would be a change in the likelihood of that place cell firing due to the inhibitory effect, and therefore a potential change to its firing rate, thereby explaining rate remapping.
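
Purely as a sketch of that speculative mechanism (and nothing more), the recheck might look something like the following, where the firing-rate scaling is a stand-in for the proposed inhibitory dampening rather than a model of actual place cell physiology:

```python
# Speculative sketch: after an error is discounted as noise, dampen the
# expected input and watch for the same error to recur. All numbers are
# placeholders for illustration only.
def recheck_discounted_error(same_error_recurs, precision, firing_rate):
    firing_rate *= 0.8                          # inhibitory dampening -> a rate remapping proxy
    if same_error_recurs:
        precision -= 0.3                        # strong evidence that precision was too high
    else:
        precision = min(1.0, precision + 0.05)  # the observation really was noise
    return precision, firing_rate
```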