A copy of the manuscript with full citations is available at the link above; I’ll summarize the gist of it below. COVID-19 made me change directions halfway through the semester, but I think I was still able to pull together a nice result.
Here’s a video showing the process of adding and labeling mnemonics for the periodic table of elements:
The method of loci (a.k.a. the memory palace technique) is an incredibly popular strategy among expert memorizers that involves associating visual representations of information with spatial layouts. Spatial computing devices (i.e. AR and VR headsets) make it easier for beginners to learn and practice the technique, as it turns from a mental task into an experiential one.
Past research has shown that virtual memory palaces yield higher recall through an immersive HMD than through a 2D display. Studies also found that anchoring mnemonics in space improved recall compared to simply fixing them in the center of the user’s vision (as was the case in MIT’s NeverMind). The method is typically taught with the recommendation to use an environment that is personally familiar, like one’s home or route to work, so it would seem that mixed reality is the ideal platform for a virtual memory palace. However, when applied mentally, the MoL has been shown to perform no better with a personally familiar environment than with a novel one. For an immersive virtual memory palace, where information encoding occurs experientially, it’s not clear whether the same results would emerge.
Do personally familiar environments yield higher recall performance for an immersive virtual memory palace? In turn, is mixed reality superior to virtual reality for the method of loci?
What does a mixed reality memory palace look like?
For this project, I decided to use the fairly novel approach of simulating mixed reality on VR hardware (specifically an Oculus Quest). The idea was to 3D scan some real spaces and use the scans as the environment in a 3D engine, then grant the user full freedom to walk with physical locomotion and interact via hand tracking as if they were really in that space.
The rationale here was that VR hardware is far more affordable (I was able to apply for a grant to buy the $400 Quest), offers a much higher FOV, and simulates the effect of perfect spatial tracking and occlusion compared to AR/MR headsets like HoloLens. Additionally, it grants flexibility to control the “real” environment in my experiment.
Initially I was going to run a memory experiment on students from UMass and Amherst College. I would digitally reconstruct a dining hall on the UMass campus, which would be “familiar” for the participants from UMass and “unfamiliar” for the other half from Amherst College. Then I’d have subjects memorize lists of simple words (~11 words per list) paired with visual representations under various conditions (e.g. no strategy, mental MoL, simulated MR application). I’d test them on these wordlists at various delay phases, up to around 4 weeks after the experiment. By comparing performance across the two subject groups, I could look for significant effects on recall over time.
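To make the analysis concrete, here is a minimal sketch of how recall performance could be scored and compared across the two groups at each delay. This is purely illustrative (not my actual evaluation tool), and all the data structures and names here are hypothetical:

```python
# Illustrative sketch: scoring free-recall performance per participant,
# then comparing group means at each delay. Hypothetical data throughout.

def recall_score(studied, recalled):
    """Proportion of studied words correctly recalled (case-insensitive)."""
    studied_set = {w.lower() for w in studied}
    hits = sum(1 for w in recalled if w.lower() in studied_set)
    return hits / len(studied)

# group -> delay in days -> list of per-participant scores
results = {
    "familiar":   {0: [], 7: [], 28: []},
    "unfamiliar": {0: [], 7: [], 28: []},
}

# One toy participant: studied a short wordlist, recalled two of three items.
studied = ["anchor", "candle", "falcon"]
recalled = ["Candle", "river", "anchor"]
results["familiar"][0].append(recall_score(studied, recalled))

def group_mean(scores):
    return sum(scores) / len(scores) if scores else float("nan")

for delay in (0, 7, 28):
    fam = group_mean(results["familiar"][delay])
    unfam = group_mean(results["unfamiliar"][delay])
    print(f"day {delay}: familiar={fam:.2f} unfamiliar={unfam:.2f}")
```

A real analysis would of course use proper statistics (e.g. a mixed ANOVA over group × delay) rather than raw means, but the scoring step looks roughly like this.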
Most of my work for this thesis revolved around organizing this experiment, as I had to apply for IRB approval, get grant funding for equipment, create advertisements, perform 3D reconstruction of the dining hall, build an evaluation tool, and reserve a large space to run the experiment. I worked with dining hall managers to access the hall over break and take tons of 360 captures and DSLR photos for my manual reconstruction. Here’s a little render of some of the assets from Worcester:
Shortly before I was going to run the experiment, COVID-19 was ramping up in the US and the whole campus had to transition to remote learning, which made it impossible to run an in-person study.
I switched gears to simply build a prototype MR memory palace. I built the demo in Unity with MRTK, thanks to the MRTK-Quest extension provided by Eric Prvncher. The main features include:
Rather than using the dining hall, I made a rough 3D scan of my own room using a mobile app leveraging the old Google Tango algorithms. I shared an early demo of the app on Twitter with a side-by-side video showing the MR simulation next to its real-world equivalent, which Eric shared:
A later video of the final prototype is at the top of this page.
I also want to share this interesting exchange I had with the creator of MunxVR (perhaps the most polished VR memory palace application) regarding his thoughts on AR/MR as a platform for memory palaces.
Click the tweet to see the full thread.
1) AR for memory palaces could be great, but it depends on application. There are memory palaces and then there are memory palaces. The technique is usually demoed for simple lists using a straightforward journey. Most subjects can’t be condensed to simple lists.— Aaron Ralby (@Linguisticator) May 11, 2020
I’d like to work more on XR memory tools. This summer I am practicing my own memory skills, which I think will make it clearer how to build a useful memory palace application. I think Aaron has the right idea about embedding virtual environments as small replicas in the real world. That seems to be the most robust approach for mixing the benefits of VR and AR.