Lately, TAE has been pondering the different ways the human mind stores visual memories versus smell memories or sound memories. For example, if I asked you to picture in your head what your mom looks like, you could probably conjure up a pretty fair image of her. But if I asked you to recall what she smells like, you would almost certainly draw a blank. Sure, you could say "she smells of lilac," but you wouldn't be able to actually summon the smell of lilac so that your mind filled with the scent itself.
However, if you were walking down the street and smelled lilac, you'd almost certainly think "that is the smell of my mom".
With sight memories and smell memories at the polar ends of a spectrum, sound memories fall somewhere in the middle. If I asked you to remember the song "My Heart Will Go On" by Celine Dion, made famous by the movie Titanic, you almost certainly could recall the chorus, and perhaps the verses. But what if I played a different Celine Dion song for you and asked you to identify the singer? In moments you'd be able to determine, from memory, what Celine Dion sounds like and identify her as the singer of a song you had no memory of ever hearing.
The different ways humans store and recall memories for different senses lead to some interesting implications when one develops human-machine interfaces.
One concept I find interesting is adding a direct bypass to the optic nerve, allowing visual input to reach the brain without having to use the eyes. The idea is that information shunted onto the optic nerve would be perceived by the brain as visual input, even as the brain continued to receive actual visual input from the eyes. I humbly submit that you could "overlay" augmented reality in this fashion without any device over the eyes. Imagine seeing the internet in front of your face when nothing is actually there.
Now take it one step further. What if the device that could write visual data to your optic nerve could also read data from it? Why not read as well as write? This has implications for memory. Imagine you want to go to a building in a town you have never visited. You would probably research the location on a map; if you were lucky, you could see the building's facade on Street View.
But what if instead (for a small fee) you could have someone else's visual memory (shared with their permission) of traveling to that building played directly onto your optic nerve? You could observe someone else's memory and, by doing so, make it your own, so that when you went looking for the place, your brain would remember being there before...
Or what if you could write directly to the olfactory nerve, so that when a person wanted to "remember" what a peach smelled like, the nerve could be remotely stimulated in such a way that the brain would think the nose was actually smelling a peach? The implant could record the nerve excitations when a person smelled something, and later, when they wanted to remember it, recreate that pattern. Imagine being able to recall the exact smell of your girlfriend on the night you first kissed her, or the smell of that barbecue place you went to last week.
Imagine if you could hear, in your head, a Beethoven symphony. Since the organ of Corti would be circumvented, you could in theory turn the volume up to infinity and listen to your favorite rock band at "11" with no regrets. But take it one step further again. Imagine you wanted to remember what last night's concert was like. Having recorded the neural signals traveling from your ear to your brain, an implant could now recreate them, and you could live out the concert again in your head. Or you could upload it directly to your friend's head (with permission, of course). Suddenly you don't have to describe 'the part where Billie Joe Armstrong screamed "I am Jesus" and lightning bolts fired off overhead as though the thunderstorm in the sky were part of the act'; instead you can literally share the visual and aural stimuli you experienced at that moment with a friend.