The beautiful photographs made using Kodachrome depended on the work and artistry of engineers at Kodak who made decisions about contrast, color interpretation, saturation, white balance, and more, and coded these decisions into the chemical software of the film. All a photographer has to do is get the exposure at the right level for the scene, press the shutter, and hey presto! The hard work of the engineers goes to work for us and makes gorgeous color and contrast. The differences between films are the decisions encoded into them. Velvia? More saturation. Tri-X? No saturation.
The photons making landfall on opsin molecules in our retinas, like those reaching a Kodachrome emulsion, are just starting points. The same types of decisions (and more) are made by the visual circuits in our brains, encoded by evolution instead of by engineers. What wavelengths should yield the sensory experience we know as blue? How much contrast should be applied locally to edges, and how much to the “image” as a whole? What color temperature should be presented to us as white? What is noise in the image, and what is actually a feature of what we are observing?
The visual computations applied to the raw data coming from our rods and cones are masterful not just because they happen at all, but also because they happen so fast. So fast that we are unaware that the computations are even taking place: when we open our eyes, we just “see”.
Cut to a few years ago when the first blurred details of presbyopia, the reduced elasticity in our eyes’ lenses, were showing up for me. I was doing a fair amount of image editing at the time, restoring damaged photos and such, and I found that on days when I spent a lot of time looking at my display, I was able to read much better than on days when I did less editing. The difference was pronounced.
I wondered what might be going on and, after a little searching, found that by showing contrast patches known as “Gabor patches”, one could train one’s visual circuits to compensate for our inelastic lenses. Because vision is software, and because we have learned something about how it works, we have an opportunity to adjust our brain’s visual software to enhance the edge contrast of letters, making up for the optical fuzziness our lenses can no longer avoid. Apparently I had been performing a crude version of this training by looking at subtle contrast differences in the images I was restoring all day.
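For the curious: a Gabor patch is just a sinusoidal grating windowed by a Gaussian envelope. Here is a minimal sketch of how one can be generated; the function name and parameter choices are my own illustration, not anything specific to the training app's actual stimuli.

```python
import numpy as np

def gabor_patch(size=128, wavelength=16.0, sigma=20.0, theta=0.0, phase=0.0):
    """A sinusoidal grating (spatial frequency 1/wavelength, orientation
    theta, in radians) multiplied by a Gaussian window of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate coordinates so the stripes run at angle theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return grating * envelope  # values fade to zero away from the center

patch = gabor_patch()
```

Varying the contrast, spatial frequency, and orientation of patches like this is the standard way vision researchers probe (and, in perceptual-learning protocols, train) contrast sensitivity.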
My hunt was on: this was an ideal application for an iPhone, and I was sure that someone must be building an app to exploit this part of our nature.
Eventually, I found Glasses Off and, also eventually, they released their app. And, having used it, I am experiencing a reversal in my need for reading glasses. For all but the most challenging circumstances, like low light and small print, I can now, again, read without fumbling for and donning glasses.
It takes a while: I had to do the 15-minute training 3 or so times a week for several months to see clear results. But the results are clear. I can now read, without glasses, 90% of what I used to need glasses for, and my progress is still underway as I continue to train. I am super psyched.
Is Civilization cool or what?