My current focus is on representing 3-D scenes on 2-D surfaces. Many answers to this challenge have appeared over the centuries, since the first lenticular paintings of the 16th century. Most of them have tried to solve it by sending two different images to the eyes. My hypothesis is that if I can find out how the brain combines those two images, I can reproduce that transformation in software, producing just one image, and represent it on a 2-D surface! Is this possible? I am working on it, with fairly good results.
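As a toy illustration of the kind of two-image-to-one transformation involved (this is the classic computer-vision starting point, not my own method), here is a minimal Python sketch that estimates how far each point shifts between the left and right views (the disparity) by simple block matching; all names and parameters here are my own assumptions for the example:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=6):
    """Estimate per-pixel horizontal disparity between a stereo pair
    using sum-of-absolute-differences (SAD) block matching.

    A toy sketch: real stereo fusion in the brain (and in production
    stereo software) is far more sophisticated than this.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # Cost of matching this left patch against right patches
            # shifted by each candidate disparity d.
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic stereo pair: random texture, right view shifted 3 px.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)  # right[y, x] == left[y, x + 3]

d = block_match_disparity(left, right)
print(int(d[10, 20]))  # recovers the known 3-px shift at an interior pixel
```

The recovered disparity map is exactly the extra information the two eyes provide; any single 2-D image that is to stand in for the pair has to encode it somehow.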