Machines supposedly do much of our "intelligent" work today. In everyday life we encounter machine learning on Facebook, which recognizes the portraits on our profiles and sends us clothing advertisements based on the category into which we can be filtered. The situation is no different for people who use the most sophisticated machinery to acquire information (words, pictures, and numbers) for their jobs, as a financial analyst does, or as real estate development companies do when they assess properties using, for example, demographics.
Machine learning does the empirical work we used to do, but without the advantages of human psychology. Algorithms do not filter information through hard-wired rules; instead, they rely on a large dataset, which functions something like a schema or a stereotype. As Sendhil Mullainathan and Jann Spiess observe in their article "Machine Learning: An Applied Econometric Approach," we must ask: if there are empirical rules, how can we reconcile them with what we already know about how perception functions in human intelligence?
Machine learning should do the work of prediction and parameter estimation for us, even in fields as diverse as science, finance, and risk management, where decisions take the form of trees. Most of this information is quantitative. A tree functions by splitting the data at each node, sending an observation left or right according to its variables, until it reaches a terminal node and a prediction is returned.
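That left-or-right walk down to a terminal node can be sketched in a few lines of Python. The tree below, its feature names, thresholds, and leaf values, is entirely hypothetical, chosen only to make the traversal concrete:

```python
def predict(node, x):
    """Walk the tree: at each split go left or right according to the
    observation's variables until a terminal node (a leaf) is reached,
    then return the prediction stored there."""
    while "leaf" not in node:
        if x[node["feature"]] <= node["threshold"]:
            node = node["left"]
        else:
            node = node["right"]
    return node["leaf"]

# A hypothetical two-level tree predicting a property's price
# from its size and age (all numbers invented for illustration).
tree = {
    "feature": "size", "threshold": 1500,
    "left":  {"feature": "age", "threshold": 30,
              "left": {"leaf": 250_000}, "right": {"leaf": 180_000}},
    "right": {"leaf": 400_000},
}

print(predict(tree, {"size": 1200, "age": 10}))  # follows left, then left
```

The prediction is nothing more than a lookup at the end of a sequence of yes-or-no splits, which is what makes the tree's decisions so easy to trace and, as discussed below, so easy to overfit.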
The dominant opinion is that trees fit the observations perfectly. However, a fit that seems perfect may really be an overfit, and would therefore serve terribly for prediction. The obstacle we run into is what is called out-of-sample prediction. The solution, supposedly, is regularization: for example, limiting the depth of the tree so that the function cannot grow arbitrarily complex.
Supposedly, we are dealing with a mechanism that takes in, sorts through, interprets, and analyzes data just as we humans do. Usually we think of human perception as less capable of fine-tuned objectivity than machines. Mullainathan and Spiess claim that the key to regularization, deciding how complex the function should be allowed to become, is what they call empirical tuning, a metaphor strangely reminiscent of the scientific method. The process works by holding data out of the sample: the data are randomly partitioned into folds and cross-validated.
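A minimal sketch of that empirical-tuning loop, using invented data and a deliberately simple stand-in model (averaging the k nearest training points, where a smaller k plays the role of a deeper, more complex tree), might look like this; it is an illustration of the idea, not Mullainathan and Spiess's implementation:

```python
import random
random.seed(1)

# Hypothetical data: a constant signal of 2 plus noise.
data = [(random.uniform(0, 10), 2 + random.gauss(0, 1)) for _ in range(60)]

def fit_mean_of_k_nearest(train, k):
    """Stand-in for 'a tree of a given depth': complexity grows as k shrinks."""
    def model(x):
        nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in nearest) / k
    return model

def cv_error(data, k, folds=5):
    """Randomly partition the data, hold each fold out in turn, and average
    the out-of-fold squared error -- the empirical tuning loop."""
    shuffled = data[:]
    random.shuffle(shuffled)
    size = len(shuffled) // folds
    total = 0.0
    for i in range(folds):
        held_out = shuffled[i * size:(i + 1) * size]
        train = shuffled[:i * size] + shuffled[(i + 1) * size:]
        model = fit_mean_of_k_nearest(train, k)
        total += sum((model(x) - y) ** 2 for x, y in held_out) / len(held_out)
    return total / folds

candidates = [1, 5, 25]  # hypothetical complexity settings to choose among
best = min(candidates, key=lambda k: cv_error(data, k))
print("chosen complexity:", best)
```

The level of complexity is chosen not by theory but by observing which candidate predicts the held-out folds best, which is why the authors' "empirical tuning" reads like a small-scale scientific method.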
Namely, the shortcoming of such algorithms comes down to human perception. Even while many knowledge specialists deal with spreadsheets, what they can learn from human perception can make them more aware of the nature of the data they are assessing. There are certain faculties of the human mind which machine learning, and even deep learning, can never achieve.
The basic functions of human perception rely upon logic as well. The brain operates on perception from two sides, a cognitive standpoint and an affective standpoint, and when the two are brought together they situate human knowledge in between. Inferential reason, closely connected to reflection and memory, stems from a deeply intuitive part of the brain. While some intuitions are unreliable because they are based on illogical expectations and cognitive biases, they do offer the potential for a great deal of insight through inferential reason.
In art, that aspect of intuition and inferential reason may transfer to many professions, including science and math. Education in math is no different from education in any other field: it should not allow for the blind spots of our human perception. The skills required of anyone should not be reduced to thinking like automatons.
An example of how we retrieve and exercise that innate human capacity is, similarly, through empiricism. In a way, when we are faced with a visible "function," our minds work from a physiological standpoint, beginning with the most outstanding detail. Nobody, not even the expert, looks at a work of art as a whole at first analysis. Those last touches are the emphasis of a final statement.
Comparatively, in math or science, beginning with the final statement, just as in painting, we must regress through the decision tree by means of intuitive inference. We work from what is coded toward what is not within our reach, perhaps inaccessible, but still understood. We know we can get there by thinking expansively, in an effort to complete the incompleteness of that first empirical experience, or that final piece of data, such as the overall "big picture" of a work of art.
Pablo Tinio has written about this very experience in his study of the self-reflective act of viewing, which he calls the "mirror model." An interesting point he makes is that we are constantly adapting that final "function" to a picture of the whole. Along with that pursuit, our minds also draw from an archive in our memory. The discrete facts we work with then draw back into ideated forms or concepts in our minds, and those embodied experiences are what lend us tacit knowledge. These are the pieces that reveal the intuitive element of the equation. We go deeper and deeper into the layers of this decision-making tree.
The experience of art can teach us to do the job of the decision trees that underlie such analyses even more effectively. In the analysis of discrete facts, just as in the perception of a work of art, we can learn to validate the connections technology is too prejudiced to see. When we draw upon that dimension of thought and perception from below, with a more intuitive awareness of the realities we are dealing with, we can indeed prove that we are not blind.