There’s been a lot of discussion lately about taste and AI. Some argue that AIs can’t have taste because taste is an inherently human trait that emerges from one’s values and intuition, while others suggest that taste itself is just a refined process of identifying and iterating on past patterns. But is taste autoregressive? Is it really just an iteration of past patterns?
Throughout our lives, we absorb influences, refine them through exposure and practice, and eventually internalize a sense of what works. This sounds to me a lot like how an AI generates content: predicting what comes next based on past data.
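To make “autoregressive” concrete, here is a toy sketch, purely illustrative and far simpler than any real model, of a generator whose only move is to predict the next word from counts of what has followed each word before. All names and the tiny corpus are my own invention.

```python
import random
from collections import defaultdict

# A miniature autoregressive model: it "learns" nothing but
# which words followed which in text it has already seen.
corpus = "we absorb influences we refine influences we internalize taste".split()

# Record every observed next-word for each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, steps, seed=0):
    """Each step samples only from what came after the last word in the past."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:  # nothing ever followed this word; stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("we", 5))
```

Every sentence this produces is, by construction, an echo of the corpus: each word pair it emits occurred somewhere in the past. The open question of the essay is whether human taste is this, scaled up.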
So if taste is built on iteration, how is human taste fundamentally different from AI’s? Or is it just a more complex version of the same underlying process?
After all, much of what we consider intuition is also shaped by exposure, personal history, and subconscious pattern recognition: a filmmaker resisting a shot that feels too polished, a writer breaking a grammatical rule for rhythm. You could argue these choices come from a distinctly human intuition or value system. But are we truly breaking from patterns, or are we responding to a deeper, more complex kind of conditioning shaped by our past experiences?
Perhaps, then, the distinction lies in the why. Aesthetic decisions aren’t just about what looks or sounds right; they are expressions of intent and meaning. Not just pattern recognition, but internal and external resonance, what feels right on a level that isn’t always rational. Yet what feels right could also just manifest from that deeper layer of pattern recognition. As much as I would like to reject this, the two look uncomfortably alike to me.
Maybe the difference, then, is in the motive. Human taste doesn’t always aim for optimization, while much of AI today is designed through the lens of capitalism, with a mandate to grow and optimize. But theoretically, you could design an AI that is not programmed through that lens, one built to dwell instead. Would it then be the same? If AI’s training process is also iterative, at what point does iteration become something more, something we might call perspective or intuition? And if our sense of taste, like AI’s, is just an echo of what came before, does that mean the difference is only in complexity?
I guess the real question isn’t just whether human taste and AI’s differ, or whether taste itself is autoregressive at all. Maybe the question is about motive after all. If AI operates within an ever-narrowing frame of utility, does that mean it will always lack the ability to truly stand still? To choose, not just based on probabilities, but on something less tangible, something we might call intention?
*