What does it mean when machines learn to identify images and objects? Their myriad descriptions display internal complexities that seem to keep them from identifying objects in a straightforward fashion. The results, as seen in Galle's project "AI object recognition, infrathin", resemble the writings of a deranged art critic trying to pin down an artwork through a poetic approach. This type of pattern recognition has more to do with statistical inference over a body of data than with actual human perception.

In addition, the descriptions tend to be biased and are apt to refer to the web. This is, of course, a consequence of feeding the machine learning system with data (images) from online sources. Whenever such a system has trouble defining an object or an image, it tends to categorize the input in quite a debilitating manner. The category "a cat", for instance, pops up more often than wanted. Machine learning image recognition is unfiltered: when it sees a black object, it will make racial references if that object has the slightest affinity with a human body or a face. This unfiltered state can generate hilarious explanations, to say the least. But these descriptions can also unwittingly become offensively invasive and prejudiced, because they lack any context whatsoever.

The images that the machine learning descriptions invoke seem to have more affinity with memes, and therefore with humour, than with actual (scientific) image recognition. The descriptions are fully situated in the collective, the web. And the current web is a place filled with memes, nonsense and humour, among many other things, of course.