Machines learn from our superficial residue
For those who think we're on the cusp of 'Artificial General Intelligence' (a purely semantic notion, impossible to define from the perspective of the natural sciences, since we lack that kind of understanding of intelligence), a gentle reminder: humans are not statistical models. (Although we may not be very good at distinguishing ourselves from such models through a computer interface.)
Also, we should keep in mind that this modeling technique works just as 'well' for non-human languages: train it on any stream of symbols with statistical regularities and it will reproduce them. So the model 'learns' nothing about humans. It just mimics the superficial residue represented by our artifacts.
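To make that concrete, here is a minimal sketch (an illustrative toy, not any particular production system): a character-bigram model trained on an arbitrary symbol pattern that no human speaks. The corpus string and function names are invented for this example. The model happily mimics the sequence's surface statistics, which shows it captures regularities in the artifact, not anything about a mind behind it.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record which character follows which: this is the entire 'model'."""
    counts = defaultdict(list)
    for a, b in zip(text, text[1:]):
        counts[a].append(b)
    return counts

def generate(model, start, n, seed=0):
    """Sample a sequence that mimics the training text's surface statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return "".join(out)

# A 'language' no human speaks: an arbitrary repeating symbol pattern.
corpus = "xqzvxqzwxqzvxqzw" * 50
model = train_bigram(corpus)
sample = generate(model, "x", 20)
print(sample)
```

Every bigram in the generated sample also occurs in the corpus: the model is a mirror of the training data's statistics, indifferent to whether those statistics came from a human language or from noise with structure.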
A model should not be confused with the reality it models. The map is not the territory. An animation of a breaking glass doesn't contain a broken glass; nor does a photo, a movie, or a simulation. We shouldn't mistake our perceptions for reality.