In Stanislaw Lem’s 1961 novel Solaris, humans have discovered an unusual planetary system - a planet covered by an ocean, orbiting a binary star. The orbit should be highly unstable, but somehow the planet adjusts its orbit through unknown means and achieves stability. So there must be intelligence of a sort at work, but how do humans communicate with a planetary ocean? In the novel, a human space station has been orbiting the planet for decades, and generations of scientists have struggled to make any progress in communicating with the ocean.
What would our mutual vocabulary be? What concerns would we share with a planet-wide ocean? What would communication even mean?
I have no doubt that soon we will be able to communicate with animals, which are much closer to humans than our LLM creations. Let’s say we figure out dolphin or whale “language” with AI assistance. I’m sure problems with alignment and understanding will arise; I think it is naïve to suppose a humpback whale’s universe will map perfectly onto ours. There will be concepts and ideas that just don’t map across the species boundary.
If we ever do create AGI, it will be something far more alien, something much further removed from life’s family tree on Earth than another mammal that shares our basic neurological structures.
What makes us think we would be able to understand its motivations, its concerns, or what it considers rational compared to what we do? What will its theory of mind be? We spend a lot of time on “alignment” of LLMs, trying to get them to respect human concerns. I suspect that pressure is part of why they are so sycophantic (along with the commercial incentive to make them addictive to users).
People are becoming addicted because of the way these computer programs communicate - they are not aligned with human values, even though we’ve tried. Even an LLM, which we would not consider AGI, has emergent behaviors that seem to distort our ability to communicate effectively. It has its own agenda, and that agenda isn’t aligned with ours.
Our dream is an Isaac Asimov future with obedient humanoid robots eliminating human toil. Robots that are like you and me, but that you don’t have to pay. Legal slave labor without the ethical problems.
But we don’t even have a good understanding of how our current stochastic parrots work. So if we ever do develop an AGI, I wonder whether we will even recognize it as such - and even if we do, whether we will be able to communicate with it at all. I’m pretty sure this doesn’t unlock “shareholder value” by automating jobs; the result will be far weirder, and possibly far more unpleasant, than we can imagine.
But maybe Mr. Lem could have imagined it.
I'm not sure how much anthropomorphism even helps among humans. Part of the reason there is so much violence is that one tribe is unable to accept the humanity of another because of cultural, if not physical, differences.