What I'm wondering is whether the advanced language skills of humans are one of the considerations in A.I. work in progress. If so, are they treated as a symbolization of acquired information during learning? Are there any rules for it?
Aye, it really is the complexity of our language abilities that sets us apart, with most of that complexity relating to our ability to hold a relatively large number of short-term abstractions at once. This is perfectly consistent with the best theoretical models of how this ability scales with "brain" size (in the abstract -- it doesn't really HAVE to be a brain), and some animal studies seem to bear this out as well.
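To make the scaling idea concrete, here is a minimal sketch of a generic power-law capacity model. This is purely illustrative: the functional form, the exponent `alpha`, the constant `c`, and the unit counts are all assumptions I've picked for demonstration, not values from any particular study or model.

```python
# Illustrative sketch only: a generic power-law capacity model,
# capacity(n) = c * n**alpha, where n is "brain" size (e.g., number of
# processing units) and alpha < 1 gives sub-linear scaling. Every
# constant here is made up for demonstration, not measured.

def abstraction_capacity(n_units: float, c: float = 1.0, alpha: float = 0.5) -> float:
    """Hypothetical number of short-term abstractions sustainable at once."""
    return c * n_units ** alpha

# Assumed unit counts, very roughly insect-, bird-, and human-scale:
for n in (1e6, 1e9, 1e11):
    print(f"{n:.0e} units -> ~{abstraction_capacity(n):.0f} concurrent abstractions")
```

Under these made-up constants, capacity grows far more slowly than raw size, which is the qualitative point: a modest edge in size can still buy a meaningful edge in how many abstractions can be juggled at once.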
Oy, very complex topic. General language abilities in machines are coming along nicely, but it takes really big computers to develop those capabilities to the same level of quality. With these models, your average workstation is still short on the necessary RAM by a few orders of magnitude. Memory bandwidth and latency, which improve much more slowly than core speed, are ultimately the bottlenecks for computers competing with the human brain at tasks that require relatively deep abstraction.
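Here is a back-of-envelope sketch of that claim in Python. Every number in it is an assumed round figure for illustration (synapse count, bytes per weight, workstation RAM, memory bandwidth), so treat the outputs as order-of-magnitude intuition only.

```python
# Back-of-envelope sketch (all numbers are assumptions for illustration,
# not measurements): why workstation RAM and memory bandwidth, rather
# than core speed, are the limiting factors.

synapses = 1e14            # rough popular estimate of synapses in a human brain
bytes_per_weight = 4       # assume one 32-bit weight per synapse
model_ram = synapses * bytes_per_weight       # bytes to hold the "model"
workstation_ram = 64e9                        # assumed 64 GB workstation

shortfall = model_ram / workstation_ram
print(f"need ~{model_ram / 1e12:.0f} TB, have {workstation_ram / 1e9:.0f} GB "
      f"-> short by ~{shortfall:,.0f}x (a few orders of magnitude)")

# Even with the weights in memory, if each one must be touched per step,
# throughput is capped by bandwidth no matter how fast the cores run:
bandwidth = 100e9          # assumed 100 GB/s memory bandwidth
steps_per_sec = bandwidth / model_ram
print(f"bandwidth-bound ceiling: ~{steps_per_sec:.1g} full passes/sec")
```

Plugging in different assumptions moves the constants around, but the shape of the conclusion holds: memory capacity and the rate at which you can move data through it, not arithmetic speed, set the ceiling.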