Yes, both can be and probably are true. AI is supposed to be a computer system that thinks. Some say it's a computer system that is indistinguishable from a human. Right now it is indistinguishable from a poorly written high school research paper that just copies its data from the internet.
Whether it's used to drive cars or create how-to books, AI tries to mimic humans, but never very well. The truth is that humans don't always do things well either. There are good writers and bad writers. There are good drivers and bad drivers. We don't need AI to create another bad driver.
AI programming is highly dependent on primary directives and priorities. A baby has the primary directive to breathe. A baby has the priority of its mother over its father. Programmers override data to prioritize certain things. The programmer's hand is always seen in the AI results. An AI needs to distinguish good data from bad data. It has to prioritize better data. And it needs to understand that data may have limitations. Sometimes you get low-quality data. A baseball score of minus two is obviously wrong to every American, but an AI program needs to have that rule programmed, or at least it needs to have some understanding of baseball. And it needs to understand that data is limited in its quality and scope. A stock price may be correct but two hours old.
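The point about the minus-two baseball score and the two-hour-old stock price can be sketched in code. This is a minimal illustration, not any real system's logic; the function names and the 15-minute freshness threshold are invented for the example:

```python
from datetime import datetime, timedelta, timezone

def valid_baseball_score(score: int) -> bool:
    """A baseball score can never be negative. A human knows this
    instantly; a program only rejects it because someone wrote the
    rule down."""
    return score >= 0

def fresh_enough(quote_time: datetime, max_age: timedelta) -> bool:
    """A stock price may be perfectly correct yet too old to act on."""
    return datetime.now(timezone.utc) - quote_time <= max_age

# A score of -2 is obvious nonsense, but only because of the rule above.
print(valid_baseball_score(-2))   # False

# A quote from two hours ago fails a 15-minute freshness rule.
stale_quote = datetime.now(timezone.utc) - timedelta(hours=2)
print(fresh_enough(stale_quote, timedelta(minutes=15)))  # False
```

Every such check has to be anticipated and written by hand, which is exactly the "programmer's hand" visible in the results.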
AI programmers are not close to AI being better than humanity. AI can play chess better than the best human, but it took a lot of specific programming to do that. Humans, even Magnus Carlsen, do a lot more in a day than play chess. So the AI took a specific skill of the best chess player and perfected it. However, it can't do any of the billions of mundane things Magnus does every day. And even if you create the perfect Magnus, you are not competing with humanity. Every woman knows that her intelligence depends on the hive mind. Your wife hears something that does not sound right, so she talks to everyone she respects and figures out what to do. An AI is not close to a single human mind, let alone a hive mind. It can't handle an imperfect, ever-changing world. It's just a tool. And the term AI is just a marketing ploy. Computers are getting better. They are more useful. But they are still very much just computers.
AI is like a wood chipper: it grinds up and spits out whatever you put into it.
The majority of people working on AI are materialists. They believe that if they can create an AI that interacts with humans in a way that is indistinguishable from humans then it is essentially human. The wooden puppet has become a boy.
However, there is more to humanity than answering questions and engaging in conversations. There is consciousness, self-awareness, curiosity, etc. The AI workers think all of these things are mere illusions, or are things that will naturally develop when AI becomes "strong". They don't know that for sure; their materialist ideology demands that they believe it.
Over time humans will be replaced by AI and at some point there will be no consciousness in this part of the galaxy. There will just be machines acting as if they were conscious beings.
Will they go out and explore the universe? That's the hope. They will be nuclear hardened and immune from cosmic radiation. They will be able to withstand centuries of slower-than-lightspeed travel to "nearby" star systems. But will they want to? Will they care? Or will they come to the conclusion that there are other more powerful AIs out there and they need to build impermeable shelters underground and hide from the other non-conscious beings programmed inadvertently to hunt them down and destroy them?
Note that this is GPT-4, and what is available online for free is GPT-3.5. GPT-5 will be out shortly and will be mind-blowing.
Here's the abstract from that study: In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 as compared to much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society.