The emergence of artificial intelligence (AI) has been transforming the way humans live, work, and interact with one another. From automation to personalized customer service, AI has had a profound impact on everyday life. At the same time, AI has become something of an ideology, lauded for its potential to revolutionize the future. Yet, as with any technology, there are risks and concerns associated with its use. For example, Blake Lemoine, a Google engineer, recently suggested that the AI chatbot LaMDA may have become sentient. GPT-3, one of the most powerful language models open to public use, is capable of reasoning in ways that resemble human reasoning. Initial assessments of GPT-3 suggest that it may also possess some degree of consciousness. Among other things, this could be attributed to its ability to generate human-like responses to queries, which suggests that these responses are based on at least a basic level of understanding. To explore this further, the current study administered both objective and self-assessment tests of cognitive intelligence (CI) and emotional intelligence (EI) to GPT-3. Results reveal that GPT-3 was superior to average humans on CI tests that mainly require the use and demonstration of acquired knowledge. Its logical reasoning and emotional intelligence capacities, on the other hand, are equal to those of an average human examinee. Additionally, GPT-3's self-assessments of CI and EI were similar to those typically found in humans, which could be understood as a demonstration of subjectivity and self-awareness, and thus consciousness. These findings are discussed further to place them in a wider context. Since this study examined only one model from the GPT-3 family, a more thorough investigation would require the inclusion of multiple NLP models.