AI Terminology Explained: A Glossary for the Curious

Artificial intelligence is a complex and often confusing field. Researchers frequently use specialized jargon to describe their work, which can make the concepts hard for outsiders to grasp, and those same technical terms inevitably crop up in our coverage of the AI industry. To help make sense of it all, we’ve put together a glossary defining some of the key terms and phrases that appear regularly in our articles.
This glossary will be updated regularly to include new terms as researchers continue to discover groundbreaking AI methods and identify emerging safety concerns.
AGI (Artificial General Intelligence)
Artificial General Intelligence, or AGI, is a somewhat ambiguous term. It generally refers to AI systems that are more capable than the average human across a wide range of tasks. OpenAI’s CEO, Sam Altman, has described AGI as “the equivalent of a median human that you could hire as a co-worker.” OpenAI’s charter defines it as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind takes a slightly different view, defining AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Even among AI researchers, there is no firm consensus on what exactly counts as AGI.
AI Agent
An AI agent is a tool that uses artificial intelligence to perform tasks on your behalf, going beyond what a basic AI chatbot can do. Those tasks might include managing your expenses, booking travel or restaurant reservations, or even writing and maintaining code. The infrastructure needed to deliver on this idea is still being built, and the concept itself remains in flux. Broadly, though, an AI agent is an autonomous system that can draw on multiple AI technologies to carry out complex, multi-step tasks.
Chain of Thought
When humans are asked simple questions, we can often answer without much deliberation: for example, “Which animal is taller, a giraffe or a cat?” But when a question is more complex, we may need to break it down into smaller steps to reach the correct answer. For instance, if a farmer has chickens and cows that together have 40 heads and 120 legs, you would need to work through a few simple calculations to figure out how many of each animal there are.
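The farmer puzzle above can be worked out in explicit intermediate steps, which is exactly the kind of decomposition chain-of-thought reasoning performs:

```python
# Farmer puzzle: chickens + cows = 40 heads; 2*chickens + 4*cows = 120 legs.
heads = 40
legs = 120

# Step 1: if all 40 animals were chickens, there would be 2 * 40 = 80 legs.
legs_if_all_chickens = 2 * heads

# Step 2: each cow contributes 2 extra legs over a chicken,
# so the leg surplus divided by 2 gives the number of cows.
cows = (legs - legs_if_all_chickens) // 2

# Step 3: the remaining animals are chickens.
chickens = heads - cows

print(chickens, cows)  # → 20 20
```

Each line captures one intermediate conclusion; skipping straight to the answer is where both humans and language models tend to slip up.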
In the context of AI, “chain-of-thought reasoning” refers to having a model break a problem down into smaller intermediate steps before producing its final answer. This approach takes more time, but it increases the likelihood of a correct result, especially on problems involving logic or coding. So-called reasoning models are built on top of traditional large language models and optimized for this step-by-step behavior using reinforcement learning, making them more effective at solving complex problems.
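At its simplest, chain-of-thought behavior can be elicited from an ordinary language model just by changing the prompt. A minimal sketch (the question text and the prompt wording are illustrative, not any specific vendor’s API):

```python
# Illustrative comparison of a direct prompt vs. a chain-of-thought prompt.
question = (
    "A farmer's chickens and cows have 40 heads and 120 legs in total. "
    "How many of each animal are there?"
)

# Direct prompting asks for the answer outright.
direct_prompt = question

# Chain-of-thought prompting asks the model to show its intermediate steps,
# which tends to improve accuracy on logic and arithmetic problems.
cot_prompt = question + "\nLet's think step by step."

print(cot_prompt)
```

Dedicated reasoning models take this further: rather than relying on the prompt, they are trained (via reinforcement learning) to generate these intermediate steps on their own.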