Artificial intelligence (AI) applies advanced analysis and logic-based techniques to interpret events, and support and automate decisions and actions. Use this guide to understand key AI terms.
Gartner defines artificial intelligence (AI) as applying advanced analysis and logic-based techniques, including machine learning (ML), to interpret events, support and automate decisions, and take actions. This definition is consistent with the current and emerging state of AI technologies and capabilities, and it acknowledges that AI now generally involves probabilistic analysis (combining probability and logic to assign a value to uncertainty).
Other organizations and individuals may use different definitions. There is no single, universally accepted definition of artificial intelligence, given the wide range of ways in which AI can support, augment and automate human activities, and learn and act independently.
To capture the opportunity of AI as an organization, however, you will need a rigorous AI strategy — for which you need to articulate and agree on a generally accepted definition focused on what you want AI to accomplish.
Leave room for differences of opinion, but make sure that business, IT, and data and analytics leaders don’t fundamentally disagree about what AI means to the organization, or you will be unable to design a strategy that captures the benefits.
Note that AI technology vendors are also likely to have their own definitions of the term. Ask them to explain how their offerings meet your expectations for how AI will deliver value.
Large language models (LLMs) are text-oriented generative artificial intelligences, and they have been in mainstream headlines since OpenAI’s ChatGPT hit the market in November 2022.
LLMs are trained on large volumes of text, typically billions of words, that are simulated or taken from public or private data collections. This enables them to interpret textual inputs and generate human-like textual outputs. LLMs already help search engines understand a question and formulate an answer.
Breakthroughs in the LLM field have the potential to drastically change the way organizations conduct business, including enabling the automation of tasks previously done by humans, from generating code to answering questions.
Machine learning is a critical technique that enables AI to solve problems. Despite common misperceptions (and misnomers in popular culture), machines do not learn. They store and compute — admittedly in increasingly complex ways. Machine learning solves business problems by using statistical models to extract knowledge and patterns from data.
Machine learning is a purely analytical discipline. It applies mathematical models to data to extract knowledge and find patterns that humans would likely miss. ML also recommends actions, but it does not direct systems to take action without human intervention.
More specifically, machine learning creates an algorithm or statistical formula (referred to as a “model”) that converts a series of data points into a single result. ML algorithms “learn” through “training,” in which they identify patterns and correlations in data and use them to provide new insights and predictions without being explicitly programmed to do so. That said, machine learning is at the core of many successful AI applications, fueling its enormous traction in the market.
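To make the idea of a trained “model” concrete, here is a minimal sketch, assuming ordinary least squares on a one-feature toy dataset. The `fit_line` helper and the data are illustrative, not from any particular library.

```python
# Minimal sketch: "training" a one-feature linear model by least squares.
# The fit_line helper and toy data are illustrative assumptions.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy "training data": the relationship here is exactly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # -> 2.0 0.0 (within floating-point error)

# The "model" is now the pair (slope, intercept): a statistical formula
# that converts a new data point into a single result.
predict = lambda x: slope * x + intercept
print(predict(5.0))  # -> 10.0
```

The point of the sketch is that nothing here “learns” in a human sense: training is computing parameters from data, and the resulting model is just a formula applied to new inputs.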
Deep learning (DL), a variant of machine learning algorithms, uses multiple layers to solve problems by extracting knowledge from raw data and transforming it at every level. These layers incrementally obtain higher-level features from the raw data, allowing the solution of more complex problems with higher accuracy and less manual tuning.
Organizations often treat ML and DL as the only AI disciplines and ignore other AI approaches. As a result, they may unnecessarily halt AI initiatives, or fail to start them at all, when ML-only solutions don’t work.
Current machine learning solutions usually need a large volume of well-labeled data, which makes this approach harder for companies with smaller datasets, poor data quality or budget constraints.
Using ML, including deep learning, to make predictions enables an AI-driven process to automate the selection of the most favorable result, which eliminates the need for a human decision maker.
The majority of use cases in AI today rely on robust and mature techniques that fall into two main categories:
Probabilistic reasoning. These techniques (often generalized as machine learning) extract value from the large amounts of data gathered by enterprises. They aim to unveil unknown knowledge held within that data (or its dimensions) by discovering interesting correlations linked to a particular goal or label. For example, a machine learning technique may sift through a large number of customer records, identify certain factors and unveil how those factors are correlated, allowing the organization to anticipate which customers are potential churners.
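The churn example above can be sketched in a few lines: measure how strongly one customer factor correlates with a churn label. The data, the factor (support calls) and the `pearson` helper are all illustrative assumptions.

```python
# Minimal sketch of the churn idea: Pearson correlation between one
# customer factor and a churn label. All data here is made up.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One factor per customer (support calls last quarter) and whether
# that customer churned (1) or stayed (0).
support_calls = [0, 1, 1, 2, 5, 6, 7, 8]
churned       = [0, 0, 0, 0, 1, 1, 1, 1]

r = pearson(support_calls, churned)
print(round(r, 2))  # a value near +1 flags support calls as a churn signal
```

A real system would combine many such factors in a statistical model rather than inspecting one correlation, but the underlying operation is the same: finding relationships between observed data and a goal or label.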
Computational logic. Often referred to as rule-based systems, these techniques use and extend the implicit and explicit know-how of the organization. These techniques are aimed at capturing known knowledge in a structured manner, often in the form of rules. Business people can manipulate these rules, but the technology guarantees the coherence of the rule set. (That is, the technology makes sure that rules do not contradict each other or lead to circular reasoning — which is not that obvious when you are dealing with tens of thousands of rules.) A new series of compliance laws has brought rule-based approaches to the forefront.
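A rule-based system of the kind described above can be sketched as conditions over facts. The rule names and the compliance scenario below are illustrative assumptions, not a real rule engine.

```python
# Minimal rule-based sketch: rules are (condition, conclusion) pairs
# evaluated over a dict of facts. The thresholds, country codes and
# conclusion names are illustrative assumptions.

rules = [
    (lambda f: f["amount"] > 10_000,             "requires_manual_review"),
    (lambda f: f["country"] not in {"US", "EU"}, "requires_sanctions_check"),
    (lambda f: f["amount"] <= 10_000,            "auto_approve_eligible"),
]

def evaluate(facts):
    """Fire every rule whose condition holds and collect the conclusions."""
    return sorted({conclusion for cond, conclusion in rules if cond(facts)})

print(evaluate({"amount": 25_000, "country": "SG"}))
# -> ['requires_manual_review', 'requires_sanctions_check']
```

Production rule engines add what this sketch omits: checking that tens of thousands of rules stay coherent, neither contradicting each other nor forming circular chains.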
The key emerging techniques, in descending order of maturity, are:
Natural language processing (NLP). NLP provides intuitive forms of communication between humans and systems. NLP includes computational linguistic techniques (symbolic and subsymbolic) aimed at recognizing, parsing, interpreting, automatically tagging, translating and generating (or summarizing) natural languages.
Knowledge representation. Capabilities such as knowledge graphs or semantic networks aim to facilitate and accelerate access to and analysis of data networks and graphs. Through their representations of knowledge, these mechanisms tend to be more intuitive for specific types of problems. Adoption of knowledge graph techniques has accelerated quickly over the last three years.
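The graph structures described above can be sketched as subject-relation-object triples with a traversal query. The entities and the single `is_a` relation are illustrative assumptions; real knowledge graphs hold many relation types and millions of triples.

```python
# Minimal knowledge-graph sketch: facts as (subject, relation, object)
# triples plus a transitive query. Entities here are illustrative.

triples = [
    ("ChatGPT", "is_a", "LLM"),
    ("LLM", "is_a", "generative_AI"),
    ("generative_AI", "is_a", "AI"),
]

def ancestors(entity):
    """Follow is_a edges transitively from an entity."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for s, r, o in triples if r == "is_a" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

print(sorted(ancestors("ChatGPT")))  # -> ['AI', 'LLM', 'generative_AI']
```

Queries like this, walking relationships rather than joining tables, are what makes graph representations intuitive for connected-data problems.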
Agent-based computing. This is the least mature of the established AI techniques, but it is quickly gaining in popularity. Software agents are persistent, autonomous, goal-oriented programs that act on behalf of users or other programs. Chatbots, for example, are increasingly popular agents.
Task automation agents are among the main classes of agent applications commonly used with existing solutions today. They can be generic (e.g., meeting scheduling assistants in email systems) or more specific (e.g., contract validation softbots for sales automation applications).
The following are among the key terms about AI technologies and techniques that business leaders may need to know:
Adaptive AI allows model behavior to change after deployment by learning behavioral patterns from past human and machine experience, and from runtime environments, to adapt more quickly to changing real-world circumstances.
Advanced virtual assistants (AVAs), sometimes called conversational AI agents, process human inputs to execute tasks, deliver predictions and offer decisions. AVAs are powered by a combination of more advanced user interfaces, natural language processing and deep learning techniques enabling decision support and personalization, as well as contextual and domain-specific knowledge.
Artificial general intelligence (AGI) is an anticipated future of AI where it has the capacity to understand or learn any intellectual task that a person can do.
Augmented artificial intelligence is a trend that is also referred to as “intelligent X” and points to systems where AI techniques provide additional and untapped functionality.
ChatGPT is an OpenAI service that incorporates a conversational chatbot with an LLM to create content. It was trained on a foundation model of billions of words from multiple sources and then fine-tuned with reinforcement learning from human feedback.
Composite AI refers to the combined application of different AI techniques to improve learning efficiency. It allows organizations to broaden the level of knowledge representations and, ultimately, to solve a wider range of business problems in a more efficient manner.
Computer vision (CV) is a process that can capture, process and analyze real-world images to allow machines to extract meaningful, contextual information from the physical world. CV techniques have technology and infrastructure requirements that differ from traditional ML approaches.
Edge AI refers to the use of AI techniques embedded in Internet of Things (IoT) endpoints, gateways and edge servers, in applications ranging from autonomous vehicles to streaming analytics. It offers the potential to deliver differentiated use cases for digital business.
Generative AI (GenAI) learns about artifacts from data and generates innovative new creations that are similar to but don’t repeat the original. Generative AI has the potential to create new forms of creative content, such as video, and accelerate R&D cycles in fields ranging from medicine to product development. GenAI is taking off as a general-purpose technology that has the potential to drastically alter society through its impact on existing economic and social structures.
Foundation models are large machine learning models that are trained on a broad set of unlabeled data and then adapted to a wide range of applications with fine-tuning.
The IoT comprises the network of physical objects (things) that contain embedded technology to sense or interact with their internal workings and the external environment. This doesn’t include general-purpose devices, such as smartphones. Examples of IoT in action range from smart plugs to driverless vehicles. The IoT relies on a wide range of IT endpoints and gateways to function and data to drive the AI, especially for real-time responses (e.g., for autonomous vehicles).
Natural language technologies (NLT) are systems that analyze emotions and/or personality within text-based communications or surveys to create emotional scoring tools, leveraging technologies and techniques such as natural language processing, text analytics, convolutional neural networks and recurrent neural networks.
Predictive analytics is a form of advanced analytics that examines data or content to answer the question, “What is likely to happen?” It is characterized by techniques such as regression analysis, multivariate statistics, pattern matching, predictive modeling and forecasting.
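One of the forecasting techniques named above can be sketched in its simplest form: predicting the next value as a trailing moving average. The window size and sales figures are illustrative assumptions.

```python
# Minimal forecasting sketch: predict the next value as the mean of
# the last `window` observations. The data here is made up.

def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_sales = [100, 120, 110, 130, 150]
print(moving_average_forecast(monthly_sales))  # -> 130.0
```

Real predictive analytics layers on trend, seasonality and multiple variables, but every method answers the same question: given what happened, what is likely to happen next?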
The latest developments in generative AI, including ChatGPT, have suddenly propelled interest in AI — not just as a technology or business tool but as a general-purpose technology. AI is making an impact on society comparable to the advent of the internet, printing press or even electricity. It’s on the verge of reshaping society as a whole.
Among Gartner strategic planning assumptions for AI are that:
By 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in terms of adoption, business goals and user acceptance.
By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers by at least 25% in the number of AI models they operationalize and the time it takes to do so.
By 2027, at least two vendors that provide AI risk management functionality will be acquired by enterprise risk management vendors providing broader functionality.
By 2027, at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation.