Analyst(s): Jonathan Care, Tricia Phillips
Online fraud detection is growing in complexity and demand, and its tools are being used for risk-based authentication and new account fraud prevention. Security and risk management leaders involved in online fraud detection should use machine-learning analytics and cloud-based deployment options.
Increased investment in digital channel fraud prevention drives more fraud to the contact center, with social engineering, sophisticated spoofing and SIM swap schemes evading many legacy strategies, such as validation of static data or reliance on automatic number identification (i.e., caller ID).
Cross-channel behavior analysis is required to identify the most-complex fraud attacks; however, most online fraud detection solutions (including those applying machine learning) are still focused on point solutions for specific channels.
Automated attacks, and the speed with which attackers can modify their techniques to avoid detection, continue to put pressure on rule-based systems. This slows detection of new attacks and increases false positives, as rule libraries expand in breadth and complexity trying to keep up with new fraudulent activity.
Security and risk management leaders responsible for fraud prevention and payment security should:
Align with cross-organizational groups (such as security, identity and access management, and credit/underwriting) to map out digital and contact center use-case technologies. This will help detect high-risk or anomalous activity and identify which solutions can support multiple use cases, enabling greater investment in new capabilities.
Quantify the revenue impact of false positives and poor customer experience due to legacy techniques and policies aimed at reducing fraudulent events. Consider an expanded ROI calculation to increase revenue opportunities, as well as reduce potential fraud losses.
Experiment with unsupervised or semisupervised machine learning for the evaluation of customer activity across multiple channels. This should include data from existing OFD tools, point solutions that detect behavior anomalies and connections between seemingly disconnected fraud attacks, and identify opportunities to reduce false positives.
By 2027, more than 67% of fraud detection and management systems across all sectors (up from fewer than 4% today) will reduce fraud losses by warning of future problematic individuals, based on the observation and measurement of current behaviors.
By 2020, more than 75% of developers and implementers of fraud detection systems using artificial intelligence (AI) will sign up to codes of ethical behavior for their systems to follow, which is an increase from fewer than 1% today.
This document was revised on 5 February 2018. For more information, see the Corrections page.
Security and risk management (SRM) leaders concerned with bringing fraud losses within organizational risk tolerances want to detect fraud occurrences in as near to real time as possible. To that end, they have adopted techniques that focus on transaction monitoring, as well as detect abuses of the consumer experience (CX) and the identification process by fraudsters.
The online fraud detection (OFD) market is composed of vendors that provide products or services that help an organization detect fraud that occurs over the web, mobile or other telephony channels (i.e., call center, interactive voice recognition [IVR]) by performing one or both of these functions:
Running background processes that are transparent to users. These use hundreds to thousands of contextual attributes and data points — e.g., geolocation, device characteristics, user behavior, navigation and transaction activity — to determine the likelihood of fraudulent users or transactions (see "Take a New Approach to Establishing and Sustaining Trust in Digital Identities"). This is done by comparing collected contextual event information to expected behavior using advanced analytics, statistical algorithms or rules that define "abnormal" behavior and activities.
Corroborating a user's identity. This is done by comparing:
Incoming identity information
Contextual attributes (as described above)
These are then reconciled against available external or internal identity information.
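The first of these functions, comparing contextual event attributes against a user's expected behavior, can be sketched as a minimal weighted profile check. This is a hypothetical illustration: the attribute names (geo, device_id, hour) and weights are assumptions for demonstration, not any vendor's actual model.

```python
# Hypothetical sketch of a background contextual check: score one event
# against a user's expected profile built from history. Attribute names
# and weights are illustrative assumptions, not a vendor's model.

def risk_score(event, profile, weights=None):
    """Return 0.0 (fully expected) .. 1.0 (highly anomalous)."""
    weights = weights or {"geo": 0.5, "device_id": 0.3, "hour": 0.2}
    score = 0.0
    for attr, w in weights.items():
        if event.get(attr) not in profile.get(attr, set()):
            score += w  # this attribute deviates from observed history
    return round(score, 2)

profile = {"geo": {"GB"}, "device_id": {"dev-1"}, "hour": {9, 10, 11}}
normal = {"geo": "GB", "device_id": "dev-1", "hour": 10}
odd = {"geo": "RU", "device_id": "dev-9", "hour": 3}
# risk_score(normal, profile) -> 0.0; risk_score(odd, profile) -> 1.0
```

Production systems evaluate hundreds to thousands of such attributes with learned, rather than hand-set, weights; the structure (profile comparison producing a likelihood score) is the same.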
OFD systems typically return alerts and results (such as scores with supporting data) to fraud operations teams. This enables the enterprise to take appropriate follow-up action:
Suspending the transaction, if the actual behavior is out of the range of what's expected, or if the user appears suspect
Conducting further manual review and investigation of the transaction and user, as warranted
Triggering automated identity proofing, user authentication and/or transaction verification to further determine the legitimacy of the user or transaction
The first and third actions above have the potential to support full automation, whereas the second remains a human-executed exception activity.
OFD applies mainly to three use cases:
Detecting account takeover — this may occur when user account credentials are stolen (e.g., via malware-based attacks) or when an unauthorized transaction is made with a stolen or fictitious identity
Detecting new account fraud when a fraudster sets up a new account
Detecting the use of a stolen financial account (e.g., a stolen credit card), when making a purchase or moving money from one account to another
In the three use cases listed above, fraud can result from:
An automated bot targeting a limited number of accounts
An automated script engaged in a massive attack against hundreds, thousands or more accounts
An individual human conducting a manual attack
A combination of human and automated scripts executing targeted or mass attacks
OFD vendors detect online fraud as transactions and interactions occur, in real time or near-real time. They provide solutions for web, mobile or telephony channels. As the sophistication of attacks continues to evolve, so too have the tools, technologies and strategies that detect and prevent fraudulent activity.
OFD products often integrate with identity proofing and substantiation (IdPS) tools as a means of increasing the trust assurance of a particular interaction or to meet new account information gathering for underwriting or regulatory requirements. In particular, the orchestration and analytics capabilities that have been a hallmark of OFD tools are experiencing creative applications in new use cases, aside from those outlined above. Many fraud hubs have integrated with IdPS tools, and some double as an "Identity Hub" for dynamic, risk-based identity proofing, substantiation and corroboration use cases. However, these use cases are excluded from this Market Guide, and will be addressed in forthcoming research.
The OFD market has continued to evolve in 2017, expanding to offer new capabilities to SRM leaders. Identification through static data and simple, rule-based assessment systems has given way to advanced analytics systems measuring transactional and interaction anomalies, as well as passive behavioral biometrics. Further up the application stack is the protection of the user journey and user interfaces, as well as the detection and prevention of abuse of business logic.
Investment in this area has increased significantly, with many new startups entering the market, and a focus on fraud analysis and detection.
Source: Gartner (February 2018)
Entity relationship graph technologies are more accurately detecting identity attacks, such as synthetic identity fraud. In addition, there is a move away from one-off fraud detection methods (for example, at enrollment) to a model of continuous risk assessment, based on transactional and nontransactional attributes of the customer interaction.
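The entity-relationship idea can be illustrated with a minimal link-analysis sketch: synthetic-identity rings often reuse an attribute (a phone number, address or national ID) across many ostensibly distinct applicants. This is a hypothetical example; the data, attribute names and cluster-size threshold are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical sketch of an entity-relationship check: flag attribute
# values linked to 3+ identities, a common synthetic-identity signal.
# Data and threshold are illustrative assumptions.

applications = [
    {"id": "A1", "phone": "555-0100", "address": "1 Elm St"},
    {"id": "A2", "phone": "555-0100", "address": "9 Oak Ave"},
    {"id": "A3", "phone": "555-0100", "address": "4 Ash Rd"},
    {"id": "A4", "phone": "555-0199", "address": "7 Fir Ln"},
]

def shared_attribute_clusters(apps, min_size=3):
    """Group identities by shared attribute values (graph edges)."""
    links = defaultdict(set)
    for app in apps:
        for attr in ("phone", "address"):
            links[(attr, app[attr])].add(app["id"])
    return {k: v for k, v in links.items() if len(v) >= min_size}

clusters = shared_attribute_clusters(applications)
# ("phone", "555-0100") links A1, A2, A3 -- a candidate synthetic ring
```

Graph databases generalize this to multihop relationships (device to account to payee), but the core signal is the same: unusual sharing of identity attributes across entities.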
As techniques become more sophisticated, SRM leaders are able to move beyond hindsight-based methods, which detect historic patterns of fraud, and attempt to prevent these patterns from reoccurring. The trend is toward methods that provide insight and action-oriented intelligence as to the risk of each customer interaction. Gartner predicts that the market will evolve to the point at which detection and prediction of incipient problematic behavior will be possible, based on identifiable precursor activities.
Gartner observes that the most successful fraud detection and prevention strategies make use of both rules and machine-learning techniques in their implementations. Traditional approaches to identifying fraud have been heavily rule-based, meaning that hard-and-fast rules for flagging a transaction as fraudulent must be established manually and in advance. However, this system isn't flexible. It inevitably results in an arms race between the seller's fraud detection system and criminals finding ways to circumvent these rules, as well as significant operational challenges to create, manage and retire rules. Outdated and overly broad rules have a detrimental impact on the CX of legitimate users, resulting in lost revenue through high numbers of false positives.
Gartner has observed the increasing need for rapid and complex risk decisions in financial institutions and enterprise-scale merchants. Competition is fierce in an increasingly fragmented and digitized economy. Customers demand immediacy and innovation in service, including new payment types and channels. As part of this new ecosystem, there are waves of new and evolving fraud. To support the rapid decision processes required, organizations are turning to machine learning to gain the ability to make rapid, effective risk decisions. However, with increased numbers of machine-learning systems with nondeterministic decision paths, clients are demanding explanations, as well as decisions. Three common reasons for this business requirement are:
To control the machine — A model that explains its logic empowers SRM leaders to adapt the model to evolving fraud patterns with more speed and accuracy. For example, if a model can explain the onset of a new fraud pattern, then SRM leaders can proactively stop a new attack.
To audit the machine — Financial institutions and large merchants operate in highly regulated environments. These organizations need to provide trails of explanations for compliance, to demonstrate that the basis for their decisions is lawful and ethical.
To trust the machine — A system is only as powerful as the decisions we entrust it to make. How can we trust that the machine is finding the delicate balance between good risk management and good CX? Can we trust the machine to such an extent that we begin to learn from it, and improve as humans?
Gartner sees two approaches emerging to achieve this. The first is to ensure that each model incorporates a capability to explain its decisions and includes a loop that provides feedback on the quality of the explanation. 1 The second is to develop two systems — one that makes decisions and another that takes the input from the first system and generates an explanation. 2
Source: Gartner (February 2018)
Unsupervised and semisupervised models are used primarily to identify anomalies (outliers). Then supervised models can be used to determine which of these anomalies are likely to be fraudulent and which are just unusual. When building a machine-learning model suite for fraud detection, identify bad activity and ensure that genuinely good transactions go through (see "Don't Treat Your Customer Like a Criminal").
An increasingly common application of these technologies involves:
Using unsupervised models to find clusters and anomalies and discover new features
Manually reviewing and labeling them
Training a supervised model using the labels
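The three steps above can be sketched end to end on a single numeric feature. This is a deliberately minimal, hypothetical example: real model suites use many features and far richer models, and the thresholds here are illustrative assumptions.

```python
import statistics

# Hypothetical end-to-end sketch of the three steps above, on one
# numeric feature (transaction amount). Thresholds are illustrative.

amounts = [20, 25, 22, 19, 24, 21, 500, 480, 23]

# 1. Unsupervised step: flag outliers using the median absolute
#    deviation (MAD), which, unlike mean/stdev, is not dragged around
#    by the outliers themselves.
med = statistics.median(amounts)
mad = statistics.median(abs(a - med) for a in amounts)
anomalies = [a for a in amounts if abs(a - med) > 10 * mad]  # [500, 480]

# 2. Manual review: an analyst labels each anomaly
#    (1 = fraud, 0 = unusual but genuine).
labels = {500: 1, 480: 0}

# 3. Supervised step: fit the simplest possible classifier -- a
#    decision threshold placed midway between the labeled classes.
fraud = [a for a in anomalies if labels[a] == 1]
benign = [a for a in anomalies if labels[a] == 0]
threshold = (min(fraud) + max(benign)) / 2  # 490.0

def classify(amount):
    return "fraud" if amount > threshold else "ok"
```

The point of the pattern is the division of labor: the unsupervised step narrows thousands of events to a reviewable handful, and the supervised step generalizes the analyst's labels to future traffic.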
Machine learning is being used at many levels in the online fraud detection market. Some solutions are designed to run alongside existing capabilities, offline or in near-real time, taking in structured and unstructured data to identify anomalies legacy tools are slow to discover. Some are designed to provide a score and information codes that can be used by a real-time policy and decision engine. Machine learning is also implemented in solutions such as device assessment, passive behavioral biometrics, bot detection, phone printing and voice biometrics.
Historically, predictive, supervised models could take three to 12 months to build, test and deploy. However, with advances in computing power, availability of data and a dramatic increase in the practical application of advanced analytics and AI approaches, it's hard to find an OFD solution that doesn't include machine learning as part of its capabilities.
One of the largest issues enterprises face is score fatigue. This is an inability to use all of the features, scores, information and risk codes returned by the multiple point solutions that may be implemented to stay ahead of fraudsters. The value of a flexible orchestration and decision platform is significant to these enterprises, because it allows them to configure, test and analyze the effectiveness of new solutions. It also enables them to determine which scores or risk indicators should be weighed heavily on their own or in combination with other solution's scores and risk scores, together with the context of individual customers and the actions they're executing.
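The orchestration idea described above can be sketched as a weighted combination of normalized point-solution scores feeding one decision. This is a hypothetical configuration: the score names, weights and action threshold are illustrative assumptions, not any vendor's setup.

```python
# Hypothetical sketch of a decision platform combining point-solution
# scores: normalize, weight per context, decide once. Names, weights
# and threshold are illustrative assumptions.

WEIGHTS = {
    "device_score": 0.2,    # device assessment solution
    "behavior_score": 0.3,  # passive behavioral biometrics solution
    "bot_score": 0.5,       # weighted heavily for login events
}

def combined_risk(vendor_scores, weights=WEIGHTS):
    """Each vendor score is assumed pre-normalized to 0..1."""
    return round(sum(weights[k] * vendor_scores[k] for k in weights), 2)

event = {"device_score": 0.1, "behavior_score": 0.2, "bot_score": 0.9}
risk = combined_risk(event)                       # 0.53
action = "step_up_auth" if risk > 0.5 else "allow"
```

Making the weights explicit and per-context (login versus payment, for example) is what lets fraud teams test and tune each point solution's influence instead of drowning in uncorrelated scores.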
Many of these fraud hubs offer integration, orchestration, decisioning and case management capabilities, but have not yet crossed into a true central analytics solution. This is because they don't ingest all of the third-party data and build models based on the specific data points, scores and risk factors returned by those orchestrated partner solutions.
There is a significant difference between fraud detection systems that directly use machine-learning systems and those that are essentially static, rule-based systems that may have used a machine-learning optimizer. Characteristics of the former type include flexibility in response to new fraud attack patterns. The latter type benefits from keeping a human element in the change control process, which makes it more resistant to skillfully crafted attacks that try to poison the model — essentially teaching it that fraud is okay.
An growing area of inquiry involves automated attacks, malware-based attacks enabling remote access Trojans (RATs) in the browser, man-in-the-middle (MitM) attacks, the injection of fraudulent advertisements and login screens, and malicious manipulations of the user interface. Bot-based credential stuffing attacks plague large and midsize enterprise financial services organizations, as well as healthcare, insurance and retail enterprises. In many organizations, the prevention of these type of attacks falls to application security teams. However, as the mitigation strategies directly affect the business and CX, the most forward-leaning fraud prevention leaders are including this as part of their end-to-end customer account and transaction protection mandates.
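A first-line defense against the credential stuffing mentioned above is a velocity check: bots typically generate many failed logins for distinct usernames from one source. The sketch below is hypothetical; the log format and failure threshold are illustrative assumptions, and real mitigations add device fingerprinting and behavioral signals.

```python
from collections import Counter

# Hypothetical sketch of a credential-stuffing velocity check: flag
# source IPs whose failed-login count over a window exceeds a
# threshold. Log format and threshold are illustrative assumptions.

failed_logins = [
    ("10.0.0.5", "alice"), ("10.0.0.5", "bob"), ("10.0.0.5", "carol"),
    ("10.0.0.5", "dave"), ("10.0.0.5", "erin"),
    ("192.168.1.9", "frank"),  # a single mistyped password
]

def suspicious_ips(events, max_failures=3):
    counts = Counter(ip for ip, _ in events)
    return {ip for ip, n in counts.items() if n > max_failures}

flagged = suspicious_ips(failed_logins)  # {"10.0.0.5"}
```

Because attackers rotate IPs, production bot mitigation layers this kind of rate signal with many others; the sketch shows only the core counting logic.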
The fraud detection and prevention marketplace is expected to grow significantly by 2022. The market is being driven by:
An increase in online transactions over mobile and web channels. Notably, there is growth in online financial services using the mobile channel as the primary point of interaction. Despite rapid growth in mobile payment applications, other channels remain important, with the web portal mirrored by the growth of API channels driven by initiatives such as the Payment Services Directive (PSD2) in Europe.
Europay, Mastercard and Visa (EMV) card payments have driven fraudsters to move away from card counterfeiting toward more-sophisticated attacks in online commerce.
Low awareness in midsize enterprises in regard to solution capabilities remains a significant challenge in the fight against fraud. Midsize enterprises are seen by fraudsters as an easy entry point into the corporate supply chain.
The Asia/Pacific (APAC) region has historically suffered from unsophisticated solutions, but with growing awareness from SRM leaders in the area, increasingly sophisticated systems are being implemented as part of the growth in maturity.
During the past year, a staggering number of identities have been exposed due to significant data breaches. Some of the more damaging breaches have included:
143 million customer records lost from Equifax
26 million exposed from the U.K. National Health Service (NHS)
1.3 billion through a breach of River City Media
200 million from the Motor Vehicles Department in Kerala, India
198 million from Deep Root Analytics (a media firm contracted by the U.S. Republican National Committee)
57 million records (both customer and driver) from Uber
Since 2013, nearly 10 billion data records have been exposed. Hence, SRM leaders must assume that they're operating with datasets that have already been compromised and tainted. Furthermore, the era of data protection, and, as a corollary, of data privacy, is behind us.
In a post-data-privacy world, approaches such as Gartner's continuous adaptive risk and trust assessment (CARTA; see "Use a CARTA Strategic Approach to Embrace Digital Business Opportunities in an Era of Advanced Threats") become increasingly important for bringing fraud risk within tolerance. CARTA calls for risk and trust to become continuously adaptive, and this is directly applicable to online fraud. We can change the architectural paradigm and replace the construct of session-based authentication with a view of customer interaction as a stream of discrete transactions, each of which can be assessed to determine whether it lies within organizational risk tolerance.
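The CARTA-style shift described above, from trusting a session once at login to assessing each transaction in the stream, can be sketched as a per-event loop. This is a hypothetical illustration: the toy scoring function, tolerance value and action names are assumptions for demonstration.

```python
# Hypothetical sketch of per-transaction continuous assessment: every
# event in the stream is scored against risk tolerance, instead of
# trusting the session once at login. Values are illustrative.

RISK_TOLERANCE = 0.6

def assess(transaction, score_fn):
    score = score_fn(transaction)
    if score <= RISK_TOLERANCE:
        return "allow"
    return "challenge"  # e.g., trigger step-up verification

# A toy scorer: risk grows with amount and drops for known payees.
def score_fn(txn):
    base = min(txn["amount"] / 1000, 1.0)
    return base * (0.5 if txn["known_payee"] else 1.0)

stream = [
    {"amount": 40, "known_payee": True},
    {"amount": 900, "known_payee": False},
]
decisions = [assess(t, score_fn) for t in stream]  # allow, challenge
```

The design point is that trust is re-earned per transaction: a session that started low-risk can still be challenged mid-stream when a single high-risk transfer appears.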
The market for OFD solutions is dynamic, and solution providers constantly add new capabilities through product enhancements, partnerships and acquisitions. In many cases, it is difficult to draw distinct boundaries between categories of solutions. This is largely because a good fraud platform or fraud analytics solution must be flexible enough to support multiple use cases and business models. The question of whether a solution is purpose-built, or merely capable of achieving an outcome through creative configuration, is often important.
It is not necessary for every organization to purchase and integrate tools that possess every capability described in the capability model for fraud detection. Many fraud platforms and fraud analytics solutions contain different capabilities. It's often more efficient to choose a fraud platform or analytics solution that addresses the primary fraud use cases and is "good enough" to meet the needs of less-common use cases. For example, an enterprise without a significant bot problem that wants to determine human versus nonhuman traffic may find its fraud platform capable of categorizing and rejecting or reviewing nonhuman traffic. However, if a significant automated attack issue touches a fraud use case (credential stuffing, card testing, etc.), adding a purpose-built bot mitigation solution to the fraud platform is likely to be justified.
Capabilities do not equal solution types. Behavioral analysis, for example, can be used in a multitude of use cases, including bot detection, transaction monitoring, new account fraud and account takeover. The algorithms and rules used to assess the event will be unique to the use case, but the technology capability is similar. For this reason, the representative vendors for this Market Guide for Online Fraud Detection are grouped into solution categories. A general description of these categories and the type of features and capabilities included follows. There is significant overlap in these categories, and, from quarter to quarter, a solution provider may enhance its product's capabilities or change its messaging emphasis from one to another:
Static, data-based identification
Rule-based risk assessment
User interface protection
Continuous risk assessment
These are typically machine-learning-based solutions supporting behavior analytics and anomaly detection. They're used largely for real-time or near-real-time decision making or batch scoring of an event, or providing scores, features and risk factors for another system to use in a decision. These platforms should be capable of analyzing activity for both online and offline activity (e.g., contact center interactions, point of sale transactions and ATM transactions). These solutions do not typically rely on a manual rules library; however, business policies can be supported. Although they can be used for investigation, they often lack complex case management and regulated event reporting suitable for the enterprise. Hence they're used as an input into a fraud operations tool, fraud hub or enterprise fraud platform. This often includes capabilities No. 4, 5 and 7 above, and can receive data from a wide variety of solution providers and data sources.
This fraud platform includes orchestration capabilities to call and receive third-party input and provides analytics for risk scoring. It also provides a rule or policy engine for decisioning and a native case management system. Predictive statistical modeling is usually included; however, more-advanced machine learning is less common. Typically, fraud hubs or platforms include capability No. 2, 5 and 7 and an orchestration capability that allows client configurations to request and receive data from third-party providers, with capabilities No. 1, 3, 4 and 5. Several solutions in this space often natively support their own endpoint and behavior biometrics, as well as native entity relationship analysis.
This category includes channel-specific fraud detection and prevention tools aimed at detecting spoofed numbers or high-risk phone numbers, analytics and negative listing capabilities for voice and audio signals and, sometimes, IVR abuse detection. These solutions sometimes support user authentication use cases as well and integrate into CRMs. Some fraud hubs and fraud analytics solutions can ingest output from these channel-specific solutions to enable cross-channel behavior analytics. They can include capabilities No. 2, 3 and 5.
These solutions evaluate and score device attributes and/or reputation, or user interactions (passive behavioral biometrics), to detect high-risk activities related to account takeover or new account fraud. They can often detect nonhuman activity, but they cannot mitigate it by redirecting or blocking the automated traffic. The focus of these solutions is primarily to identify good or low-risk users, known fraudulent devices, or devices exhibiting components that have been associated with fraud. These are core capabilities of some vendors listed as representative fraud platform or fraud analytics solutions; such vendors are mentioned in both categories if device and behavior analysis is available as a discrete service.
These solutions detect malware on a user's machine; detect and mitigate bot attacks (credential stuffing, content scraping, automated ad fraud, card testing, etc.); detect remote access events from RATs, as well as MitM, man in the browser (MitB) and other exploits on the user interface. These solutions can include capabilities No. 2, 3, 5 and 6.
The vendor list in this Market Guide is not exhaustive. This section is intended to provide more understanding of the market and its offerings.
Brighterion, a Mastercard company
Accertify, an American Express company
Cybersource, a Visa company
Easy Solutions, a Cyxtera Technologies business
Call Center Fraud Prevention
Contact Solutions, a Verint company
Endpoint and Behavior Biometrics Analysis
Kaspersky (see Note 1)
NuData Security, a Mastercard company
Automated, Remote and Malware Attack Protection
Source: Gartner (February 2018)
As solutions become more complex and make use of nondeterministic technologies, such as machine learning, a key determinant of a fraud system's effectiveness will be the breadth and richness of data supplied to the system. Therefore, Gartner recommends that a proof of concept (POC) be used to determine how a vendor system will perform, given the data available within the organization, rather than relying on traditional evaluations of system features and benefits.
Although other solutions, such as device fingerprinting and knowledge-based validation, are also present in the OFD market, the ongoing arms race against fraudsters means that these techniques are frequently circumvented. In the battle to ensure that identity assertions are valid, analytical models that make use of behavioral attributes in transactional and nontransactional datasets are increasingly used.
Gartner recommends using multiple channels when the risk of an identity assertion exceeds risk tolerance. For example, when an identity assertion on a web portal is suspect, corroborate the customer's identity via mobile push to confirm possession of a previously enrolled device. The alternative channel also provides an opportunity to capture location and other signals, to check passive behavioral biometric traits (e.g., device grip, gait, gestures and handling) that might otherwise not be available, or to exploit native or third-party active biometric modes (e.g., fingerprint, face or voice).
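The escalation policy implied above can be sketched as a ladder that maps a risk score to the least intrusive sufficient control. This is a hypothetical configuration: the method names and risk bands are illustrative assumptions, not a recommended policy.

```python
# Hypothetical sketch of a multichannel step-up ladder: map a 0..1 risk
# score to the least intrusive sufficient control. Method names and
# risk bands are illustrative assumptions.

STEP_UP_LADDER = [
    (0.3, "allow"),             # low risk: no added friction
    (0.6, "mobile_push"),       # confirm enrolled-device possession
    (0.9, "active_biometric"),  # e.g., fingerprint, face or voice
    (1.0, "manual_review"),     # highest risk: route to fraud ops
]

def step_up(risk):
    for ceiling, method in STEP_UP_LADDER:
        if risk <= ceiling:
            return method
    return "manual_review"

# step_up(0.2) -> "allow"; step_up(0.5) -> "mobile_push"
```

Ordering the ladder from least to most intrusive keeps friction proportional to risk, which is the CX argument the preceding sections make for risk-based rather than blanket controls.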
1 The Machines Learn But We Don't, by Roger Peng
2 Google's research chief questions value of "Explainable AI," Peter Norvig, Director of Research at Google
In early September 2017, the U.S. government ordered all federal agencies to remove Kaspersky Lab's software from their systems (Department of Homeland Security Binding Operational Directive 17-01). This action occurred after several media reports, citing unnamed intelligence sources, claimed that Kaspersky's software was being used by the Russian government to access sensitive information. Although the U.S. government has not given any official explanation for the ban, Kaspersky Lab vehemently denies the claims, has commenced legal action against the U.S. government, and is seeking an appeal of the ban in U.S. federal court. Gartner clients, especially those who work closely with U.S. federal agencies, should continue to monitor this situation for updates.