Carmen, a seasoned product manager at a large enterprise, is struggling to manage the increasing complexity and range of data being captured by endpoint devices. She has considered equipping her organization's products with AI edge capabilities, but faces an overwhelming number of choices about how to do so. Whatever direction she chooses, she knows that achieving business success will require careful planning and well-managed execution.
Edge AI, or AI at the edge, is the use of AI techniques embedded in Internet of Things (IoT) endpoints, gateways and other devices that process data at the point of use.
“AI-enabled edge devices bring a raft of new business opportunities for product managers developing physical devices, software applications or services,” says Alan Priestley, VP Analyst, Gartner.
By 2023, more than 50% of enterprise-generated data will be created and processed outside the data center or cloud, up from less than 10% in 2019. “This means product managers must evaluate the topology of the planned solution and leverage the appropriate technologies, such as semiconductors or cloud services, to place the AI functionality in the optimal location to meet their design goals,” says Priestley.
Address these 5 questions when planning an AI-enabled product portfolio
Gartner outlines five critical questions that must be addressed when planning an AI-enabled product portfolio. Underpinning all of these decisions are privacy, security and longevity: foundational requirements that are often forgotten but carry profound implications if they fail.
- What type and volume of data will be captured and analyzed? The type and volume of data to be captured and analyzed, along with the need for low latency or real-time analysis, will have a major impact on where the AI processing must take place. Assess data volume and storage requirements by factoring in decision latency and communications availability. Reduce data transfer and communications requirements by leveraging local analytics.
- What is the maturity of the proposed AI technology? Product managers must understand the maturity of the AI technology they are proposing to use. A key consideration is whether a prebuilt or pretrained AI model exists that can accelerate product development; leverage such models whenever possible.
- Where should the AI algorithms be executed? For many product managers, the choice will be between deploying AI at the endpoint, in an edge computer, in a data center or via the cloud. The decision of where to locate AI functionality will be driven by the analytics and processing resources required, along with the available communications bandwidth for data transfer.
- What is required to execute the proposed AI algorithms? As part of developing an AI model, product managers must determine the technology needed to deploy the model for use. Evaluate the best deployment technology by carefully assessing the impact of reusing the technology used for AI model development and training versus adopting alternative custom-designed AI chips.
- What are the options for executing AI algorithms at the edge or IoT endpoint? When analyzing new data, AI-based algorithms can be executed across a wide range of technology, such as edge computers (central processing units, graphics processing units) and IoT endpoints (microcontroller units, application processors). Form factor and power budgets will constrain the type of processing chips that can be used, so ensure the proper support and adaptability are in place before making the final decision.
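The placement trade-offs above (decision latency, data volume versus communications bandwidth, and whether the model can run on the device at all) can be sketched as a simple decision heuristic. This is an illustrative example only: the `WorkloadProfile` fields, the function name and the numeric thresholds are assumptions for the sketch, not Gartner guidance.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Hypothetical workload characteristics used to reason about AI placement."""
    max_latency_ms: float       # decision latency the use case can tolerate
    data_rate_mbps: float       # raw data generated at the endpoint
    uplink_mbps: float          # available communications bandwidth
    model_fits_on_device: bool  # can a (possibly compressed) model run on the endpoint?

def suggest_placement(w: WorkloadProfile) -> str:
    """Illustrative heuristic for where AI processing might take place.
    Thresholds (e.g. 50 ms) are assumed for the sketch."""
    if w.max_latency_ms < 50 and w.model_fits_on_device:
        return "endpoint"   # real-time analysis, no network round trip
    if w.data_rate_mbps > w.uplink_mbps:
        return "edge"       # local analytics reduce data transfer requirements
    return "cloud"          # ample bandwidth and relaxed latency

# Example: a vision sensor producing more data than its uplink can carry
profile = WorkloadProfile(max_latency_ms=200, data_rate_mbps=100,
                          uplink_mbps=10, model_fits_on_device=False)
print(suggest_placement(profile))  # "edge"
```

In practice each branch hides further evaluation (chip selection, model maturity, privacy and security constraints), but the sketch captures why the same product may land in different topologies depending on its data profile.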