Published: 20 July 2023
Summary
Power consumed by global information and communication technologies is becoming a major concern, and emerging generative AI applications, such as large language models, require massive compute resources. For a sustainable future, product leaders must look for high-efficiency compute solutions.
Key Findings
Processor speed improvements are limited by slower memory, creating a "memory wall" in which memory latency and bandwidth constrain system performance (see the illustrative sketch after this list).
Current semiconductor materials have reached their limits in enabling further power/performance scaling for computing solutions.
Energy consumption for new, compute-heavy software demands and emerging generative AI applications, primarily model training, will be the bottleneck for future deployment.
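As a rough illustration of the memory wall finding above, the short Python sketch below (not part of the research note; it assumes NumPy and an arbitrary matrix size) compares a compute-bound kernel, which reuses data held in cache, with a memory-bound kernel, which streams every operand from DRAM.

# Illustrative sketch of the "memory wall": a matrix multiply reuses data in
# cache and approaches the CPU's peak arithmetic throughput, while an
# element-wise add must stream all operands from main memory and is limited
# by DRAM bandwidth instead. Matrix size N is an arbitrary assumption.
import time
import numpy as np

N = 4096
a = np.random.rand(N, N)
b = np.random.rand(N, N)

# Compute-bound: roughly 2*N^3 floating-point operations with high data reuse.
t0 = time.perf_counter()
c = a @ b
t1 = time.perf_counter()
gflops_matmul = 2 * N**3 / (t1 - t0) / 1e9

# Memory-bound: only N^2 additions, but every operand is read from memory once.
t0 = time.perf_counter()
d = a + b
t1 = time.perf_counter()
gflops_add = N**2 / (t1 - t0) / 1e9

print(f"matrix multiply: {gflops_matmul:8.1f} GFLOP/s (compute-bound)")
print(f"element-wise add: {gflops_add:8.1f} GFLOP/s (memory-bandwidth-bound)")

On typical hardware the element-wise kernel achieves only a small fraction of the matrix multiply's arithmetic throughput, because it is bounded by memory bandwidth rather than processor speed; that gap is what the memory wall finding describes.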
Recommendations
Product leaders responsible for evaluating emerging technologies and trends for their next-generation technology products and devices based on microelectronics must:
Strategize product performance improvements while reducing energy consumption by ensuring long-term roadmaps evaluate and incorporate new memory-centric architecture paradigms, such as in-memory or near-memory compute.
Maximize product performance