ChatGPT, while cool, is just the beginning; enterprise uses for generative AI are far more sophisticated.
Venture capital firms have invested over $1.7 billion in generative AI solutions over the last three years, with AI-enabled drug discovery and AI software coding receiving the most funding.
“Early foundation models like ChatGPT focus on the ability of generative AI to augment creative work, but by 2025, we expect more than 30% — up from zero today — of new drugs and materials to be systematically discovered using generative AI techniques,” says Brian Burke, Research VP for Technology Innovation at Gartner. “And that is just one of numerous industry use cases.”
Generative AI can explore many possible designs of an object to find the right or most suitable match. It not only augments and accelerates design in many fields; it also has the potential to "invent" novel designs or objects that humans might otherwise have missed.
Marketing and media are already feeling the impacts of generative AI. Gartner expects:
By 2025, 30% of outbound marketing messages from large organizations will be synthetically generated, up from less than 2% in 2022.
By 2030, a major blockbuster film will be released with 90% of the film generated by AI (from text to video), up from 0% in 2022.
Still, AI innovations are generally accelerating, creating numerous use cases for generative AI in various industries, including the following five.
No. 1: Generative AI in drug design
A 2010 study showed the average cost of taking a drug from discovery to market was about $1.8 billion, of which drug discovery costs represented about a third, and the discovery process took a whopping three to six years. Generative AI has already been used to design drugs for various uses within months, offering pharma significant opportunities to reduce both the costs and timeline of drug discovery.
No. 2: Generative AI in material science
Generative AI is impacting the automotive, aerospace, defense, medical, electronics and energy industries by composing entirely new materials targeted at specific physical properties. The process, called inverse design, defines the required properties and discovers materials likely to have them, rather than relying on serendipity to find a material that happens to possess them. The result is, for example, materials that are more conductive or have greater magnetic attraction than those currently used in energy and transportation, or materials for use cases that demand resistance to corrosion.
No. 3: Generative AI in chip design
Generative AI can use reinforcement learning (a machine learning technique) to optimize component placement in semiconductor chip design (floorplanning), reducing the product-development life cycle from weeks with human experts to hours with generative AI.
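To make the floorplanning idea concrete, here is a deliberately tiny sketch of optimization-based placement. Simulated annealing stands in for the reinforcement-learning approach described above, and the four-component netlist is invented for illustration; real floorplanners handle thousands of blocks and many more constraints.

```python
import math
import random

# Toy placement: put 4 connected components onto a 4x4 grid so that
# total Manhattan wirelength over the (invented) netlist is minimized.
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # pairs of connected components

def wirelength(pos):
    """Total Manhattan distance over all nets for a placement."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

def anneal(seed=0, steps=2000):
    rng = random.Random(seed)
    cells = [(x, y) for x in range(4) for y in range(4)]
    pos = rng.sample(cells, 4)            # random initial placement
    best, best_len = list(pos), wirelength(pos)
    temp = 2.0
    for _ in range(steps):
        i = rng.randrange(4)
        candidate = list(pos)
        # Move one component to a currently empty cell.
        candidate[i] = rng.choice([c for c in cells if c not in pos])
        delta = wirelength(candidate) - wirelength(pos)
        # Accept improvements always; accept regressions with a
        # probability that shrinks as the "temperature" cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            pos = candidate
        if wirelength(pos) < best_len:
            best, best_len = list(pos), wirelength(pos)
        temp = max(0.05, temp * 0.999)
    return best, best_len

placement, length = anneal()
```

The learning-based approach Gartner describes replaces the hand-tuned acceptance rule with a policy trained to place components directly, but the objective, minimizing wirelength under placement constraints, is the same.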
No. 4: Generative AI in synthetic data
Generative AI is one way of creating synthetic data, a class of data that is generated rather than obtained from direct observations of the real world. This helps protect the privacy of the original sources of the data used to train the model. For example, healthcare data can be artificially generated for research and analysis without revealing the identity of the patients whose medical records were used.
No. 5: Generative AI in parts design
Generative AI enables industries, including manufacturing, automotive, aerospace and defense, to design parts that are optimized to meet specific goals and constraints, such as performance, materials and manufacturing methods. For example, automakers can use generative design to innovate lighter designs, contributing to their goals of making cars more fuel efficient.
Embedding the right technologies to unleash generative AI
Most AI systems today are classifiers: they can be trained, for example, to distinguish between images of dogs and cats. Generative AI systems, by contrast, can be trained to generate an image of a dog or a cat that doesn't exist in the real world. The ability of technology to be creative is a game changer.
Generative AI enables systems to create high-value artifacts, such as video, narrative, training data and even designs and schematics.
Generative Pre-trained Transformer (GPT), for example, is the large-scale natural language technology that uses deep learning to produce human-like text. The third generation (GPT-3), which predicts the most likely next word in a sentence based on its accumulated training, can write stories, songs, poetry and even computer code, and enables ChatGPT to do your teenager's homework in seconds.
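The "predict the most likely next word" objective can be illustrated with a toy model: count which word follows which in a tiny corpus, then always emit the most frequent follower. GPT-class models learn something vastly richer from billions of documents, but the training objective is the same in spirit. The corpus here is invented.

```python
from collections import Counter, defaultdict

# Tiny invented corpus for illustration.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Chaining such predictions word by word generates text; GPT-3 does the analogous thing with a deep transformer conditioned on the entire preceding context rather than a single previous word.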
Beyond text, digital-image generators, such as DALL·E 2, Stable Diffusion and Midjourney, can generate images from text.
There are a number of AI techniques employed for generative AI, but most recently, foundation models have taken the spotlight.
Foundation models are pretrained on general data sources in a self-supervised manner and can then be adapted to solve new problems. They are based mainly on transformer architectures, a type of deep neural network that computes a numerical representation of training data.
Transformer architectures learn context and, thus, meaning, by tracking relationships in sequential data. Transformer models apply an evolving set of mathematical techniques, called attention or self-attention, to detect subtle ways even distant data elements in a series influence and depend on each other.
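The attention mechanism described above can be sketched in a few lines. This is a minimal single-head self-attention over toy 2-dimensional embeddings, computing softmax(QK^T / sqrt(d))V; to keep it short, the queries, keys and values are the inputs themselves, whereas real transformers learn separate projection matrices for each. The input vectors are invented.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Single-head self-attention with identity Q/K/V projections."""
    d = len(X[0])
    out = []
    for q in X:                              # each position attends...
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]                # ...over every position
        weights = softmax(scores)            # attention weights sum to 1
        # Output is a weighted mix of all value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # toy token embeddings
Y = self_attention(X)
```

Because every position's output mixes information from every other position, even distant elements in the sequence influence each other, which is exactly the property that lets transformers capture long-range context.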
Before you forge full-speed ahead, remember that generative AI doesn’t just present opportunities for business; the threats are real, too — including the potential for deepfakes, copyright issues and other malicious uses of generative AI technology to target your organization.
Work with security and risk management leaders to proactively mitigate the reputational, counterfeit, fraud and political risks that malicious uses of generative AI present to individuals, organizations and governments.
Also consider implementing guidance on the responsible use of generative AI through a curated list of approved vendors and services, prioritizing those that strive to provide transparency on training datasets and appropriate model usage, and/or offer their models in open source.
Brian Burke is Research VP for Technology Innovation with 25 years’ experience in technology innovation and enterprise architecture roles. His research focuses primarily on trendspotting emerging and strategic technology trends. He was the lead author of the Top Strategic Technology Trends and the Hype Cycle for Emerging Technologies. He is also the author of the 2014 book, "Gamify: How Gamification Motivates People to Do Extraordinary Things."