Bidirectional brain-machine interfaces, generative artificial intelligence (AI) and DNA computing are a few examples of the technologies highlighted on the Gartner Hype Cycle for Emerging Technologies, 2020. Although each of these may sound like a plotline from the latest Hollywood blockbuster, Gartner experts expect these emerging technologies and their corresponding trends to have a transformational impact on business in the next five to 10 years.
Kasey Panetta, Gartner Senior Content Marketing Manager, interviews Gartner experts to talk through the process of forming the Emerging Technologies Hype Cycle and related technologies.
This interview was conducted during a two-part podcast series. Both podcast episodes are available below; the transcript that follows has been edited for clarity and length.
Episode 1 (15 mins):
Listen to podcast: Gartner Hype Cycle for Emerging Technologies, 2020: Part 1
- Brian Burke, Research VP, on the Hype Cycle (00:50)
- Yefim Natis, Distinguished VP Analyst, on composable enterprises (6:53)
- Avivah Litan, Distinguished VP Analyst, on authenticated provenance (9:00)
Episode 2 (30 mins):
Listen to podcast: Gartner Hype Cycle for Emerging Technologies, 2020: Part 2
The Emerging Technologies Hype Cycle Explained – Brian Burke
What is the Gartner Emerging Technologies Hype Cycle, and what makes it different from other Hype Cycles?
The Hype Cycle for Emerging Technologies is unique among Gartner Hype Cycles because we really look at all of the technologies on all of the Hype Cycles. So that's 1,700 technology profiles. And then we distill that down into a set of 30 or so technology profiles that we believe will be most impactful for organizations over the next five to 10 years.
Read more: 6 Trends on the Gartner Hype Cycle for the Digital Workplace, 2020
And how do you get from 1,700 technologies down to a list of 30?
It takes a couple of months, but we start by looking at all the technology profiles that we're creating and we create a shortlist of technologies that we believe will be the most impactful. We go from about 1,700 to about 150, and then we have a broader group of analysts who actually vote on those technology profiles. The top 30 are selected during the voting process.
We also have an algorithm that's applied to the scoring, which basically considers whether a technology is new to all Hype Cycles. If so, that technology will get a few points extra. If the technology existed on any of the previous year’s Hype Cycles, it loses some points.
This is to combat the fact that, in the past, some technologies hung around on the Hype Cycle for years and years. Smart dust, for example, was a perennial favorite on the Hype Cycle for six years. This approach ensures a fresher view, which is especially important given that we have limited real estate.
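The novelty adjustment Burke describes can be pictured as a simple scoring rule. The point values below are invented for illustration only; Gartner's actual weights and voting mechanics are not public.

```python
# Hypothetical sketch of the novelty adjustment: a technology new to
# all Hype Cycles gets a small bonus, while one that appeared on a
# previous year's Hype Cycle loses some points. The +5/-5 weights
# and the candidate names/votes are invented for illustration.

NEW_BONUS = 5
RETURNING_PENALTY = 5

def adjusted_score(votes: int, appeared_last_year: bool) -> int:
    """Apply the novelty bonus or the returning-technology penalty."""
    if appeared_last_year:
        return votes - RETURNING_PENALTY
    return votes + NEW_BONUS

# A returning technology can outpoll a newcomer in raw votes yet
# still rank below it after the adjustment.
candidates = [("DNA computing", 80, False), ("smart dust", 82, True)]
ranked = sorted(candidates,
                key=lambda c: adjusted_score(c[1], c[2]),
                reverse=True)
print([name for name, *_ in ranked])  # ['DNA computing', 'smart dust']
```

Here "smart dust" polls 82 raw votes but drops to 77 after the returning penalty, while "DNA computing" rises from 80 to 85, illustrating how the adjustment keeps the list fresh.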
What are this year's trends?
- Composite architectures
- Algorithmic trust
- Beyond silicon
- Formative AI
- Digital me
Composite architectures: Composable enterprises – Yefim Natis
What are composite architectures and why do they matter?
A composite architecture is made up of packaged business capabilities, built on a flexible data fabric. Basically this enables an enterprise to respond really rapidly to changing business needs.
The ultimate benefit of composable thinking, composable architecture and composable enterprise technology is that the organization unifies its resources. Composable enterprises bring business expertise and technology expertise together to reengineer decision making, and to shift the policies and structures of their organizations from a focus on stability to a focus on agility and continuous change.
And why is this technology featured on the Hype Cycle?
Every organization today is seeking greater resilience, greater responsiveness to change, greater ability to integrate, and greater involvement of business and of IT together in making strategic, technology and business decisions. Composable enterprise promises to significantly improve each one of these capabilities of a modern enterprise. So it's no surprise that composable enterprise generates a lot of interest, a lot of hype, promise and investment from vendors and, increasingly, from users as well.
Algorithmic trust: Authenticated provenance – Avivah Litan
What is authenticated provenance?
Authenticated provenance is part of algorithmic trust. Basically what it does is authenticate the origin of something. Algorithmic trust applies to the whole life cycle. Authenticated provenance asks how do you know something is real and valid when it is created? You can use many different methods to authenticate provenance.
One method is humans. You can have regulators go and look at the wheat field and say, ‘Yes, this is definitely organic wheat’, but that doesn't scale very well. The second way is to use AI models and have one that distinguishes organic wheat from nonorganic wheat by looking at the different composition and biology or DNA of the wheat itself.
The third way you can tell that something is authentic is through certifying at the point of origin, using some technique that's relevant for that domain. So let's take a pharmaceutical, a drug that's manufactured in a plant. As soon as it's signed off by the QA process in the factory, that data is locked in, and now you have a record of that pharmaceutical drug provenance that you can track until the time someone takes the drug.
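One way to picture that "locked in" QA sign-off is a hash-chained record, where each event embeds a hash of the previous one so any later tampering with the origin record is detectable. This is a generic sketch of the idea, not the specific technique Litan describes; all field names and values are invented.

```python
import hashlib
import json

# Minimal hash-chained provenance log: each record stores the hash of
# the record before it, so altering any earlier event (e.g. the QA
# sign-off at the factory) breaks the chain. Real systems would add
# digital signatures, timestamps and a trusted anchoring mechanism.

def record_hash(record: dict) -> str:
    """Canonical SHA-256 hash of a record."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def append_event(chain: list, event: dict) -> None:
    """Append an event, linking it to the previous record's hash."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"event": event, "prev": prev})

def verify(chain: list) -> bool:
    """Check that every link still matches its predecessor's hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != record_hash(chain[i - 1]):
            return False
    return True

chain = []
append_event(chain, {"step": "QA sign-off", "batch": "LOT-42"})
append_event(chain, {"step": "shipped to distributor"})
print(verify(chain))                    # True
chain[0]["event"]["batch"] = "LOT-99"   # tamper with the origin record
print(verify(chain))                    # False
```

Once the origin record is altered, its hash no longer matches the link stored in the next record, so the tampering is caught anywhere downstream in the drug's journey.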
This feels really relevant to the current state of the world. Is that why it's featured this year?
The reason this technology is featured now is because it's so needed in our digital world. You can't trust anything anymore. And I know that sounds very extreme, but it's actually true. There's so much ability to insert fakes and counterfeits into processes, whether it's manufacturing or content, that we need to be able to trust the source and trust the provenance. There's also a bigger demand from consumers to know that things are trustworthy, so the need for an authenticated provenance is stronger today than it's ever been in our history.
Beyond silicon: DNA computing and storage – Nick Heudecker
What is DNA computing, and how does it work?
DNA computing plays into the beyond silicon trend because it introduces a brand-new computing substrate instead of using silicon. It uses molecules, and the reactions between those molecules, to not just store data but also give you a new way to process it.
Storing data in DNA sounds hopelessly complex, but the technologies are well-established and understood. First, the digital content is compressed and mapped to the four nucleotides in DNA (adenine, thymine, guanine and cytosine, or “ATGC”). Because there are four nucleotides, each nucleotide can represent two digital bits. These nucleotide codes are used to create matching synthetic DNA, which is then replicated and stored in DNA strands. Those strands are then “amplified,” or copied millions of times, to make reading the data easier when material is extracted from its storage container.
When the data needs to be read, the opposite process occurs. The DNA strands are prepared and sequenced back into nucleotide codes, which are then converted back into digital content.
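The bits-to-bases mapping described above can be sketched in a few lines. This is a toy round trip only, assuming the simple pairing A=00, C=01, G=10, T=11; real DNA storage pipelines also compress the data, add error-correcting codes and avoid problematic sequences such as long homopolymer runs.

```python
# Toy illustration of the encode/decode mapping: each nucleotide
# carries two bits (A=00, C=01, G=10, T=11), so one byte becomes
# four bases. Real pipelines layer compression and error correction
# on top of this core conversion.

NUC = "ACGT"  # index 0..3 maps a two-bit value to one base

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides, high-order bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(NUC[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | NUC.index(base)
        out.append(byte)
    return bytes(out)

strand = encode(b"Hi")
print(strand)          # CAGACGGC
print(decode(strand))  # b'Hi'
```

Because each base holds two bits, the encoded strand is always four times the byte length of the input, which is why density estimates for DNA storage start from two bits per nucleotide before overhead.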
With digital data represented as DNA, the next step is introducing a processing mechanism to create a full DNA computing environment. While it is still a highly experimental domain in DNA computing, enzymatic processing is gaining prominence.
Enzymatic processing uses enzymes, which are proteins that act as catalysts, to perform a logical operation on a collection of DNA. This mechanism is inspired by how DNA is replicated and error-checked in organisms. Custom-designed enzymes can take the form of “logic gates” that process data and create new DNA strands as output, which can then be read by a DNA sequencer. Recent experiments have used enzymatic processing to perform machine learning over data represented as DNA.
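As a loose software analogy, a DNA logic gate can be pictured as a rule that produces an output strand only when all of its input strands are present in the pool. The model below is an invented abstraction for illustration; it does not simulate real enzyme kinetics or strand chemistry.

```python
# Toy pool-of-strands model of DNA logic gates. A "gate" here is just
# a rule: if all of its input strands are present in the pool, it adds
# its output strand. Cascading gates mimics how output strands from
# one reaction become inputs to the next. Purely illustrative.

def apply_gates(pool, gates):
    """Fire gates repeatedly until no new strands appear."""
    pool = set(pool)
    changed = True
    while changed:
        changed = False
        for inputs, output in gates:
            if set(inputs) <= pool and output not in pool:
                pool.add(output)
                changed = True
    return pool

# AND gate: strand "Z" appears only if both "X" and "Y" are present,
# and a downstream "reporter" strand signals the result for readout.
gates = [(("X", "Y"), "Z"),
         (("Z",), "REPORT")]

print("REPORT" in apply_gates({"X", "Y"}, gates))  # True
print("REPORT" in apply_gates({"X"}, gates))       # False
```

The final "REPORT" strand stands in for the sequencer readout step the passage mentions: after the reactions run, the presence or absence of particular output strands encodes the computation's result.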
From a resiliency and storage density perspective, nothing beats DNA. Properly stored, DNA can last for at least 500 years. And a gram of DNA can store over 200PB of data. Another advantage of DNA is it's never going to go out of style. We are made from it. Unlike other technologies that might be fads or become incredibly difficult to maintain, DNA is pretty straightforward. And the technologies that synthesize it and the technologies that sequence it are well-understood and falling in price every day, making it much more approachable.
How might this be used today?
You might see DNA computing in any industry that has a massive amount of data. A good example is CERN with the Large Hadron Collider. They collect petabytes of data every year. Storing that on magnetic tape is incredibly expensive. It takes a lot of room, and they can only store it for about 10 years before they have to move it to fresh tape. Other use cases include storing national archives, scientific endeavors that produce large amounts of data like astronomy, or industries like oil and gas.
But that's only half the story — you also have to be able to process that data. And this is one of the real advantages of DNA computing. You can have millions of copies of a given dataset, and you can replicate it very cheaply. Once you have that data represented millions of times, you can introduce enzymes into that pool of DNA strands, and using enzymatic reactions, it will do whatever kind of computing you might want to do. Viable DNA processing is several years away, but the possibilities are fascinating.
Where is this technology in terms of market adoption?
DNA computing is at a very early stage. We've seen some early investments from large and small technology vendors. A lot of research is happening at universities, but it is very early. I think we'll see DNA storage as a viable option within three to five years, likely in a cloud infrastructure scenario. And then DNA computing will take longer to develop. I predict that's going to happen within eight to 10 years.
Formative AI: Generative AI – Svetlana Sicular
What is generative AI?
Generative AI is not a single technology, but a variety of machine learning methods that learn a representation of artifacts from data and use it to generate brand-new, completely original, realistic artifacts. Those artifacts preserve a likeness to the training data, but they don't repeat it. Generative AI can produce novel content such as images, video, music, speech, text and even materials, and all of these can be produced in combination. It can improve or alter existing content, and it can create new data elements or data itself.
What are the downsides of generative AI?
Generative AI has gained a partly negative reputation because of deepfakes. If AI can generate a face, text or video, it could be used to compromise someone for political or blackmail purposes. We've already seen the first case of a generated voice being used to embezzle money: a CEO's voice was generated and used to request the quick transfer of a large sum of money. But we cannot ignore the pluses, such as generative technology being used to predict how some conditions, like arthritis, will develop over the next three years.