What’s driving the rise of edge computing today?
I agree with you, there is a whole new industry of computing and tech that will stem from edge.
Today edge is defined in terms of protecting an organization, agency or individual from any unwelcome entry, such as cybersecurity threats. The technology today is vast and impressive, with open standards, APIs and everything else. Thinking back to the days of early Apple technology, the ability to copy and paste from one device or platform to another was the very first aspect of compatibility, from a business perspective.
I have done a lot with edge computing, from both a performance and a protection perspective. I now see it more as a protective solution than as something designed to prevent profitability loss, the old model where I developed a system, wrote it in proprietary code, and it only ran on that one platform. The whole idea was that I could develop something on a Sun Microsystems box, or HP-UX, or something else, and emulate the same outcomes on different platforms. That really extended the edge farther and farther.
If you remember the first iPhone, it had all these cool things but you couldn't copy and paste from one application to another. It’s funny when you think about it. That's user experience, so somebody missed something.
And yet Apple invented the copy and paste.

There is a brand new, massive market opening up around edge computing, and the forerunners planting a flag there will make quite a bit of profit. Digital infrastructure is a utility; it's a capability being enabled. Right now there are 7 million data centers, with 104 gigawatts of capacity built, consuming 594 terawatt hours of energy. The prediction is another 10 gigawatts in each category, edge and core data centers, over the next three years of net new capacity. Think about the data generated by all those sensors in the 100+ billion devices as that starts to open up.
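As a rough sanity check on those capacity and consumption figures, you can compare the 594 TWh consumed against the theoretical maximum the 104 GW of built capacity could draw. This is a back-of-the-envelope sketch, assuming "capacity" means electrical power capacity running around the clock (an assumption, not something the figures above state):

```python
# Back-of-the-envelope check: 104 GW of built capacity vs. 594 TWh/year consumed.
# Assumption: "capacity" is electrical power capacity, available 24/7.
HOURS_PER_YEAR = 365 * 24  # 8760

capacity_gw = 104
consumption_twh = 594

# GW * hours = GWh; divide by 1000 to get TWh.
max_twh = capacity_gw * HOURS_PER_YEAR / 1000
utilization = consumption_twh / max_twh

print(f"theoretical max: {max_twh:.0f} TWh/yr")     # ~911 TWh/yr
print(f"implied utilization: {utilization:.0%}")    # ~65%
```

So the numbers are at least internally consistent: consumption sits around two thirds of what the built capacity could theoretically draw.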
The prediction used to be that 44 zettabytes of data would be created in 2020 but it was actually 66 zettabytes. It was also predicted that it would be 175 zettabytes by 2025; now that's become 260 zettabytes, extrapolating from the growth rate. And that's before 5G and edge deployments—before all the software engineers suddenly realize how much money can be made by doing stuff immediately for people right next to them.
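One way to reproduce the roughly 260 zettabyte figure from the numbers above: the 2020 prediction undershot reality by a factor of 66/44 = 1.5, and scaling the 175 ZB prediction for 2025 by that same factor gives about 260 ZB. This is an assumed reading of "extrapolating from the growth rate", not a method the source states explicitly:

```python
# Assumed extrapolation: scale the 2025 prediction by the factor
# by which the 2020 prediction undershot the actual figure.
predicted_2020, actual_2020 = 44, 66   # zettabytes
predicted_2025 = 175                   # zettabytes

undershoot = actual_2020 / predicted_2020   # 1.5
revised_2025 = predicted_2025 * undershoot  # 262.5, roughly 260

print(f"revised 2025 estimate: {revised_2025:.1f} ZB")
```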
These are machine-to-machine conversations happening at high speeds with lots of data. But there's no way we can afford to transmit all that data back, which means a massive amount of it will be read, processed, dropped, trended, etc., before it goes back. And they say over half of that data will get dropped.
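That read/process/drop/trend pattern can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation; the readings, the anomaly threshold, and the summary format are all hypothetical:

```python
# Minimal sketch of edge-side filtering: process a batch of sensor
# readings locally, forward only anomalies plus a trend summary, and
# drop the rest before anything goes back upstream.
from statistics import mean

def summarize_at_edge(readings, threshold=50.0):
    """Keep only anomalous readings; send a compact summary upstream."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),      # the trend
        "anomalies": anomalies,                # the only raw values forwarded
        "dropped": len(readings) - len(anomalies),
    }

# Example batch: most raw readings never leave the edge device.
batch = [12.0, 48.5, 51.2, 47.9, 90.1, 33.3]
print(summarize_at_edge(batch))
```

The point of the sketch is the shape of the traffic: six readings come in, two raw values and one small summary go out, and the rest is dropped at the edge, which is exactly the "over half gets dropped" dynamic described above.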