What factors constrain edge computing? What factors enable/empower it?

VP of Partnerships and Strategic Advisor in Software · 5 years ago

Here’s an awkward sci-fi reference: in Altered Carbon, they talk about “needlecast,” presumably a faster-than-light communication broadcast. Even that is still bounded. The amount of data to send has grown so large that even with a faster way to transmit it, you still wouldn’t have enough bandwidth. So there are always latencies in the system. The processors in mobile devices like smartwatches have become orders of magnitude better than anything we had 10 years ago in a desktop machine, a laptop, or even a mainframe for that matter. But the fact of the matter is that the problem expands to take up all available resources in that processor, no matter how capable it is. You may have a storage limitation, a battery limitation, or a bandwidth limitation on the watch.

Edge computing is a lot trickier than “where are my workloads located?” or “what’s the security of those workloads?” It’s about understanding the envelope you can operate that workload in, and at the edge that envelope is constrained by many different factors at once. That’s another argument for making it ubiquitous and spreading it out as much as possible. Nobody talks about the cross-platform, highly ubiquitous scheduler that would have to be available on all platforms and recognize power limitations, bandwidth limitations, network latency issues, storage availability, compute power, and memory availability. Nobody really talks about that except the researchers at UC Berkeley, who are doing things like InferLine. They’re actually thinking about this stuff, and they’ve built a cross-platform scheduler for edge computing. They did it as an ML scheduler so they could partition training work across different devices to speed it all up, and so they could run inference at the edge, close to the edge, and in the core, moving those workloads around. It’s a really interesting project, and it’s a couple of years old now.
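As a rough illustration of the kind of envelope-aware admission check such a scheduler would need, here is a minimal sketch in Python. Every name and number in it is invented for illustration; this is not InferLine’s API or any real scheduler’s, just the shape of the decision the comment describes.

```python
from dataclasses import dataclass

# Hypothetical sketch of an envelope-aware placement scheduler.
# All names, fields, and numbers are made up for illustration.

@dataclass
class Node:
    name: str
    cpu_cores: float        # available compute
    memory_gb: float        # available RAM
    storage_gb: float       # available local storage
    bandwidth_mbps: float   # usable uplink bandwidth
    latency_ms: float       # round-trip latency to the data source
    battery_pct: float | None = None  # None for mains-powered nodes

@dataclass
class Workload:
    cpu_cores: float
    memory_gb: float
    storage_gb: float
    bandwidth_mbps: float
    max_latency_ms: float
    min_battery_pct: float = 20.0  # don't drain battery-powered devices

def fits(node: Node, w: Workload) -> bool:
    """Check the whole operating envelope, not just CPU."""
    if node.battery_pct is not None and node.battery_pct < w.min_battery_pct:
        return False
    return (node.cpu_cores >= w.cpu_cores
            and node.memory_gb >= w.memory_gb
            and node.storage_gb >= w.storage_gb
            and node.bandwidth_mbps >= w.bandwidth_mbps
            and node.latency_ms <= w.max_latency_ms)

def place(nodes: list[Node], w: Workload) -> Node | None:
    """Among feasible nodes, prefer the lowest-latency placement."""
    feasible = [n for n in nodes if fits(n, w)]
    return min(feasible, key=lambda n: n.latency_ms, default=None)

nodes = [
    Node("smartwatch", 0.5, 1, 4, 2, 1, battery_pct=35),
    Node("edge-gateway", 4, 8, 256, 100, 5),
    Node("core-dc", 64, 512, 10_000, 10_000, 40),
]
w = Workload(cpu_cores=2, memory_gb=4, storage_gb=10,
             bandwidth_mbps=50, max_latency_ms=20)
print(place(nodes, w))  # -> the edge gateway: the watch is too small,
                        #    the core data center is too far away
```

The point of the sketch is that no single axis decides placement: the watch loses on compute and battery, the core loses on latency, and only a node inside the full envelope qualifies.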

COO · 5 years ago

The two things I believe in are economics and physics, and I think those two will drive a huge amount of edge computing work, completely changing the balance of power simply through where all the different compute, storage, and networking pieces live. Physics is the thing that keeps you from sending data to a central data center and back fast enough.

no title · 5 years ago

It's interesting that you mention physics, because the maximum speed of a signal inside a carrier medium, be it copper or fiber, is, I think, something like 132 miles in a millisecond with no extra hardware in the mix. That means even with edge computing, or distributed computing of any sort, you're always going to have workloads that really require low latency because of interactions between programs or systems inside a data center. My favorite example is Los Alamos National Laboratory, where their folks say something like, “we simulate big explosions so we don't have to make them.” Think about the amount of computing power that's actually needed to do that, and then about the latency involved, and I immediately think of the Cray X1, where the circuitry was laid out with down-to-the-millimeter measurements.
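To put that speed limit in numbers, here is a quick back-of-the-envelope calculation. The constants are standard physics approximations I'm supplying, not figures from the comment; signal speed in fiber is typically about two-thirds of c, roughly 123 miles per millisecond, so the quoted "something like 132 miles in a millisecond" is in the right ballpark.

```python
# Hard physical floor on round-trip latency over fiber, ignoring
# every switch, router, and queue along the way.

C_MILES_PER_MS = 186.3   # speed of light in vacuum, miles per millisecond
FIBER_FRACTION = 0.66    # typical velocity factor for optical fiber
FIBER_MILES_PER_MS = C_MILES_PER_MS * FIBER_FRACTION  # ~123 miles/ms

def min_round_trip_ms(one_way_miles: float) -> float:
    """Best-case round-trip time dictated by propagation alone."""
    return 2 * one_way_miles / FIBER_MILES_PER_MS

# A user 500 miles from a central data center can never see better than:
print(f"{min_round_trip_ms(500):.1f} ms")  # ~8.1 ms round trip, best case
```

Real networks add serialization, queuing, and routing overhead on top of that floor, which is exactly why tightly coupled workloads end up inside one data center and latency-sensitive ones get pushed to the edge.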