What is edge computing?
Edge computing, as we define it, is about placing workloads as close as feasible to the edge: where data is created and where actions are taken.
So, let's consider that for a moment: where does data come from?
We frequently conceive of data as sitting in the cloud, where we can do analytics and AI operations on it, but that's not where the data was created in the first place.
We, as humans, produce the data in our world, in the contexts in which we function, and in the places where we work.
It originates from our interactions with the devices we use while performing different jobs.
It's produced as a consequence of how we use that equipment, and it comes from the devices themselves.
So, let's delve a little deeper into this.
If we want to take advantage of the edge and put workloads there, we must first consider what data will be sent back to the cloud.
And when we talk about clouds, let's include both private and public clouds rather than distinguishing between the two, because the place where we put that data, and where we end up processing it for things like aggregate analytics and trend analysis, is still likely to be the cloud: the hybrid cloud.
Network providers are also thinking about the world of networking, the services they offer, and how they might integrate workloads into the network.
In a sense, that's how they refer to it: network providers frequently use the term edge to mean their own network.
5G allows us to communicate into work areas such as the factory floor, distribution centers, warehouses, retail stores, banks, and hotels, to mention a few. We can inject computing capacity into those environments and communicate with it using 5G networks.
There are two types of edge computing capabilities that we frequently see in these settings.
The first is what we call an edge server.
You can conceive of an edge server as a piece of IT equipment.
It could be a half rack with four or eight blades, or an industrial PC, but it's a piece of hardware designed to handle IT duties.
The other place where we can do work at the edge, in on-premises locations, is in what we call edge devices.
An edge device is intriguing because it is, first and foremost, a piece of equipment that was designed with a specific purpose in mind.
It could be an assembly machine, a turbine engine, a robot, or a vehicle.
They were designed to accomplish those functions first and foremost.
They just happen to have compute capacity on them, and in fact we've seen over the last few years that many of the pieces of equipment we had previously, which we refer to as IoT devices, have grown up, with more and more compute capability added to them.
Let's look at a car as an example.
In today's world, the average automobile has 50 CPUs.
Almost all new industrial equipment comes with computing capacity built in.
And here's the kicker: these machines are being opened up.
They frequently use Linux.
They allow us to install containerized workloads onto these devices, which means we can now do stuff that we couldn't do before.
Let's imagine you have a video camera integrated into an assembly machine that makes parts.
Maybe it's creating metal boxes of some sort, on which you could mount a camera.
You can attach an analytics workload to that camera that examines the quality of the machine's output.
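As a minimal sketch of what such a quality-inspection workload might look like, the loop below flags frames whose "defect score" exceeds a threshold. The frames and the scoring function are hypothetical stand-ins for a real camera feed and a real vision model, not part of the scenario above.

```python
# Toy quality-inspection loop that might run in a container on an
# edge device. Frames are simulated 8-pixel grayscale rows (0-255);
# defect_score is a stand-in for an actual vision model.

def defect_score(frame):
    """Fraction of 'dark' pixels, used here as a toy defect signal."""
    dark = sum(1 for px in frame if px < 50)
    return dark / len(frame)

def inspect(frames, threshold=0.2):
    """Return the indices of frames whose score exceeds the threshold."""
    return [i for i, f in enumerate(frames) if defect_score(f) > threshold]

frames = [
    [200, 210, 190, 205, 198, 202, 207, 195],  # clean part
    [30, 25, 200, 210, 40, 35, 20, 15],        # scratched part
]
print(inspect(frames))  # → [1]
```

Running the analysis next to the camera means only the verdict, not the raw video, needs to leave the machine.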
Edge servers are now quite prevalent in many of these operating environments.
Remember, these are pieces of IT equipment, so it might be a half rack on a factory floor that is now being used to model production processes or monitor for production optimization, and whether manufacturing is being conducted as efficiently and with as much yield as we desire.
The same scenario might happen at a distribution center when it comes to managing all of the conveyor belts, stackers, sorters, and other items used in a distribution center.
As a result, they are sites where work can be done.
Edge servers, on the other hand, being IT equipment, are frequently significantly larger than edge devices.
So, if we're deploying a containerized workload to one of these edge devices, we'll typically run that container on a Docker runtime, without the benefits that Kubernetes provides.
Whereas on an edge server, we not only have the capacity to run Kubernetes, but we also have the need, the need to get elastic scale, high availability, and continuous availability out of the workloads that are deployed on these edge servers, because they are, after all, used on behalf of many of these edge devices.
So, with that in mind, we can begin to consider what occurs in these situations and how we can manage them.
How can we ensure that the appropriate workloads are assigned to the appropriate locations at the appropriate times?
First and foremost, we consider what we've accomplished in the cloud.
We all know how vital it is to create workloads as containers in the cloud.
This is something we built for scaling, efficiency, and consistency, and it's something that virtually all public cloud providers, and certainly most private cloud providers, now support with Kubernetes running in the cloud.
We can take the same technique and apply it to packaging workloads and managing distribution in edge computing scenarios.
Second, because these workloads are frequently developed for use in hybrid cloud scenarios where we've already established hybrid cloud management, we can start to reuse those principles as a technique for handling container distribution to these edge locations.
However, there are a number of issues.
One of them is simply considering the volumes and numbers of devices available.
We believe that there are roughly 15 billion edge devices in the market today, expanding to about 55 billion by 2022 and 150 billion by 2025, according to some projections.
But, if that's the case, it means that every company will have to handle tens of thousands, hundreds of thousands, or even millions of devices from their central operations.
We need management solutions that can distribute workloads to these locations on a large scale without the need for human administrators to go out and allocate those workloads to individual devices.
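One way to think about that kind of at-scale distribution (a sketch under assumed names, not any particular vendor's mechanism) is policy-driven placement: each workload declares the labels a device must have, and the management layer matches workloads to every qualifying device rather than an administrator assigning them one by one.

```python
# Hypothetical label-based placement policy: workloads declare required
# device attributes, and placement is computed over the whole fleet.

def matches(device, required):
    """True if the device carries every required label with the right value."""
    return all(device.get(k) == v for k, v in required.items())

def place(workloads, devices):
    """Return {workload_name: [device_ids]} for every matching device."""
    placement = {}
    for name, required in workloads.items():
        placement[name] = [d["id"] for d in devices if matches(d, required)]
    return placement

devices = [
    {"id": "cam-001", "site": "plant-a", "camera": True},
    {"id": "plc-007", "site": "plant-a", "camera": False},
    {"id": "cam-042", "site": "plant-b", "camera": True},
]
workloads = {"visual-inspection": {"camera": True}}
print(place(workloads, devices))
# → {'visual-inspection': ['cam-001', 'cam-042']}
```

The same policy scales from three devices to a million, because no human ever names an individual device.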
We also have a diversity problem.
These devices come in a wide variety of shapes, sizes, and capabilities.
Finally, there is the question of security.
These devices on the edge reside outside the bounds of the IT data center in these contexts.
They lack the physical security, uniformity, and consistency that we look for in hybrid cloud settings when certifying security.
We must now consider how to ensure that workloads are not tampered with before being distributed to these systems.
How can we ensure that if the machine is tampered with, we will be able to notice, respond to, and rectify the situation?
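One common way to notice that kind of tampering (a sketch of a general technique, not a specific product's mechanism) is to record a cryptographic digest of the workload artifact when it is published, and refuse to run any artifact whose digest no longer matches:

```python
# Tamper detection via SHA-256 digest comparison. The "artifacts" here
# are placeholder byte strings standing in for real workload images.
import hashlib
import hmac

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, expected: str) -> bool:
    """Constant-time comparison against the digest recorded at publish time."""
    return hmac.compare_digest(digest(artifact), expected)

published = b"containerized workload image v1"
expected = digest(published)   # recorded when the workload was published
tampered = b"containerized workload image v1 + malware"

print(verify(published, expected))  # → True
print(verify(tampered, expected))   # → False
```

A failed check gives the management layer its signal to respond: quarantine the device, redeploy a known-good image, and alert operations.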
We must ensure that the data associated with these workloads is adequately safeguarded, not only because it may be transported back into the network, through the network, and into the cloud, but also because the movement itself is a point of vulnerability.
If we can move workloads to the edge and avoid having to transport sensitive data back to other locations, we greatly reduce the opportunity for others to mount attacks on that data.
So, all of these factors combined will, on the one hand, stifle the use of edge computing. On the other hand, they provide an opportunity for vendors to introduce management controls that can handle the diversity and dynamism, protect data in the right places at the right times, and, finally, create an ecosystem, which is just as important as everything else.
To summarize, it is critical that we understand that the world of edge computing is expanding.
This is going to get a lot bigger.
It will have the same influence in the enterprise computing world that mobile phones did in the consumer computing world.
If you consider the changes that have occurred as a result of mobile phones, you can expect just as much change in enterprise computing as a result of edge computing as we have seen in mobile computing over the last ten years.
As a result, this is a world that is expanding.
This is a world full of fascinating complexity, but one that, if we can solve these problems, will provide huge value to our clients.
Thank you for taking the time to read this edge computing blog.
If you enjoyed it, please like and subscribe so we can bring you more.