CIO News Hubb
Don’t force your IT applications into edge computing

by admin
April 26, 2022
in Operations CIO


To evaluate edge tools and applications appropriately, IT professionals must examine what distinguishes edge computing from other types of computing. Edge computing places compute resources close to the point of activity, which runs counter to the modern trend of consolidating computing power in the data center or the cloud to achieve economies of scale.

The justification for scrapping the standard concentration principle is latency. Edge computing targets real-time missions, wherein the application must be tightly coupled to real-world events and actions — that’s why it has to be close.

The most critical term in edge applications is control loop — the workflow from the point where a real-time event signals the need for an action, through the determination of the proper action to take, and back to the control elements that perform the necessary steps. This cycle synchronizes with a process, such as an assembly line or warehousing, in which any delay between event and control reaction is unacceptable. The control loop must be short and introduce as little latency as possible. Edge placement accomplishes part of that task; the rest comes from network optimization and from careful application design and tool selection.
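The control-loop structure described above can be sketched in code. This is a minimal illustration, not production edge software: `read_event`, `decide`, `actuate` and the 10 ms budget are hypothetical stand-ins for device I/O, control logic and a real latency requirement.

```python
import time

# Hypothetical interfaces; real edge code would reach sensors and
# actuators through device drivers or fieldbus protocols.
def read_event(events):
    """Pop the next sensor event from a queue (stand-in for hardware I/O)."""
    return events.pop(0)

def decide(event):
    """Map an event to a control action (stand-in for the control logic)."""
    return "stop_conveyor" if event["temp"] > 90 else "continue"

def actuate(action, log):
    """Apply the action (stand-in for writing to a control element)."""
    log.append(action)

LATENCY_BUDGET_S = 0.010  # illustrative 10 ms budget for the whole loop

def control_loop(events, log):
    """Run the event -> decision -> actuation cycle, timing each pass
    against the latency budget and counting overruns."""
    overruns = 0
    while events:
        start = time.perf_counter()
        event = read_event(events)
        action = decide(event)
        actuate(action, log)
        if time.perf_counter() - start > LATENCY_BUDGET_S:
            overruns += 1  # in production: alarm, shed load, or fail over
    return overruns
```

The point of the sketch is the timing check around each pass: in a transactional application that check would be irrelevant, but in a control loop an overrun is a mission failure, not a performance annoyance.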

Balancing latency with urgency

Traditional applications — transactional applications — are almost always constrained by human reaction time or database activity in terms of performance and response time. Edge computing’s real-time applications have no such constraints. The performance of network connections and edge applications determines response time; everything that introduces latency to the system must undergo optimization, or the application’s mission is at risk.

This is a major shift in the way developers think about applications. Modern design strategies, such as breaking large, monolithic applications into dozens of microservices and using a service mesh for message exchange, could prove fatal in a real-time application because every service hop adds latency.

The application practices and tools appropriate to edge computing reflect the fact that the edge is likely to be positioned in a rack, or room, close to the processes being controlled, rather than in a data center or in the cloud. Edge platforms are not designed for general-purpose computing, so traditional OSes and middleware are not optimal. Embedded control or real-time OS- and process-specific middleware form the basis for the common edge system. The hardware is usually specialized, so the edge is unlike either the data center or the cloud.

Holdups on edge adoption

The need for point-of-activity hosting makes it likely that a given edge location cannot be backed up by any resource located elsewhere without introducing more latency than the activity can accept. That fact significantly reduces the benefit of virtualization in any form. Instead of gaining higher availability through rapid redeployment after a failure, or scaling when load changes significantly, the edge system must rely on redundancy to improve availability and be engineered for peak loads rather than for scalability. That has a major influence on the tools and platforms used.

Containers, which facilitate both software portability and easy scaling and redeployment, are less valuable at the edge and can generate unnecessary — or prohibitive — latency. Orchestration tools like Kubernetes are also redundant where there are neither containers to deploy nor pools of resources on which to deploy them.

Are more applications moving to the edge?

The edge is, in a process sense, an island. Expect to design programs differently for edge operation, but traditional programming languages work if the platform software supports them.

Edge applications monitor the control loop and enforce a latency budget, yet they still require maintenance and management in parallel with cloud and data center applications. Container strategies might not be applicable, let alone helpful, where there are no containers or resource pools. Monitoring tools specific to edge OSes are better at managing latency in edge control loops, but tracking the latency budget across a complete transaction can require integrating data from multiple sources.
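As a sketch of that kind of data integration, the fragment below merges per-stage latency figures, as might be reported by a network probe and host agents, and checks them against an end-to-end budget. The stage names and the 10 ms figure are illustrative assumptions, not taken from any particular monitoring product.

```python
LATENCY_BUDGET_MS = 10.0  # assumed end-to-end budget for the control loop

def check_budget(stage_latencies_ms, budget_ms=LATENCY_BUDGET_MS):
    """Sum per-stage latencies gathered from different monitoring
    sources and report whether the end-to-end budget holds."""
    total = sum(stage_latencies_ms.values())
    return {
        "total_ms": total,
        "within_budget": total <= budget_ms,
        # The stage to optimize first if the budget is threatened.
        "worst_stage": max(stage_latencies_ms, key=stage_latencies_ms.get),
        "overrun_ms": max(0.0, total - budget_ms),
    }

# Example: readings merged from a network probe and two host agents.
loop = {"sensor_to_edge": 1.8, "decision": 4.1, "edge_to_actuator": 2.3}
report = check_budget(loop)  # 8.2 ms total, within a 10 ms budget
```

The useful output is not just the pass/fail flag but the worst stage, since that tells operations staff where network optimization or application tuning would buy back the most budget.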

Where enterprises want orchestration and automation across their entire IoT infrastructure flow, from edge through cloud to data center, there are versions of Kubernetes designed to support bare-metal clusters. These can combine with multi-cluster management tools, such as Anthos, or service meshes, such as Istio, to unify operation. Alternatively, enterprises could use a non-container-centric DevOps tool, such as Ansible, Chef or Puppet. However, as edge applications are likely to be less dynamic than cloud applications, it might be easier to manage them separately with the OS tools available.

The edge isn’t taking over

A wholesale shift to edge computing is far from certain. While web GUI processes are increasingly latency-sensitive, they still operate at the pace of human reaction time and are so tightly linked to the internet and public cloud services that it's unlikely most will move to the edge.

IoT applications are the primary driver of edge computing. The industrial, manufacturing and transportation applications that would justify the edge most readily make up a small part of most enterprises' application inventories. The edge is different and likely to be challenging in terms of tools and practices, but it's not so large, or growing so fast, as to disrupt IT operations. Think of edge computing as an extension of real-time activity rather than as another place to host applications.


