Living On the Edge: The Next Step in the Evolution of Cloud

    We often see a pendulum motion in the world of IT. In the 1950s, mainframes became available to large corporations and universities. These mainframes were enormous in every sense of the word. Not only were they physically huge, but they were unfathomably expensive. Multiple users would therefore access the mainframe via ‘dumb terminals’. These terminals had no compute capabilities and acted solely as stations to facilitate access to the mainframes. You could argue that this was in fact Cloud 1.0.

    As Moore’s Law steamed ahead and chips became smaller and more powerful, the pendulum swung and dumb terminals were replaced with personal computers.

    Then, as high-speed Internet became commonplace and virtualisation technology advanced, we saw the pendulum swing once again; this time the heavy lifting was done in the cloud and the web browser became the dumb terminal. Users could suddenly access vast, on-demand compute resources from virtually any device with an Internet connection.

    And now, we are starting to see the pendulum swing once again. This time however, we don’t need intelligence brought to our dumb terminals; we need intelligence brought to our dumb ‘things’.

    As we enter the age of the Internet of Things, cloud computing has become the glue that binds this exponentially increasing world of devices together; but in a world of machine-to-machine communications, the cloud model is starting to show some cracks.
     

    What is edge computing?

    Edge computing is still a relatively new concept. Sometimes referred to as fog computing or cloudlets, depending on which vendor you talk to, edge computing is a distributed model that effectively pushes the compute function to the edge of the network. Rather than pumping all of the data back up to the cloud for analysis, the computing is handled much closer to the devices generating and responding to the data. This computing might be done on routers, switches or even the ‘things’ themselves.

    A growing number of enterprise-class gateway devices now have edge computing capabilities as standard, and there are a number of middleware platforms emerging, allowing organisations to take advantage of edge computing.
     

    What’s wrong with the cloud?

    In the world of IoT, decision windows are shrinking and actions need to be near instantaneous. Unfortunately, the cloud computing paradigm was not designed with real-time in mind.

    There are two overriding factors that dictate the viability of many IoT applications: speed (latency) and capacity (bandwidth).

    Simply put, bandwidth is the maximum amount of data that can be transferred through the network in a given period of time; throughput is the amount actually achieved in practice.

    Bandwidth has, of course, drastically increased over the years, but it hasn’t followed the same exponential trends we have seen with Moore’s Law and it remains a bottleneck in the cloud technology stack.

    Edge computing addresses this bottleneck by reducing the amount of data pushed back to the core network. The reality is that most of the data being generated by IoT isn’t needed – it’s just noise. Conducting analytics at the edge of the network allows enterprises to effectively filter that noise, sending only relevant data back to the cloud.
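
    As a rough illustration, the sketch below shows what this filtering might look like on an edge gateway, written in Python. The temperature threshold, batch size and the publish_to_cloud() stub are hypothetical placeholders rather than part of any particular platform.

```python
# Minimal sketch of edge-side filtering: read raw sensor samples locally,
# forward only a compact summary plus any readings that cross an alert
# threshold. The threshold and publish_to_cloud() are illustrative placeholders.

from statistics import mean

ALERT_THRESHOLD_C = 75.0   # hypothetical temperature limit
BATCH_SIZE = 60            # one reading per second -> one summary per minute


def publish_to_cloud(payload: dict) -> None:
    """Stand-in for an MQTT/HTTPS call to the central cloud platform."""
    print("-> cloud:", payload)


def process_batch(readings: list[float], sensor_id: str) -> None:
    """Summarise a batch locally; only summaries and anomalies leave the edge."""
    publish_to_cloud({
        "sensor": sensor_id,
        "avg_c": round(mean(readings), 2),
        "max_c": max(readings),
    })

    # Raw samples are forwarded only when something looks wrong.
    anomalies = [r for r in readings if r > ALERT_THRESHOLD_C]
    if anomalies:
        publish_to_cloud({"sensor": sensor_id, "alert": True, "samples": anomalies})


if __name__ == "__main__":
    # 60 simulated one-second readings from one sensor
    fake_readings = [21.0 + (i % 5) * 0.1 for i in range(BATCH_SIZE)]
    process_batch(fake_readings, sensor_id="line-3-temp")
```

    In this pattern the full sensor stream never leaves the site; the cloud receives a compact summary per batch and, occasionally, the handful of samples that triggered an alert.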

    In the context of a network, latency is the delay between sending a request and receiving a response; it is usually measured as a round-trip time.

    Signals propagate through fibre optic cable at roughly 5 microseconds per kilometre. Since latency is measured as a round trip, that works out to around 10 microseconds per kilometre. Add real-world routing and switching overhead on top, and the round-trip latency between California and the Netherlands comes to about 150 milliseconds.
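
    To put that figure in context, here is a back-of-the-envelope propagation calculation. The ~8,900 km great-circle distance between California and the Netherlands is an assumption for illustration; real routes are longer and add switching and queuing delays on top.

```python
# Back-of-the-envelope check on the California-Netherlands figure.
# Assumes ~8,900 km great-circle distance and ~5 microseconds per km in fibre.

PROPAGATION_US_PER_KM = 5     # one-way delay in fibre, microseconds per km
DISTANCE_KM = 8_900           # assumed great-circle distance, California to NL

one_way_ms = DISTANCE_KM * PROPAGATION_US_PER_KM / 1_000
round_trip_ms = 2 * one_way_ms

print(f"Theoretical one-way propagation: {one_way_ms:.0f} ms")
print(f"Theoretical round trip:          {round_trip_ms:.0f} ms")
# ~44 ms one way, ~89 ms round trip, before any routing, queuing or
# processing, which is how the observed figure climbs to roughly 150 ms.
```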

    Amazon has found that a 100-millisecond delay can lead to a 1% decline in sales, but generally speaking, a small amount of latency isn’t an issue for human-facing applications.

    However, as we move to a world of machine-to-machine communications, these levels of latency become more of an issue. An autonomous car, for example, needs to make instantaneous decisions based on real-time variables. The data can’t be uploaded to a cloud in another country – analytics must happen in real time, in a local environment.
     

    Security and privacy

    Many organisations struggle to fully leverage the power of cloud computing because of regulatory constraints. With GDPR on the near horizon and data sovereignty becoming a key concern for many businesses, public cloud can present a number of challenges.

    Edge computing can sidestep many of these regulatory hurdles by storing and processing sensitive data locally and transferring only metadata to the cloud.
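
    As a simple illustration of this pattern, the sketch below keeps the sensitive record in local storage and forwards only a pseudonymous identifier and non-sensitive fields. The field names, the local store and the send_to_cloud() stub are hypothetical placeholders.

```python
# Illustrative sketch: keep the sensitive record on local (edge) storage and
# send only non-identifying metadata upstream. Field names, the local store
# and send_to_cloud() are hypothetical placeholders.

import hashlib
import json
from datetime import datetime, timezone

local_store = []   # stands in for an on-premises database or encrypted disk


def send_to_cloud(metadata: dict) -> None:
    """Stand-in for the call that ships metadata to the central platform."""
    print("-> cloud:", json.dumps(metadata))


def handle_record(record: dict) -> None:
    # The sensitive payload never leaves the site.
    local_store.append(record)

    # Only derived, pseudonymous metadata is sent upstream.
    send_to_cloud({
        "record_id": hashlib.sha256(record["patient_id"].encode()).hexdigest()[:16],
        "site": record["site"],
        "category": record["category"],
        "received_at": datetime.now(timezone.utc).isoformat(),
    })


if __name__ == "__main__":
    handle_record({
        "patient_id": "NL-102-4477",
        "site": "utrecht-clinic-2",
        "category": "blood-pressure",
        "reading": {"systolic": 128, "diastolic": 82},
    })
```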

    With sensitive data being stored at the edge, there is also less chance of it being intercepted in transit. Edge computing can even be implemented as a security measure in its own right. An IEEE paper has suggested that ‘decoy data’ could be stored at the edge of the network, preventing an attacker from being able to distinguish legitimate sensitive information from false data.
     

    Living on the edge

    When you think about it, edge computing is a common-sense step in the evolution of IoT and cloud. By processing data closer to the source, the limitations of bandwidth, issues of latency and concerns over data security can all be addressed.

    Find out more about preparing for GDPR