Accelerating applications with edge caching

Introduction

Quick responses are essential to rich user experiences. In fact, research by Portent suggests that simply bringing page load times under one second can double a business's revenue. On the flip side, even a few milliseconds of delay can have devastating consequences: any interruption or delay in communication with an autonomous vehicle's IoT sensors could result in an accident. So how can we deliver the minimal response times that users today have come to expect, whether for streaming a media-rich web application or running a complex banking application?

In this paper, we look at one possible solution to the problem of slow application response times: caching. The first section takes a detailed look at caching: how it makes applications respond faster, its common use cases, and its benefits. Next, we cover the different kinds of caches, from basic private caches to distributed caches. However, traditional caching cannot always guarantee quick load times, especially when the client and the cache are thousands of miles apart. This is where edge caching shines. The third section of this paper deals with edge caching and how it can deliver the reliability and performance your business needs, no matter where your users are located.

Power of caching

All application processes, from industrial robots following a fixed set of instructions to streaming movies and data-heavy content on an OTT platform, can be broadly divided into three stages:

Stage 1: Receive client request
Stage 2: Process client request 
Stage 3: Return server response

While this entire process is quite complex, the second stage is where the bulk of compute resources are consumed. The application server fetches additional data from the database and performs multiple computations to prepare a response. Database performance, server computation time, and the time taken for the response to travel back to the client all affect the application response time. Now imagine thousands of users making requests to the application simultaneously. Pretty soon, your database becomes a bottleneck and response times climb. So, how can you solve this problem and improve the user experience? With caching.

In very simple terms, a cache is a hardware or software component that stores data temporarily. A cache acts as an intermediary between your application's servers and its visitors, usually storing commonly requested responses and other resources. When a client makes a request, the cache can return a pre-stored full or partial response instead of the server processing the request from scratch. Rather than communicating with multiple data sources, the application server can rely on the cache for frequently used information. Caching thus not only speeds up your page load times but also frees your application servers to handle more requests.
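This look-up-then-store flow is often called the cache-aside pattern. A minimal sketch in Python, where the dict and `fetch_from_database` are stand-ins for a real cache server and a slow origin data store:

```python
# Minimal cache-aside sketch: the dict stands in for a real cache server,
# and fetch_from_database simulates a slow origin lookup.
cache = {}

def fetch_from_database(key):
    # Placeholder for an expensive database query.
    return f"value-for-{key}"

def get(key):
    if key in cache:                      # cache hit: skip the database
        return cache[key]
    value = fetch_from_database(key)      # cache miss: go to the origin
    cache[key] = value                    # store a copy for later requests
    return value

print(get("user:42"))  # miss: hits the database
print(get("user:42"))  # hit: served from the cache
```

The second call never touches the database, which is exactly the saving the cache provides.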

Types of cache data

Application developers need to store different kinds of data in the cache during a user's session. Some of the common use cases for a cache are:

Templates and semi-rendered responses

Most website UIs create responses by inserting new data into pre-existing HTML templates. Even though the data is dynamic, the static parts can be cached to avoid rebuilding them on every request.

Session data

When building responsive apps, it's vital to cache user session data, such as user name, location and other identifying information. If you persist your session data on one backend server, only that server can process the subsequent user request. On the other hand, if the session data is cached, any of your backend servers can process the subsequent request without losing the user state.
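As a sketch of this idea, the snippet below uses a plain dict, `session_store`, as a stand-in for a shared cache (such as a Redis instance) that every backend server can reach; the server names are illustrative:

```python
# session_store stands in for a shared session cache reachable by
# every backend server (e.g. a Redis instance in production).
session_store = {}

def save_session(session_id, data):
    session_store[session_id] = data

def handle_request(session_id, server_name):
    # Any server can recover the user's state from the shared cache,
    # so no request is pinned to a particular backend.
    session = session_store.get(session_id, {})
    return f"{server_name} greets {session.get('user', 'guest')}"

save_session("abc123", {"user": "Ana", "locale": "es-ES"})
print(handle_request("abc123", "server-1"))
print(handle_request("abc123", "server-2"))  # different server, same state
```

Because the state lives in the cache rather than on any one server, a load balancer is free to route each request to whichever backend is least busy.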

Localization and internationalization data

Web applications provide localized variants to accommodate global audiences. For example, a hello message may read as Hola for Spanish visitors. This information is stored externally and can be cached for efficiency.

API responses

Modern applications use APIs as communication channels between individual components. Caching API responses improves the app's performance by sparing the server repeated API calls.
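API responses are usually cached with an expiry so stale data is eventually refreshed. A minimal time-to-live (TTL) sketch, where `fetch_api` is a hypothetical stand-in for a real HTTP call:

```python
import time

# TTL-based API response cache sketch. fetch_api is a hypothetical
# stand-in for a real HTTP request to another component.
_api_cache = {}          # endpoint -> (expiry_timestamp, response)
TTL_SECONDS = 60

def fetch_api(endpoint):
    return {"endpoint": endpoint, "data": "fresh"}

def cached_api_call(endpoint):
    now = time.time()
    entry = _api_cache.get(endpoint)
    if entry and entry[0] > now:          # unexpired entry: reuse it
        return entry[1]
    response = fetch_api(endpoint)        # expired or missing: call the API
    _api_cache[endpoint] = (now + TTL_SECONDS, response)
    return response
```

Choosing the TTL is a trade-off: a longer TTL saves more API calls, while a shorter one keeps the cached data fresher.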

Configuration settings data

Web applications use configuration data, typically stored on disk, to make bootstrapping and runtime decisions. Disk access can be slow, so this data can instead be cached in memory to reduce latency, thereby boosting website performance.
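A common shape for this is to read the configuration file once and serve every later lookup from memory. A sketch, using a throwaway JSON file and an illustrative `dark_mode` flag:

```python
import json
import os
import tempfile

_config_cache = None

def load_config(path):
    """Read configuration from disk once, then serve it from memory."""
    global _config_cache
    if _config_cache is None:          # first call: slow disk read
        with open(path) as f:
            _config_cache = json.load(f)
    return _config_cache               # later calls: in-memory copy

# Demo with a throwaway config file (the flag name is illustrative).
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump({"feature_flags": {"dark_mode": True}}, f)

print(load_config(path)["feature_flags"]["dark_mode"])
```

In a real application the cache would also need an invalidation path (for example, reloading on a signal) so configuration changes are picked up without a restart.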

Reusable results

Some apps, like video streaming websites, produce resource-intensive results that can be reused. For instance, trailer clips on OTT platforms can be cached, since the same trailer is shown to every user.

App objects

Any reusable app-generated object can be stored in cache memory. A user profile is a good example of a regularly stored object that derives data from several sources. To generate a profile view, the server might pull data like images, metadata and geolocation from different databases. Caching these different sources of information can significantly reduce processing time.

Benefits of caching

Caching can provide multiple benefits like:

Increases business revenue

By reducing page load times to less than a second, you can reduce bounce rates and increase the time users spend on your website. Caching can increase conversions by as much as 20-25%, thereby boosting profitability. Moreover, reduced network lag can mean fewer support calls and customer complaints, saving on customer service expenditure.

Reduces bandwidth costs

Every incoming website request consumes network bandwidth. Caching can save you bandwidth costs by reducing the amount of data an origin server needs to send to a client.

Handles usage spikes

Too many visitors accessing your website at once can make it crash. You can avoid such disastrous business scenarios by using a cache server to share the workload. This frees your application servers to handle more requests, enabling you to manage seasonal spikes without a drop in quality of service.

Enhances website security

Distributed Denial of Service (DDoS) attacks attempt to take down applications by sending large amounts of fake traffic to a website. Caching handles such traffic surges by distributing the load between several intermediate servers and reducing the impact on the application server. This gives you some time to plan a response and shut down the attack.

Enriches user experience

Caching increases your website responsiveness by reducing page load times. Users have a seamless site experience and access critical functionality as if the app code resides on their local machine. Errors due to network lag are avoided and users enjoy real-time updates. 

Now that you’ve understood why your applications should use a cache, let’s proceed to the next question. Where does a cache store data? Does it store data on your local machine? Or does it utilize the application server itself? Or somewhere else altogether? In the next section, we talk about the different kinds of caches to answer these questions.

Traditional caches

Web applications cache data in two major ways: 

  1. Client-side caching
  2. Server-side caching

Client-side caching

Client-side caching is implemented at the browser level. Browsers like Chrome, Safari, and Mozilla Firefox create cache folders on every device on which they are installed. Website developers can mark pages or elements within a page as suitable for caching. After the first visit, the browser stores these pages and elements in its cache folder; on subsequent visits, the data is loaded directly from the user's device.
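In practice, marking a resource as cacheable is done with HTTP response headers such as `Cache-Control`. A small server-side sketch (the header values are illustrative, not prescriptive):

```python
# Sketch: a server marks a response as cacheable so the browser will
# keep it in its local cache folder. Header values are illustrative.
def build_response(body, cacheable):
    headers = {"Content-Type": "text/html"}
    if cacheable:
        # The browser may reuse this response for up to one day.
        headers["Cache-Control"] = "public, max-age=86400"
    else:
        # Sensitive or highly dynamic content: never cache it.
        headers["Cache-Control"] = "no-store"
    return headers, body

headers, _ = build_response("<html>...</html>", cacheable=True)
print(headers["Cache-Control"])
```

Static assets (logos, stylesheets, scripts) typically get long `max-age` values, while personalized pages are sent with `no-store`.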

While client-side caching delivers the fastest page load times, it has limited practical application. Since most modern websites are dynamic, with regularly updated content, only a small percentage of data is typically stored in the browser cache. The user's device configuration and capacity also limit the size of the cache. Website developers have limited control over client-side caching because, in most cases, they cannot influence the devices and browser configurations of their visitors.

Server-side caching

Server-side caching is an umbrella term for the strategies web application developers use to build caching systems on their end. They use different technologies to create cache stores directly on the backend server, or on a separate server the backend can communicate with. Once the backend server receives data from another data source, it stores a copy in a server-side cache. On subsequent uses, it fetches the data directly from the cache instead of from multiple data sources.

Similarly, the server also stores commonly requested responses in its cache. Instead of processing the same data again for every request, the server sends the cached response directly to the client. Server-side caching includes several types of caches that reduce the time taken to access backend database servers, build responses, and deliver them to clients. Some common types of server-side caching are described below.

Private or local cache

A private cache lives within the memory of an application process and stores frequently accessed information from the database. Because the cache is tightly coupled with the application process itself, data can be accessed with very low latency. Developers typically use embedded libraries and programming constructs to create a private cache.
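One such programming construct in Python is `functools.lru_cache`, which keeps results in the application's own memory. A sketch, with a counter standing in for a slow database query:

```python
from functools import lru_cache

# A private (in-process) cache: lru_cache keeps results inside the
# application's own memory, capped at maxsize entries.
call_count = 0

@lru_cache(maxsize=1024)
def lookup_product(product_id):
    global call_count
    call_count += 1               # stands in for a slow database query
    return {"id": product_id, "name": f"Product {product_id}"}

lookup_product(7)
lookup_product(7)                 # second call is served from the cache
print(call_count)                 # the "database" was hit only once
```

The `maxsize` bound is what the section above warns about: every cached entry competes with the application itself for the process's RAM.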

While a private cache offers fast data access, it has two disadvantages. Firstly, the cache uses the same RAM as the application process, so the two are always competing for server resources. Because the cache holds only temporary data and RAM on the server is limited, the application process is prioritized for memory access. Moreover, the frequent deallocation and reallocation of cache memory adds to the strain on server resources. Secondly, private caches are expensive to scale: as the application begins to handle more traffic, every instance of the application has to be provisioned with additional RAM and compute resources for caching.

Remote or side cache

A remote cache trades a little speed for dedicated resources to handle client requests. It can live either on the application server but outside the application process, or on a separate server altogether. If you place the remote cache on the application server, you retain most of the access speed of a private cache, but you avoid memory contention since the cache is no longer tied to the application process. Also, when multiple application instances run on the same server, each instance can share the same remote cache, reducing the total server resources required. If you choose to place the remote cache on a separate server, it still won't significantly impact your page load times, because the time it takes for your application to communicate with the remote cache is much shorter than the time it takes for the server to process a request.
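The key property is that the cache is a separate process with its own interface, shared by several application instances. In the sketch below, `CacheServer` is a stand-in for a real cache process such as a Redis or Memcached instance, and the instance names are illustrative:

```python
# CacheServer stands in for a cache process (e.g. Redis or Memcached)
# running outside the application process.
class CacheServer:
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

# Several application instances share one cache server instead of
# each holding a private in-process copy.
shared_cache = CacheServer()

def app_instance(name, key):
    value = shared_cache.get(key)
    if value is None:
        value = f"computed-by-{name}"   # simulate expensive work
        shared_cache.set(key, value)
    return value

print(app_instance("app-1", "report:q3"))  # computes and stores
print(app_instance("app-2", "report:q3"))  # reuses app-1's result
```

Note how the second instance benefits from work the first one already did, which a private per-process cache cannot offer.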

Distributed cache

While a remote cache solves the memory contention problem of a private cache, it is still challenging to scale: the resources of your cache server limit how far you can grow. This is where a distributed cache can help, pooling the RAM of several servers to provide larger capacity and greater processing power. It grows beyond the resource limits of a single server and can serve multiple application servers at a time. A distributed cache is especially useful for applications with high data volume and load. There are several other advantages of using a distributed cache system:

  • Because the cache behaves as one logical store rather than several independent caches, cache data duplication is reduced and external data requests drop to a minimum.
  • Application servers can remain truly stateless by keeping state data in the cache between requests.
  • A distributed cache is resilient to server restarts and app deployments.
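The pooling works by routing each key to one of several cache nodes. The sketch below uses simple modular hashing for clarity; production systems (the node names here are illustrative) usually prefer consistent hashing so that adding or removing a node remaps only a small fraction of keys:

```python
import hashlib

# Sketch: a distributed cache routes each key to one of several cache
# nodes, pooling their memory into a single logical cache.
nodes = {"cache-a": {}, "cache-b": {}, "cache-c": {}}

def node_for(key):
    digest = hashlib.md5(key.encode()).hexdigest()
    names = sorted(nodes)
    return names[int(digest, 16) % len(names)]   # simple hash routing

def put(key, value):
    nodes[node_for(key)][key] = value

def get(key):
    return nodes[node_for(key)].get(key)

put("user:1", "Ana")
put("user:2", "Ben")
print(get("user:1"), "is stored on", node_for("user:1"))
```

Every application server uses the same `node_for` function, so they all agree on which node holds which key without any central coordinator.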

Without a doubt, traditional caching techniques have greatly improved user experience. However, they may still not be enough to guarantee the sub-second response times modern applications need to provide. For instance, imagine that your application and cache servers are located in North America, but your website visitors connect from Europe or Asia. They will continue to experience lag, because cached data still has to travel halfway around the world to reach them. In such geo-distributed scenarios, an edge cache can help. In the next section, we present edge caching as a reliable and convenient solution for powering modern applications.

Using edge caching to build modern applications  

Edge caching can speed up response times even when the application server and the client are thousands of miles apart. It is essentially distributed caching that places the cache geographically closer to end users rather than to the application server. Typically, an edge cache sits close to a large population center and covers an entire country or several states of a large country. By serving only a geographic subset of user requests, the edge cache delivers responses efficiently and more cost-effectively.

Edge caching vs other types of caching

The primary difference between edge caches and traditional caches is that the client accesses the cached data directly, without going through the application server. With traditional caching, the application server makes various API calls while processing a user request and stores JSON objects in private, remote, or distributed caches; using the cached data, it then builds the response and sends it back to the client. With edge caching, the client's request goes straight to the edge cache, and only if the edge cache cannot handle the request on its own does it travel on to the application server. In short, traditional caches store various kinds of data for faster application processing, while edge caches store the processing results themselves for faster client access.
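The request flow just described can be sketched as follows, with a dict standing in for the edge cache and `origin_server` simulating the distant application server:

```python
# Sketch of the edge-caching flow: the client asks the nearby edge
# cache first, and only misses travel to the distant origin server.
edge_cache = {}

def origin_server(path):
    return f"rendered page for {path}"   # expensive, far-away processing

def edge_handle(path):
    if path in edge_cache:
        return edge_cache[path], "edge hit"
    response = origin_server(path)       # miss: forward to the origin
    edge_cache[path] = response          # keep a copy at the edge
    return response, "origin"

print(edge_handle("/home"))  # first request travels to the origin
print(edge_handle("/home"))  # later requests are served at the edge
```

After the first visitor in a region warms the cache, every subsequent visitor in that region is served locally, which is where the latency savings come from.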

Benefits of edge caching

Reliable performance for rich media content

Several web applications require the real-time distribution of rich media content. For example, a sports event often has to be streamed in real time from various camera feeds. You can store live or video on demand (VOD) content on edge caches at the locations where it is needed most, offering your users fast, predictable performance and immersive engagement.

Reduces latencies for massive amounts of data 

Edge caches can reduce latencies for use cases where large amounts of local data are required in real-time. For example, autonomous vehicles need to access local geographical databases to identify street names, geographical features and local traffic conditions. Similarly, real-time information has to be transmitted to a local region quite urgently in case of a natural disaster. In such cases, even small latencies can impact user experience adversely.

Lowers costs of serverless applications  

With serverless platforms, cloud providers manage server infrastructure provisioning so that you don't have to worry about underlying server configurations. Although serverless applications offer several benefits, they also limit your ability to improve performance using many of the server-side caching strategies. Edge caching gives you back that performance control. And since most cloud services are pay-per-use, edge caching also increases cost savings by cutting out unnecessary round trips to the origin server.

Meets local regulatory requirements

Data privacy and protection laws vary from region to region. It is crucial for businesses that handle sensitive data (like banks) to meet regulatory compliance in every country in which they operate. Edge caching provides the infrastructure to efficiently manage these regional regulatory differences. International organizations can continue to use global application servers while confining sensitive data to a local edge cache.

 
