Monday, July 21, 2025 – Last week, I continued working on the in-memory cache for the query I built earlier. I started by drafting an initial structure based on what Mr. Peter had previously explained; later, he provided an example using Microsoft’s IDistributedCache.
To avoid repeating cache logic across the codebase, I encapsulated the caching functionality in a reusable cache service. The service caches data given a unique key and the data list: the data is serialized and stored in memory, and retrieving it later only requires passing the same key.
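A minimal sketch of what such a wrapper might look like, assuming the service receives an injected IDistributedCache and uses System.Text.Json for serialization (the class and method names here are illustrative, not the actual project code):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical reusable wrapper: callers only deal with a key and their data.
public class CacheService
{
    private readonly IDistributedCache _cache;

    public CacheService(IDistributedCache cache) => _cache = cache;

    // Serialize the data and store it under the given key.
    public Task SetAsync<T>(string key, T data) =>
        _cache.SetStringAsync(key, JsonSerializer.Serialize(data));

    // Look up and deserialize; returns default when the key is absent.
    public async Task<T?> GetAsync<T>(string key)
    {
        var json = await _cache.GetStringAsync(key);
        return json is null ? default : JsonSerializer.Deserialize<T>(json);
    }
}
```

Keeping serialization inside the service means call sites never repeat the JSON handling, which is the duplication the refactor was meant to remove.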
I also implemented a time-based expiration using DistributedCacheEntryOptions, so that cached items are automatically removed after a set duration. However, as Mr. Peter pointed out, we must also guard against unbounded cache growth. So I added a cache size limit to prevent excessive memory usage.
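The two safeguards might be wired up roughly like this; a sketch, assuming the in-memory backing store is registered with AddDistributedMemoryCache (the ten-minute duration and the limit value are placeholders, not the real configuration):

```csharp
// Time-based expiration: the entry is removed automatically after the duration.
var options = new DistributedCacheEntryOptions()
    .SetAbsoluteExpirationRelativeToNow(TimeSpan.FromMinutes(10)); // placeholder duration

await _cache.SetStringAsync(key, json, options);

// At service registration: cap the backing MemoryCache so it cannot grow
// without bound. SizeLimit is expressed in abstract units that depend on how
// entry sizes are assigned, not necessarily bytes.
builder.Services.AddDistributedMemoryCache(o => o.SizeLimit = 1024); // placeholder limit
```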
To handle cases where the cache reaches its size limit, I implemented an eviction policy that removes the oldest entry before adding a new one. Since the standard in-memory cache doesn’t track insertion order, I built a registry that records the order in which items were cached, making it possible to identify and remove the oldest entry when needed.
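Because the IDistributedCache interface exposes no way to enumerate entries or ask which is oldest, a side registry of keys has to carry that information. One way to sketch it, assuming a simple count-based limit (CacheKeyRegistry and its members are hypothetical names):

```csharp
using System.Collections.Concurrent;

// Hypothetical registry: remembers insertion order so the oldest key can be
// evicted once a configured number of entries is reached.
public class CacheKeyRegistry
{
    private readonly ConcurrentQueue<string> _order = new();
    private readonly int _maxEntries;

    public CacheKeyRegistry(int maxEntries) => _maxEntries = maxEntries;

    // Records the new key and returns the key to evict, if over the limit.
    public string? Register(string key)
    {
        _order.Enqueue(key);
        return _order.Count > _maxEntries && _order.TryDequeue(out var oldest)
            ? oldest
            : null;
    }
}

// Usage inside the cache service (sketch):
//   var evict = _registry.Register(key);
//   if (evict is not null) await _cache.RemoveAsync(evict);
```

A queue gives first-in, first-out order for free, which matches an "evict the oldest entry" policy.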
By the end of the week, I had fully implemented the caching system. The last remaining task was to ensure that all related cache entries are cleared when the underlying data changes (specifically when the quantity of a cached entity is updated). I created a method to clear all relevant caches and, with Mr. Peter’s help, integrated this method into the necessary update flows.
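The invalidation step could be sketched as below, assuming the related keys are known (for example, collected from the registry); ClearRelatedAsync is an illustrative name, while IDistributedCache.RemoveAsync is the real removal API:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public partial class CacheService
{
    // Remove every cache entry tied to the changed data, so subsequent reads
    // fall through to the database instead of returning stale values.
    public async Task ClearRelatedAsync(IEnumerable<string> relatedKeys)
    {
        foreach (var key in relatedKeys)
            await _cache.RemoveAsync(key);
    }
}
```

Calling this from the update flows (e.g., right after the quantity change is persisted) keeps the cache consistent with the database.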
