sync.Pool — an essential for Go applications

LOGIQ’s observability stack is predominantly written in Go. As a team, we love the simplicity of the language and the tooling that comes with it. Go has a built-in garbage collector, making it easy to write programs without worrying about memory for the most part. However, as you would expect, nothing comes for free.

There are numerous scenarios where relying purely on language and runtime capabilities, without considering what happens under the covers, lands you in trouble. This article explores a pattern that can significantly improve overall performance and memory usage while reducing the time the runtime spends on garbage collection. Go’s sync package provides an implementation of the Pool type. Here’s how Go describes the Pool type in its documentation:

“A Pool is a set of temporary objects that may be individually saved and retrieved.

Any item stored in the Pool may be removed automatically at any time without notification. If the Pool holds the only reference when this happens, the item might be deallocated.”

Let’s see how this works with a real-world example. We used sync.Pool for some of our frequently allocated objects. In cases where reuse was obvious (such as ingesting log data, where we store data in an incoming object and then persist it to the data store), the pattern is a great candidate for optimization: at steady state, the number of objects needed should be fixed and proportional to the ingest rate.
Let’s try out a Pool implementation. To visualize the pool’s effectiveness, we track allocations with a Prometheus counter, using the labels get, put, and new.

var freeLocalReceiverPartitionPGPool = sync.Pool{
	New: func() interface{} {
		client.PoolStatsCountCollector.WithLabelValues("LocalReceiverPartitionPG", "new").Inc()
		return new(LocalReceiverPartitionPG)
	},
}

func GetLocalReceiverPartitionPGFromPool() *LocalReceiverPartitionPG {
	client.PoolStatsCountCollector.WithLabelValues("LocalReceiverPartitionPG", "get").Inc()
	return freeLocalReceiverPartitionPGPool.Get().(*LocalReceiverPartitionPG)
}

func FreeLocalReceiverPartitionPG(lrpg *LocalReceiverPartitionPG) {
	lrpg.Reset()
	client.PoolStatsCountCollector.WithLabelValues("LocalReceiverPartitionPG", "put").Inc()
	freeLocalReceiverPartitionPGPool.Put(lrpg)
}

We can now plot this in LOGIQ’s UI. Pool usage statistics are vital in keeping an eye on various subsystems’ memory usage. A built-in Pool usage widget is available in all LOGIQ deployments. The following image shows how LOGIQ’s UI visualizes pool usage:

LOGIQ dashboard Pool statistics visualisation

In the visualization above, we can see how often the application requested an object. In this scenario, new allocations account for roughly 12% (45,429) of total object accesses (363,177), which amounts to an 87% reduction in heap allocations! Fewer heap allocations translate into lower heap usage, improved latency, and drastically fewer CPU cycles spent on garbage collection, all made possible by using sync.Pool.

Pool’s purpose is to cache allocated but unused items for later reuse, thereby relieving pressure on the garbage collector. In essence, Pool makes it easy to build efficient and thread-safe free lists. However, it is not suitable for all free lists.
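The effect is easy to measure in miniature. The following micro-benchmark sketch (our own, not LOGIQ code) uses testing.AllocsPerRun to compare allocating a fresh object per request against recycling one through a pool:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// payload is a stand-in for a frequently allocated object.
type payload struct{ buf [4096]byte }

var pool = sync.Pool{
	New: func() interface{} { return new(payload) },
}

// sink is package-level so new(payload) escapes to the heap instead of
// being stack-allocated away by the compiler.
var sink *payload

func main() {
	// Fresh allocation on every iteration.
	direct := testing.AllocsPerRun(1000, func() {
		sink = new(payload)
	})

	// Pooled: the same object is recycled, so steady-state
	// allocations drop to (near) zero.
	pooled := testing.AllocsPerRun(1000, func() {
		p := pool.Get().(*payload)
		pool.Put(p)
	})

	fmt.Printf("direct: %.0f allocs/op, pooled: %.0f allocs/op\n", direct, pooled)
}
```

The caveat in the quote above still applies: the garbage collector may empty the pool between runs, so a Pool is a cache of reusable objects, not a place to keep state you cannot afford to recreate.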
