Reining in high-volume data sources at scale

Gaining business intelligence by sifting through TBs of system-generated data is challenging. Imagine doing this at scale while also having to converge data from disparate and distributed data sources so that your customers, employees, and partners have the data they need precisely when they need it. When dealing with enormous amounts of data, most observability, monitoring, and data management tools suffer from what we call the SCATTR problem. 

  • Scale: Infinite scale is hard to achieve without decoupling compute and storage. On platforms that couple the two, scaling comes at the cost of speed and TCO.
  • Convergence: Most platforms can barely deal with scattered data, often processing and analyzing low-quality data and producing poor correlations.
  • Agility: Most platforms depend on manual querying and correlation. They also spend a lot of time re-indexing and re-hydrating old data, delaying root-cause analysis of “needle in a haystack” issues.
  • Trust: Data is not always owned by or stored in the customer’s storage layer, creating GRC issues.
  • TCO: Infrastructure and licensing costs for most platforms come at a premium, and the storage tax these platforms charge keeps growing as data grows.
  • Retention: Most platforms provide only limited data retention by default, with manually managed tiering. Retention periods are almost never controlled at the storage layer.

One of our customers faced the SCATTR problem with their observability tool and was looking for a replacement. This article takes you through how LOGIQ helped this company rein in their high-volume data sources and manage their data better at scale.

About the company

The client provides one of the world’s leading Internet of Things (IoT) platforms delivering industry-leading device management and application enablement. Their platform enables product manufacturers, service providers, and enterprises to connect devices to any application and achieve digital transformation. Their platform is available as a managed cloud Platform-as-a-Service (PaaS) with unrivaled flexibility and modularity that promotes rapid changes to practically any point of their platform at any time. 

They are also experts in building enterprise-grade cloud software for IoT and automation enablement.

Industry/vertical  

  • Internet of Things (IoT)

Their challenges

Owing to their increasing popularity and demand, our client was acquiring new customers at a rapid pace. But as their customer base grew, so did their requirements for better data management at scale, longer data retention, and affordable storage costs. Their systems consistently generated 3 TB of log data per day, which was impossible to ingest using a SaaS solution without significantly increasing their IT spend. They also wanted to own their data to achieve better security, governance, and compliance and ideally wanted a SaaS-like experience with a PaaS solution. 

The client evaluated Elasticsearch but soon discovered that the raw cost of disk storage that Elasticsearch consumes would quickly exceed their current SaaS spend. They also realized that they’d need to build significant in-house Elasticsearch expertise to run the solution themselves. Elasticsearch also made it difficult to adjust data retention on demand without funding a costly storage project. The client then switched to a SaaS solution named LogEntries. However, LogEntries could not scale beyond 400 GB/day due to the high storage costs involved.

The client was also looking to open their log data to a key OEM partner while restricting which logs the partner could access. They needed RBAC capabilities embedded in their log management system to control and govern log access within a single platform.

With their current log management stack neither meeting their data management needs nor being financially reasonable, they were looking for a single platform that would do it all.

Their requirements

  • Log data aggregation, storage, and management at scale
  • Log aggregation and unification from multiple sources
  • Enhanced ingestion rates with the ability to manage surges and spikes
  • Reasonable log storage costs and longer retention
  • Enterprise-grade RBAC to control data access across internal and partner teams

The LOGIQ advantage

Our client soon discovered LOGIQ and found it was the exact type of data management platform they were looking for. When asked to choose between LOGIQ SaaS and PaaS, our client decided to go down the PaaS route so that they’d be able to exercise closer management and control over their LOGIQ instance. With LOGIQ PaaS, our client could also exercise better compliance by keeping all data and associated systems within their own cloud accounts.

LOGIQ provided on-demand flexibility with data retention through an easy 1-click experience, using cloud provider lifecycle policy management on their S3-compatible bucket. LOGIQ’s built-in RBAC control at a namespace level allowed developer and OEM teams to co-exist on the same platform while accessing only the data they needed. LOGIQ provided a separate cluster for managing OEM data while allowing multi-cluster management of customer clusters via a single user interface.
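Under the hood, retention managed at the storage layer maps to a bucket lifecycle rule on the S3-compatible bucket. As an illustrative sketch (the rule ID and expiry window here are hypothetical examples, not the client’s actual configuration), the equivalent policy document can be generated with nothing but the standard library:

```python
import json

def retention_policy(days: int) -> dict:
    """S3 lifecycle configuration that expires (deletes) objects after `days` days."""
    return {
        "Rules": [
            {
                "ID": f"expire-logs-after-{days}-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix: apply to every object in the bucket
                "Expiration": {"Days": days},
            }
        ]
    }

# Extending retention is just a matter of regenerating the rule with a longer window:
print(json.dumps(retention_policy(90), indent=2))
```

A document like this can be applied to an AWS bucket with `aws s3api put-bucket-lifecycle-configuration`, and S3-compatible stores generally accept the same configuration shape, which is why retention becomes a storage-layer knob rather than a platform license tier.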

How LOGIQ helped

We deployed LOGIQ PaaS within their existing Oracle Cloud ecosystem. A dedicated support team from LOGIQ manages the infrastructure needed for LOGIQ PaaS within the client’s existing Oracle Cloud accounts. We first set up SSO by leveraging the client’s Okta setup. Our native integrations with Prometheus and AWS Athena helped quickly set up log ingestion from these sources. 

We also set up alerts within LOGIQ and integrated them with the client’s Opsgenie ITOM so that alerts generated within LOGIQ are forwarded instantly to Opsgenie for further action and automation. 

The results

Our client instantly witnessed tremendous performance gains in comparison to the previous solutions they used. They went from being limited to ingesting and analyzing 400 GB/day on their old system to consistently consuming 3 TB/day with LOGIQ without any limitations. Since LOGIQ enables the use of S3 as the primary storage layer, they could break away from the 7-day limit for data retention and retain data for as long as they liked at 1/4th the cost. Our integrations helped reel in disparate sources and converge all of their data in one platform. Our integration with Opsgenie added context and dimensionality to the issues and failures they were alerted to. LOGIQ’s RBAC capabilities also ensured that the right teams had the right level of access to logs, helping internal and partner teams analyze the data they needed to identify and debug issues and threats.

Key Statistics

  • 4X more data unified per month
  • 3 billion logs ingested per day
  • Peak load of 160 GB/h
  • 100% uptime over 90 days
  • 50% TCO reduction
  • 8x ROI improvement
  • #ZeroStorageTax with 1-click extension of retention duration
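The headline figures above are self-consistent; a quick back-of-envelope check (using decimal units, 1 TB = 10¹² bytes) shows what 3 TB and 3 billion logs per day imply per line and per hour:

```python
# Back-of-envelope check on the reported figures (decimal units)
daily_bytes = 3e12          # 3 TB of log data ingested per day
daily_logs = 3e9            # 3 billion log lines ingested per day

avg_log_size = daily_bytes / daily_logs        # bytes per log line
avg_rate_gb_per_hour = daily_bytes / 1e9 / 24  # average hourly ingest in GB

print(f"average log line: {avg_log_size:.0f} bytes")                         # 1000 bytes
print(f"average ingest: {avg_rate_gb_per_hour:.0f} GB/h vs. 160 GB/h peak")  # 125 GB/h
```

An average line of roughly 1 KB and a mean rate of 125 GB/h against a 160 GB/h peak indicate a fairly steady ingest profile with moderate surges, consistent with the spike-handling requirement above.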

Conclusion

Due to the limitations of their previous log management and analytics stack, our client was finding it very difficult to converge, manage, and analyze their log data at scale. It was also difficult for them to provide controlled data access to their external partner and vendor teams. By switching to LOGIQ, our client was not only able to solve their data convergence, management, and access problems, but was also able to do it at scale, in real time, at a fraction of the cost they were initially paying.

If your log management platform suffers from the SCATTR problem, you should switch to LOGIQ. LOGIQ is the world’s only unified data platform for real-time monitoring, observability, log aggregation, and analytics with infinite storage scale and zero storage tax. LOGIQ ships with a host of integrations and tooling that lets you exercise cross-platform, real-time monitoring, observability, and analysis, threat and bug forensics, and process automation – all while leveraging built-in robust security measures, promoting cross-team collaboration, and maintaining regulatory compliance. The use of object storage also means that we neither dictate how much you can store and for how long, nor force you to favor logging specific components of your environment over others – you get to log everything and store and manage all your data on your terms.

Getting started with LOGIQ is easy and inexpensive. With SaaS and PaaS plans starting as low as $0.33 per GB per month, LOGIQ can meet any budget. If you’d like to try out LOGIQ, sign up for a 14-day free trial of LOGIQ SaaS, or deploy the free-forever LOGIQ PaaS Community Edition on any infrastructure of your choice. 
