As being data-driven has become a guiding philosophy for many organizations, the number and scale of analytics solutions have grown exponentially. Data is now everywhere: it's used by every team, and informs every decision they make.
As the use of data has exploded, so has its prevalence across business systems. Analytics might once have been a specialized function with dedicated tooling, but it's now central to every area of the business. Data assets are consumed across a vast array of systems – from BI solutions, to operational SaaS applications, and beyond.
Unfortunately, this sea change in the importance of data and data teams has exacerbated the challenges users face in discovering and accessing the appropriate data assets.
Data asset management is a new solution category adopted by teams for managing the chaos of data assets spread across tools. A key component of data asset management is the analytics hub, a single place where users can access data, analytics, and reporting regardless of its source.
In this article, we'll explore in detail how an analytics hub can enable you to effectively manage your data assets while also containing the ever-growing data chaos.
We covered the impact of entropy in depth in this previous post, but in brief, a fractured data environment can cause a wide array of issues for any organization.
Left unchecked, data entropy will become a persistent problem for your organization. Fortunately, a dedicated analytics hub can help.
A dedicated analytics hub is a single place to access and consume data. You no longer have to navigate multiple platforms to find the data you need.
One of the most common methods for dealing with data entropy: create a folder of certified assets. This can be a smart move for individual stakeholders, helping them to identify what assets to use. The problem: it doesn’t enable discovery of assets across multiple systems or platforms. Users can see what assets are certified within a given tool, but they must already be using that tool to find them.
Another solution to organizing and presenting your data in a user-friendly way: including links to certified assets in an intranet. This tends to work somewhat better than native folders when it comes to handling data across multiple systems. But it also gives your team yet another system to maintain, contributing to the sprawl that creates data entropy in the first place. Intranets also tend to decontextualize your data, separating knowledge from the data it pertains to.
Part of the value of these tools is lowering the friction from asking every incremental question, so you can ultimately derive insight from your data, making it as easy as possible. The problem with lowering the friction, though, is you end up with a proliferation of logic. Every single permutation of every single question you get ends up with a new asset, a new dashboard.
- Jamie Davidson, Co-Founder, Omni
Some teams opt to maintain a data catalog to keep track of data lineage, metadata, and the underlying tables that power their data assets. Data catalogs can be helpful for larger data teams. However, data catalogs generally target technical people, and focus on addressing data sprawl within your data warehouse. They can be very useful for your data team, and for anyone else who uses SQL – but they don’t solve the broader data disorder that plagues organizations.
No single approach can solve the challenge of data entropy on its own, and each of these approaches has tradeoffs and drawbacks.
Moreover, none of these approaches puts the management of analytics assets in the hands of data consumers across various functions.
This is evidenced by the measures data consumers take to create some sense of order, one of the most common being bookmarking their most-used assets. Beyond being redundant and self-contained, this kind of personal organization speaks to a broader truth: none of these hand-rolled solutions truly solves the problem consumers face in understanding what data they should look at, and what they should trust.
You start to go into larger companies, not only is it an issue of getting data from this kind of centralized core out into the far regions of the universe. They might actually have five data warehouses, and there might not even be just one centralized core that's having issues getting to the periphery of the business. There might be five decentralized central nodes.
- Scott Breitenother, Founder & CEO, Brooklyn Data Co.
The varied approaches teams have taken are a testament to the persistence of entropy in data management. But what if, instead of merely putting a band-aid over entropy, a solution were purpose-built to address its root cause?
Asset sprawl creates an environment where users operate out of different systems and must develop context on their own for every data asset they use. Given the demands of the modern workplace, there will always be multiple places where users consume and analyze data.
But with an analytics hub, all of the different assets across these systems are consolidated in a single place. No more navigating between different browser tabs and folders full of bookmarks, tracking down links from an intranet, or searching wildly within a system for an asset that might not even exist there.
Part of what gives an analytics hub its ability to address entropy is the simple fact that it was made for this purpose. Any other system will suffer either from having been repurposed from a different end goal, or simply from being a more general and therefore less focused tool. As a result, you'll always be swimming upstream as you create your library of assets. With an analytics hub, oversight of the entire ecosystem falls to the analytics team, rather than to other teams with different priorities — and where there is adequate oversight, there is less room for entropy to take root.
On the data consumer’s side, when they navigate to the analytics hub, they know they’re looking for data, not anything else. They can maintain focus as they get all the data they need, without the added noise of an intranet that covers the entire organization.
Analytics teams can speed up time-to-value for their analytics hub by letting the hub build itself. While some flexibility in which assets are added is obviously necessary, automation takes much of the grunt work off your plate.
For example, Workstream's API integrations with popular tools like Looker, Tableau, and others allow you to automatically import all assets from those systems. More broadly, your workspace auto-populates with commonly used analytics assets – regardless of their system of origin – without any action taken by users.
From there, users know that there’s just a single place they need to visit to access their own data as well as team data. And your data team has a much simpler task of curating and categorizing whatever is already there, rather than starting from scratch.
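To make the idea of consolidation concrete, here is a toy sketch of how assets pulled from different tools might be normalized into a single hub-wide schema. The field names, payloads, and `HubAsset` type are all hypothetical illustrations; Workstream's actual integration schema is not described in this article.

```python
from dataclasses import dataclass

@dataclass
class HubAsset:
    title: str
    source: str  # originating tool, e.g. "looker" or "tableau"
    url: str     # deep link back to the asset in its native tool

def ingest(raw_assets: list[dict], source: str) -> list[HubAsset]:
    """Map raw payloads from one tool into the shared hub schema."""
    return [HubAsset(title=a["title"], source=source, url=a["url"])
            for a in raw_assets]

# Imagined payloads, as each tool's API might return them:
looker_assets = [{"title": "Revenue Overview",
                  "url": "https://looker.example/dashboards/1"}]
tableau_assets = [{"title": "Churn Cohorts",
                   "url": "https://tableau.example/views/churn"}]

# Every asset now lives in one list, tagged with its system of origin.
hub = ingest(looker_assets, "looker") + ingest(tableau_assets, "tableau")
```

The point of the sketch is the shared schema: once every asset carries the same fields plus a pointer back to its home system, curation becomes a matter of tagging and grouping what's already there.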
For the analytics hub to be valuable, it has to sit on top of the tools you already use — in short, it needs to be integrated with the modern data stack. A solution without direct integration will just create yet another layer of sprawl, and further potential for confusion. With an integrated approach, all of the assets already exist in your environment, and they only need light curation rather than separate maintenance.
The distinction will be immediately evident to data consumers as well, as they will experience a seamless transition between the analytics hub and their native tools.
Once assets have been brought into the analytics hub, users not only see all their analytics assets in one place, they also have the ability to search across these assets in order to find what they need. This is far preferable to a user needing to know exactly which system the data is native to before they can search, or searching an intranet that might contain a large quantity of unrelated information.
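Cross-asset search falls out of that same consolidation. As a toy illustration (again with hypothetical field names, not Workstream's real search implementation), a single case-insensitive query can match assets from every source at once:

```python
def search(assets: list[dict], query: str) -> list[dict]:
    """Case-insensitive substring match over asset titles,
    regardless of which tool each asset came from."""
    q = query.lower()
    return [a for a in assets if q in a["title"].lower()]

assets = [
    {"title": "Revenue Overview",  "source": "looker"},
    {"title": "Churn Cohorts",     "source": "tableau"},
    {"title": "Revenue by Region", "source": "mode"},
]

# One query spans every source system in the hub.
hits = search(assets, "revenue")
```

A real hub would rank results and index descriptions and metadata too, but the principle is the same: the user searches once, not once per tool.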
The end goal: users have a single point of access for analytics. Their entry point will no longer be a haphazard collection of bookmarks, a poorly maintained intranet, or even their primary BI tool. It will be a single hub with everything they need.
One of the hard truths about entropy in any part of the business is that no solution can remove disorder completely. Any system of any size or complexity will have some level of disorder – but as data professionals, it’s our job to make sense out of this chaos. Having an analytics hub in place gives you and your team the best chance of success, and keeps the disorder to a background hum rather than a roar.
In our next article, we'll go one step further down the path of containing entropy, with a discussion of how teams can use data asset analytics to better understand how their users are interacting with their data, and use these insights to drive improvements in how they supply data to stakeholders.