Big data is everywhere. Now widely accepted as the “currency of the new economy”, it reaches far beyond business: the state itself is one of the largest producers and owners of data. In this context, Data Veracity takes on a new dimension.

“Data derives its value not out of its quantity, but out of its trustworthiness”

Data derives its value not out of its quantity, but out of its trustworthiness. In our increasingly data-driven economy, the consequences of bad data being used by a company or public service should not be underestimated. How to regulate, legislate and protect our data is occupying the minds of decision-makers at national and European level. Data Veracity is also identified as one of the five trends in Accenture’s 2018 Technology Vision.

Determining the accuracy and trustworthiness of data is one of the toughest challenges private and public organizations are confronted with. Scrutinizing and verifying raw, unfiltered data takes time, money and an understanding of the different types of biases, noise and abnormalities that may occur. With datasets of this size, it is highly likely that they will contain at least some degree of addition (the corruption of a data set through the introduction of false data) or falsification (the corruption of existing data).

“Determining the accuracy and trustworthiness of data is one of the toughest challenges private and public organizations are confronted with…”
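To make that scrutiny a little more concrete, the sketch below shows what a minimal, automated veracity check could look like in Python with pandas: it counts duplicate rows, missing values and crude 3-sigma outliers in a numeric column. The column name, threshold and toy data are assumptions chosen purely for illustration, not a reference implementation.

```python
import pandas as pd

# A minimal sketch of automated veracity checks. The column name, the 3-sigma
# threshold and the toy data below are assumptions for illustration only.

def basic_veracity_report(df: pd.DataFrame, numeric_col: str) -> dict:
    """Flag common symptoms of unreliable data: duplicate rows,
    missing values and crude statistical outliers."""
    col = df[numeric_col]
    mean, std = col.mean(), col.std()
    outliers = df[(col - mean).abs() > 3 * std]  # simple 3-sigma rule
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        "outlier_rows": len(outliers),
    }

# Toy example: the duplicated row for region B shows up in the report.
df = pd.DataFrame({"region": ["A", "B", "B", "C"],
                   "population": [10_000, 12_500, 12_500, 9_999_999]})
print(basic_veracity_report(df, "population"))
```

Checks like these only catch the most obvious symptoms; in practice they would be the first layer of a broader data-quality process.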

When a lack of Data Veracity leads to poor government decision-making

Data generated by public organizations comprises information on the population, city infrastructure, national and regional government operations, and much more. Yet despite having copious amounts of data within their grasp, public institutions often lack the know-how to guarantee the veracity of their data, and so come up short in tapping its full potential.

There are many instances where data-driven governments or public institutions have gotten it wrong. A case in point is the US Federal Reserve, which in the years leading up to the global financial recession of 2008 could not foresee the collapse of the housing market, despite basing its monetary policy decisions on data-driven predictive models. The same could be said of the credit rating agencies at the time (in particular Moody's Investors Service, Standard & Poor's and Fitch Ratings), which gave AAA ratings to mortgage-backed securities that eventually turned out to be junk bonds. Those ratings were nevertheless derived from mathematical models built on large amounts of financial data.

Closer to home, the EU and European Central Bank’s policies during the early years of the European sovereign debt crisis left much to be desired: their contractionary measures seemed theoretically sound, but exacerbated the indebtedness of many Member States. The EU’s rescue program for Greece is a case in point: the financial data pointed in favor of austerity measures, as fiscal discipline was thought to be a prerequisite for growth. Yet, according to a report by the International Labour Organization, these policies contributed to increases in poverty and social exclusion in Greece in the years following the implementation of the austerity measures.

Determining exactly why these data-driven models failed is beyond the scope of this article, but if anything, these cases exemplify how big data can lead to big mistakes when handled incorrectly.

Like businesses, public services need to invest in Data Veracity

As members of and contributors to the data-driven ecosystem, public sector organizations should, like their private sector counterparts, devote at least as many resources to verifying the information they possess as they do to gathering it in the first place. As Paul Daugherty (Accenture CTIO) puts it: “Invest in your data and strive to create data sets that will drive your business going forward.”

Although many public sector organizations do not have the time or resources to attain the required levels of Data Veracity on their own, they can learn from the work already done in the private sector, working with experts in the field who can leverage decades of insight to give Data Veracity its proper place in the public domain.

The EU is also taking steps in the right direction. The introduction of the European INSPIRE Directive is a good example. The aim of INSPIRE is to create an infrastructure for sharing spatial information between public authorities in Europe: data collected by the EU Member States needs to be comparable across borders so that decision-makers in the domains of health, security and transport can rely on it.

INSPIRE defines common standards for several spatial data themes (including population distribution, energy resources, transport networks and natural risk zones) so that governments can share and use data more easily. These standards are designed to guarantee the authenticity of the collected data; it goes without saying that determining its accuracy and trustworthiness is paramount.
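To give a rough sense of what such common standards buy in practice, the sketch below checks whether a shared record contains the fields and value ranges a cross-border schema might require before it is reused elsewhere. The field names, rules and sample record are invented for this illustration; they are not taken from the actual INSPIRE specification.

```python
# Illustrative only: a toy "common schema" check, loosely inspired by the idea
# behind shared data themes. The field names and rules are invented for this
# example and are not the actual INSPIRE specification.

REQUIRED_FIELDS = {"country_code", "region_id", "population", "reference_year"}

def schema_problems(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "country_code" in record and len(str(record["country_code"])) != 2:
        problems.append("country_code should be a 2-letter code")
    if "population" in record and record["population"] < 0:
        problems.append("population cannot be negative")
    return problems

# A record shared by one Member State, checked before cross-border use.
record = {"country_code": "BE", "region_id": "BE100", "population": 1_218_255}
print(schema_problems(record))  # -> ['missing field: reference_year']
```

The point is not the specific rules but the principle: when every authority validates its data against the same agreed schema before sharing it, comparability and authenticity stop being a matter of trust alone.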

Moreover, in recognition of the importance of data and its increasing market value, the EU has taken up the goal of developing an EU Digital Single Market and has most recently introduced the General Data Protection Regulation (GDPR) as an overarching legal framework. The truth remains that, regardless of sector, it is only through Data Veracity that big data can deliver on its true potential.

Want to know more about how Accenture can help your organization master Data Veracity? Feel free to contact us for a chat!