Data value creation

Many companies have launched Big Data projects and taken on the challenges they entail. For good reason: whatever the sector of activity, the applications are numerous and the implications significant.

While every Big Data project is different, the final objective is always the same: to create value from data.


What is data value creation?


As the name indicates, data value creation consists of deriving value from collected and potentially processed data, i.e. gaining an economic or competitive advantage.

Data value creation is thus the central challenge of every Big Data project. Depending on the company and its activities, it can lead to different types of gains, such as:

  • Facilitating and accelerating decision-making
  • Improving or enriching services
  • Creating new services and products
  • Optimising activities and processes
  • Gaining economic and competitive advantages
  • Reinforcing client knowledge and relationships
  • Selling data (raw or transformed)

How does one create value from data?

The data value creation process can be broken down into four main steps, regardless of the technical architecture of the systems involved:



In the vast majority of cases, the indispensable prerequisite for any data value creation approach is to have clearly defined objectives, i.e. the value one wishes to derive from the data. Setting these objectives gives direction to the whole process:

  1. Manage sources and inflows for both internal and external data. This first step usually requires significant monitoring of data sources in order to identify those that are usable and relevant to the company’s objectives. Source detection and management systems, real-time data acquisition tools, etc. are typical systems to implement at this stage.
  2. Make the data available and usable in real time, regardless of format. The Data Lake is a storage architecture that makes it possible to store, in addition to structured and semi-structured data, raw data in its native format and for an indefinite duration. It thus offers companies a single storage repository for all of their data.
    Note that this approach differs considerably from the Extract, Transform and Load (ETL) principle widely used in Business Intelligence, which has become too time-consuming as data volumes and data types have grown.
  3. Bring out and highlight, automatically or not, the information and trends that provide answers to the problem identified beforehand. This step involves implementing search, manipulation and analysis tools that allow heterogeneous data to be accessed, cross-checked and even visualised.
  4. Report the conclusions obtained from data processing to the right actor:
  • In the form of suitable representations and reports that allow human recipients to easily understand and appropriate the information needed for their decision-making. Many tools for advanced visualisation or report generation (automatic or not) enable this information sharing.
  • Automatically or semi-automatically, to human agents or machines, to allow real-time adaptation of processes and actions (regulation, optimisation, correction, etc.). For example, consider the real-time reporting of hundreds of sensors and connected objects (IoT) in the factory of the future, with the objective of improving production performance.
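The four steps above can be sketched in a few lines of Python. This is a minimal, illustrative pipeline only: the sensor readings are simulated, and all names (`acquire`, `store_raw`, `analyse`, `report`) are hypothetical, not a real library API. A local directory stands in for the Data Lake by keeping the records in their raw JSON form.

```python
import json
import statistics
from pathlib import Path

def acquire():
    """Step 1 - acquire data from a (here, simulated) source."""
    return [{"sensor": "s1", "temp": 21.5}, {"sensor": "s2", "temp": 23.0}]

def store_raw(records, lake_dir):
    """Step 2 - store the records untransformed in a data-lake-style directory."""
    lake_dir.mkdir(parents=True, exist_ok=True)
    path = lake_dir / "readings.json"
    path.write_text(json.dumps(records))
    return path

def analyse(path):
    """Step 3 - extract a simple trend (mean temperature) from the raw data."""
    records = json.loads(path.read_text())
    return statistics.mean(r["temp"] for r in records)

def report(mean_temp, threshold=22.0):
    """Step 4 - turn the result into an actionable message for a human or machine."""
    status = "above" if mean_temp > threshold else "within"
    return f"Mean temperature {mean_temp:.1f} C is {status} the {threshold} C threshold"

summary = report(analyse(store_raw(acquire(), Path("lake"))))
print(summary)
```

Note that, in the spirit of step 2, the data is stored before any transformation: the analysis reads back the raw records, so a different question could later be asked of the same stored data.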

