AI focus shifts to Small and Wide Data

Ever since computers began handling large, unstructured datasets, data scientists and mathematicians have run multiple models over big data to reach conclusions. Big data is therefore familiar territory and has long been preferred by companies and analysts alike. But Big Data is now set to move out of the spotlight as more and more organizations shift towards small and wide data.

Need for Small and Wide Data

As per Gartner, by 2025 nearly 70 per cent of organizations will shift their focus from big data to small and wide data, making ML and DL models less ‘data-hungry’. Small data is a collection of datasets sourced within an organization, sometimes built around individual problem-solving examples. To implement meaningful, result-oriented strategies, we sometimes need to reduce the volume of data rather than grow it.

Benefits of Small and Wide Data

Some potential applications where small and wide data come in handy are demand forecasting in retail and real-time behavioural and emotional intelligence in customer service, applied to hyper-personalization. There can be a balance between ML techniques and people’s domain expertise, experience and inputs. For example, in few-shot learning the machine must predict outcomes from only a handful of labelled examples, whether or not similar data was present in the training set. Transfer learning on small and wide data reduces the machine’s requirement for humongous amounts of data, as sketched below. At the same time, analysts and coders can validate the results based on their own understanding. And because data scientists are freed from tedious work such as cleaning and data wrangling, they can focus on developing existing concepts or acquiring new skills.
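To make the transfer-learning idea concrete, here is a minimal sketch, assuming a PyTorch/torchvision environment and a hypothetical small 3-class dataset (the dummy batch below stands in for it). A backbone pretrained on ImageNet is frozen and only a new classification head is trained, so the small dataset never has to support training from scratch.

```python
# Transfer-learning sketch: adapt a pretrained ResNet-18 to a small dataset
# instead of training a full model on big data.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its weights, so the
# small dataset only needs to fit the new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head sized for the small problem
# (3 classes here, purely illustrative).
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for a handful of labelled examples ("small data").
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

# One fine-tuning step: only the new head's parameters are updated.
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.4f}")
```

In practice the dummy tensors would be replaced by a small labelled dataset and a few epochs of fine-tuning; the point is that the pretrained backbone carries most of the knowledge, so only a little new data is needed.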


Small and wide data is more human, and we can leverage it for effective analysis and decision-making. Even when big data is available, the time and process needed to implement a supervised model are tedious.

On top of that, decision-making is becoming more complex with each passing day as the level of detail increases. A smaller volume of data is simply easier to manage.

Overall growth depends not only on automation but also on the human factor. Data scientists, analysts and coders usually work with large chunks of data, so working with less data will in itself be a learning experience for many. The outputs rendered will be far more comprehensible thanks to this machine-human collaboration.

End Notes

In the constant drive towards analyzing Big Data, many organizations hold chunks of small and wide data that go underutilized. If these small-data techniques succeed, their accuracy and efficiency can be layered onto existing business functions. Working with a small, comprehensive dataset can make the competitive difference for many organizations.

