All About Augmented Analytics and Augmented Data
We already know what data analytics and data management mean. Data analytics is a set of techniques used to analyse data and find patterns in it. For example, given a company's sales data for the last five years, data analysis can tell us how much was sold in each year, whether sales grew year on year, whether sales exceeded expenses, what the profit margin is, and other more complicated trends.
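The kind of analysis described above can be sketched in a few lines. The yearly sales and expense figures below are purely illustrative, not from any real company:

```python
# Hypothetical five years of sales data (all figures illustrative).
yearly = {
    2015: {"sales": 120_000, "expenses": 95_000},
    2016: {"sales": 135_000, "expenses": 101_000},
    2017: {"sales": 128_000, "expenses": 99_000},
    2018: {"sales": 150_000, "expenses": 110_000},
    2019: {"sales": 172_000, "expenses": 121_000},
}

def analyse(data):
    """Return per-year profit margin and whether sales grew over the prior year."""
    years = sorted(data)
    report = {}
    for i, year in enumerate(years):
        sales = data[year]["sales"]
        expenses = data[year]["expenses"]
        report[year] = {
            "profit_margin": round((sales - expenses) / sales, 3),
            # The first year has no prior year to compare against.
            "grew": data[years[i - 1]]["sales"] < sales if i else None,
        }
    return report

report = analyse(yearly)
```

This answers exactly the questions in the paragraph: how much was sold, whether sales grew from year to year, and what the profit margin was.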
Data management comes into the picture when we need to store that same data (here, the sales data for the past five years) for analysis. Protecting this data, validating it and making it easily available for analysis also fall under the purview of data management.
Now, when we prefix both of these terms with 'augmented', it simply means an advanced version of each, with more automation and less reliance on the human factor. In other words, augmented analytics uses machine learning and natural language processing to automate the finding of relevant data, so that data exploration and discovery do not keep data scientists tied up. Instead, they can focus on complicated algorithms and specialised problems. This delivers insights faster and reduces bias. Likewise, augmented data management uses machine learning and artificial intelligence to make data storage and integration processes "self-tuning" and "self-configuring".
To put this in context, every business nowadays collects data: social sites, emails, online news, podcasts and blogs all generate massive amounts of it, whether structured (tabular, with relations between fields) or unstructured (with no such relations). When we handle this volume of data to find relevant trends or points of action, it is very likely that human error and bias creep into the picture, and the sheer volume makes the work time-consuming and laborious. What augmented analytics proposes is that these activities be done by tools, with data scientists intervening only for the intricate problems that arise. Similarly, in augmented data management, activities like labelling data, breaking it down to finer granularity, classifying it into groups and integrating it from multiple sources should be done using ML to reduce time and human error.
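The data-management activities just listed (labelling, classifying, integrating from multiple sources) can be sketched as a tiny pipeline. The source records, field names and size-based labelling rule below are all hypothetical stand-ins for what an ML-driven tool would do at scale:

```python
# Two hypothetical data sources to be integrated.
source_a = [{"id": 1, "amount": 250.0}, {"id": 2, "amount": 30.0}]
source_b = [{"id": 3, "amount": 980.0}]

def label(record):
    # A rule-based stand-in for an ML classifier: tag each record by size.
    if record["amount"] >= 500:
        return "large"
    if record["amount"] >= 100:
        return "medium"
    return "small"

def integrate(*sources):
    """Merge records from several sources and attach a label to each."""
    return [dict(r, label=label(r)) for src in sources for r in src]

records = integrate(source_a, source_b)
```

In a real augmented pipeline the `label` step would be a trained model rather than fixed thresholds, but the shape of the workflow is the same.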
These concepts were coined by Gartner and are slowly being incorporated by most BI vendors.
To grasp these concepts better, we must understand the following terms:
Machine Learning: The part of Artificial Intelligence in which a system learns and improves from experience without being explicitly programmed to do so.
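A toy illustration of "learning from experience": instead of hard-coding the relationship between two quantities, the program fits it from observed examples. The data points here are made up so that they lie on the line y = 2x + 1:

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b over a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# The system "learns" the pattern y = 2x + 1 purely from examples.
slope, intercept = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
```

Real ML systems fit far richer models, but the principle is the same: the rule comes from the data, not from the programmer.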
Natural Language Processing (NLP): This is when computers understand and process language the way a human being would. For example, if we ask someone 'What's up?', they will mostly reply with their current activity, or 'Not much', or something similar. But if we ask the same question to a computer without NLP, we might get back the meaning and usage of the word 'up'. NLP gives an automated system the ability to answer like a person.
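The contrast in the example above can be caricatured in code. Real NLP uses statistical language models; the phrase table and replies below are illustrative stand-ins only:

```python
# Hypothetical table of conversational intents.
GREETING_INTENTS = {
    "what's up": "Not much, just processing your data!",
    "how are you": "Doing well, thanks for asking.",
}

def literal_lookup(text):
    # Without NLP, the system falls back to defining the word "up".
    return "up: toward a higher place or position"

def nlp_reply(text):
    # Normalise the question and match it against known intents.
    key = text.lower().strip("?!. ")
    return GREETING_INTENTS.get(key, literal_lookup(text))

answer = nlp_reply("What's up?")
```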
Natural Language Generation (NLG): This is when Artificial intelligence generates a narrative or story from a provided dataset. It may be in spoken or written form.
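A minimal sketch of NLG, turning a small dataset into a sentence. This is a hand-written template rather than a real NLG engine, and the sales figures are invented:

```python
# Hypothetical two-year sales dataset.
sales = {2018: 150_000, 2019: 172_000}

def narrate(data):
    """Produce a one-sentence narrative from yearly sales figures."""
    years = sorted(data)
    first, last = years[0], years[-1]
    change = (data[last] - data[first]) / data[first] * 100
    direction = "grew" if change >= 0 else "fell"
    return (f"Sales {direction} by {abs(change):.1f}% "
            f"between {first} and {last}.")

story = narrate(sales)
```

A production NLG system would choose vocabulary and sentence structure dynamically, but the input-to-narrative flow is the same.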
Smart Data Discovery: Using tools that provide a simple drag-and-drop interface for complicated analytics and statistical techniques, so that business users can draw insights from data or perform advanced analytics themselves, without needing data scientists.
Augmented Data Preparation: The process of cleansing and moulding data, using joins, hierarchies, type casts, AI and ML, to make it easy for business users to analyse. This too aims at reducing the dependency on data scientists.
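Two of the operations named above, type casting and joining, can be sketched as follows. All field names and values are hypothetical:

```python
# Raw order records with everything stored as messy strings.
orders = [
    {"order_id": "1", "customer_id": "10", "amount": "250.50"},
    {"order_id": "2", "customer_id": "11", "amount": " 30 "},
]
# A second source mapping customer ids to names.
customers = {"10": "Acme Corp", "11": "Globex"}

def prepare(rows, lookup):
    """Cast types, trim whitespace and join in the customer name."""
    clean = []
    for row in rows:
        clean.append({
            "order_id": int(row["order_id"]),
            "customer": lookup[row["customer_id"].strip()],
            "amount": float(row["amount"].strip()),
        })
    return clean

table = prepare(orders, customers)
```

An augmented preparation tool would infer these casts and joins automatically; the sketch just shows what "analysis-ready" means here.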
Citizen Data Scientist: An emerging term for people who work in the field of analytics without an explicit background in statistics or data science.
Gartner predicts that by 2020:
• Citizen data scientists will be in five times greater demand than conventional data scientists, as these tools make data analysis that much easier than before.
• More than 90% of BI platforms will have NLP and AI incorporated in them.
• Answers to 50% of analytics questions will be easily available on the internet or through other sources.
• BI tools with augmented analysis features will be twice as valuable as those without them.
To be frank, even if these predictions do not come true by 2020, this wave of BI disruption will very soon be knocking on our doors. So we, as analytics companies, should run our own litmus tests to validate the relevance of augmented analysis. For example, if we already have a process in place where we manually cleanse the data or pick out trends in it, we should set up an automated process alongside it and verify whether the automated one is indeed faster, more accurate and free of bias.
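The simplest form of that litmus test is to run the manual and automated cleansing steps on the same input and check that they agree. Both cleaners below are trivial hypothetical stand-ins for whatever the real processes are:

```python
def manual_clean(rows):
    """Stand-in for the existing manual cleansing step."""
    return [r.strip().lower() for r in rows if r.strip()]

def automated_clean(rows):
    """Stand-in for the new automated cleansing step under evaluation."""
    return [r.strip().lower() for r in rows if r.strip()]

sample = ["  Apple ", "", "BANANA"]
agree = manual_clean(sample) == automated_clean(sample)
```

In practice one would also time both runs and sample disagreements for review, not just compare outputs.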
Those buying such BI tools in the future should ensure they have abilities like:
a. Recommending the best possible way to visualize certain data.
b. Having Natural Language Processing features.
c. Having Natural Language Generation features.
d. Generating common insights into the data and suggesting how to dig deeper into specific analysis results.
e. Prediction capabilities to segregate the outliers and forecast trends in data.
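Capability (e), segregating outliers, can be sketched with a simple z-score rule: values far from the mean are flagged. The threshold of two standard deviations and the sample values are illustrative choices:

```python
from statistics import mean, pstdev

def segregate_outliers(values, threshold=2.0):
    """Split values into normal points and outliers beyond `threshold` sigma."""
    mu, sigma = mean(values), pstdev(values)
    normal = [v for v in values if abs(v - mu) <= threshold * sigma]
    outliers = [v for v in values if abs(v - mu) > threshold * sigma]
    return normal, outliers

normal, outliers = segregate_outliers([10, 12, 11, 13, 12, 95])
```

Real BI tools use more robust methods (and forecast trends on top), but this shows the basic idea of separating signal from anomaly.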
Samatri Pal, Head of Technology & Product, NeenOpal Analytics