The impact of AI and IoT on data creation and analytics

By TheWAY - July 01, 2018

With about 40% of the world’s population, Asia holds a lot of potential for artificial intelligence development. However, two challenges need to be tackled first, according to Mike Potter, Chief Technology Officer at Qlik.
First, the massive amounts of unstructured data created by connected devices call for smarter approaches to analytics – by 2019, 40% of this data will be analysed directly at the edge rather than in a central data warehouse (IDC). At the same time, while the volume of data increases, many employees lack the skills required to incorporate data into their jobs.
Second, the relative scarcity of traditional data (e.g. credit card scores) in many Asian markets means they need to make sense of alternative data (e.g. mobile phone usage) and unstructured data. Potter believes that AI will play a big role in achieving this, opening the door to many new insights, and therefore new products and services in the region.
Networks Asia recently conducted an email interview with Potter to find out more about AI and its impact on data creation and analytics. He also discussed some critical things that organisations should know before applying AI and machine learning.
Are we looking at data creation in the wrong way? How important has data generated from IoT, M2M communication or machine-generated data become and how should we be dealing with it?
Mike Potter: We are in a technological age of data hoarding, in which we focus too much on capturing all possible elements of data, as opposed to asking ourselves why we are collecting that data.
For example, IoT is an important source of data, but we are often too fixated on atomic-level events rather than the patterns that actually yield analytical insight. The simplest example is an IoT system that collects every heartbeat from a person. How insightful is it to analyse individual heartbeats versus patterns that indicate heart attacks, arrhythmia or heart disease?
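To make that idea concrete, here is a minimal Python sketch (not Qlik code) of how an edge device might collapse raw heartbeat events into pattern-level features before anything leaves the sensor; the field names and the variability threshold are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch: reduce raw heartbeat events to pattern-level features at
# the edge, so only the summary (not every beat) is sent onward.
# Threshold and field names are illustrative assumptions, not clinical advice.
from statistics import mean, pstdev

def summarise_rr_intervals(rr_ms: list, window: int = 60) -> dict:
    """Collapse the latest window of R-R intervals (milliseconds) into features."""
    recent = rr_ms[-window:]
    avg = mean(recent)
    variability = pstdev(recent)
    return {
        "mean_hr_bpm": round(60000 / avg, 1),        # average heart rate
        "rr_variability_ms": round(variability, 1),  # crude irregularity measure
        "possible_arrhythmia": variability > 120,    # hypothetical flag threshold
    }

# Only this summary leaves the device; the individual heartbeats do not.
print(summarise_rr_intervals([810, 805, 1220, 640, 990, 800] * 12))
```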
We really need to think first about what the organisation needs, and only then create a data collection plan. Applying analytics can then help transform that data into actionable insight.
What are the differences between IoT, AI and ML? Do enterprises understand what AI means for now?
IoT is really about collecting data at the edge. Often, that data is sensor-oriented, which means it is very atomic in nature. We will therefore increasingly rely on AI and machine learning to apply algorithms and heuristics to these vast amounts of data to describe an insight, create a prediction or assess a scenario.
However, most analytical products don't know how to leverage AI in the most effective way, making it hard for organisations to apply AI. Simply running an algorithm on a data set and relying on the heuristics to try to get the answer right is not enough.
At Qlik, we believe that the more effective way of turning raw data into insights is combining intelligent algorithms with human interaction. The human factor provides an element of experience, intuition and knowledge that complements the data, allowing the system to learn. We call this augmented intelligence and have implemented it in our latest product release. The Qlik Cognitive Engine leverages insights from the data loaded into the platform and combines them with best practices for data visualisation. Essentially, all that needs to be done is to drag and drop objects or measures onto your canvas, and smart visualisation recommendations will appear. By doing that, the Qlik Cognitive Engine enables the user to focus on discovering insights.
We used to look at Big Data when it came to HPC (High Performance Computing) and intelligence from HPC. How is this changing?
The definition of big data is evolving. For a number of years, it has been about volume, velocity, variety and veracity – attributes that often apply to data regardless of size, quantity and shape. However, what we need to do is assess the latency and relevancy of data relative to the needs of the organisation. You may need to consume large volumes and varieties of data, at different velocities, to satisfy those needs. This means the notion of big data is disappearing and, in the future, we will just call it data.
When it comes to IoT, machine learning and m2m communications, are we looking at the development of deeper learning or augmented intelligence that we can use for better machine learning? How far off are we from automated intelligence or artificial intelligence?
Currently, we're seeing two schools of thought. One believes that we can achieve a level of algorithmic accuracy at which true AI can occur, while the other thinks that you will always need participation, input or context from one or more humans. Only time will tell which is right.
However, given the importance of the role of data as a tool, at Qlik we believe that by introducing the user as the human element into the machine learning and AI mix, we have the ability to create this notion of a collective intelligence – which actually provides the highest value from an insights perspective.
What about generating information from data? Are we making more or better sense of the data we have, or are we fumbling in the dark? How can enterprises take better advantage of the data they are generating?
One of the greatest opportunities we have is to formulate raw data into patterns that represent insights and information, and in turn create analytical value that is much higher than that of the original data itself. I foresee major advancements happening in the industry around this ability.
Just look at bringing analytics to the edge. Rather than aggregating all of the base data and sending it to a central data centre for analysis – which would take a long time and require immense computing power – we will see analytics taking place at all points along the value chain, from the edge to central storage. This filters out unnecessary data and focuses on the data that really supports the problem you're trying to understand.
I recently met with a customer who uses drones to monitor the state of its agricultural fields. They wanted to turn the images being collected into analytical patterns. Rather than collecting all this information with the drone for future processing, we helped them to pre-process and select images with patterns directly at the point of capture – basically bringing analytics to the drone – and collect further analytics in a more centralised way.
This is only possible because our new Qlik Cognitive Engine – which is an AI framework – can now run on new platforms like Linux, including tiny devices such as a Raspberry Pi. This opens up opportunities for our customers to do edge analytics and apply analytics in new, real-time use cases.
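As a rough illustration of that edge-filtering pattern (a hedged Python sketch, not the Qlik Cognitive Engine), a device could discard uniform frames on the spot and forward only imagery that shows enough structure to be worth central analysis; the variance check below is an assumed stand-in for a real pattern detector.

```python
# Sketch of edge pre-filtering for drone imagery: keep only frames that show
# enough structure to be worth forwarding for central analytics.
# The variance threshold is an assumed stand-in for a real pattern detector.
import numpy as np

INTEREST_THRESHOLD = 500.0  # assumed tuning value

def frame_is_interesting(frame: np.ndarray) -> bool:
    """Cheap on-device check: high pixel variance suggests visible structure
    (crop rows, stress patches) rather than uniform ground."""
    return float(np.var(frame)) > INTEREST_THRESHOLD

def process_at_edge(frames):
    """Yield only the frames worth sending onward; the rest never leave the device."""
    for frame in frames:
        if frame_is_interesting(frame):
            yield frame

# Synthetic 64x64 grayscale frames standing in for drone captures:
# five nearly uniform frames and five with visible variation.
frames = [np.full((64, 64), 120) for _ in range(5)] + \
         [np.random.randint(0, 256, (64, 64)) for _ in range(5)]
kept = list(process_at_edge(frames))
print(f"kept {len(kept)} of {len(frames)} frames for central analysis")
```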
How can businesses ensure data sits as a positive asset in their balance sheet? What costs can impact a business if they have a loose data management framework compared to a tight one? What other intangible impacts can it have?
I think the biggest thing that most organisations need to think about when it comes to data is their strategy. You have to know where the data is, where it comes from, what latency it has, and most importantly what problems it is solving.
A lot of organisations build analytics from the bottom up and collect data in hopes that they can find an insight, but more often than not, they end up hoarding the data. The best organisations leverage data along with a top-down view where they decide what the strategic intentions are, how to measure them and how they will look for the data to satisfy those requirements.
The other thing to think about is your organisation’s data literacy strategy. Most organisations don't understand the level of data literacy their employees have. The truth is, based on our APAC Data Literacy survey of more than 5,000 employees in the region, 80% of workers in APAC are still not confident in their ability to read, work with, understand and argue with data. Both employers and employees need to take ownership and be more proactive in bridging this skills gap. On top of that, we need to build a community across academia and even NGOs to elevate the issue.
How much of what we’re hearing from vendors around AI/AR and machine learning (ML) is hype and how much is reality? When does one go from ML to AI? We’ve been hearing about reactive IT for a while now, what is different now? Who is leading in this area?
There has been much hope that AI approaches will replace the human decision-making process, which is entirely the wrong mindset. With the influx of data, people need the right support to understand and challenge data in order to make the right decisions. By ensuring that machine intelligence works with – rather than replaces – human analysis, we can create a multiplier effect where statistical insights and intuition work together.
The future of augmented intelligence is bright – it consists of people becoming more data literate, while at the same time, machine learning and AI capabilities will provide guidance and assistance via technology. Merely relying on a machine learning system to replace or create data literacy is an idea destined to fail.
What is the difference between true AI and ML and rules-based engines? How about narrow-focus AI? How automated can we make systems that are machine learning capable—are we ready to embrace automated IT?
I think what one needs to recognise is that a lot of the algorithms and approaches for AI and machine learning are fast being commoditised. They are in the public domain, they are available for everybody and what matters is how those approaches are being used within a cognitive strategy. What we need to do is use these cognitive strategies to help give users a better experience. That's part of our augmented intelligence approach.
In terms of how far we are from embracing this in an automated way, I think it will fundamentally be measured by user satisfaction. From our observation, customers are still in the curiosity stage. What's interesting about machine learning and AI is that it was born out of a hope: that since IT organisations aren't sure what the data is going to be used for, they can automate that process without having to figure it out. So, in many respects, a lot of organisations hope it's a quick fix for what is a fundamentally flawed data strategy.
AI is not purely ‘artificial’ – the ideal approach amplifies human intuition with machine intelligence to broaden analysis and drive new insight. As we progress in AI and machine learning, and see what it really can do, the organisations that succeed will be those that know how to use it most effectively. Machine learning and AI will start to automate the mundane aspects of analytics so that you can take your analytical thinking to the next level.
What are some critical things that organisations should know before applying AI and ML? What needs to be in place or replaced?
An organisation needs a data strategy, a data literacy strategy and a culture of data trust, where data is used as a tool and not a weapon. All of those things must be in place before it can start employing techniques that help users be more effective in their jobs.
The single biggest problem we’re currently seeing is the disconnect between what the business needs and how IT sees the world. IT is often geared towards technology governance and data collection, while the business is geared towards solving a problem. On the one hand, IT is hoping to satisfy business needs by providing a technology solution on top of data, but on the other hand, business expectations can't be met using algorithms, and a lot of times they need data that is not captured by IT.
What organisations need to be able to do is create a system in which those two worlds can merge, and then apply a layer of cognitive intelligence on top of that as an enabler, so users can be more productive. We need to remember that AI is only as smart as the data it's looking at. That's the fundamental problem: even supervised and unsupervised learning systems are only as good as their training sets. Where an AI system can truly start to evolve is where it is continually getting feedback and corrections.
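As a rough illustration of that feedback loop (a minimal Python sketch assuming scikit-learn and an invented two-class "relevant insight" labelling task, not anything Qlik-specific), a model can be updated incrementally each time a user overrides one of its suggestions:

```python
# Minimal sketch of a feedback loop: fold human corrections back into a model
# incrementally instead of discarding them. The features and labels here are
# illustrative assumptions; SGDClassifier is just a convenient incremental learner.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # e.g. 0 = "not a relevant insight", 1 = "relevant"

# Seed the model with whatever labelled history is available.
rng = np.random.default_rng(0)
X_seed = rng.random((50, 4))
y_seed = rng.integers(0, 2, 50)
model.partial_fit(X_seed, y_seed, classes=classes)

def apply_user_correction(features: np.ndarray, corrected_label: int) -> None:
    """Update the model with a single human correction."""
    model.partial_fit(features.reshape(1, -1), np.array([corrected_label]))

# A user overrides one suggestion; the system learns from the correction.
apply_user_correction(rng.random(4), corrected_label=1)
```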
AI will only truly be artificial when it can extrapolate in the absence of data; right now it can only interpret data. And so once that realisation sets in – that there's no free lunch, there's no magic fix, and AI is limited to the inputs it receives – only then will organisations be able to fully leverage the benefits of AI.
What skills are needed for the AI age?
Data skills are set to become more crucial with the arrival of disruptive technologies. With the proliferation of AI, for instance, the future of work is changing – and some jobs, in some markets, are changing faster than others.
According to our recent Data Literacy survey, there is still not enough being done to support workers with training and education initiatives that accelerate data literacy skills in organisations across APAC: four out of five workers don’t think they have had adequate training to be data literate. But they are definitely keen to learn—over two thirds are willing to invest more time and energy into improving their data skillset, if given the chance.

source: https://www.networksasia.net/article/impact-ai-and-iot-data-creation-and-analytics.1529848014
