How Satellites and Big Data Are Predicting the Behavior of Hurricanes and Other Natural Disasters
Leveraging machine learning could help diminish the damages of storms and wildfires
Hurricane Harvey unexpectedly flooded large parts of Houston despite abating wind speeds. (Karl Spencer/iStock)
On Friday afternoons, Caitlin Kontgis and some of the other scientists at Descartes Labs convene in their Los Alamos, New Mexico, office and get down to work on a grassroots project that’s not part of their jobs: watching hurricanes from above, and seeing if they can figure out what the storms will do.
They acquire data from GOES, the Geostationary Operational Environmental Satellite operated by NOAA and NASA, which records images of the Western Hemisphere every five minutes. That’s about how long it takes the team to run each image through a deep learning algorithm that detects the eye of a hurricane and centers the image on it. Then, they incorporate synthetic aperture radar data, which uses long-wave radar to see through clouds and can discern water beneath them based on reflectivity. That, in turn, can show almost real-time flooding, tracked over days, of cities in the path of hurricanes.
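To make the workflow concrete, here is a deliberately simplified, self-contained Python sketch of those two steps. The coldest-pixel heuristic and the fixed backscatter threshold are stand-ins for Descartes’ actual deep-learning detector and SAR processing, chosen only so the example runs on synthetic arrays.

```python
"""Minimal sketch (not Descartes Labs' code) of the two steps described above:
locate a storm centre in a geostationary infrared frame, then flag likely
standing water in a synthetic-aperture-radar (SAR) scene by its low backscatter.
Both 'images' here are random NumPy arrays standing in for real satellite data."""
import numpy as np

def locate_storm_centre(ir_frame: np.ndarray) -> tuple[int, int]:
    # Real systems use a trained detector; as a crude proxy, take the coldest
    # cloud-top pixel, which in infrared imagery often sits near the eyewall.
    return np.unravel_index(np.argmin(ir_frame), ir_frame.shape)

def flood_mask(sar_backscatter_db: np.ndarray, threshold_db: float = -15.0) -> np.ndarray:
    # Smooth open water reflects radar away from the sensor, so very low
    # backscatter is a common (if imperfect) proxy for flooding.
    return sar_backscatter_db < threshold_db

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.normal(280, 10, size=(512, 512))   # brightness temperature, K
    sar = rng.normal(-10, 5, size=(512, 512))   # backscatter, dB
    row, col = locate_storm_centre(ir)
    flooded_fraction = flood_mask(sar).mean()
    print(f"storm centre pixel: ({row}, {col}); flooded fraction: {flooded_fraction:.2%}")
```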
“The goal of these projects … is really to get data into the hands of first responders and people who are making decisions and can help,” says Kontgis, lead applied scientist at Descartes.
Hurricane Harvey, for example, unexpectedly flooded large parts of Houston despite abating wind speeds. That storm inspired Descartes scientists to build the program they now use, though they were too late to apply that data to recovery efforts. While Descartes Labs has been in touch with FEMA and other organizations, there’s no official use for the data they’re collating.
The work with hurricanes is not part of Descartes’ main business, which consists of using similar machine learning to assess food supply chains, real estate and more. For example, Descartes can look at satellite data of agriculture in Brazil, Argentina, and China, and make predictions on global corn yields and prices. Or it can assess construction rates and estimate land value. But the group can leverage the same technology to examine hurricanes and other natural disasters, and plans to incorporate additional information to the algorithm in the future, like hurricane size, wind speed, and even land elevation to better predict flooding.
Descartes is just one of numerous agencies, companies and research groups trying to leverage big data and machine learning on hurricane prediction, safety and awareness. Success could mean diminished damages — economic and human — in the face of worsening climate-induced storms, or at least increased options to mitigate those damages.
Predicting where a hurricane will go is a well-established practice, says Amy McGovern, a professor of computer science at the University of Oklahoma. McGovern studies the use of AI in decision making about thunderstorms and tornadoes, but not hurricanes, for that reason. But she says there are still a lot of factors in hurricanes that are difficult to predict. Where they’ll land may be predictable, but what will happen once they get there is another story; hurricanes are well known for fizzling out or ramping up just prior to landfall.
Even with neural networks, large-scale models all make use of certain assumptions, thanks to a finite amount of data they can incorporate and a nearly infinite number of potential types of input. “This makes it all a challenge for AI,” says McGovern. “The models are definitely not perfect. The models are all at different scales, they’re available at different time resolutions. They all have different biases. Another challenge is just the sheer overwhelming amount of data.”
That’s one of the reasons so many scientists are looking to AI to help understand all that data. Even NOAA is getting on board. They’re the ones who operate the GOES satellites, so they’re inundated with data too.
So far, NOAA scientists are using deep learning as a way to understand what data they can obtain from their images, especially now that the new GOES-16 can sense 16 different spectral bands, each providing a different glimpse into weather patterns, resulting in an order of magnitude more data than the previous satellite. “The processing of the satellite data can be significantly faster when you apply deep learning to it,” says Jebb Stewart, informatics and visualization chief at NOAA. “It allows us to look at it. There’s a fire hose of information… when the model is creating these forecasts, we have a different type of information problem, being able to process that to make sense of it for forecasts.”
NOAA is training its computers to pick out hurricanes from its satellite imagery, and eventually will combine that with other layers of data to improve probabilistic forecasts, which will help the Navy, commercial shipping companies, oil rigs and many other industries make better decisions about their operations.
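NOAA has not published the details of those models here; as a rough illustration of the approach only, a small convolutional classifier over GOES-16’s 16 spectral bands might look like the sketch below, where the architecture and patch size are assumptions rather than anything NOAA has described.

```python
"""Illustrative sketch only: a small convolutional classifier that takes a
16-band GOES-16 image patch and outputs a hurricane / no-hurricane score.
Layer sizes and the 128x128 patch shape are assumptions for the example."""
import torch
import torch.nn as nn

class HurricanePatchClassifier(nn.Module):
    def __init__(self, bands: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # logit: how likely the patch contains a storm

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One fake 16-band, 128x128 patch, just to show the expected tensor shape.
patch = torch.randn(1, 16, 128, 128)
print(torch.sigmoid(HurricanePatchClassifier()(patch)))
```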
NASA, too, is using deep learning, to estimate the real-time intensity of tropical storms, developing algorithmic rules that recognize patterns in the visible and infrared spectrums. The agency’s web-based tool lets users see images and wind speed predictions for live and historic hurricanes based on GOES data.
Once we can expect computers to reliably spot hurricanes, we need a way to translate that to something people can understand. There’s a lot more information available than just wind speed, and making sense of it can help us understand all the other ways hurricanes affect communities. Hussam Mahmoud, associate professor of civil and environmental engineering at Colorado State University, has looked extensively at the factors that make some hurricanes more disastrous than others. Primary among them, he says, are where those storms make landfall, and what, or who, is waiting for them when they get there. It’s not surprising to suggest that a hurricane that strikes a city will do more damage than one that hits an unoccupied coast, but one that hits an area prepared with sea walls and other mitigating factors will have a diminished impact as well.
Once you know what sort of damage to expect, you can be better prepared for the challenges to cities, like crowding in hospitals and school shutdowns, and you can be more certain whether evacuation is necessary. But then there’s the problem of communication: Currently, hurricanes are described by their wind speed, placed in categories from 1 through 5. But wind speed is only one predictor of damage. Mahmoud and his collaborators published a study last year in Frontiers in Built Environment about an assessment called the Hurricane Impact Level.
“We wanted to do something where we can communicate the risk in a better way, that includes the different possibilities that this hazard might bring,” says Mahmoud. “The storm surge would be very important, how much precipitation you have is very important, and how much wind speed.”
The project incorporates data from recent storms — wind speed, storm surge and precipitation, but also location and population — and applies a neural network to them. Then it can train itself, estimating, for example, if a hurricane should make landfall in X location, with wind speed Y, storm surge Z, etc., the damage would probably be of a particular level, expressed in economic cost. It compares inputs from NOAA records, census data and other sources from real storms, and gives a damage level that is similar to what occurred in those storms. Mahmoud’s team tried it for real, and over the last two years, the model has given accurate estimates for hurricanes that made landfall.
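The Frontiers paper describes the actual model; the sketch below only illustrates the general idea of mapping storm and exposure variables to a damage estimate with a small neural network. The data and coefficients here are synthetic inventions, whereas Mahmoud’s study draws on real NOAA and census records.

```python
"""Hedged sketch of the idea behind a Hurricane Impact Level-style model:
train a small neural network to map storm and exposure variables to (log)
economic damage. All numbers below are made up for illustration."""
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
wind = rng.uniform(30, 80, n)           # wind speed at landfall, m/s
surge = rng.uniform(0, 8, n)            # storm surge, metres
rain = rng.uniform(50, 600, n)          # precipitation, mm
population = rng.uniform(1e4, 5e6, n)   # people near the landfall point
X = np.column_stack([wind, surge, rain, population])

# Synthetic "ground truth": damage grows with every input, plus noise.
log_damage = 0.04 * wind + 0.3 * surge + 0.004 * rain + 1e-7 * population + rng.normal(0, 0.3, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, log_damage)

# Predict an impact level for one hypothetical landfall scenario.
print(model.predict([[65.0, 4.0, 300.0, 2.3e6]]))
```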
“If we can do that, maybe then we can, first of all, understand the magnitude of the damage that we’re about to experience because of a hurricane, and … use it to issue evacuation orders, which have been one of the main issues with hurricane mitigation and response,” says Mahmoud.
Mahmoud’s proposed system hasn’t been rolled out yet, but he’s in talks with The Weather Channel, which he calls early stage, but promising.
The Weather Company (The Weather Channel’s parent company) is already using its subsidiary IBM’s PAIRS Geoscope big data platform to forecast power outages and thus prepare better disaster response in the wake of hurricanes. The inputs for the system come not just from weather satellites, but from utility network models and power outage history. These predictions, too, will benefit from adding more and more sources of data, including soil moisture, which can help predict tree falls.
The amount of data available is growing extremely fast, and so is our ability to process it, an arms race pointing to a future of expanding accuracy and probabilistic hurricane forecasting that will help storm preparedness around the world.
Descartes Labs has another project in the works, too, unrelated to hurricanes except that it leverages similar technology on another natural disaster — wildfires. When California’s Camp Fire broke out in early November, a Twitter bot called @wildfiresignal sprang to life. Built by the same team from Descartes, @wildfiresignal prowls GOES-16 data every six hours for smoke plumes and tweets side-by-side optical and infrared images of the fire. Infrared information can show the heat of the fire, which can help visualize its location just as the blaze is beginning, or at night when smoke is hard to see. This could help firefighters or residents plan escape routes as the fire approaches them, but, as with the hurricane project, collaborations with firefighters or national forests are preliminary.
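As a toy illustration of that kind of fire-watch loop (not the @wildfiresignal code, and with the tweeting step replaced by a print statement), hot-pixel detection on a synthetic infrared frame might look like this:

```python
"""Toy fire-watch check: flag unusually hot pixels in an infrared frame.
The article describes a six-hourly polling loop over GOES-16 data; here a
single synthetic frame with one injected hot spot stands in for that feed."""
import numpy as np

HOTSPOT_KELVIN = 330.0  # rough brightness-temperature threshold for active fire (assumption)

def find_hotspots(ir_frame: np.ndarray) -> np.ndarray:
    return np.argwhere(ir_frame > HOTSPOT_KELVIN)

def check_once(ir_frame: np.ndarray) -> None:
    hotspots = find_hotspots(ir_frame)
    if len(hotspots):
        # A real bot would pair this with optical imagery and post an alert.
        print(f"possible fire: {len(hotspots)} hot pixels, first at {tuple(hotspots[0])}")

frame = np.full((256, 256), 290.0)   # cool background scene
frame[100:103, 40:43] = 360.0        # injected hot spot
check_once(frame)
```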
“If we could have an alert system globally where you knew when a fire started within ten minutes after it started, that would be spectacular,” says Descartes CEO Mark Johnson. “We’re still probably a ways away from that, but that’s the ultimate goal.”
A|I: THE AI TIMES – SHOULD CANADA WELCOME MORE ROBOTS INTO ITS WORKFORCE?
The AI Times is a weekly newsletter covering the biggest AI, machine learning, big data, and automation news from around the globe.
Swedish automotive company Volvo has signed a deal with Norwegian mining firm Brønnøy Kalk to transport limestone along five kilometers of roads and tunnels from the mine to a nearby port using self-driving trucks.
Silk Labs’ focus on privacy may have been a draw for Apple: the startup says its algorithm only sends “key” video moments to the cloud, instead of a constant stream, and that it anonymizes data on its Silk Intelligence platform.
Canada should do more to welcome robots into its workforce as it lags behind East Asia in automation, according to a new report from a technology think tank.
Researchers have pinpointed a part of the human brain responsible for “on-the-fly” decision-making. According to the findings published in JNeurosci, the anterior cingulate cortex integrates disparate information about the desirability and amount of an option to inform choice.
The two seem complementary, with Deepcore focused on starting new ventures and investing in AI companies more generally, while Zeroth operates Asia’s first accelerator program targeted at AI and machine learning startups.
The new DiDi Labs in Toronto, which will focus on research into intelligent driving and artificial intelligence, will be led by Jun YU, Senior Vice President of DiDi and chair of DiDi’s Product & Design Committee.
Wluper’s “conversational AI” is initially targeting navigation products with what it describes as “goal-driven dialogue” technology that is designed to have more natural conversations to help with various navigation tasks.
A New Supercomputer Is the World’s Fastest Brain-Mimicking Machine
The computer has one million processors and 1,200 interconnected circuit boards
Scientists just activated the world’s biggest “brain”: a supercomputer with a million processing cores and 1,200 interconnected circuit boards that together operate like a human brain.
Ten years in the making, it is the world’s largest neuromorphic computer—a type of computer that mimics the firing of neurons—scientists announced on Nov. 2.
Dubbed Spiking Neural Network Architecture, or SpiNNaker, the computer powerhouse is located at the University of Manchester in the United Kingdom, and it “rethinks the way conventional computers work,” project member Steve Furber, a professor of computer engineering at the University of Manchester, said in a statement.
But SpiNNaker doesn’t just “think” like a brain. It creates models of the neurons in human brains, and it simulates more neurons in real time than any other computer on Earth, according to the statement.
“Its primary task is to support partial brain models: for example, models of cortex, of basal ganglia, or multiple regions expressed typically as networks of spiking [or firing] neurons,” Furber told Live Science in an email.
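To make “networks of spiking neurons” concrete, here is a minimal leaky integrate-and-fire simulation of a single neuron. SpiNNaker itself is programmed through higher-level tools and runs vastly larger networks in real time, so this is only a sketch of the underlying idea, with parameter values chosen for illustration.

```python
"""Minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage leaks
toward rest, charges with injected current, and emits a spike whenever it
crosses a threshold, after which it is reset. Parameters are illustrative."""
dt, steps = 1.0, 200                                         # ms per step, number of steps
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # membrane constants (ms, mV)

v = v_rest
input_current = 20.0    # constant injected drive (arbitrary units)
spike_times = []

for t in range(steps):
    # Leaky integration: decay toward rest plus charge from the input current.
    v += dt / tau * (-(v - v_rest) + input_current)
    if v >= v_thresh:            # threshold crossing emits a spike
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {steps * dt:.0f} ms; first spikes at {spike_times[:3]} ms")
```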
DOUBLE THE PROCESSORS
Since April 2016, SpiNNaker has been simulating neuron activity using 500,000 core processors, but the upgraded machine has twice that capacity, Furber explained. With the support of the European Union’s Human Brain Project—an effort to construct a virtual human brain—SpiNNaker will continue to enable scientists to create detailed brain models. But now it has the capacity to perform 200 quadrillion actions simultaneously, university representatives reported in the statement.
While some other computers may rival SpiNNaker in the number of processors they contain, what sets this platform apart is the infrastructure connecting those processors. In the human brain, 100 billion neurons simultaneously fire and transmit signals to thousands of destinations. SpiNNaker’s architecture supports an exceptional level of communication among its processors, behaving much like a brain’s neural network does, Furber explained.
“Conventional supercomputers have connectivity mechanisms that are much less well suited to real-time brain modeling,” he said. “SpiNNaker is, I believe, capable of modeling larger spiking neural networks in biological real time than any other machine.”
MIND OVER MATTER
Previously, when SpiNNaker was operating with only 500,000 processors, it modeled 80,000 neurons in the cortex, the brain region that moderates data from the senses. Another SpiNNaker simulation of the basal ganglia, a brain area affected by Parkinson’s disease, hints at the computer’s potential as a tool for studying brain disorders, according to the statement.
SpiNNaker can also control a mobile robot called SpOmnibot, which uses the computer to interpret data from the robot’s vision sensors and make navigation choices in real time, university representatives said.
With all its computing power and brain-like capabilities, how close is SpiNNaker to behaving like a real human brain? For now, exactly simulating a human brain is simply not possible, Furber said. An advanced machine such as SpiNNaker can still manage only a fraction of the communication performed by a human brain, and supercomputers have a long way to go before they can think for themselves, Furber wrote in the email.
“Even with a million processors, we can only approach 1 percent of the scale of the human brain, and that’s with a lot of simplifying assumptions,” he said.
However, SpiNNaker could mimic the function of a mouse brain, which is 1,000 times smaller than a human brain, Furber added.
“If a mouse thinks mouse-sized thoughts and all that is required is enough neurons wired together in the right structure (which is itself a debatable point), then maybe we can now reach that level of thinking in a model running on SpiNNaker,” he said.
Copyright 2018 LIVESCIENCE.com, a Future company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Artificial Intelligence: The Potential And The Reality
A look at AI and machine learning in the UK today
Artificial intelligence is a prime contender for this year's big tech buzzphrase. All of a sudden it's everywhere. But its history goes back a long way, as far as the 1940s and 1950s in fact. At that time an intense interest in human psychology in general and human intelligence in particular led scientists to speculate whether human learning could be simulated using machines. They were confident it could.
One of AI's early movers and shakers was Marvin Minsky, whose work includes the first randomly wired neural network learning machine which he built in 1951. Confidence was running high. In 1967 Minsky predicted that "within a generation... the problem of creating 'artificial intelligence' will substantially be solved."
But he was wrong about that. Not only had Minsky and others in the field underestimated the complexity of simulating human-like intelligence, but the computers of the day were simply not up to the job. Even a generation later, with the advent of supercomputers in the '80s and '90s, most attempts at AI resulted in disappointment.
Over time, though, the problem of inadequate computing power was largely solved by Moore's Law. Alongside the arrival of much more powerful machines, vast volumes of data became available for training algorithms, and that started to change the game. For certain complex parallel-processing tasks, at least, AI began to overtake humans.
In 1997, IBM's Deep Blue beat chess grandmaster Garry Kasparov, pitting raw number-crunching computing power against human genius. Was this AI? Many argued that Deep Blue's victory was more about brute force than intelligence, but it was a sign of things to come. Twenty years later, this time using deep neural networking techniques, Google's AlphaGo surprised many by vanquishing champion Lee Sedol at the much more computationally challenging game of Go. And this certainly was a victory for AI.
These were impressive accomplishments, no doubt, but something of the grand early vision had been lost. Much of today's AI, including these examples, is narrowly focused on specific tasks rather than being generally applicable.
But things are beginning to change. The volume of data available to tech giants like Google, Amazon and Facebook and China's Tencent and Alibaba, together with the newer neural networking models and open-source frameworks, means that data scientists are now able to begin joining the dots between the different AI domains. So a more general-purpose AI is at least starting to emerge.
Definition creep
In the last few years AI has certainly turned a corner, but the task of understanding how far it has gone is not helped by the fact that the phrase has come to mean pretty much anything that IT marketers want it to mean. You can now buy all sorts of gizmos labelled 'powered by AI', including cameras and even an AI-powered suitcase.
Even the word 'intelligence' is open to many interpretations. But this simple definition has found traction in the field of AI:
Intelligence is 'goal-directed adaptive behaviour' (Sternberg & Salter, 1982).
But rather than pondering the semantics of AI, a more practical viewpoint is to consider how machines that adapt their behaviour might be useful. What do we want to use AI for? What information do we want it to give us, and how do we want to act on it? What are we using our data for currently? Are we looking backwards in time to see what has happened, as in classic business intelligence, or do we want to use models to foresee what is likely to happen, in the manner of predictive analytics, so we can perhaps automate the next step? Predictive analytics is very closely aligned with machine learning (ML).
We asked 200 Computing readers from organisations across all sectors and sizes, all of whom are involved in AI in some way, to give us a rough split between BI and predictive analytics at their firm. The results were 70 per cent BI and 30 per cent predictive analytics (which, if anything, seems to overstate the predictive side of things).
How does that translate into maturity with AI? Well, only eight per cent said they have implemented AI and machine learning in production, but another 25 per cent were experimenting with pilot studies. Remember, this is an audience that is interested in AI, so even these numbers almost certainly overestimate the true number of AI deployments. So for most it's early days, but there is definitely a head of steam building behind AI and ML, and for those with the right use case, early days could mean early opportunities.
So what are the use cases where AI is a good fit?
Currently, the main driver is making existing processes better and more efficient - exactly as you'd expect with any new technology.
At the top we have business intelligence and analytics. Potentially AI can help businesses move from descriptive through predictive and ultimately prescriptive analytics, where machines take actions without first consulting their human masters.
Then there are the various types of process automation. Typically this means handing over repetitive on-screen tasks to so-called soft bots that are able to quickly learn what is required of them.
Much of the focus of that activity is on the customer, providing better customer experience - by learning what they like and giving them more of it - and better customer service, by improving the responsiveness of the organisation using chat bots for example.
Then there's cybersecurity. Machine learning systems can be trained on what constitutes normal behaviour on the network, recognise anomalies, and either alert those in charge or, increasingly, act on the causes of the anomaly themselves.
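As a minimal sketch of that 'learn normal, flag abnormal' pattern, an isolation forest trained on made-up traffic features might look like this; real deployments use far richer telemetry and far more careful tuning.

```python
"""Sketch of anomaly detection for network monitoring using scikit-learn's
IsolationForest on invented features (bytes sent, connections per minute).
The detector learns what 'normal' hosts look like, then flags outliers."""
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
normal_traffic = rng.normal(loc=[500, 20], scale=[100, 5], size=(1000, 2))  # typical hosts
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [520, 22],        # looks ordinary
    [50_000, 400],    # exfiltration-like burst
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged anomaly
```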
About a quarter of our respondents had introduced or were looking to introduce robotic process automation (RPA).
RPA is often considered the most straightforward type of AI in that it doesn't usually require vast quantities of training data. Also, its use cases are the easiest to identify.
Software robots can respond to emails, process transactions, and watch for events 24 hours a day seven days a week. They are good at things humans are not good at, namely simple, standardised repetitive tasks which they can do at great speed and with low rates of error. Unlike some humans, soft bots are low-maintenance. They generally require little integration, and - also unlike some humans - they can be relied upon to do the right thing time after time after time.
The reasons for deploying RPA? No surprises really: cost reduction, improved productivity and a lower risk of human error were the main ones. Slightly less obvious is improved data quality and accuracy. Because bots can be relied on to do the same thing in the same way time after time, a very useful side effect can be an improvement in the quality of the company's core data.
So far so good, unless you are one of the people who might be put out of a job by a soft bot, of course. In which case you might take comfort from a recent report by McKinsey which found that many RPA rollouts have failed to deliver, being more complex to implement than anticipated due to the unpredictable nature of many business processes, unexpected side effects resulting from automation and variable data quality. Soft bots, it seems, aren't necessarily so low-maintenance after all.
Indeed, half of those that have gone ahead with RPA in our survey said they'd experienced more integration problems than expected.
As with Marvin Minsky's over-optimism, it pays to note that even with relatively simple AI, achieving the desired outcomes can be far more complicated than might first appear.
AI by sector
Our respondents came from a variety of sectors so we asked about AI use cases in a few particular areas.
In the finance and insurance sector we found some early interest in intelligent anti-fraud systems that look at suspicious patterns of behaviour.
Another one was actuarial modelling, pulling in all sorts of data from many different sources in order to quantify the probability that a property will be prone to flooding or damaged by fire or subsidence.
AI-based techniques could also speed the introduction of individualised insurance on demand, something a lot of insurers are looking at.
The health sector is another that's often mentioned in conjunction with AI. Already there are some specialised precision operations that are performed or assisted by robots. However, robotic surgery was a little way down the list of priorities for our medically focused AI interviewees.
Automated diagnostics from medical imaging - identifying tumours from x-rays and scans through pattern recognition - was the top one.
That was followed by patient monitoring, both in hospitals and outside. Trials are already under way in many locations in which elderly people's flats are fitted with sensors and systems that learn their behaviour - what time they get up, how many times they fill the kettle or flush the toilet - so that the alert can be raised if the pattern changes. Perhaps they have had a fall, are at risk of dehydration or cannot get out of bed.
Drug discovery is another area where a lot of hope is being pinned on AI. Certainly the pharmaceutical companies are chucking a lot of money at it. For example, Pfizer is using IBM Watson to power its search for immuno-oncology drugs.
The largest proportion of current use cases in our research were found among those in manufacturing, logistics and agriculture. This is where big data and the internet of things intersect, where ever increasing volumes of data must be processed, often at the edge of networks, and the results acted upon by autonomous devices.
We're already familiar with automated production lines and warehouses staffed with robots and autonomous vehicles, but what about agriculture where the water and nutritional needs of vineyards are tended to automatically or crops monitored by drones?
Then we have the emergence of smart grids where power generated through renewables is automatically sent to where it's needed in the most efficient way reducing the need for baseload generation.
How about being able to buy things with your face? That's something that's already being rolled out in South Korea and China and will surely turn up in the UK sooner rather than later. Indeed, some British supermarkets are reported to be rolling out age-checking cameras to vet people wanting to buy alcohol and cigarettes next year.
Among our retail respondents, however, most attention is currently being placed on personalised marketing and advertising.
Grunt work
So those are some of the main uses for AI and machine learning mentioned in our research. Most are early-stage projects though. So what are the hurdles?
Well, a lot of it is in the grunt work. Collecting, cleaning, de-duplicating, reformatting, serialising and verifying data was the time- and effort-consuming task most mentioned by our respondents. Without good data, the 'intelligence' part of AI just won't happen.
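For a flavour of that grunt work, here is a tiny, purely illustrative pandas clean-up on a made-up customer table; real pipelines add schema validation, provenance tracking and much more on top of steps like these.

```python
"""Illustrative only: the kind of unglamorous clean-up respondents describe,
applied to a small invented customer table with pandas."""
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "signup_date": ["2018-01-05", "2018-02-14", "2018-02-14", None],
    "country": [" UK", "uk", "uk", "DE"],
})

clean = (
    raw.drop_duplicates(subset="customer_id")   # de-duplicate records
       .dropna(subset=["signup_date"])          # drop rows missing key fields
       .assign(
           country=lambda d: d["country"].str.strip().str.upper(),     # normalise labels
           signup_date=lambda d: pd.to_datetime(d["signup_date"]),     # enforce a real date type
       )
)
print(clean)
```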
The second most mentioned practical difficulty is training the models. With machine learning this can take weeks or months of iteration, tweaking the parameters to eliminate bias and error and to cover gaps in the data.
While models may start life in IT, at some stage they need to make contact with the real world of production engineers and end users. Interdisciplinary and cross-departmental collaboration is another mountain that must be climbed.
Other familiar bottlenecks were mentioned too, including integrating with legacy systems, a shortage of skills - and gaining acceptance from those who might fear the introduction of AI and what it could mean for their jobs and livelihoods.
The bigger picture
Indeed, 18 per cent of those we asked thought that their sector would see a net loss of jobs because of AI. On the other hand, 20 per cent thought AI would create more jobs than it displaces. Numbers were fairly low for both though, suggesting a degree of uncertainty about what the future will bring.
Indeed, the overall impact of AI is all but impossible to predict. For certain professions, though, such as lorry drivers and warehouse staff, that estimate is easier to make than for others. Change will be profound and it will arrive at a rate that will make it hard to adapt to. Twenty-nine per cent of our respondents felt that insufficient thought has been given to the ethical side, the effects of AI on society.
However, more saw a happy alliance between humans and technology, with AI enabling people to work more efficiently, although one third said that for these efficiencies to be realised a radical restructuring of the workplace will be necessary.
On the question of the potential for organisational advancement, 12 per cent of our respondents said that AI is already helping to differentiate their company from the competition; a further 29 per cent predicted that would be the case in three years' time. Certainly many are seeing the potential of machine learning's ability to drive efficiencies and to serve as the basis of new products and services.
There will always be a need for human qualities though, and it will be a long time before machines can emulate empathy. The largest number agreed with the statement: ‘AI is not appropriate for all jobs and probably never will be'.
Justified or not, AI is generating a degree of trepidation. It's easy to scoff about the fears of Terminator-style killer robots, but what is happening right now in China with the Social Credit system in which citizens are given a ranking that depends on not just their behaviour but also that of their friends and family and social media connections, shows that fictional dystopias like the Black Mirror episode Nosedive are already uncomfortably close to reality.
In the main our respondents were optimistic though, with 10 per cent believing society will be 'much better' as a result of increased automation and 54 per cent saying 'better'. Eight per cent said it will be 'worse', while a pessimistic two per cent are presumably going off grid and digging drone-proof bunkers in the back garden. Nine per cent didn't provide an opinion.
In conclusion
The main conclusion is that AI is hard. It's difficult to understand, challenging to implement and tricky to integrate into existing systems. For most organisations deploying AI it's early days, and efforts are still largely at the experimental stage.
In the main, AI is still narrow and task-specific. It is an additional capability that can be bolted onto existing processes, rather than a new product that can be bought off the shelf.
That said, ML and AI are already having a big impact in multiple areas. The easy availability of algorithms and frameworks and new IoT data sources mean that things are changing fast and progress will likely continue to accelerate. Those companies that are able to make something of it now are getting in at the ground floor.
While AI has already shown promise in automating certain tasks, it doesn't seem likely that it will replace flesh-and-blood clinicians anytime soon.
Artificial intelligence is coming to healthcare. In fact, in areas such as radiology and cancer detection, it's already here in places, and is poised to become ever more prevalent in the industry. Which naturally raises a question for nurses and physicians: Is AI coming for my job?
Well, probably not. At least according to experts we interviewed for our Focus on Artificial Intelligence.
That said, both AI and machine learning are in a prime position to alter clinical workflows and physician training. And with the market growing the way it is, implementation is inevitable. A recent Accenture report estimated that the AI health market will hit $6.6 billion by 2021. That's up from $600 million in 2014.
Artificial intelligence and machine learning algorithms tend to rely on large quantities of data to be effective, and that data needs human hands to collect it and human eyes to analyze it. And since AI in healthcare is currently utilized mainly to aggregate and organize data -- looking for trends and patterns and making recommendations -- a human component is very much needed.
So physicians and nurses don't have to worry. Probably. At least for now.
WHAT THE EXPERTS THINK
PeriGen CEO Matthew Sappern puts no stock in the theory that clinicians' jobs are in jeopardy. Instead, he looks at AI more as an empowerment tool.
"I think it does things that are really imperative that are not necessarily what nurses can do," he said. "These tools are not so great where reasoning and empathy are required. You teach them to do something, and they will do it over and over and over again, period. They're good tools to provide perspective, but it's all about the provider or nurse who's making sense of that information."
In many ways, said Sappern, AI can help nurses focus more on the actual job of nursing, and on the abstract things that can truly impact patient care. And it has the potential to increase their confidence, as they can report back to the doctor with hard stats instead of vagaries. Used wisely, it can be a boon to fact-based clinical observation.
Jvion Chief Product Officer Dr. John Showalter was equally dismissive of claims that jobs are in jeopardy. The hype is scary, he said. The reality is not.
"There are great benefits that do amazing things for patients," said Showalter. "When you come in and improve the scoring for falls, for example, and you understand what needs to be done to prevent falls, that's ready for prime time today.
"There absolutely places where AI is ready to go today, and then there's a whole bunch of AI hype that's really scary, so sorting out the AI that's ready to help patients and the hype can be really difficult for leadership.
Sappern and Showalter's opinions mirror the conclusions of an article appearing this year in The Conversation analyzing the potential effect of AI, or lack thereof, on high-skilled jobs.
USING AI AS A TOOL
In the analysis the author notes that innovations in various industrial revolutions have always created new jobs even as they've taken old jobs away; what makes the AI revolution different is that it has the potential to affect white-collar jobs.
No need for alarms to go off, though, at least not initially, since AI in healthcare would primarily affect lower-skilled office work, like data processing. Though highly trained professionals could also be affected, the switch so far seems to be happening in a way that shows AI to be a tool more so than a threat, as professionals can now learn how to benefit from its powerful predictive powers.
In some cases, the technology could be used to help fill the physician shortage that is even now gripping many parts of the country, and is expected to get worse.
Eldon Richards, chief technology officer at Recondo Technologies, said AI is now addressing a lot of repetitive tasks that a human might do today.
"If reviewing the ethics of a decision, or complex data or one-off decision, AI is not good at those today," said Richards. "Ai is very far off when it comes to those capabilities. The mundane, routine things we do, like typing in a word processor, AI us simplifying those things for us, so now we're shifting our focus from these simple tasks to things that require a little more training. I certainly do not see unemployment going up."
That sentiment is echoed by Mary Sun, AI researcher at First Derm and medical student at Mount Sinai Medical Center.
"People see it as a job replacement thing and I think that's a pretty flawed way to look at it," she said. "In many other industries, like when I was in commercial tech, it's viewed much more as an augmentation, and piece of mind, and double checking and making sure that you're involving patterns that one doctor cannot possibly see.
"As one doctor, you can't possibly see a million patients across your lifetime. But medicine, at least diagnosis, is all in the pattern recognition. So I think it's going to be very exciting when we find ways to augment our diagnoses and make them a lot more robust."
Carlo Perez, CEO of Swift Medical, feels similarly, viewing AI as an augmentative tool. While it may alter the role of a doctor somewhat, it won't replace them entirely.
"What we feel is the doctor will transition into someone who understands how to wield data science, who understands how to use these tools," said Perez. "Hopefully someone will not need to truly understand AI, but will understand their relationship to it. Which is, 'I can utilize these tools, I understand these tools, and I understand how to utilize them in partnership to make better decisions.'"
HIT Think: How does Google want to apply artificial intelligence in healthcare?
Recently, DeepMind's leaders announced that its healthcare team will be folded into Google to help build the "AI-powered assistant for nurses and doctors everywhere."
Observers say the move is part of a broader effort within Google to boost collaboration and communication among health projects.
In a statement, DeepMind leaders said the company has "made major advances in health care in AI research," including advances related to "detecting eye disease more quickly and accurately than experts; planning cancer radiotherapy treatment in seconds rather than hours; and working to detect patient deterioration from electronic records."
One of its most prominent projects is Streams, an app developed with the United Kingdom's National Health Service that clinicians can use to detect health issues, such as kidney failure. The app enables nurses and physicians to view patients' vital signs, test results and medical histories in one place.
In a statement, DeepMind leaders said its healthcare team will move to Google in an effort to expand Streams' reach. The leaders said they want Streams to become "an AI-powered assistant for nurses and doctors everywhere."
What are the prospects for Google’s AI initiatives in healthcare?
Artificial intelligence and machine learning have demonstrated their promise on a broad range of healthcare decision-making tasks, ranging from better readmission predictions to financial forecasting and acute clinical diagnoses.
Alphabet's DeepMind group has pushed frontiers in AI at a dizzying pace, tackling everything from chess to automatically diagnosing eye disease. Therefore, its recent announcement that some of the more practically focused teams within DeepMind Health will move to Google and focus on commercializing AI capabilities has provoked strong reactions within the UK and the US.
In the UK, many NHS patients are deeply skeptical of data sharing outside of their local health trusts, partly dating back to the NHS's failed Care.data national exchange, which was terminated in 2016. Smaller scandals, including DeepMind's collaboration with Royal Free NHS Trust, a London-area system, have made headlines and further entrenched a belief that all sharing with commercial interests is dangerous, regardless of the details.
By contrast, US health systems routinely share data with analytics firms and other vendors within a framework permitted under HIPAA. Vendors are required to sign business associate agreements (BAAs) with the healthcare organization, which limit the scope and permitted uses of data. HIPAA places sharp restrictions on selling data, but does allow commercialization of insights and predictive models distilled from it.
But besides the patient data privacy issues with the NHS, there is another business angle to this story. Google likely wants greater control of DeepMind because it needs to make a more immediate impact in the highly competitive AI market, where it faces constant competition from other big tech vendors such as Microsoft, Apple and Amazon.
There is a range of AI capabilities, from narrowly focused "task AI" to "general AI," a more flexible intelligence that closely resembles human intelligence. During the last few years, the healthcare industry has seen a growth in examples of task-oriented decisions driven by AI, particularly for administrative processes. On the other hand, clinical decisions driven by a more general AI remain a concept mostly out of reach.
DeepMind essentially falls into the latter category: a research company with a long-term ambition to create general AI. It has a team loaded with talent that works on a variety of projects with great potential; however, years after its acquisition, it has not focused much on commercializing its products on a large scale.
If Google can do this commercialization successfully, and translate DeepMind's advances into scalable decision support for patients and providers, there is tremendous potential. Even narrowly focused assistive technologies can be hugely helpful and free up overwhelmed decision makers to focus on other areas of care.
In the same way Google Maps now offers super-human navigation skills to anyone with a smartphone, we're optimistic that the judicious use of AI will help providers deliver better, more consistent, and more efficient care than a purely human system can.