DistribuTech 2018: Big Data, Artificial Intelligence and ‘Digital Twins’
By TheWAY - February 13, 2018
Distributed energy is creating challenges for the power grid that only machine learning can solve.
What can massive computing power and ubiquitous data do for the future power grid?
The answers to this question were lurking around every corner at this year’s massive DistribuTech utility and energy trade show in San Antonio, Texas, where the phrases “machine learning,” “artificial intelligence” and “digital twin” were being tossed back and forth between vendors and customers with abandon.
These big-data buzzwords have been part of the DistribuTech lexicon for some time. And over the years, it’s been possible to trace the progress of some of these promised technology breakthroughs, both in real-world performance improvements and the new solutions being created for problems that used to take utilities months or years to tackle.
For instance, let’s take the concept of a “digital twin.” A digital twin is a simulation of a turbine, engine or some other highly complex device, rendered out of real data and run through millions of different scenarios to try to gain insight into how its real-world equivalent will perform at its best, or when it will fail.
Now expand that concept to encompass the power grid -- a massive machine in its own right, built from a combination of old and new technologies, with its own flows of data to gather, clean up, digest, analyze and model, amid an ever-changing resource mix. Even contemplating this computational task has until recently been beyond practical reach for most utilities.
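To make the concept concrete, here is a minimal sketch in Python of what scenario simulation against a (drastically simplified) digital twin might look like. The single-transformer model, failure formula and every number below are invented for illustration; a real network twin models thousands of interconnected components and their wiring.

```python
import random

# Hypothetical sketch: a toy "digital twin" of a single transformer.
# All parameters and the failure model are invented for illustration.

class TransformerTwin:
    def __init__(self, rating_kva: float, age_years: float):
        self.rating_kva = rating_kva
        self.age_years = age_years

    def fails(self, load_kva: float) -> bool:
        """Crude failure model: risk grows with overload and age."""
        overload = max(0.0, load_kva / self.rating_kva - 1.0)
        p_fail = min(1.0, 0.001 * self.age_years + 0.5 * overload)
        return random.random() < p_fail

def estimate_failure_rate(twin: TransformerTwin, peak_kva: float,
                          n_scenarios: int = 100_000) -> float:
    """Run many randomized load scenarios against the twin."""
    failures = 0
    for _ in range(n_scenarios):
        load = random.gauss(0.7 * peak_kva, 0.2 * peak_kva)  # random demand draw
        if twin.fails(max(0.0, load)):
            failures += 1
    return failures / n_scenarios

twin = TransformerTwin(rating_kva=500, age_years=25)
print(f"Estimated failure rate: {estimate_failure_rate(twin, peak_kva=600):.2%}")
```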
But according to Jim Walsh, CEO of GE Grid Software Solutions, the company’s investments into its industrial internet and Predix big-data platform are yielding these kinds of applications today, at least for a select number of as-yet unnamed utilities, in the form of a “network digital twin.”
“We’ve got a lot of those acronym-laden operational systems,” he said, referring to core platforms like geographic information systems (GIS), distribution management systems (DMS) and energy management systems (EMS). “What I’m hearing most often from customers is, 'We’ve cracked the code on generating data. We’ve got sensors galore; we’ve got data streams galore. Our problem is, we’re using 3 percent of it...to execute our operational systems, and the rest of it has flown off.'”
The idea of capturing more of this data is a grand one -- a model of a utility’s electric delivery system, down to individual pieces of equipment and wiring, that can simulate events, network changes, electrical flows and other phenomena in real time. That requires a lot of front-end work to collect and analyze data across multiple systems, to find the gaps and errors between grid data and grid realities, and otherwise clean up and standardize what’s going into the digital model.
“In order for any of this to be valid, you have to have clean, accurate data,” Walsh said.
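The kind of front-end cleanup Walsh is describing often boils down to cross-checking one system's records against another's. Here is a hypothetical sketch, with invented field names and thresholds, of flagging mismatches between a GIS asset record and the telemetry mapped to it:

```python
# Hypothetical sketch of cross-system data validation: compare a GIS
# asset record against observed sensor data and flag inconsistencies.
# Field names and thresholds are invented for illustration.

def validate_asset(gis_record: dict, sensor_readings: list[float]) -> list[str]:
    issues = []
    if gis_record.get("rating_kva") is None:
        issues.append("missing nameplate rating in GIS")
    else:
        peak = max(sensor_readings, default=0.0)
        # A measured peak far above the recorded rating suggests the
        # GIS record (or the sensor-to-asset mapping) is wrong.
        if peak > 1.5 * gis_record["rating_kva"]:
            issues.append(f"observed peak {peak:.0f} kVA exceeds rating "
                          f"{gis_record['rating_kva']:.0f} kVA by >50%")
    if not sensor_readings:
        issues.append("no telemetry mapped to this asset")
    return issues

record = {"asset_id": "XFMR-1042", "rating_kva": 300}
print(validate_asset(record, [180.0, 220.0, 510.0]))
```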
But when this work is complete, the network digital twin is meant to provide an accurate model for utilities to run massive simulations on, to help solve all kinds of problems. And because the platform will be constantly updating itself with new data, both from outside and from the simulations it’s running over and over, it’s expected to yield insights that may be quite difficult for humans to notice.
“What you’re trying to do with machine learning is capture scenarios over and over and over again in real time, and ultimately that’s making the model smarter than you could ever hope to be,” he said.
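One common pattern behind that kind of claim is to treat the twin's accumulated simulation runs as training data for a fast statistical model, which can then answer "what-if" questions without re-running the full simulation. A minimal sketch of the idea, with an invented stand-in simulator, using scikit-learn:

```python
# Hypothetical sketch: learn from repeated simulation runs.
# The "simulator" and its features are invented stand-ins for a real twin.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_outage(load_factor: float, asset_age: float) -> int:
    """Stand-in for one digital-twin scenario run (1 = outage)."""
    risk = 0.02 * asset_age + 0.8 * max(0.0, load_factor - 1.0)
    return int(rng.random() < min(1.0, risk))

# Accumulate scenario results, as the platform would over time.
X = rng.uniform([0.3, 1.0], [1.4, 40.0], size=(20_000, 2))
y = np.array([simulate_outage(lf, age) for lf, age in X])

# The learned model answers "what-if" queries far faster than
# re-running the simulator, and can surface patterns humans miss.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba([[1.2, 30.0]])[0, 1])  # estimated outage probability
```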
GE's Predix, Siemens' MindSphere, ABB's Ability Ellipse
Applications can range from predicting and preventing equipment failures and informing split-second grid decisions, to planning out the optimal investments and policies to integrate the rising number of rooftop solar systems and plug-in electric vehicles being bought by utility customers, Walsh said.
He wouldn’t provide any details on how GE was testing its network digital twin or with which utility customers. But it’s noteworthy that GE’s flagship utility customer for Predix, Exelon, extended that relationship in October to cover its regulated wires utilities as well as its generation assets.
Walsh wouldn't discuss how the platform was being priced, either. “The way we work with our utility customers today is, there’s got to be a relatively robust return on investment,” he said. “But to me, the utilities that can figure out how to use even 10 percent of the data they’re generating to create benefits for their customers are going to be the ones that succeed.”
As for how GE’s data science approach can yield unexpected returns, Bryan Friedhan, a senior software engineer with Predix, described how GE is working with customer Exelon: “A lot of the analytics we’re doing with Exelon are after five or six discrete use cases. But as we’ve gone into the details of how we’ll deliver those, we’re seeing different patterns of things we didn’t anticipate to see, which has helped us define the approach -- maybe there’s a derivative vein of value.”
GE’s Predix was just one of the big data platforms being pitched by grid giants at DistribuTech. Siemens was busy promoting its own version, MindSphere, at this week’s conference. Meanwhile, grid rival ABB announced its own new platform, the Ability Ellipse, positioned as a unifying platform for its suite of workforce and asset management software.
Matt Zafuto, global business development head for ABB Enterprise Software, noted that customers using the platform, such as AEP and FirstEnergy, have already credited it with helping them identify and prevent the failure of several high-voltage transformers, yielding millions of dollars in savings in the first year of operation.
The AIs growing along the grid edge
It’s still not clear whether any utilities beyond the largest and most advanced are ready to start investing in these kinds of capabilities. The Department of Energy’s Modern Distribution Grid Advanced Technology Maturity Assessment (PDF) found that most of the more advanced analytics capabilities being promised by these types of platforms are still in the operational demonstration phases, with a relatively small number of early commercial-scale deployments.
Meanwhile, one doesn’t have to be a multinational corporation to talk about digital twins and artificial intelligence. The fundamental data science behind GE’s or Siemens’ new platforms is also available to smaller companies and startups, as is the computing capacity to make use of it, through cloud providers like Amazon Web Services and Microsoft Azure.
Many of the companies working on aggregating distributed energy resources (DERs) are also investing in machine learning and artificial intelligence. Larsh Johnson, chief technology officer at behind-the-meter battery startup Stem, noted that the company’s recent $80 million investment was led by Activate Capital, a growth equity firm with an interest in companies applying artificial intelligence to new industries and uses.
“The early team was working down this path of machine learning and data science investments back in the company as early as 2009,” he said. That makes sense, given that Stem’s business model of tapping batteries to reduce building demand charges was predicated on using a very expensive tool to capture very time- and condition-sensitive revenues.
“If you go back to when batteries were $1,000 per kilowatt-hour, you had to be surgically precise about how you were going to deploy...if you were going to save the customer [money] on that non-coincident peak,” said Johnson.
As batteries have come down in price, that razor-thin margin for error has expanded somewhat, “but you still don’t want to squander that battery. You’re looking for how [to] dispatch that energy in a way that best utilizes the capacity you have for the best economic benefit. We have a strong data science team; that’s their focus.”
For that reason, being able to forecast a building’s “characteristic behavior” is critical, Johnson said. That’s led to Stem developing building energy models that are in many respects similar to the digital twins that GE has designed for jet engines, locomotives and power grids.
“We like to think about how our artificial intelligence solution is enabling this kind of flexibility,” he said. And not just for energy storage -- “to change the way systems operate, re-prioritize the value stack, figure out the market opportunities.”
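To put rough numbers on the demand-charge math Johnson describes, here is a toy peak-shaving sketch: given a forecast daily load profile and a battery's power and energy limits, find how far the billed peak can be lowered. The load profile, battery size and tariff are all invented, and a real dispatch engine would optimize against full tariff structures rather than a single demand charge.

```python
# Hypothetical sketch of demand-charge peak shaving. Load forecast,
# battery size and tariff are invented for illustration.

def shave_peak(load_kw: list[float], battery_kwh: float,
               battery_kw: float) -> tuple[list[float], float]:
    """Discharge the battery against the highest-load intervals (hourly)."""
    target = max(load_kw)
    # Lower the target until the energy needed exceeds the battery.
    while target > 0:
        energy_needed = sum(min(l - target, battery_kw)
                            for l in load_kw if l > target)
        if energy_needed > battery_kwh:
            break
        target -= 1.0
    target += 1.0
    shaved = [min(l, max(target, l - battery_kw)) for l in load_kw]
    return shaved, max(shaved)

load = [120, 150, 310, 280, 330, 200, 140]        # hourly kW, one day
shaved, new_peak = shave_peak(load, battery_kwh=100, battery_kw=60)
demand_charge = 15.0                               # $/kW-month, invented
print(f"Peak {max(load)} -> {new_peak:.0f} kW; "
      f"monthly saving ~ ${(max(load) - new_peak) * demand_charge:.0f}")
```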
Enbala, another startup that’s balancing disparate distributed energy assets to benefit energy customers alongside utility and grid operators, had a chance to describe its own machine-learning efforts at this week’s DistribuTech. The Vancouver, Canada-based startup, which last year became ABB’s preferred vendor of distributed energy resource management software, updated its software with new bidding strategies built on more complete tariff structures, as well as energy storage cost and economic optimization algorithms.
To manage the optimizations made possible by this more advanced and complex data, Enbala has turned to its own version of a “digital twin,” said Michael Ratliff, executive vice president of products for the company.
The primary goal of Enbala’s application of the concept is to maintain a tighter relationship between the models and the real-world performance of different assets in its portfolios, he said. Every load or device will start to “drift” from its expected performance over time, and while some will do so in predictable ways, others with more variables present a “leaky model” that’s harder to maintain. With machine learning, “the model can just get better on its own.”
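A minimal, hypothetical sketch of what “the model can just get better on its own” can look like in code: track the error between a per-device model’s predictions and its measured output, and apply a correction when the drift over a recent window exceeds a threshold. The linear model, window size and threshold are invented.

```python
# Hypothetical sketch of drift detection for a per-device model.
# A real DERMS tracks many assets and richer features; the linear
# model and thresholds here are invented for illustration.
import numpy as np

class DeviceModel:
    """Predicts device output from a single feature (e.g., a setpoint)."""
    def __init__(self):
        self.coef, self.intercept = 1.0, 0.0
        self.residuals: list[float] = []

    def predict(self, x: float) -> float:
        return self.coef * x + self.intercept

    def observe(self, x: float, y_actual: float,
                window: int = 50, threshold: float = 5.0) -> None:
        self.residuals.append(y_actual - self.predict(x))
        self.residuals = self.residuals[-window:]
        # If the model has drifted (large mean absolute error), correct it.
        if (len(self.residuals) == window
                and np.mean(np.abs(self.residuals)) > threshold):
            self.intercept += float(np.mean(self.residuals))  # simple bias fix
            self.residuals.clear()

model = DeviceModel()
for _ in range(200):
    model.observe(100.0, 0.9 * 100.0 + 20.0)  # the real device has drifted
print(round(model.predict(100.0), 1))          # model has pulled itself back
```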
source: https://www.greentechmedia.com/articles/read/distributech-2018-big-data-artificial-intelligence-digital-twins#gs.V5ffmOE