The Li-MITLESS ENERGY Podcast: Distributed Intelligence on the Grid Edge

Mark Spieler, Global Head of Energy at NVIDIA, joins host Dr. Denis Phares for this episode of the Li-MITLESS ENERGY Podcast. Together, they dive into NVIDIA's history in the computing industry and how the company plans to bring its innovations to the energy industry. Spieler emphasizes the importance of leveraging technologies like GPUs and AI to balance grid demands, optimize energy production and storage, and ensure reliability and resilience. This episode provides a comprehensive overview of NVIDIA's journey into the energy sector, its technological innovations, and its vision for empowering energy transformation through advanced computing solutions.

NVIDIA’s Role in the Future of Renewable Energy and Grid Edge Technologies

Inside perspective of a power transmitter

Since 1993, NVIDIA has been at the forefront of technological innovation. The company's founders started it to create 3D graphics for gaming and multimedia applications. By 1999, NVIDIA had invented the world's first graphics processing unit (GPU), which revolutionized the computing industry. The next two decades brought parallel processing capabilities, modern artificial intelligence (AI), the first GPU capable of real-time ray tracing, and NVIDIA Omniverse. While these advancements have driven major leaps forward in the gaming and computing industries, NVIDIA is striving to apply the technology on a wider scale to help solve the energy crisis. Wider adoption of renewable energy and grid-edge technologies could lead to a more sustainable future.

In this episode of the Li-MITLESS ENERGY Podcast, host Dr. Denis Phares interviews Mark Spieler, Global Head of Energy at NVIDIA, during the IEEE Conference on Grid Edge Technologies in San Diego, California. The conversation explores NVIDIA’s evolution from a graphics card company to an accelerated computing platform and its role in energy and grid edge technologies. Spieler discusses how NVIDIA’s parallel processing capabilities have expanded beyond gaming to tackle complex energy challenges, such as seismic processing in oil and gas exploration, renewable energy optimization, and grid management.

Listen to the full episode or watch the recording on our YouTube channel, and be sure to keep up with NVIDIA's latest technology on LinkedIn, YouTube, Twitter, and Facebook.

Podcast Transcript

Denis Phares  0:15 

Welcome to The Li-MITLESS ENERGY Podcast. We're here in San Diego, California, for the IEEE Conference on Grid Edge Technologies. And I'm here with Mark Spieler, Global Head of Energy at NVIDIA. Welcome to the podcast. I love NVIDIA, it's a great company. And the Grid Edge Conference is a unique conference because now we're talking about, basically, distributed storage, the distributed smart grid. NVIDIA, obviously, is going to play a large part in that. Maybe we can talk about, first of all, NVIDIA as a company, and then how you think it's going to play into the grid edge.

Mark Spieler  0:55 

Sure. Thank you for having me, first of all. It’s great to be here. Really excited about being at this conference. My father and brother are both engineers. I’m not the engineer in the family, but I think I’m the first one presenting at an engineering conference.

Denis Phares  1:08

You sound like an engineer.

Mark Spieler  1:09

Right. I play one on TV. So yeah, I’ve been with NVIDIA for about three and a half years. My background has always been at the intersection of energy and technology, primarily in oil and gas. But when I came to NVIDIA, the goal was how do we grow our energy footprint? How do we grow the work that we’re doing across energy?

Denis Phares  1:31 

At what point did NVIDIA become interested in energy? Because you think of it as a GPU company, a graphics card company.

Mark Spieler  1:38 

So, NVIDIA is an accelerated computing platform company started in the early 90s. Our founder and CEO, Jensen, is still at the helm. And basically, we evolved from a company that made really good graphics cards into a company that has built software stacks to take advantage of the cores on a graphics card to do all kinds of math problems, not just graphics-related but also scientific computing and now AI. And so, when you think about a GPU, most people aren't familiar with it; they think about CPUs: Intel, AMD, and those folks. They're very serial in nature. And although they have multiple cores now, they can run a few things in parallel. We have over 10,000 cores, I think over 13,000 cores on our biggest GPU.

Denis Phares  2:31 

So, 10,000 parallel computations on a single GPU.

Mark Spieler  2:35 

That’s right. And so, when it comes to changing the colors of a pixel, or shading a pixel, that’s what allows you to get the real-time visual aspects that reflect real-life simulation.

Denis Phares  2:47 

So, the whole point was, actually, to change pixels in very rapid real time to emulate real life in a gaming situation. It really started with gaming, right?

Mark Spieler  3:03 

Absolutely. And when you think about a video game versus a video you watch on TV, the video is all pre-recorded; you know what the color of the next pixel is going to be. But when you're in a video game, you don't know if you're going to look up and see blue, or look down and see green, or what color, what shading, what reflection of light needs to be created. It's all math equations as you move through the video game, so it's got to be able to compute that in real time and tell the pixel what its next look is supposed to be.

Denis Phares  3:30 

That's interesting. If it's not following real physical equations, the human eye and brain can detect that it doesn't look real.

Mark Spieler  3:40 

Yeah, absolutely.

Denis Phares  3:41 

It knows what physics should look like. So, that’s fascinating. So, the point of having all of these parallel processors on a single unit is to be able to do very rapid physical computations, initially, to be able to trick the human brain into thinking this is real, or recognizing it as potentially real.

Mark Spieler  4:01 

Perhaps. I don't know what the thought process was back then, but it definitely does that, and basically, it solves these math problems. And then, in the early 2000s, we started figuring out, with support from the research community and the university community, that you can actually use these cores to do other math problems. Computational science.

Denis Phares  4:25

To do actual physics problems.

Mark Spieler  4:26

Physics problems, chemistry problems, math problems, all kinds of stuff. And so, we started scaling out. We wrote CUDA, which is our development platform. And one of the biggest industries that took off after that was the energy industry, with seismic processing. How do we take the seismic shots that are used to understand what's underneath the Earth so that you can explore for oil and gas, for hydrocarbons? There's a ton of algorithms, but it's highly parallelized, and then you can pull it all back together into a single image. And so, we were able to get tremendous speed-ups, 5, 10, 15x performance increases, and take what took months down to weeks. And we've continued to evolve that into a lot of different industries. And now, we've taken that technology and moved it out to the edge, as well as doing a lot of AI, because you can run a whole bunch of AI in parallel as you're ingesting data and training AI models. All of that stuff can be done in parallel: looking at images, looking at videos, looking at words.
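For readers curious what "highly parallelized" looks like in practice, here is a minimal, hypothetical sketch using Numba's CUDA bindings (an illustrative assumption, not NVIDIA's seismic software): every element of a large array gets its own GPU thread, the same pattern used to shade millions of pixels or filter millions of seismic samples at once.

```python
# A minimal sketch of GPU data parallelism, assuming an NVIDIA GPU
# and the Numba library are available. Illustrative only.
import math
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_offset(samples, gain, offset, out):
    # Each GPU thread handles exactly one array element.
    i = cuda.grid(1)
    if i < samples.size:
        out[i] = samples[i] * gain + offset

samples = np.random.rand(1_000_000).astype(np.float32)
out = np.empty_like(samples)

threads_per_block = 256
blocks = math.ceil(samples.size / threads_per_block)

# Launch one thread per sample; the million operations run side by side
# across the GPU's cores rather than one after another on a CPU.
scale_and_offset[blocks, threads_per_block](
    samples, np.float32(1.5), np.float32(0.1), out
)
```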

Denis Phares  5:38 

I have fond memories of CUDA. In my days as a professor, I had a project where we were using it on GPUs to solve fluid mechanics equations. We were looking at turbulence and the motion of particles in turbulent flows. Very, very complicated stuff. But you can get results very rapidly on these platforms compared to what we used to be able to do on a supercomputer.

Mark Spieler  6:07 

Correct. So, today, a majority of the top 500 supercomputers in the world use GPU technology. And it's just because of the scale, the capacity to compute on that many more cores, and the energy efficiency. In a world where we're trying to basically cap the energy that is used for computing and other things, just due to the capacity to develop it, move it, and all of that, you can reduce the energy needs by 10x by using GPU computing versus CPU computing, because you're able to do so much more in a smaller energy footprint. The individual GPU may use more power than the CPU, but if you need 30 CPU servers to do what you do with one GPU server, you're obviously saving a ton of electricity.
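The arithmetic behind that claim is simple; the numbers below are purely hypothetical placeholders for illustration, since real server power draw varies widely by hardware generation and workload.

```python
# Purely hypothetical figures to illustrate the consolidation arithmetic;
# actual power draw depends on the hardware and the workload.
cpu_server_watts = 400        # assumed draw of one CPU-only server
gpu_server_watts = 1_200      # assumed draw of one GPU-accelerated server
cpu_servers_replaced = 30     # the ratio mentioned in the conversation

cpu_fleet_watts = cpu_servers_replaced * cpu_server_watts  # 12,000 W
savings_factor = cpu_fleet_watts / gpu_server_watts        # 10.0

print(f"Roughly {savings_factor:.0f}x less power for the same throughput")
```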

Denis Phares  7:04 

Mm-hmm. Right. So, I guess you have to modify the code. Very different code runs on a GPU, even the algorithms, because everything has to be highly parallelized and everything is done at the same time.

Mark Spieler  7:15 

It’s changed a lot. It sounds like maybe you haven’t developed on CUDA for a while, but I tell you, a lot of the CUDA…

Denis Phares  7:21 

I haven’t.

Mark Spieler  7:21 

… development has all been lifted up. We've been investing in software, architecture, and infrastructure for many years. And yes, you need to be a very efficient computer scientist to develop in CUDA. What we find, though, is that we've created a bunch of abstraction layers that basically create APIs and other SDKs on top of that, so that most people now can develop and take advantage of the GPU infrastructure without having to go into CUDA development. And we've just released what's called NVIDIA AI Enterprise, which is a full stack of software tools based on both codes from NVIDIA and open-source codes that can basically leverage GPUs without significant changes in coding.

Denis Phares  8:08 

So, that pretty much explains why NVIDIA is a $650 billion market cap company. You've got the hardware, you've got the software on top of that, and you're developing both. And I think it makes sense, then, when you talk about energy and the electric grid, and how it's going to get more and more complicated and more and more smart, that this is the type of platform that is required to make it work.

Mark Spieler  8:35 

I think so. I think the best way to put it is that the complexity of the grid is increasing at an exponential rate, and that will only increase as laws and regulations and all of that change. Today, we're at, I don't know, 1% EVs. The goal, I think, in the US, we've heard recently is what? 50% EVs by 2030? That is a compelling event in this industry that we've never seen before. And, in order to accommodate that, we have two ways of solving it: run a bunch more lines, or redirect electrons in a more efficient manner. And I use Waze as an example. If you want to go somewhere, and there's traffic, you can either build wider roads or more roads, or you can redirect traffic, or tell people to come at a different time, or whatever. The same logic applies here: as people are trying to use more electricity, can we redirect those electrons from other people on the grid who might have spare capacity through solar, EVs, battery walls? And can we encourage people to use less given the constraints, and use, as I hear from regulators a lot, carrots and sticks to basically encourage that, but do it in a more real-time fashion so that you don't overburden the grid and cause failures? Right?

Denis Phares  10:06 

And, on top of that, if you're talking about electrifying 50%, 30%, whatever it is, of transportation, that all comes from the grid now. And, even if you are going to optimize the flow to make it more reliable, you're going to need more production. And we maintain, we prefer, that more of that production be renewable. So, not only is it now a more stressed grid, but you have more of your production being intermittent. And so, that means more batteries. And that means more batteries centralized, more batteries on the edge, more batteries in homes, in microgrids. So, all of that needs some high-level control.

Mark Spieler  10:50 

You're absolutely right. Batteries are going to play an instrumental part in accelerating renewables. Renewables, by design, are not reliable. You can't know that the wind is going to blow; you can't know that the sun is going to shine and that a cloud's not going to come overhead. And so, in order to maximize the reliability of renewables, you need an intermediary, which is batteries. And so, the question is, when's the right time to charge? When's the right time to discharge? How do you optimize the life of the battery while optimizing grid reliability and, in some cases, when something happens, the resiliency of the grid, by leveraging all the power in the batteries to bring the grid back up, or by redirecting power from somebody who has batteries to somebody who doesn't? And that's where the whole energy equity stance comes in. And regulators will figure out a way pretty quick to balance that for people who don't have the means today to acquire batteries.
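To make that charge/discharge question concrete, here is a deliberately simplified, hypothetical dispatch rule in Python: charge when local generation exceeds load, discharge when load exceeds generation, and respect the pack's power rating and a depth-of-discharge floor. Real grid-edge controllers also weigh forecasts, tariffs, and battery degradation, which this sketch leaves out.

```python
# A deliberately simplified, hypothetical charge/discharge rule.
# Real grid-edge dispatch also weighs forecasts, tariffs, and battery wear.
def dispatch_step(soc_kwh, capacity_kwh, generation_kw, load_kw,
                  max_rate_kw, dt_hours=1.0, min_soc_frac=0.2):
    surplus_kw = generation_kw - load_kw
    if surplus_kw > 0:
        # Charge from surplus renewables, limited by power rating and headroom.
        charge_kw = min(surplus_kw, max_rate_kw,
                        (capacity_kwh - soc_kwh) / dt_hours)
        return soc_kwh + charge_kw * dt_hours, 0.0
    # Discharge to cover the deficit, respecting a depth-of-discharge floor.
    floor_kwh = min_soc_frac * capacity_kwh
    discharge_kw = min(-surplus_kw, max_rate_kw,
                       max(soc_kwh - floor_kwh, 0.0) / dt_hours)
    return soc_kwh - discharge_kw * dt_hours, discharge_kw

# Example: a 10 kWh pack at 6 kWh, 3 kW of solar against a 5 kW household load.
new_soc_kwh, battery_output_kw = dispatch_step(6.0, 10.0, 3.0, 5.0, max_rate_kw=5.0)
```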

Denis Phares  11:59 

As things get more distributed, how important do you think it's going to be that we have a lot less actual transmission? So, it's going to be more efficient because we've got storage and we've got production at the edge. So there's not as much movement of energy, right?

Mark Spieler  12:16 

Yeah. So microgrids are, I guess, the panacea. If you can generate electricity and store generated electricity where it's being used, that's perfect. I don't know if we're going to be quite there in the short term. The places where there's a lot of wind and space for solar are not the places where a lot of electricity is being consumed. So, you need the ability to move that. Now, can you produce enough electricity at a house? Well, my house is tall and thin, so the roofline is not very big. So even if I put solar on it, with four kids, my wife, and me, we can only produce maybe 40 to 50% of the electricity that we need, right? So, even if I put solar panels on the roof, and battery storage, I still wouldn't be able to provide all of my electricity.

Denis Phares  13:08 

Sure. But if you have storage and production, you could probably, at least, use the storage to mitigate the peaks, mitigate the power. So, the rate of transmission can be minimized. And that ultimately makes it more efficient as well, even if it's not completely self-sustaining. But all of this is also software-driven; you determine where the power comes from, among so many sources.

Mark Spieler  13:34 

That’s right. No, you’re absolutely right. And, at some point, it will make financial sense for me to do this with solar.

Denis Phares  13:41 

Or the utility company to do it.

Mark Spieler  13:45 

Exactly. And I think, in the short term, what we will see is I'll have EVs, maybe battery walls versus generators, and stuff, but the solar, for me, the payback time is just too long. But I think as we get more and more rollouts, it'll become less expensive. And the payback time, even for the 50% or 60% that I can get, will make a lot of sense. But I think that is going to vary by location and everything else. As for the need for transmission, I think until there are enough batteries across the grid, and enough local generation, we're still going to need transmission. But I think you're right. I think, eventually, if a community can be self-sustaining without having to go outside, they would choose to.

Denis Phares  14:41 

I don’t think you’re going to eliminate the centralized production, the centralized storage, the big storage systems, but ultimately, what we want is just a rock-solid grid. Especially as we come into the electrification of transportation and other things, maybe even industrial processes can be electrified as well. That’s a lot of activity on the grid.

Mark Spieler  15:07 

It's a lot bigger than I think people can wrap their heads around today. Most of my career, I spent in oil and gas. And they're innovative in oil and gas, for sure, because innovation drives profitability, but they still move pretty slowly compared to other rapidly evolving industries like financial services, or health care, or telcos, or other things. Utilities are even slower. And I think they want to build a very robust grid, and they don't want to take a lot of risks in anything that would affect reliability or resilience. And I think, as they get pushed by the government for more renewables, and as more renewables and EVs and others just come online, we're going to see an exponential increase in complexity that only software and AI will be able to balance. Portland General presented this morning, Larry Bekkedahl, and they anticipate that 25% of their electricity will come from consumers. So, people with solar, people with battery walls, people with EVs, scaling their usage up and down in real-time, and all that stuff. How do you balance that? How do you go from a world where we had one-directional power plants, where, if you need more electricity because it's hot in the summer and more people run the AC, you just turn up the gas plant, or the coal plant, or the nuclear plant, to a world where you've got millions of different points of generation or savings that you need to balance, so that you don't overproduce fossil fuel-based electricity when you could have used renewables from three doors down? In real-time, you need to make those decisions because, to your point, by the time you turn up the gas plant and move that power across the transmission line, you might not need it. And then, if you don't have the batteries, or space in the batteries, you end up running that to ground. So, this bi-directional grid is going to become very complex as the amount of DERs continues to accelerate at a very rapid pace.

Denis Phares  17:34 

So, if you’re not an engineer, what are you?

Mark Spieler  17:41 

What am I? Well, my background is in business development and marketing. And I worked for Silicon Graphics after I left working for a university. I always thought I’d go back and teach. I love to…

Denis Phares  17:54

You worked for a university?

Mark Spieler  17:55

I did. I worked in the Minnesota State University system. I was originally from Minnesota; I graduated with a degree in marketing and got a master's in professional development leadership. I expected I would spend my career doing student development, and then realized that there's not a ton of money working in a university setting unless you're a professor, or a researcher, or in other areas. I had done an internship in college at Cray Research. Cray was bought by SGI, so I decided to leave the university and go work for SGI, which was eventually bought by HP and is part of HP now. A lot of the folks that I worked with at SGI ended up at NVIDIA. And I took a turn at SGI covering the oil and gas industry, so they moved me from Minnesota to Texas to cover oil and gas. And then, I went to work for an oilfield service company for 13 years. And I did…

Denis Phares  18:57

In Texas?

Mark Spieler 18:58

In Texas, in Houston.

Denis Phares  18:59

Which company was that?

Mark Spieler  19:00

Halliburton. So, I worked for Halliburton from 2006 through 2019. Great company; I learned a lot about many areas. I started in their software area doing commercial and strategic alliances, and then software development, eventually moved into customer finance, and eventually mergers and acquisitions. And it was fantastic. I traveled the world and met with a lot of oil companies, trying to understand what problems they were facing and how we could address them from a software perspective and a technology perspective. And then, I eventually went to NVIDIA in 2019 to run their global energy business. And I realized very quickly that oil and gas companies were becoming energy companies. They were just as interested in decarbonization and in understanding how they were going to generate hydrogen, geothermal, wind, and other types of power that were less carbon-intensive. But also, how do they do carbon capture and sequestration? How do they produce the oil and gas that will be needed in our lifetime, and for many more years, as long as we have plastics, and coatings, and pharmaceuticals, and anything that…?

Denis Phares  20:12 

That's right. That stuff doesn't go away, even if you're making electricity from renewables.

Mark Spieler  20:15 

It's not as emission-intensive. So, I've shifted from not just an exploration viewpoint, but to how we help oil and gas companies produce the hydrocarbons that are needed in as environmentally friendly a way as possible.

Denis Phares  20:32 

So, I can understand why NVIDIA hired you in energy, but when did NVIDIA become… Or have an energy division?

Mark Spieler  20:41 

Oh, 2008, 2007. They've used graphics cards for high-end visualization of the subsurface since the early 2000s. The same video game-type capabilities were also used for high-end CAD development and seismic interpretation. So, on the graphics side, we've been there for years. It was about 2006, 2008, when we started selling our first high-performance computing solutions into oil and gas. And then, it's evolved into AI and digital twins. Most energy companies around the world leverage our technology somewhere. We don't sell anything direct, so most people won't find us in their SAP environments. They're using us in Azure, they're using us in AWS, in Google, at the edge from different manufacturers like Utilidata, or they're buying us through HP, Dell, or Lenovo, depending on what their strategy is. The job of my team in the energy vertical is to identify use cases that can benefit from acceleration and to create the software and tool stack to help them be successful in solving those problems. Where they choose to execute those jobs doesn't really matter to us. If they go and execute in the cloud, great. If they need it at the edge, great. One of our partners is going to sell them something to execute it. Once again, the silicon is just the vehicle to deliver the outcome. And it all comes down to how fast you need that outcome and how much data you have. That will determine whether a 5-watt chip at the edge will work, or whether you need a 400-watt, 13,000-core GPU in the data center. And you scale accordingly to maximize price performance and energy efficiency, because really, all people want is outcomes. We buy graphics cards, we buy silicon, we buy that for the outcome it produces, not… Some people think NVIDIA is pretty cool.

Denis Phares  22:50 

Yeah, it's ubiquitous. You guys seem to be all over the place, even in areas we don't hear about. You mentioned Tesla as one of your biggest customers as well. I don't know if they're one of your biggest, but they're also involved in using that sort of computing.

Mark Spieler  23:07 

Yeah. Most large companies today leverage accelerated computing. And when it comes to accelerated computing, we've got the software platforms to help them be successful. And that's why our ISV ecosystem is very big, a lot of the big software companies… And my job is working with energy software companies. I've got a peer from manufacturing; he works with the manufacturing, CFD, and CAD ISVs, all of those ISVs. I've got another peer who works with financial services, so with financial software companies. Another peer works with health care, medical instruments, genomics, and all that. So, we go to market by industry, and we hire people from those industries to take our horizontal software stack and figure out how to apply our capabilities to solve specific problems in those industries, and then we build a business. Find the problem, solve the problem, build a business.

Denis Phares  24:04 

Awesome. Well, it all started from gaming.

Mark Spieler  24:08

It all started from gaming.

Denis Phares  24:09

Thank you so much, Mark, for being on the podcast.

Mark Spieler  24:11 

Thank you. It was a great discussion. I appreciate it.

Denis Phares  24:14 

Thanks for listening to The Li-MITLESS ENERGY Podcast. Be sure to subscribe on any of your favorite podcast platforms.

[End Of Recording]
