Eyes on Earth Episode 132 - Moving Forward with AI at EROS

Detailed Description

Eyes on Earth tackles artificial intelligence (AI) in a 2-part episode. AI is quickly becoming a necessary part of geospatial work at EROS, helping us efficiently do science to better manage our world. In Part 1, we talked about AI’s current and upcoming impact on our work at EROS and clarified some of the AI jargon. The successful use of AI to make NLCD an annual product was a key example.

In Part 2, we discuss another potential application of AI—keeping Landsat satellites safe and healthy in orbit. Additionally, guests comment on how readily staff are adapting to using this rapidly evolving technology. They discuss the biggest benefits and challenges we face in using AI. Among the benefits are making EROS data products more accurate and reliable and getting them to the public in a more timely fashion.

Details

Episode:
132
Length:
00:27:44

Sources/Usage

Public Domain.

Transcript

TOM ADAMSON:
Hello everyone, and welcome to another episode of Eyes on Earth, a podcast produced at the USGS EROS Center. Our podcast focuses on our ever-changing planet and on the people here at EROS and across the globe who use remote sensing to monitor and study the health of Earth. My name is Tom Adamson.

In part one of this episode of Eyes on Earth, we talked about what AI is and how it is useful for science applications at EROS. In part two, we talk more about the challenges and benefits of using AI at EROS. We're talking with EROS Director Pete Doucette and scientists at EROS who are using AI in their work: Terry Sohl, Rylie Fleckenstein, and Neal Pastick. I had also wondered if AI could be used in flying the Landsat satellites, so I asked Pete about that.

Well, besides the loads of data that AI can help us with, how can AI help us with even just something like flying Landsat satellites and keeping them safe up there?

PETE DOUCETTE:
The satellite instrumentation is monitoring the surface of the Earth. It's also monitoring itself. So Landsats 8 and 9, for example, have thousands of measurements being made of their subcomponents on the satellite itself, largely through things such as electrical currents and voltages and temperatures and things along those lines. Because, you know, in orbit, satellites are being barraged by high energy radiation, high energy particles, thermal fluctuations, going from being sunlit to in shadow behind the Earth throughout their orbits. And these are tremendous stresses on the components of the satellite.

ADAMSON:
I wasn't even thinking of those things. I was thinking of, like, we don't want it to run into anything up there.

DOUCETTE:
So that's collision avoidance. What I'm talking about is the trending and analysis for health and safety of the satellite itself. Job one of the flight operations team is to continue the operation of the satellite for as long as possible. That is job one. That means understanding the health of the satellite, for example, natural degradation over time because of these radiation effects. And we do have to compensate for that over time, because components will degrade. So to the extent that we can better understand that through the vital signs we're measuring, and it's across thousands of vital signs, if you will, we populate databases of this information over time. That's the kind of thing that you can use to train an AI algorithm to better understand these subtle patterns that may be indicating certain trends.
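As an editorial illustration (this is not EROS flight-operations software, and the channel name and thresholds are invented), the trending idea, thousands of telemetry "vital signs" logged over time and scanned for subtle drift, can be sketched as a simple rolling-baseline check on a synthetic voltage channel:

```python
import numpy as np

def detect_drift(series, baseline_n=100, z_thresh=5.0):
    """Flag sample indices whose deviation from a baseline (established
    on the first baseline_n samples) exceeds z_thresh standard deviations.
    A stand-in for the statistical trending a real system would do."""
    mu = series[:baseline_n].mean()
    sigma = series[:baseline_n].std()
    z = (series - mu) / sigma
    return np.flatnonzero(np.abs(z) > z_thresh)

# Synthetic telemetry: a stable 5 V bus reading that slowly degrades.
rng = np.random.default_rng(0)
volts = 5.0 + 0.01 * rng.standard_normal(500)
volts[300:] -= np.linspace(0.0, 0.2, 200)  # gradual radiation-driven drift

flags = detect_drift(volts)
```

A threshold rule like this catches only gross drift in one channel; the case Doucette makes for machine learning is the harder version of the problem, correlated, subtle patterns across thousands of channels at once.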

ADAMSON:
And we might not even be able to see those ourselves--

DOUCETTE:
Humans might not be able to tease those out.

ADAMSON:
Yeah.

DOUCETTE:
So at least that's the hypothesis going into this. Can we use AI to be more effective at teasing out these subtle trends, ones that humans might not be able to, that indicate we have degradation going on over here, so we need to compensate, right? Then we're more effective at maintaining the health of any of these satellites for as long as possible. Then there's the collision avoidance part of it. As more satellites get launched into orbit, there's more opportunity to have conflicts with other things in orbit, including debris. We can train algorithms on all the debris out there, on orbital rates. And maybe we're more effective at being able to maneuver around this debris or other vehicles. You don't necessarily turn over the joystick, so to speak, to the algorithm like a self-driving car.

ADAMSON:
Ah, yeah.

DOUCETTE:
But you inform the human to be more efficient and effective as to how humans command the satellite to avoid debris, for example. And maybe you're saving, you know, relatively small amounts of fuel, because every time we do a maneuver, we're using fuel to do that. So of course we want to minimize that. But over time that may really add up and we'll squeeze out more time, you know, at the latter stages of the vehicle's life.
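To make the collision-avoidance idea concrete, here is a hypothetical conjunction screen. It is a toy with none of the real orbital mechanics (no SGP4 propagation, covariance, or collision-probability estimates); it simply checks predicted miss distance between two time-aligned position tracks:

```python
import numpy as np

def screen_conjunctions(sat_xyz, debris_xyz, threshold_km=5.0):
    """Return time indices where the predicted miss distance between two
    time-aligned position tracks (N x 3 arrays, in km) drops below threshold."""
    miss = np.linalg.norm(sat_xyz - debris_xyz, axis=1)
    return np.flatnonzero(miss < threshold_km), miss

# Toy ephemerides: same circular ground track, debris oscillating out of plane.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
sat = np.column_stack([7000 * np.cos(t), 7000 * np.sin(t), np.zeros_like(t)])
deb = np.column_stack([7000 * np.cos(t), 7000 * np.sin(t), 50.0 * np.cos(t)])

close_idx, miss = screen_conjunctions(sat, deb)
```

In the "inform the human" framing Doucette describes, output like `close_idx` would be a prompt for an analyst to evaluate a fuel-costly maneuver, not a trigger for the algorithm to act on its own.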

ADAMSON:
So those are some really great uses of AI for our geospatial work and for keeping Landsat satellites safe and healthy in orbit. Next, I wanted to see what our guests thought of how this new technology is being received among EROS staff. I wondered if there was kind of a learning curve for staff.

TERRY SOHL:
It is definitely a learning curve, but scientists in general are inquisitive. Scientists are always looking for the next best thing. And so I think that's part of the game of being a scientist. You know, you're always looking for that next tool. AI in its various forms is that next tool. And so, you know, people recognize that it's a direction they have to go to become more efficient, to, you know, create a superior product. And so it's something that I don't see as an impediment to adopting, because I think it's just the nature of a scientist to pick it up and want to do the next best thing.

DOUCETTE:
Of course, there's always a learning curve with new technology. If you're a developer, a software engineer, for example, it's a fairly steep learning curve, but most of us are not software engineers, and we're just consumers of the technology. But there's still a learning curve with how to be a user of the technology. And so I would say that learning curve on the user end of things is far more gradual versus the steep learning curve of being a software engineer or developer. ChatGPT is a good example of a very easy-to-use, out-of-the-box tool, as simple as a Google-like prompt where you type in questions or queries about most anything you want. It's a general purpose tool, ChatGPT is, as are other chatbots. And so that makes it somewhat straightforward to learn how to use. Of course, it's not perfect, and much has been said about the hallucinations and the inaccuracies that come from it. But that's part of, again, going back to the way these neural network methods are trained, in a trial-and-error type way. Just as humans will make mistakes and are imperfect in how they remember things based on how they learned them, so will these methods. And I think we have kind of an expectation, from the AI methods of the past or anything coming out of a computer, that it will be nearly perfect. So the fact that these methods are now more, again, inspired by human learning, and imperfect, means we have to learn how to consume that information with that imperfection in mind. And I think that's hard for folks right now. So that's the learning curve that needs to occur on the consumer side. It's the same thing with consuming, say, a weather forecast. These are imperfect. But humans have kind of figured out a way to consume a weather forecast that they can use in a, you know, valuable way, knowing that it's going to be imperfect.

ADAMSON:
Knowing that it might actually rain when it said it might not rain.

DOUCETTE:
Exactly right.

ADAMSON:
We just have to be ready for it.

DOUCETTE:
We have to be ready. And when a weather forecast proves to be incorrect, we don't stop using weather forecasts.

ADAMSON:
Yeah.

DOUCETTE:
So we've learned to tolerate a certain amount of inaccuracy while still using some kind of useful information. It's the same idea with these chatbots.

ADAMSON:
Okay. Are you seeing any resistance from anyone to using AI?

DOUCETTE:
Sure. There's always resistance to new technology. And I think to a certain extent, there should be, or at least some healthy skepticism.

ADAMSON:
Okay.

DOUCETTE:
And that's just, you know, being a good scientist or engineer. Always, you know, asking why any technology behaves the way it does. So, of course, the media has been hyping both the negative and the positive aspects of AI. And these can get, you know, quite dramatic, such as AI kind of taking over from humanity. And that can sound scary. And even some experts, you know, believe that unless we're careful, we could lose control of the technology. I personally think that much of that is somewhat far-fetched. Not to say that it couldn't happen in some distant future, but that's the hype side of AI. Then there's kind of the loss of one's comfort zone. You know, the way I do my science, I've been trained a particular way. I'm comfortable with that way of doing my science. And this is some kind of newfangled thing. I don't understand it. I would have to take time to understand it. And I'm comfortable with the way I've been doing business. And so that's more of a comfort zone resistance. And that, too, I think is quite natural for humans to experience. I would encourage us to be more open-minded to understanding what it can and can't do. But that's true of, you know, technology in general. You're always going to have some level of resistance from those more set in their ways on previous technology. And as we've gone through technological or industrial revolutions, from steam power and electricity, automobiles, and the information age, we went through those growing pains of cultural resistance, you know, at every step of the way. So I don't see this as being anything new. I don't think it's bad, to put it in simple terms. But many have stated that what we're experiencing now with modern AI, with, you know, the neural network foundation and transformers, etc., represents the transition from the third industrial revolution, which was the information age, into the fourth industrial revolution. We won't really know until we're well into it. Many folks believe that we're in the early stages of the fourth industrial revolution with AI.

ADAMSON:
The good news is, Terry is seeing staff readily picking up on AI.

SOHL:
There really hasn't been a lot of resistance. And in fact, it's been kind of an organic picking up by staff of the skills that are needed to do AI. It's like any other tool. I mean, people always want to improve the products they have. And what I found is that project by project, staff by staff, they tend to be picking up those skills on their own without any prodding, because it's a superior process. It's a superior product in many cases. So one of the difficulties is that the skill sets needed to do AI, depending on the form, can be very different than traditional remote sensing techniques.

ADAMSON:
This is something that's changing really quickly.

SOHL:
Yeah, and there has been a little bit of that. But that resistance I think breaks down very quickly when you look at what's happening in the field.

ADAMSON:
Okay.

SOHL:
And so when you have a product where, you know, you used to be the only game in town, and now all of a sudden, because AI facilitates the use of Landsat data and other data, anyone can easily create a product. All of a sudden we have more people doing things similar to what we're doing. And to be able to keep up in that field, to continue to be the gold standard for geospatial data, which we always hope to be, you know, people recognize that. Then they have to pick up that tool and they have to run with it.

ADAMSON:
Neal's experience really adds to this question of resistance to the use of AI as well.

NEAL PASTICK:
I was a part of the USGS AI strategy team, whose goal was to develop a strategy for increasing adoption of AI within the USGS, as well as balancing the ethical considerations of using AI. So I think you certainly have to weigh the pros and the cons with the understanding that ethical dilemmas will arise. And I think, yeah, it's a balancing act. Some folks are "ride that rocketship up," and other folks are "let's just temper our fears." But I think we're going to see more of the individuals trying to ride the rocketship in the next couple of years, given the buzz around it. But not only the buzz, just the impact that we're seeing from it.

ADAMSON:
Yeah, so you're actually seeing a lot more eagerness to exploit that as a benefit?

PASTICK:
Yeah, absolutely. I leverage generative AI every day to enhance my productivity as it relates to coding. Without generative AI helping with coding, a lot of what I've done probably wouldn't exist in some way; it would just take a lot longer, I guess, to do. So you're seeing broad adoption for increasing business productivity in terms of coding and the like. So we're seeing a lot of folks grabbing on and pushing it forward. Rightfully so.

ADAMSON:
Pete had another good way of looking at potential resistance to using AI.

DOUCETTE:
If we think about the classic Hollywood murder mystery, where, you know, the cliché at the end of the movie is "the butler did it." And so now we're kind of seeing Hollywood extending that cliché, or advancing it, to "the AI did it," right?

ADAMSON:
Yeah.

DOUCETTE:
And so I think that has caused a lot of fear and consternation among folks that we may lose control of AI, that it may take over. But I think there are some Hollywood endings that have kind of the opposite portrayal of where AI could go, being much more benevolent to humankind going forward. So I think it's an interesting contrast, and I tend to see both clichés play out. But I think it is creating a lot of concern, especially when we hear experts expressing concern that they're not sure we wouldn't lose control of it. And now that AI is doing things that we wouldn't have projected, you know, five years ago, it has just ratcheted up that fear level, that loss of control may be just around the corner. It wasn't 50 years off. It could be, you know, a few years off. Again, I think that's far-fetched. But that's just my opinion.

ADAMSON:
Next, I asked all of our guests about what they thought are the biggest challenges coming up. Here's what they all had to say in response to that.

RYLIE FLECKENSTEIN:
There's a lot of challenges. You know, I think land cover itself, not even talking AI, I think land cover itself is an extremely, extremely challenging problem space.

ADAMSON:
It's complex.

FLECKENSTEIN:
Yeah. There's a lot of nuances. And it would have been more difficult had we not had NLCD legacy to start with. The high quality data they produced really made our lives a lot easier. Because, you know, if you start with that really high quality data, the models can learn a lot faster, a lot more efficiently. The optimization is a lot smoother. But the scalability of our project, you know, CONUS for 40 years, it's a big, big space. One thing I've learned, too: I don't come from a land cover or remote sensing background, really. I come from more of a STEM background, and technology, right, was kind of my main focus. And there's a definite difference between technological applications and scientific applications, at least that I've started to observe. The margin of error is much smaller. The pursuit of perfection and excellence, especially here at EROS, is, I think, the utmost concern and kind of objective for us, to really do the best we can. And so, yeah, there's a lot of little details that need to be gotten right and that are scrutinized, and that makes it challenging. Especially, you know, my experience with the deep learning algorithms, in trying to train them and working with them, is that they're not black and white. They're somewhat malleable. They're somewhat flexible. They're hard to get to do exactly what you want them to do, especially when the margin of error is so small and the pursuit of perfection is so high, right.

ADAMSON:
Well, yeah. It's impressive, the concern for making sure NLCD is accurate so that other researchers use it and are confident, you know, that what they're working with is accurate. So now you feel that.

FLECKENSTEIN:
Yeah. It's a humbling and honoring pressure to be a part of, though. I've tried to make it a point of personal importance to, you know, help perpetuate that level of quality and standard.

DOUCETTE:
I think a big challenge with what I just talked about, exploring the integration potential of transformers, AI transformers, is just the sheer magnitude of data that's going to be needed to demonstrate, that we can get a more complete picture in terms of reasoning and decision making from multimodal inputs. So that's been demonstrated, I think, fairly convincingly with text through the chatbots. Text is somewhat straightforward compared to image data. It's sequential. And so it's easier to learn from. And we have great amounts of text over the internet, which is, you know, the source of its training. Now we're going multi-dimensional from single, you know, dimensional textual data to two or more dimensions, you know, image data or, you know, including the third dimension, including maybe other dimensions of information that we combine with that image data, right, like socioeconomic data, as an example. Those are very different modes of information. I don't think we quite yet understand just how much data we'll have to train an algorithm with. So that's a big unknown and that's a big challenge. It's going to require probably more compute power than even what we've seen used for the chatbots. So it's kind of the next frontier. To me it kind of amps up the complexity of the data being used to train these algorithms. So that's the big challenge.
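As a rough, editorial sketch of what "multimodal" means mechanically (none of this is EROS code, and every name here is invented for illustration), a transformer fuses modes by projecting each one into a shared token space: an image chip becomes a sequence of patch tokens, and, say, a socioeconomic record becomes one more token in the same sequence:

```python
import numpy as np

rng = np.random.default_rng(42)

def patchify(image, patch=8):
    """Split an H x W x C image into flattened patch vectors (transformer 'tokens')."""
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    return patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

def embed(tokens, w):
    # Linear projection of each token into the shared model dimension
    return tokens @ w

d_model = 64
image = rng.random((32, 32, 6))   # e.g. a 6-band image chip
tabular = rng.random((1, 10))     # e.g. one socioeconomic attribute vector

img_tokens = embed(patchify(image), rng.random((8 * 8 * 6, d_model)))
tab_tokens = embed(tabular, rng.random((10, d_model)))

# A multimodal transformer attends over one sequence spanning both modes:
sequence = np.vstack([img_tokens, tab_tokens])  # 16 patch tokens + 1 tabular token
```

The data-volume concern Doucette raises follows directly from this picture: every extra dimension or mode multiplies the token count and the amount of training data needed to learn useful relationships across the combined sequence.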

SOHL:
You know, when I look towards, you know, trying to put together this more comprehensive foundation model approach that Neal Pastick is leading, there are challenges in terms of providing the staffing support that really can dive into some of the nitty gritty details of AI, particularly with the transformer based foundation models, which would be a new direction for us here at EROS. You know, the way I look at it is, as a scientist, you're always trying to improve the process, and personally and professionally, you're always trying to improve your skill set. And I think that becomes more and more important as AI takes some of the load off of us from an analysis perspective and puts us more into a management-of-the-process kind of approach. And so I think it's a little bit of a mindset change, too, for some of our researchers. Fortunately, in the building, again, it's happened kind of organically, and with a lot of help from the contract side, where a lot of the skills that are needed and the transfer of that knowledge have happened behind the scenes, with contractors that work on multiple projects, where that information is transferred from project to project. That's been really beneficial. So I do want to give a shout out to the contractors for providing a lot of help. It takes some initiative. It takes, you know, realization that this is a direction that we want to go. And by and large, the staff have stepped up big time. Again, I look at what happened with NLCD, a project where originally we were given three years to complete that task, and then we were tasked to do it in two instead. And to be honest, I didn't think we could do it. And the staff stepped up. They completely put together a new process, both from the engineering side and from the science side. And we put out a product that I'm very proud of.

PASTICK:
I think one of the biggest challenges that we face is getting around the need for vast amounts of compute for standing these up. However, some of those limitations often drive innovation. We have to get creative as to how we're using these technologies. Necessity also can be a driver of innovation. So one technical challenge is getting around the compute requirements and the like, but we're also drinking from a fire hose of information. I'd say about 70% of the job in developing these systems is really wrangling the data. So that can also be a cumbersome task: standing up pipelines that can turn these things into machine-ready, readable data. So that's another challenge.
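A hedged illustration of the "machine-ready data" wrangling Pastick describes (the nodata convention, scale factor, and function names here are invented for the example): raw integer reflectance chips typically get scaled, nodata-masked, and standardized per band before any model sees them:

```python
import numpy as np

NODATA = -9999  # hypothetical sentinel value for missing pixels

def to_model_ready(chip, scale=1e-4):
    """Convert a raw integer reflectance chip (bands x H x W) into a float
    array: apply the scale factor, mask nodata, standardize each band, and
    zero-fill the gaps so downstream code sees only finite values."""
    x = chip.astype(np.float32) * scale
    x[chip == NODATA] = np.nan
    mu = np.nanmean(x, axis=(1, 2), keepdims=True)
    sd = np.nanstd(x, axis=(1, 2), keepdims=True)
    x = (x - mu) / (sd + 1e-8)
    return np.nan_to_num(x, nan=0.0)

# Tiny 2-band example: one constant band with a nodata hole, one varying band.
chip = np.full((2, 4, 4), 2500, dtype=np.int32)
chip[0, 0, 0] = NODATA
chip[1] = np.arange(16, dtype=np.int32).reshape(4, 4) * 100 + 2000
out = to_model_ready(chip)
```

Steps like these are individually trivial; the "70% of the job" is applying them consistently, at scale, across every sensor and vintage in the archive.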

ADAMSON:
We sure are dealing with a lot of data if you're talking about Landsat and Sentinel-2 and all of that freely available data to try to build models off of. What's the best way that Landsat especially comes into play there?

PASTICK:
Without Landsat imagery, these models just wouldn't exist. So, I mean, we can get around it by using commercial data and the like, but Landsat is a pivotal component to this, at least for our geospatial foundation models. And with the advent of these new satellites going up in 2030, maybe 2031, LNext, we're going to be drinking from an even bigger firehose, right? We'll have more spectral bands, higher temporal resolution, higher spatial resolution, and the like. And so as we start moving in that direction of having more data that we have to comb through, I think it's even more important to try to develop these types of models to exploit the data for better managing our world.

ADAMSON:
After talking about challenges, we should also get everyone's take on the rewards and benefits of using AI.

SOHL:
Hopefully becoming more efficient, saving cost, and producing a better product. And, you know, I look at what's been done in the past for a project like NLCD. You know, it's been a gold standard, and it really is one of the most widely used products across the entire USGS. It's a product that traditionally has taken us some time. There's been a latency in terms of when we actually get a data product out, and typically that's been at least a year to a year and a half. With AI, you know, we've revamped the process to the point that we are moving toward a future with annual updates, which is completely new, which we haven't been able to do before. But then, from a latency perspective, too, we're going to be kicking out products at a much faster pace, where, you know, a 2024 product we hope to have out by April or May. That's a much faster turnaround, and I think we can even improve that. We have other work, too, that is using AI very extensively, and other work where moving toward a completely AI-based focus is a little bit more R&D. It's not quite operational. But we're moving in that direction, and I would say pretty much across the board with the projects in the branch, it's moving in that direction. So faster, better, cheaper. That's what we're aiming for.

FLECKENSTEIN:
For me personally, being able to work on a project of this scale and this kind of impact, you know, this has been my first large scale, actual real world production project to apply a lot of things I've learned in school. So it's extremely personally rewarding to me to, you know, get out there in the world and actually try to do something, right? Hopefully help move the needle forward in adopting some of these new advanced technologies into the processes that we have going on, which I think is useful and helpful, at least has been so far from what I can tell.

PASTICK:
Well, on a personal level, I get to experiment with AI every day. It fulfills my passion to use machines to really better understand the Earth. Plus, taking a step back, you really develop a sense of pride in having this model or code base stood up once you're done. And you can look back with pride knowing that perhaps this will have a very tangible impact on science and society as a whole. What we're doing is really developing a cost effective tool that could aid not only USGS EROS but all the mission areas within the USGS. And this could span multiple departments. So I see what we're doing here as a very necessary step for USGS as a whole. I'm excited to see what else we can squeeze out of it.

ADAMSON:
In a previous Eyes on Earth episode, Pete, you have said that a leader needs to be able to foresee the inevitable. Well, is AI inevitable?

DOUCETTE:
I think we're probably past the inevitability stage into reality, but it kind of depends on the application and who you ask. Just to contrast where modern AI is with, say, another modern technology such as cryptocurrency--bitcoin, right, is an example of something that has a lot of momentum behind it, has for the last several years. Many believe it is the future of currency. But I don't think it's inevitable because it's gone through some fits and starts. Whereas, AI has demonstrated its value, at least through the generative AI chatbots. I think that's been well demonstrated at this point. And it's not as though we're going to take steps backward from that. It's only going forward, you know, from here. These will become more accurate and reliable over time. So, yeah, I would say we've surpassed the inevitability stage into reality stage.

ADAMSON:
Thank you to all of our guests for talking with us about artificial intelligence, also known as AI. We saw major benefits in its use in the NLCD project in time and cost savings. There will also be other data products that will take advantage of AI. It really shows that at EROS, AI is now. And it's the future.

And thank you, listeners. Check out our social media accounts to watch for all future episodes. You can also subscribe to us on Apple Podcasts and YouTube.

VARIOUS VOICES:
This podcast, this podcast, this podcast, this podcast, this podcast is a product of the U.S. Geological Survey, Department of the Interior.
