Model Ensembles – Uncertainty’s Endgame

Hello and welcome back to “The Cognitive Whiteboard.” My name’s Luke, and today we’re going to talk about modeling ensembles, how they can help you understand the uncertainty of your assets, and how that impacts your business decisions. We’ve been talking about uncertainty in this industry for a very long time. The reason is that whenever we make one interpretation of an oil field, given that we don’t have an unlimited amount of data, any single interpretation carries a degree of imprecision and a degree of inaccuracy. So we recognize that when we make a single representation, it’s only ever going to be precisely wrong. For many years the industry has been exploring how to address that problem, and for some time we’ve been modeling variations in the input parameters, which lets you explore the precision of your interpretations.

What we’re going to talk about with ensembles, though, is how ensemble models can help you explore the accuracy. And as John Maynard Keynes pointed out quite a long time ago, accuracy is key to making a good-quality decision. When we’re in a precision mode, we’ve really got the underlying principles of the geology nailed down and we’re exploring how variations in the other, more subtle parameters might change the answer. Modeling ensembles offer you the opportunity to explore the uncertainty space far more broadly, to take your skill set as a geoscientist and use it to describe alternative scenarios. That moves us into the accuracy space, and it’s better for us to be there. We would love to be precisely accurate, but that’s simply not possible in a typical oil field with the data we have. But let’s talk about how big a modeling ensemble can become.

We now have the capability, with cloud-based systems, to use elastic processing, which basically means we can access an enormous number of simulations at once. How big could that get? Well, take a static example, before we even get to the dynamics, and look at how many different scenarios we could generate if each of these properties depends on its precursor: the uncertainty space multiplies very quickly. With, say, seven or so of these nodes, if we wanted to sample each of them at every decile, we end up with a number of models that is simply impossible to manage cost-effectively through full compositional flow simulation. Now, there are plenty of cloud providers that would love you to try, but you would probably spend the annual budget of the entire company getting all of those runs through the cloud.
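To put some rough numbers on that combinatorial explosion, here is a back-of-envelope sketch. The seven nodes, ten decile levels per node, and per-run cloud cost are all illustrative assumptions; the point is just how quickly a fully enumerated ensemble becomes unaffordable.

```python
# Back-of-envelope count of ensemble members when every uncertain
# property is sampled independently at a handful of discrete levels.
# The 7 nodes and 10 levels per node are illustrative numbers only.
n_nodes = 7            # e.g. structure, facies, porosity, permeability, contacts, ...
levels_per_node = 10   # one realisation per decile (P10, P20, ..., P100)

total_scenarios = levels_per_node ** n_nodes
print(f"{total_scenarios:,} scenarios")   # 10,000,000

# Even at an optimistic $10 of cloud compute per compositional run,
# brute force would cost on the order of:
cost_per_run_usd = 10
print(f"~${total_scenarios * cost_per_run_usd:,.0f} to simulate them all")
```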

So let’s be more cost-effective about it. Here at Cognitive Geology we have been working in that space for several years now, developing methodologies that allow you to explore the complete uncertainty space of the entire modeling ensemble without breaking your back, or your budget, on simulation. The way we do it is to take that entire modeling space, analyze all of the scenarios, and identify, before we do a lot of computation, that many of them are basically similar. That means we can recognize them as landing in the same area of the dart board and collapse them down. We’ve been developing methodologies that reduce the number of scenarios as they move through a toolkit of progressively increasing precision, so that by the time we get down to full compositional flow, we’ve identified the ones that actually matter, the ones that are going to change your business decisions.
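As a purely generic illustration of the idea of collapsing near-duplicate scenarios (not a description of Cognitive Geology’s actual methodology), here is a minimal sketch that clusters scenarios on cheap screening metrics and keeps one representative per cluster for the expensive full-physics runs. The scenario count, the metrics, and the use of k-means are all assumptions made for the example.

```python
# Minimal sketch: collapse an ensemble by clustering cheap summary metrics
# and keeping one representative scenario per cluster for full simulation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Pretend we have 10,000 static scenarios, each summarised by a few
# inexpensive screening metrics (e.g. pore volume, net-to-gross,
# a connectivity proxy) computed without running flow simulation.
n_scenarios = 10_000
summaries = rng.normal(size=(n_scenarios, 3))

# Collapse to a manageable number of representatives for full-physics runs.
n_representatives = 50
km = KMeans(n_clusters=n_representatives, n_init=10, random_state=0).fit(summaries)

# Choose the scenario closest to each cluster centre as its representative.
representatives = []
for c in range(n_representatives):
    members = np.where(km.labels_ == c)[0]
    centre = km.cluster_centers_[c]
    closest = members[np.argmin(np.linalg.norm(summaries[members] - centre, axis=1))]
    representatives.append(closest)

print(f"Reduced {n_scenarios} scenarios to {len(representatives)} for full simulation")
```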

What we’re able to do with this is help you minimize your costs for software as a service and cloud infrastructure as a service, while still retaining the full accuracy of your asset. We’re launching a research consortium to develop these methodologies further, and it will be launched in London at the EAGE in June. We’ll also be giving a couple of technical talks about how we’ve developed some of these methods so far. I’d love to see you there. If you’re a company trying to figure out how you’re going to manage your budgets with this elastic cloud computing, come and have a chat with us, because I think we can help you tremendously. Until then, if I don’t see you at the EAGE, I’ll see you back at “The Cognitive Whiteboard” next time.

James Hutton’s Deep Time

Hey guys, welcome back to the Cognitive Whiteboard. My name is Keegs, and today we’re going to look into the thought processes of James Hutton, the man who gave us the concept of “deep time” and really shaped our modern understanding of the geological sciences and of how the Earth was formed. James was born in 1726. In 1749 he obtained his medical degree from the University of Leiden in the Netherlands. In 1750 he moved back to the Scottish Borders, where he inherited and worked two farms that had previously belonged to his parents. It was here that he witnessed first-hand the processes of erosion and sediment deposition, and where he became interested in geology. Subsequently, in 1767, he moved back to Edinburgh, where he developed and published his own geological theories.

In the 18th century the political, social, and scientific landscape was still dominated by the theological views of the Church of England, a key player in what was then the Kingdom of Great Britain. Our understanding of the Earth and the processes that formed it was derived from literal interpretations of the Bible. In 1658, Archbishop James Ussher boldly stated in his “Annals of the World” that, according to his most precise calculations, the beginning of time must have fallen on the 23rd of October 4004 BC, at precisely 9 am. Some fairly precise stuff coming out of the Bible there. But he was a senior churchman of the day, and his date was subsequently incorporated into annotated editions of the English Bible; scarily enough, it later came to be read as if it were part of the scripture itself.

But times were changing. Edinburgh, and Scotland as a whole, were moving into a new era we now know as the Scottish Enlightenment. Edinburgh itself was known as a hotbed of genius and saw the rise of revolutionary ideas in the sciences and the humanities, with concepts such as empiricism and scepticism making their way into the limelight. Crucially, the scientific method of observation and deductive reasoning also gained a foothold, and theories of the Earth, how it developed, and the processes that formed it were being propounded at a rapid rate.

In these early days, while James was formulating his own theory, there was a significant attempt, in a dominantly Christian and Western world, to reconcile biblical narratives of creation with these new theories of the formation of the Earth. These were woven into theories such as Neptunism and Catastrophism, which asserted that the Earth, and the processes that formed it and its materials, were sudden and short-lived, corresponding to catastrophic global events such as worldwide flooding. Think back to Noah’s flood. But James wasn’t having any of it.

James was a master of observation and deductive reasoning, and his early observations on his own land had allowed him to see that the Earth was shaped by gradual processes of erosion, uplift, and deposition over timescales significantly longer than anything being posited at the time, and that these processes must somehow be linked to the Earth’s internal heat in a way that no one really understood at that point. But to prove his theories, he needed to go on a journey of adventure and discovery. This trip brought him to a number of localities, one of the first of which was Holyrood Park, just down the road from our office here.

What James observed there was the base of the Salisbury Crags sill intruded into Carboniferous sandstones, cementstones, and shales of the Ballagan Formation. He observed that the sandstones and cementstones were twisted and contorted, as if they had been broken and bent out of shape by the forceful intrusion of what he thought must have been some fluid-like phase. His observation of this contact supported his assertion that the dolerites were derived from magma generated by the Earth’s internal heat, in direct opposition to the commonly accepted theory of the time, Neptunism, which held that dolerites and basalts were in fact sediments deposited in a marine environment, something akin to a flooded Earth.

So whilst James’s observations were significant, he still needed irrefutable evidence to prove his theory of deep time. It was in 1788 that he arrived at Siccar Point and found the evidence he needed to demonstrate that the cycles and processes that formed the Earth corresponded to the principles he was describing in his theory.

What he observed there was a type locality of an angular unconformity, represented by the contact between the subvertical, alternating light and dark beds of the schistus he had described, erosionally overlain by deeply coloured red beds of sandstones and conglomerates.

He inferred from the nature of the contact between these two units that an enormous interval of time, and significant pressure and stress, must have been required to rotate the older beds into their subvertical orientation, erode them away, and then deposit over the top of them the deeply coloured sandstones and conglomerates of what we now know as the Devonian Old Red Sandstone.

This was the crucial piece of evidence that James needed to establish his concept of deep time.

He concluded at the outcrop: “we find no vestige of a beginning and no prospect of an end”.

His work was subsequently published, and was later popularized by the work of Charles Lyell under the banner of ‘uniformitarianism’: essentially, the idea that the processes that have shaped the Earth have operated more or less continuously since its inception and are still in operation today.

He famously stated that “the present is the key to the past”.

Thanks for joining us, ladies and gentlemen. We hope to see you again next time at the Cognitive Whiteboard. Cheers for now.

A Bayesian look at Star Trek’s ‘Redshirts’

Hello, welcome back to the Cognitive Whiteboard. My name’s Jim, and today we’re going to be taking a Bayesian look at Star Trek’s red shirts, and in particular the idea that the red-shirt-wearing characters in Star Trek are the perfect example of a disposable character, unlikely to make it to the end of the episode without dying. If you look at the numbers from the original Star Trek series, where William Shatner was Captain Kirk, that would certainly seem to be the case: the majority of the deaths are people wearing a red shirt. However, I thought it would be interesting to use Bayes’ theorem to actually test this hypothesis and figure out which of these three people should be the most worried. Should it be me, wearing the science and medical team blue; our CEO, Luke, wearing the leadership command gold; or Eileen, our Chief Operations Officer, wearing the operations and engineering red?

To do that we’re going to need our information here about who died, and also the total breakdown of the crew of the Starship Enterprise. If we take that information and Bayes’ theorem, which I’ll explain in a moment, we can then calculate how concerned each of these people should be. What we actually want is the probability that someone will die given that they are wearing a red shirt, and that’s what we can use Bayes’ theorem for: testing a hypothesis based on observations we have made. To do that, we need to combine several probabilities. The first is the likelihood, which tells us the probability that somebody was wearing a red shirt given that they died. That comes from our original data here: 26 red shirts out of the 45 who died.

Then we have the prior, which is the prior probability of dying in the first place: 45 deaths out of the total Enterprise crew of 429. And finally we have the marginal probability, the probability of wearing a red shirt regardless of whether we live or die during the series: 240 red shirts out of the total of 429. If you put those numbers in, you come out with a surprisingly low 11%. The reason is that, yes, the majority of the characters who die are wearing a red shirt, but the majority of the crew are wearing a red shirt, so they are not necessarily more likely to meet their end on the show.
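A quick sketch of that calculation, using the numbers quoted above:

```python
# Bayes' theorem with the numbers quoted above for the original series.
# P(die | red) = P(red | die) * P(die) / P(red)
deaths_total = 45
deaths_red = 26
crew_total = 429
crew_red = 240

p_red_given_die = deaths_red / deaths_total   # likelihood, 26/45
p_die = deaths_total / crew_total             # prior, 45/429
p_red = crew_red / crew_total                 # marginal, 240/429

p_die_given_red = p_red_given_die * p_die / p_red
print(f"P(die | red shirt) = {p_die_given_red:.1%}")   # ~10.8%, the "surprisingly low 11%"

# Note the whole thing collapses to deaths_red / crew_red = 26/240,
# which is why wearing the majority colour isn't as risky as it looks.
```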

If you calculate for the other colors, the blue shirts come out at about 7% and the command gold comes out at about 19%; the gold shirts are actually nearly twice as likely to meet their end as the red shirts. So Eileen has no real need to be any more concerned than me, but Luke maybe has something to worry about if he gets sent off to an unknown planet. However, we all know that the best example of a disposable character is someone who isn’t given a name. We don’t have room here for all the maths, but you can calculate this for somebody who doesn’t have a name and use it to update the prior in Bayes’ theorem. That’s something else you can do: as you make more observations, you can update the probabilities you calculate, updating this prior information here and the marginal as well.

If we do that and go through the process of updating the prior, then for a red-shirt-wearing character who does not have a name, the probability shoots up to 33%. So that was a nice example of the use of Bayes’ theorem, and also of how we can update the prior information as we go. Luke should be a little more concerned than before, but we know Eileen’s name, so she is absolutely fine. Hopefully we’ll come up with more examples like this as the Whiteboard series goes on, but until then, I hope that was interesting and I’ll see you back here in the future. Take care.

Virtual Metering & Time Travel

Hello. Welcome back to the Cognitive Whiteboard. My name is Jim and today we’re going to be talking about virtual metering and time travel. In other words, where we’re going, we don’t need well tests.

Now, to start off, I’m going to talk a little bit about the sorts of measurements we might have in our well. We might not have all of these, we might even have very few of them, but this is basically a way we can make use of whatever measurements we do have to tell us what’s happening in the well. So, for instance, I’ve got an ESP well here, so I might have pressure readings around the pump itself and a wellhead pressure reading, and I’m also going to have certain assumptions about the fluid passing through the well and about the reservoir itself: an estimate of water cut, GOR, oil density (API), and reservoir pressure. Those are not things we would typically measure in ongoing operations, but we have an understanding of what we expect them to be.

Based on these measurements, I can then calculate a rate through the well. The traditional method, and the most common, is a VLP/IPR intersection: from the fluid properties, my understanding of the reservoir pressure, and the wellhead pressure, I can calculate those two curves, and their intersection gives me my rate. That’s just one way of calculating it. If I have other measurements, I can introduce other methods. For instance, if I have the dP across the pump, I can use the pump calculation to determine what the rate should be. If I’ve got a choke, and I have measurements of the flowline pressure and the wellhead pressure, those two can be used to calculate a rate through the choke. All of these methods rely on sets of physics that partly overlap and are partly independent, and they all rest on the same assumptions about the fluid properties: what my water cut is, what the GOR is, and so forth.
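As a toy illustration of the VLP/IPR idea, here is a minimal sketch that finds the operating rate where a simple straight-line IPR crosses a simplified tubing-performance curve. The pressures, productivity index, and friction term are invented for the example; a real virtual meter would use proper multiphase correlations.

```python
# Toy VLP/IPR intersection: the operating point is the rate at which the
# pressure the reservoir can deliver equals the pressure the tubing needs.
from scipy.optimize import brentq

p_res = 3200.0        # assumed reservoir pressure, psia
pi = 4.0              # assumed productivity index, stb/d/psi
p_wh = 250.0          # measured wellhead pressure, psia
hydrostatic = 1800.0  # assumed static head of the fluid column, psi
friction = 2.0e-5     # assumed friction coefficient, psi/(stb/d)^2

def ipr_pwf(q):
    """Flowing bottomhole pressure the reservoir delivers at rate q (linear IPR)."""
    return p_res - q / pi

def vlp_pwf(q):
    """Flowing bottomhole pressure the tubing requires to lift rate q to surface."""
    return p_wh + hydrostatic + friction * q ** 2

# Solve for the crossing point between the two curves.
rate = brentq(lambda q: ipr_pwf(q) - vlp_pwf(q), 1.0, pi * p_res)
print(f"Estimated rate ~ {rate:,.0f} stb/d, Pwf ~ {ipr_pwf(rate):,.0f} psia")
```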

So if the methods disagree, I can use that to work out what might actually have changed. If they diverge over time, that tells us something, because if we have everything correct (reservoir pressure, water cut, everything), they will agree perfectly, and that agreement tells us we have the correct rate. When they start to diverge, they do so in different ways depending on what has changed. That allows us to fire up the oilfield equivalent of a flux capacitor and travel back in time to see what changed at that point. For instance, if the water cut goes up, the choke calculation will respond differently from the reservoir calculation, because the choke has very little interaction with the reservoir; those sets of physics are largely independent. So, depending on how the methods diverge, we can interpret what has happened. In other words, we can go back in time and effectively pretend that we did a well test: we can determine what the fluid properties are without needing to intervene directly in the well.

So what I’m saying is that virtual metering, which is basically what this technology is called, can help us understand what is happening in the well even when we have less data, even without the data point of a well test. We can determine what’s going on, and we can even use that as part of our future prediction: if the reservoir pressure is going down, that’s a trend that might continue, and the same with the water cut. Of course, we might see interactions between these different effects, and that will be reflected in the data that we have. If we don’t have all of it, we may be less certain, but we can still home in on a rough answer for what we expect to be happening. That will be an ongoing theme in these whiteboards: making better use of our data and seeing what we can get from it when it’s particularly sparse. I hope that’s been useful, and I hope to see you back at the Cognitive Whiteboard soon. Until then, take care.

Correlation vs Causation

Hello, and welcome back to the Cognitive Whiteboard. My name is Luke, and today we’re going to talk about correlation and causation, and really address some of the differences between them. I’m going to illustrate the point with a correlation between the number of songs from the 20th century that appear in Rolling Stone’s greatest-songs list and oil production from the lower 48 states of America.

If we put those two data sets together, we see an apparent relationship, and you could argue that the number of songs that came through in the ’60s predicted the oil boom that came a little bit later. Take another step forward, and the shale boom that’s on right now might tell us that the early 2000s are going to appear prominently in Rolling Stone’s Top 500 of the 21st century. Now, what I honestly think is an obnoxious song, “Gangnam Style”, would be right at the beginning of that. It was the top song of its year, so is that going to be on the list?

Obviously, this is not predictive at all. It’s complete rubbish, but it’s amazing how often we make correlations and assume causal relationships. A great example in the geosciences is porosity versus permeability. Porosity is dominated by the pore volumes; permeability is dominated by the pore throats. You can work through the argument yourself and you’ll find that porosity and permeability have a correlation, but not a causal link between the two. In a depositional system like a shoreface, porosity is heavily affected by sorting, so in the upper regions of the system you’re going to have essentially consistent porosity; if you logged through it, you wouldn’t see anything different on a neutron-density log. However, the grain size is going to change the permeability radically, so within this system of apparently static porosity you should see a significant variation in permeability.

Those kinds of differences in the correlation across those regions can really affect how you predict fluids are going to flow, so it’s important that we retest it. How could we do that? Well, we can recognize that the relationship between these two is a correlation, not a causal link, and try to find the causality that’s creating the association. In this case, porosity and permeability are both quite strongly linked to their depositional position and their burial history, and both degrade in similar ways, in similar places and directions.
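Here is a small synthetic sketch of that point: both properties are generated from a shared driver (burial depth), so they correlate nicely, but an assumed illite transition below 3,000 m hits only the permeability, and a poro-perm transform fitted on the shallow data quietly falls apart deeper down. All the numbers are invented for illustration.

```python
# Synthetic sketch: porosity and permeability correlate because both are
# driven by a shared control (burial depth here), not because one causes
# the other. Below an assumed illite-transition depth the pore throats get
# blocked, so permeability drops while porosity barely moves.
import numpy as np

rng = np.random.default_rng(1)
depth = rng.uniform(1500, 4000, 500)                                # m, the common driver
porosity = 0.32 - 6e-5 * depth + rng.normal(0, 0.01, depth.size)    # v/v
log_perm = 4.0 - 1.2e-3 * depth + rng.normal(0, 0.15, depth.size)   # log10(mD)
log_perm[depth > 3000] -= 1.5          # assumed illite effect on pore throats only

shallow = depth <= 3000
slope, intercept = np.polyfit(porosity[shallow], log_perm[shallow], 1)  # poro-perm "transform"
bias = np.mean(slope * porosity[~shallow] + intercept - log_perm[~shallow])
print(f"Shallow transform overpredicts deep log10(k) by ~{bias:.1f} (about {10**bias:.0f}x in mD)")
```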

The thing is, though, and we use the illite transition zone here to highlight it, that the variation doesn’t remain the same for both properties. If we take a reservoir that has a little bit of smectite in it at deposition and start burying it, once we get beyond the smectite zone that smectite is going to turn to illite. And illite, as we all know, is terrible for permeability because it blocks up the pore throats. So we suddenly see a rapid divergence in the relationship between porosity and permeability, and it happens because of the relationship with depth. It might not be important in your reservoir, I’m not saying it is; it depends entirely on the shape of your structure. But what we want to make sure we’re always doing as geoscientists is throwing a bit of scepticism on any correlation we can’t tie to a direct causal relationship, and retesting it as we go.

It’s one of the things that I think will remain, for a very long time, a core requirement for a professional helping to make these interpretations, and I think that’s good news, because I don’t think a machine is replacing us just yet. When you look at the way machine learning works, it’s essentially these kinds of correlations on steroids. We’re talking about many dimensions of analysis, but it can find correlations that aren’t necessarily predictive, because it could easily develop a chart very much like this one. Now, if you have enough data, the theory is that you eventually get beyond that, but geoscience isn’t necessarily in that space. So this is one of the points that lets me sleep comfortably at night and feel there’s still a need for geologists going forward.

I’m interested to hear what your thoughts are. This one will probably raise a few questions, but I’m happy to have that conversation as well. Let me know your comments below, and until next time, I’ll see you back at the Cognitive Whiteboard.

Red Pill or Blue Pill?: The Impact of Fluid Separation Processes

Hello, and welcome back to the Cognitive Whiteboard. My name is Jim Ross and today I’m going to be talking about the impact of fluid separation processes. And I have gone for a Matrix theme with today’s whiteboard because I’m going to be asking you to make a choice, the blue pill or the red pill. If you take the blue pill, you wake up at your desk and you believe whatever your production technologists or reservoir engineers want you to believe. You take the red pill, you stay at the Whiteboard and I show you how deep the rabbit hole goes.

Now that I’m done with quoting The Matrix, what I’m actually going to talk about today is something which I and a number of clients have learned the hard way across our careers, and that’s that not all barrels of oil are created equal.

To demonstrate that, let’s start with a fixed mass, and so a fixed volume, of reservoir-conditions oil. If I flash that to standard conditions, I will end up with a certain set of properties for the two fluids I get: a gas-oil ratio and the densities of the two phases. However, if I take the very same oil and put it through a different separation process, I’ve added an extra stage here, I will end up with a different set of properties: a different gas-oil ratio and different densities for the two phases. That seems reasonable enough: if I start with the same thing but put it through a different process, I end up with a different result. But it’s not the only thing that can affect it. If we don’t have tight temperature control, the result can be affected by the time of day and how the temperature changes, or, on a longer timescale, by seasonal effects, summer versus winter.

Now, being a chemical engineer by training, I used to think about things in terms of mass balance. But when I joined the oil industry, I found that’s not how we operate: we think in terms of volumes, and in particular standard-conditions volumes. That’s what we report, that’s what we use for modelling, and fiscal allocation tends to be done on the basis of standard-conditions oil rates. But what we’ve seen here is that, depending on the process I follow, I will end up with different properties and therefore, in this case, a different rate off the back of it.

So the rate that we often hold to be gospel is anything but: the rate through path one and the rate through path two are not the same. Now, it’s quite common to have different paths, we might have a well-test path and a field-process path, and the percentage difference might be quite small. But when we’re dealing with things on the scale of a reservoir, that can have quite a profound knock-on impact: on the fluid properties we’ve already spoken about; on the fluid saturations and how they respond to production and pressure changes; on well performance and multiphase flow; and on field development planning and well design. Are we designing for the correct path to surface? Are we using the correct rates?

History matching itself can be really complicated by this. If the separation process changes over the life of the field, how is that accounted for in the historical rates we are now trying to match? Basically, it leads to a very Matrix-like awakening, where we suddenly find ourselves questioning everything we thought was real, every piece of production data we have come across. Not all hope is lost, however. It is easy enough to convert between two paths by calculating the shrinkage factor through each and correcting by the ratio. The most robust way to do that is with an equation of state; it’s also the most pernickety, and I could do a whole series of videos on that process alone. So I’m not proposing we go through exactly how you do that; the point is to question where those numbers have come from, and it’s okay to question them once we’re armed with this knowledge. How were those rates measured? Were they measured at all? Were they derived using allocation factors, and if so, how were those calculated? Has this correction already been attempted, and if so, what was the physical basis for it?
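As a minimal sketch of that correction (with made-up shrinkage values standing in for what you would get from separator tests or a tuned equation of state), converting a rate from one path to the other is just a ratio of shrinkage factors:

```python
# Minimal sketch of correcting a standard-conditions oil rate from one
# separation path to another via the ratio of shrinkage factors.
def convert_rate(q_std_path1, shrinkage_path1, shrinkage_path2):
    """Rate the SAME reservoir withdrawal would report through path 2.

    shrinkage = stock-tank volume / reservoir volume (i.e. 1/Bo) for that path.
    """
    q_reservoir = q_std_path1 / shrinkage_path1      # back out the reservoir-conditions rate
    return q_reservoir * shrinkage_path2             # re-flash it through the other path

q_welltest = 5000.0        # stb/d reported through the well-test separator (path 1)
s_welltest = 1 / 1.35      # assumed shrinkage through the test separator
s_process = 1 / 1.31       # assumed shrinkage through the multi-stage field process

q_process = convert_rate(q_welltest, s_welltest, s_process)
print(f"{q_welltest:.0f} stb/d (test path) corresponds to ~{q_process:.0f} stb/d (process path)")
# A ~3% difference per barrel, which compounds into a material error at field scale.
```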

So, basically, once we have this knowledge, we can take on any modelling challenge. We can use that sceptical attitude about where the numbers have come from to help us narrow down any difficulties we have with our modelling. Once we do that, we can take anything on and make full use of our awakening to these possibilities.

I hope that’s been helpful. I look forward to seeing you at the Cognitive Whiteboard again in the future. Until then, take care.

Reservoir Dogs: Sequential vs. simultaneous modelling solutions

Hello, and welcome back to the Cognitive Whiteboard. My name is Jim Ross, and today we’re going to be talking about sequential versus simultaneous modelling solutions. In particular, how to avoid an ugly, confrontational standoff between the two – not unlike this iconic scene from Quentin Tarantino’s “Reservoir Dogs”.

When we’re modelling things in oil and gas, we have a lot of variables in play at any one time, and broadly speaking we have two ways to address this. We can use a multi-variable regression, which tries to tweak all the variables simultaneously to minimise our error against the observed data, or we can use a set of sequential functions of each variable that reduces the error as we go along. So which one should we use? I’m going to say both.

Mathematically speaking, simultaneous solving is the preferable solution: if we give it the same starting points and the same search algorithm, we will end up with the same answer. However, we know that physics, and geology in particular, rarely works like that. Rarely is everything happening at once, and quite often different behaviours imprint on each other across the different facets of our modelling. To illustrate this, I’m going to use a simple mass-balance example from earlier in my career: a simple tank, mass in, mass out, and how the pressures and saturations respond to that coming and going of fluid.

If we look at the reservoir pressure history, we see a gentle enough decline to start, then a sudden spike, then it levels off a little before completely dropping off a cliff as the production starts to peter out. If we try to use a multi-variable, simultaneous solution on that, we’re not going to get a very good history match with this simple model, because we’re trying to match fundamentally different periods of behaviour all at the same time. The answer is not to lump more complexity into the model, but to take a step back and look at what is happening across the history of this actual field, with its sequential behaviour. In the first period, all we really have is fluid expansion, which depends on the stock-tank oil in place, and the aquifer drive: what’s the strength of the aquifer in this field? In the second period, if we look at the production history, we can see that is the point at which water injection starts, so naturally that’s going to have an impact. But if we can nail down the values of the variables in period one, we can carry them forward into the second part. Once we’ve done that, and perhaps tuned the transient aquifer response, we can then look at the third period, where it was thought there was some sort of fracture event, with fluid, and thus pressure, being lost to another tank.

So what we can do is take each solution and move it forward to inform the next one. We take the stock-tank oil in place and the aquifer strength determined here and carry them forward, where we can then tune the transient response of that aquifer. We then take all of that forward and look purely at characterising the transmissibility of that fracture event, typically as a transmissibility factor. In other words, we’ve simplified the problem by applying a simultaneous solution within each period of time, but considering the periods in turn and applying some sequential thought. If we can successfully combine these approaches, the final result is more robust: we can not only recreate our past behaviour but also be confident that the future behaviour will be predicted well by our model. The alternative is one of those self-defeating standoffs that were so common in “Reservoir Dogs”, where not only can we not reliably recreate past behaviour, we have no hope of predicting future behaviour.
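To make the workflow concrete, here is a toy sketch of the sequential idea using an invented tank-like pressure model and synthetic data: fit only the parameters that are active in period one, freeze them, then fit the next period, and so on. The model form and parameter names are placeholders, not real material-balance physics.

```python
# Toy sketch of the sequential workflow: fit only the parameters active in
# each period, freeze them, and carry them into the next period.
import numpy as np
from scipy.optimize import least_squares

t = np.arange(0.0, 30.0, 0.5)                       # years
periods = [t < 10, (t >= 10) & (t < 20), t >= 20]   # the three behaviour periods

def pressure(t, tank_tau, aquifer, inj_support, leak):
    """Toy tank pressure: depletion + aquifer drive, then injection support, then a leak."""
    p = 5000 - 800 * (1 - np.exp(-t / tank_tau)) + aquifer * np.sqrt(t)
    p += np.where(t >= 10, inj_support * (t - 10), 0.0)    # waterflood support from year 10
    p -= np.where(t >= 20, leak * (t - 20) ** 2, 0.0)      # loss to another tank from year 20
    return p

truth = dict(tank_tau=6.0, aquifer=40.0, inj_support=15.0, leak=2.5)
obs = pressure(t, **truth) + np.random.default_rng(0).normal(0, 10, t.size)

# Period 1: only the depletion term (a stand-in for STOIIP) and aquifer strength are active.
f1 = least_squares(lambda x: pressure(t[periods[0]], x[0], x[1], 0, 0) - obs[periods[0]],
                   x0=[3.0, 10.0], bounds=(1e-3, np.inf))
tank_tau, aquifer = f1.x

# Period 2: carry those forward unchanged, tune the injection support.
f2 = least_squares(lambda x: pressure(t[periods[1]], tank_tau, aquifer, x[0], 0) - obs[periods[1]],
                   x0=[1.0], bounds=(0, np.inf))
inj_support = f2.x[0]

# Period 3: carry everything forward, characterise the leak (a transmissibility analogue).
f3 = least_squares(lambda x: pressure(t[periods[2]], tank_tau, aquifer, inj_support, x[0]) - obs[periods[2]],
                   x0=[0.5], bounds=(0, np.inf))

print("fitted:", dict(tank_tau=round(tank_tau, 1), aquifer=round(aquifer, 1),
                      inj_support=round(inj_support, 1), leak=round(f3.x[0], 1)))
```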

I hope that’s been helpful. I look forward to seeing you at the Cognitive Whiteboard again in the future, and I hope that’s soon. I’ll see you then.

Creaming Curves and Exploration: Use through the Basin Lifecycle

Hello and welcome back to the Cognitive Whiteboard. My name’s Kirsty, and today I want to talk to you about creaming curves and how we in the oil industry use them for exploration. First off, what is a creaming curve? A creaming curve plots the cumulative discovered volumes in a basin against the number of new-field wildcat wells. We plot against well count rather than time to minimize the effect of oil price fluctuations on our interpretation.

When we plot these curves for basins around the world, we start to see common factors and common patterns. The curves follow these hyperbolic-style functions you see here: large discoveries early on, and then, as the play is drilled out, a tail forming.
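As a rough sketch of what fitting one of those segments might look like (with synthetic data and an assumed saturating-exponential form standing in for the hyperbolic-style curve), you can estimate an ultimate play volume and an implied yet-to-find:

```python
# Minimal sketch of fitting a creaming-curve segment: cumulative discovered
# volume against new-field wildcat count, with an assumed form
# V(n) = V_ult * (1 - exp(-n / tau)). The data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def creaming(n, v_ult, tau):
    return v_ult * (1.0 - np.exp(-n / tau))

wildcats = np.arange(1, 121)
observed = creaming(wildcats, 12_000, 35) + np.random.default_rng(3).normal(0, 150, wildcats.size)

(v_ult, tau), _ = curve_fit(creaming, wildcats, observed, p0=[5_000, 20])
remaining = v_ult - observed[-1]
print(f"Estimated ultimate play volume ~ {v_ult:,.0f} MMbbl; "
      f"~{remaining:,.0f} MMbbl of yet-to-find implied on this segment")
# A new inflection (new play, new technology, new acreage) breaks this fit,
# which is exactly why the plays should be separated out before extrapolating.
```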


We also see inflection points where new large discoveries were made in the basin. In the case of the Norwegian North Sea, which I’ve got plotted up here, these tend to be geological, but they can reflect other factors as well. For example, as we all know, technology in the oil industry has moved on enormously over the last 30 or 40 years: we’ve gone from drilling onshore and in shallow marine settings to deep water, and in the case of onshore U.S., we’ve opened up really interesting new plays by being able to access unconventional oil and gas.

Some of these upticks and fluctuations can also be down to economics, for example once infrastructure is in place in a basin, plays which weren’t previously accessible become economic, or to political factors opening up new basins in new countries.

So, how can we use these creaming curves as explorationists to make money for our companies? Well, we do have to be a bit careful. Here I’ve plotted the creaming curve for the deep-water Gulf of Mexico, and if I look at it as one gross curve like this, I can see inflection points, draw some hyperbolic functions, and start to make inferences from those.


For example, I might look and say, “Well, this is great, this basin’s still growing, there’s more exploration to be done here.” But looking at it, I don’t think my most recent play is adding very much volume any more; from my interpretation, it looks like the older plays are the ones adding the volume. Can I really make that assumption from this dataset? No, I can’t. As an explorationist, I need to go into the data in more detail and find out which discoveries are adding these volumes and which plays they come from. If I separate out the plays, that’s where I can add real value to my company.

Looking forward, where in the world am I looking to see really interesting exploration happening? Well, I mentioned political factors opening things up. The Mexican side of the Gulf of Mexico has opened up recently and some big experienced players from the U.S. side of the Gulf of Mexico are in there exploring. So, I’m really expecting to see some big new discoveries and a really interesting uptick in the creaming curve for that basin.

I’m also really excited to watch the basins that are in the early part of exploration, where the first big discoveries have been made recently, and to see how they develop: for instance Guyana and Suriname, where Kosmos are drilling this year, and also over in West Africa. Other places I’m interested in are the mature basins. There’s going to be plenty of interesting exploration still to come in the North Sea, and also in Atlantic Brazil, where Petrobras are looking to open a new play in the really mature Espírito Santo Basin.

So, those are the places I’m looking. I’d be really interested to hear what you think about it and how you use creaming curves for exploration, and also where you’re excited to see the next wells be drilled. Thanks for joining me at the Cognitive Whiteboard. I look forward to seeing you here again soon.

2001: A Geomodelling Odyssey

Hello, and welcome back to the Cognitive Whiteboard. My name is Dan O’Meara, and I’m here to talk to you about a way to evaluate your 3D saturation models; along the way, the plot I show you should stimulate a lot of discussion with your interdisciplinary asset teams. I’ve been in the industry for quite some time, and I was there at the dawn of geomodeling. When we started to put geomodels together, we had some simple expectations. One was that our models would honor our observations, and the main observations were the well data, the well logs. Another was that our models should honor the physics. When it comes to porosity there is no governing physics, so people tend to use just geostatistics. But when it comes to saturation, we know there is physics to be honored, the physics of capillarity, and we have plenty of laboratory data in the form of capillary pressure curves. So the physics becomes very important for saturations, and the physics tells us that as you go up in a reservoir it’s not simply a case of the oil saturation increasing uniformly: the oil saturation depends on the heterogeneity in both the porosity and the permeability, because they’re all connected.

If you put together models that are driven purely by the physics, what we’re going to do here is evaluate them, and I put together this kind of plot to do that. Here’s how the plot works. We’re looking at 3D models, and we only consider cells penetrated by wells, because the wells are where we have observations. Along the x-axis we plot the properly pore-volume-weighted water saturation from the log, and on the y-axis we plot the water saturation sitting out there in the 3D model, and we simply see how the two compare. You would expect a 45-degree line, at least I did, with the data generally falling along it, because after all, if you honor the physics, you should be honoring the observations. But what I found was kind of strange, and one thing that jumped out right away is the data up here along the top. It says the water saturation coming out of the model is 100%, yet the log contradicts it, saying we have as much as 80% or 90% oil. So what could possibly be going on? I turned back to the geological models and realized, hey, there are faults in those models, and with faults comes the possibility of compartmentalization. In this model here, for instance, we’ve got four free-water levels, and you start to realize that this kind of data is a signature of compartmentalization.
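For anyone who wants to build this plot themselves, here is a minimal sketch of it, with placeholder arrays standing in for the per-cell log and model saturations you would extract from your own project:

```python
# Sketch of the evaluation cross-plot: for the cells penetrated by wells,
# compare the pore-volume-weighted log water saturation (x) against the water
# saturation sitting in the 3D model at the same cells (y).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n_cells = 400
sw_log = rng.uniform(0.1, 1.0, n_cells)                       # PV-weighted log Sw per well-cell (placeholder)
sw_model = np.clip(sw_log + rng.normal(0, 0.12, n_cells), 0, 1)
sw_model[rng.random(n_cells) < 0.05] = 1.0                    # mimic the "100% water in the model" band

fig, ax = plt.subplots()
ax.scatter(sw_log, sw_model, s=8, alpha=0.5)
ax.plot([0, 1], [0, 1], "k--", label="model honours the wells")
ax.set_xlabel("Sw from well logs (PV-weighted)")
ax.set_ylabel("Sw in 3D model at the same cells")
ax.legend()

# Points along y = 1 with low log Sw are the compartmentalization signature;
# above the line the model overestimates water (underestimates reserves),
# below the line the reverse.
flagged = (sw_model > 0.99) & (sw_log < 0.6)
print(f"{flagged.sum()} well-cells where the model is 100% water but the logs show oil")
plt.show()
```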

If you look at this plot some more, you’ll see that I was usually not getting nice straight lines even when I discounted for that. If you’re in this area here, the water saturation in 3D is higher than the water saturation in the wells, so you’re underestimating reserves, and conversely you’re overestimating reserves in this part of the plot. Going around the industry over the last 20 years or so, I would see people struggling: they had computer programs that basically said, “Hey, I can do the physics,” but when it came to the observations, when we put together these kinds of plots, we were repeatedly seeing things like this, with lots of scatter here and what seemed to be evidence of multiple compartments there. So what I did was come up with a methodology that honors both the physics, which is what people were already doing, and the observations. Every time we put together a 3D saturation model, we get plots like this, which is what you would expect from porosity. You should honor the observations and you should honor the physics, because after all, that’s what Mother Nature does. If you want to learn more about how to get there, Luke will provide links to our website below. That’s all from the Cognitive Whiteboard today, see you again next time.

The Bigger Picture: Uncertainty from Subsurface to Separator


Hello and welcome back to the Cognitive Whiteboard. My name is Jim Ross. I’m the new product owner of Hutton here at Cognitive Geology, and today I’m going to be talking to you about “The Bigger Picture: Uncertainty from Subsurface to Separator.” My background is chemical engineering, and most recently I’ve been working as a petroleum engineer in the field of integrated production modeling. That means taking models of all the different parts of our oil and gas system and constructing a single model that represents the behavior of the whole system: how the parts interact and affect each other once we start producing from the asset.

What you notice when you do this is that everybody in the oil industry tends to work in their own little silo. You might have production engineers responsible for the operation and design of the wells; a drilling team who decide where and when we’re actually going to drill those wells; a facilities team who look at things such as the process plant or refinery required to support that production, and anything else required to make it happen; and then the reservoir model, which of course looks at how the fluids are going to move through the porous medium of the reservoir rock. However, they’re all working towards a common goal, which is usually something like: how much oil am I going to produce, and when am I going to produce it?

In creating these models, I would often get asked, “How do I know this is correct? How do I know that what we’ve predicted is what will actually happen?” And the short answer is: we don’t. We don’t know that it’s entirely correct, because of all the various uncertainties in the process of building these models. For instance, do I have a reliable lab report? Do I know what my fluid density, API gravity, and gas gravity are going to be? Do I know the ratio of gas to oil, and even how much water I’m going to produce over time? Do I know what the drainage region of my well is going to be, how much of it I’m actually going to contact during production, and what the pressure decline will be? Do I even know how many wells I’m going to have, what type they’ll be, and how the system will behave if we’re not operating at the design capacity of the facilities?

What that leads to is some very anxious engineers, which is why I have sympathy with this guy here. So something we’ve tried to move towards is looking less at precisely what these inputs are, and more at what impact they have, by looking at the different possibilities on the production side. For instance, a production engineer might sensitize on tubing size to see how much the well will produce and what effect that has on production. If we’re looking at a perforation, we might look at the perforation efficiency, how well we make those perforations, and what impact that ultimately has on any wells we drill. If we undertake a stimulation job, we might look at what the stimulated PI would be afterwards.
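As a toy illustration of that sensitivity mindset (using an invented linear-PI inflow model and made-up ranges, not any particular tool’s workflow), you can sweep the possibilities and look at the spread of outcomes rather than a single “precise” number:

```python
# Toy sensitivity sweep: instead of picking one "precise" input, run the
# well calculation across a range of possibilities and look at the spread.
import itertools

p_res, p_wf = 4200.0, 2600.0          # assumed reservoir and flowing pressures, psia
base_pi = 3.0                          # assumed base productivity index, stb/d/psi

perf_efficiency = [0.6, 0.8, 1.0]      # how well the perforations perform
stim_multiplier = [1.0, 1.5, 2.0]      # post-stimulation PI uplift

rates = []
for eff, stim in itertools.product(perf_efficiency, stim_multiplier):
    pi = base_pi * eff * stim
    rates.append(pi * (p_res - p_wf))  # simple linear inflow: q = PI * drawdown

print(f"Rate ranges from {min(rates):,.0f} to {max(rates):,.0f} stb/d "
      f"across {len(rates)} input combinations")
```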

There’s a range of possibilities for what we might end up with there. What we’re trying to build here at Cognitive Geology is something that takes into account the geological possibilities: what are the different possibilities for filling in my rock properties across the entire grid, and what impact does that have on the process? That allows us to move away from something which is precise but in which we don’t necessarily have a lot of confidence, to something that is approximately accurate and tells us what the impact of our decisions and of our unknowns would be, so we can have greater confidence in what we’ve predicted going forward. That’s all I want to talk to you about today, but I look forward to seeing you at the Cognitive Whiteboard again in the future, and I’ll see you then.