Why The Defaults Are Dangerous

This week, Luke takes a look at the dangers of using defaults in geological property modelling, and why the obvious answer isn't always the best one.

For reference, here's a still of this edition's whiteboard!

[Image: whiteboard geology still]


Video transcription

Hi. My name is Luke, I'm from Cognitive Geology. Today we're here to talk about a couple of things that can be quite dangerous about using defaults when it comes to building property models. Before we get into that, I want to pose a question to you and see what you think. Often when I'm doing a peer review, people present me with histograms and say, "Here's an example of why this is a good model." Usually they're referencing one of these two outcomes. If you have an input data set like this, with a negative skew - a tail off towards the low end - and you have a model that matches it very, very accurately, or a model that doesn't and goes the other direction, which one do you think is likely to be the better model? I think you can actually make that call, and as we go through this video we'll see why. Let's come back to it at the end and see whether, just from that piece of information, we can pick the better model.
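[Editor's aside: to make the question concrete, here is a small, purely illustrative Python sketch. The "well data" and the two candidate models are synthetic and invented for demonstration only - one model reproduces the input histogram almost exactly, the other is shifted away from it.]

```python
import numpy as np

# Synthetic illustration: a negatively skewed "well data" set (tail towards
# low values) and two hypothetical model outputs to compare against it.
rng = np.random.default_rng(42)

# Reflect a lognormal so the long tail points towards low porosity values.
well_porosity = 0.30 - rng.lognormal(mean=-3.0, sigma=0.6, size=200)

# Model A reproduces the input distribution almost exactly.
model_a = rng.choice(well_porosity, size=5000, replace=True)

# Model B is shifted and broadened, e.g. because a trend was applied.
model_b = (well_porosity.mean()
           + 1.3 * (rng.choice(well_porosity, size=5000, replace=True)
                    - well_porosity.mean())
           + 0.01)

for name, vals in [("wells", well_porosity), ("model A", model_a), ("model B", model_b)]:
    print(f"{name:8s} mean={vals.mean():.3f} "
          f"p10={np.percentile(vals, 10):.3f} p90={np.percentile(vals, 90):.3f}")
```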

When we look at doing geostatistics, there's a really important underlying assumption that almost all geostatistical methods share: the property you are distributing with the geostatistical engine is completely stationary. There are no loaded dice. If you roll the dice on one side of the reservoir, the chance of coming up sixes is the same as on the other side. It's very important that we've taken all those trends out. So the first geostatistical assumption we have to check off is that we have dealt with all of the non-stationary components. Of course, that means we have to account for the geological trends. We know that geology has trends - a lot of them; picking those trends is what we base our careers upon. Whether it's something like this ExxonMobil slug diagram with a proximal-to-distal trend, or a coarsening-upwards trend, or any number of other possible trends, we know they exist inside geology.
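[Editor's aside: the "loaded dice" check can be sketched very simply. The snippet below uses synthetic data with an assumed lateral trend: it compares the statistics on one side of the field with the other, then repeats the comparison after removing a fitted trend, which is roughly the condition the geostatistical engine expects.]

```python
import numpy as np

# Minimal stationarity ("loaded dice") check on synthetic well data.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5000.0, size=150)                      # well x-coordinates (m)
porosity = 0.18 + 2e-5 * x + rng.normal(0.0, 0.02, 150)     # assumed lateral trend + noise

# Raw data: one side of the field clearly differs from the other.
west, east = porosity[x < 2500.0], porosity[x >= 2500.0]
print(f"west mean={west.mean():.3f}, east mean={east.mean():.3f}")

# Remove a fitted linear trend and re-check: the residuals are much closer
# to stationary, which is what variogram-based simulation actually assumes.
slope, intercept = np.polyfit(x, porosity, deg=1)
residual = porosity - (slope * x + intercept)
print(f"west resid mean={residual[x < 2500.0].mean():.3f}, "
      f"east resid mean={residual[x >= 2500.0].mean():.3f}")
```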

The question becomes: do our facies models necessarily address all of the non-stationary components that we consider important? What we can probably argue is that that's very rarely the case. When we look at our non-stationary behaviours, we still observe things like porosity-depth trends that over-print any depositional facies. In truth, when it comes down to the way we construct a facies model, we are usually talking about a facies assemblage model, so there may still be internal non-stationary behaviour - such as this fining-upwards channel trend - that you can reasonably expect within any given facies assemblage. So there are still a couple of significantly important trends that can exist inside your data.
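[Editor's aside: as one example of a trend that a facies model alone won't capture, the sketch below fits a compaction-style porosity-depth relation of the form phi = phi0 * exp(-z / c). The functional form, the data, and the parameter values are assumptions for illustration, not taken from the video.]

```python
import numpy as np

# Fit a compaction-style porosity-depth trend to synthetic data.
rng = np.random.default_rng(1)
depth = rng.uniform(2000.0, 3000.0, size=120)                    # metres TVD
porosity = 0.35 * np.exp(-depth / 3500.0) * np.exp(rng.normal(0.0, 0.05, 120))

# A linear fit in log space recovers the trend parameters.
b, log_phi0 = np.polyfit(depth, np.log(porosity), deg=1)
print(f"phi0 ~ {np.exp(log_phi0):.3f}, decay length ~ {-1.0 / b:.0f} m")

# The same facies at different depths still follows this trend, so the
# facies model alone does not deal with it - it must be handled explicitly.
```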

Let's put this back into the context of how these models are created: let's decode the defaults. If we're using a histogram that matches our well data, what we're really saying is that the well data set has sampled what exists in the geology perfectly - that we have a good, random sample of the reservoir, and it all lines up. If we then distribute that input distribution - the observed data - using just a variogram, within facies perhaps, we are also invoking the assumption that there are no additional geological trends. Those two statements are usually pretty hard to support. When we decode those defaults, and understand what's going on in our systems, I think we can actually answer the question and, in my opinion, determine which one is the better model. If your model matches the data perfectly, it's unlikely that you've dealt with all of these non-stationary effects. If it doesn't match, you've done something to account for them. So my simple rule of thumb is to favour the model that differs somewhat from the input data set, because for a perfect match to be the right answer, too many unlikely conditions would all have to hold. So I think you can answer which model is better, and I'm curious to hear what you think.
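[Editor's aside: for readers who want the alternative to the defaults spelled out, here is a simplified sketch of a trend-plus-residual workflow: fit a trend from the wells, treat only the residuals as stationary, then restore the trend across the whole field. There is no variogram or kriging step here, and all data are synthetic; the point is only that the resulting model histogram need not match the well histogram, and that mismatch is by design.]

```python
import numpy as np

# Simplified trend + stationary-residual workflow on synthetic data.
rng = np.random.default_rng(7)

x_wells = rng.uniform(0.0, 5000.0, size=40)                  # well locations (m)
poro_wells = 0.15 + 3e-5 * x_wells + rng.normal(0.0, 0.015, 40)

# 1. Fit the non-stationary component (here a simple lateral trend).
slope, intercept = np.polyfit(x_wells, poro_wells, deg=1)
residuals = poro_wells - (slope * x_wells + intercept)       # treated as stationary

# 2. "Simulate" residuals across the full field, including areas the wells
#    never sampled, then add the trend back.
x_grid = np.linspace(0.0, 5000.0, 2000)
poro_model = (slope * x_grid + intercept) + rng.choice(residuals, size=x_grid.size)

# The model statistics differ from the raw well statistics - as expected.
print(f"wells : mean={poro_wells.mean():.3f}, p90={np.percentile(poro_wells, 90):.3f}")
print(f"model : mean={poro_model.mean():.3f}, p90={np.percentile(poro_model, 90):.3f}")
```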


Thank you very much. My name is Luke from Cognitive Geology.