Dragon Eggs & Unicorn Tails

Hello and welcome back to the Cognitive Whiteboard. It’s been a while, but we have a new cast of characters that we will be introducing shortly. I’m going to kick off this series of videos with an attack on the supposed hardness of our oilfield data sets. To begin with, let’s do some mathematics (not a place I normally start) and look at a single grid cell in a geological model to see how well we have really sampled that one cell, let alone the rest of the field.

By the time we get down to the reservoir, we are usually drilling with around a seven-inch bit. The exact size doesn’t really matter, but let’s assume seven inches, and a pretty common grid cell size might be 50 by 50 meters. If we do the mathematics on the sample rate, our wellbore area is about 0.02 of a square meter once converted to metric, and the grid cell covers roughly two and a half thousand square meters of rock, so the sample rate is about 1 in 125,000. Question: does that wellbore represent the perfect average of that grid cell? Let’s just leave that there for now.
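If you want to check the arithmetic yourself, here is a minimal sketch using the figures quoted above. Note that an exact seven-inch bit gives closer to 0.025 m²; the talk rounds this to about 0.02 m², which is what reproduces the quoted 1-in-125,000.

```python
import math

# Grid-cell sample-rate arithmetic from the transcript above.
bit_diameter_m = 7 * 0.0254                          # a 7-inch bit is ~0.178 m
wellbore_area = math.pi * (bit_diameter_m / 2) ** 2  # ~0.025 m^2
cell_area = 50 * 50                                  # one 50 m x 50 m cell, 2,500 m^2

print(f"exact bit area:          1 in {cell_area / wellbore_area:,.0f}")  # ~1 in 100,000
print(f"with the rounded 0.02 m2: 1 in {cell_area / 0.02:,.0f}")          # 1 in 125,000
```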

But let’s look at an oil field as an example. Take Britain’s biggest oil field, the Forties: 103 wells across roughly 90 km² of area. Do the same mathematics and we are at about 1 in 45 million as a sample rate for that oil field. So even in this well-developed field, we have a pretty big challenge in claiming that we really have statistics here. Perhaps that’s the reason we use the term ‘geostatistics’: do we want to explicitly honor all of the mathematics, or do we want to be a little pragmatic and recognize that our sample rates are somewhat spurious? I would argue on the side of using a little geological intelligence rather than mathematics alone, which is often where we start. But let’s look within a single wellbore at just how confident we are that we even know where that wellbore is.

I was involved in a peer review where one of the wells was off by more than 150 m at the bottom-hole location, and that was proven because the velocity anomaly required to tie that well was simply unheard of. It turned out the well was actually on the downthrown side of a fault where it had previously been assumed to be on the upthrown side. That was discovered because we ran gyro surveys over these wells to try to explain some of the issues. We found that about 30% of the wells were off by more than 50 meters, and when we corrected all of those we added about 90 million barrels of oil back into that field. Suddenly the production history, the general behavior of that field, started making a lot more sense.

Let’s talk about that production history, though. On a single-well basis, how confident are we that the production is what we say it is? This is probably some of the softest data we have in the oil industry. Production data, particularly when you are looking at downhole zonal allocation, can be very subject to uncertainty and inaccuracy. The wellbore itself is often, in practical terms, not perfect: cement bonds can create leakage points behind pipe, the downhole jewelry wears over time, and control of the flow can become problematic. And most of the time wells are produced through a cluster, so the allocation back to a single well, let alone a single zone, can be really problematic.

When we look at these production allocations, it’s worth bearing that in mind. A genuinely hilarious example: we had a 28-day cycle in one oil field that turned out to be due to the work hitches of the operations crew. One of the blokes was measuring the production data accurately; the other was just eyeballing it from a distance, and that gave us a 28-day cycle in our production data that we initially thought was tidal. In reality, it was just inaccuracy in the measurement method. So when the question comes, do I honor all of my data, I do feel a little bit like Gandalf going up against the Balrog, because the reality is I can’t match all of it. Most of the time there is going to be inaccuracy somewhere in the puzzle, and I can’t always be confident where it lies. What I’m always trying to do is develop the most coherent story I can within the realms of uncertainty that these data provide. Just a little bit of a story there; I hope it’s helpful to you. If you’ve come across any other strangeness in your fields that turned out to be part of this, I’d love to hear about it in the comments below. That’s all for now from the Cognitive Whiteboard. I’ll see you back here next time.

Simulation Sprints: Minimal Cost, Maximum Value

Hello and welcome back to the Cognitive Whiteboard. My name’s Luke and today we’re talking about simulation sprints. This isn’t a technical workflow, it’s a project management one. But it’s something that’s completely revolutionised the way I do my work and I’d like to share with you how this can help you make much more cost-effective analyses of your reservoirs in a vastly shorter period of time.

Now, the method is not something I claim credit for, because it was put to me as a challenge by Michael Waite at Chevron, who once came to me with a new field to look at, a mature field with a lot of injection and production, and asked me to give him a model by the end of the day. My reaction was shock, which I guess is the polite way of putting it, because that is obviously an impossible ask: to build a reservoir model and understand what’s going on in a complex brownfield in a single day.

But Michael wasn’t being silly. He was challenging me to use a methodology that would allow us to make quick and effective decisions when the upcoming business decision was clearly defined. What we had in this particular case was a mature field with only about 12 or so locations left to drill, and six well slots that we needed to fill. We had wells that were declining in production and were going to be replaced, so there really wasn’t any choice other than optimizing those bottom-hole well locations.

And so, in that context, you can come back and say: well, we don’t necessarily need to answer the ins and outs of the entire reservoir, we just need to rank and understand those locations so that we can drill the optimum ones during the drilling campaign. That is when he introduced me to the concept of simulation sprinting. It is a quick-loop study that you repeat numerous times, iterating through progressive cycles of increasing precision until you reach a point of accuracy that allows you to make a valid and robust business recommendation.

In the first sprint, a single day, we were not going to be able to build a realistic reservoir model by any means. What we were able to do in a single day was some pretty decent production mapping. Taking the last six months of production and looking at the water cut, we got together with the production mapping team and designed a workflow we could complete that day that would speak to the bigger objective: identifying which targets would see the lowest water cut, because that is another value measure we could use to understand these wells.
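As a hedged illustration of that quick value measure, here is a minimal sketch of a water-cut ranking over the last six months of production. The column names (well, date, oil_rate, water_rate) are assumptions for illustration, not any particular database schema.

```python
import pandas as pd

def water_cut_ranking(prod: pd.DataFrame) -> pd.Series:
    """Rank wells by average water cut over the last six months (lower is better).

    Expects monthly rows with columns: well, date, oil_rate, water_rate.
    """
    recent = prod[prod["date"] >= prod["date"].max() - pd.DateOffset(months=6)]
    liquids = recent["oil_rate"] + recent["water_rate"]
    recent = recent.assign(water_cut=recent["water_rate"] / liquids)
    return recent.groupby("well")["water_cut"].mean().sort_values()
```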

Importantly, because we’re going to do this a lot, even though we call them sprints, the key is to work smart, not hard, because the loop is going to keep running over time. So you want to be able to do this within normal working hours. Don’t burn the midnight oil; otherwise, you’ll burn out before you get to make your robust business decision.

The really important piece in this cycle, though, is step number five. When we come to assess the outcomes of any one of the experiments we’ve run, we need to rank the wells in order of economic value. However we derived it, we needed those well targets ranked from best to worst at the end of each simulation sprint. That is what Mike was asking me for at the end of the day.

And when we do that assessment, we also spend time looking at the weakest part of the technical work we’ve done, because that forms the objective of the next sprint cycle. We then come back and progressively increase the length of the sprint loop: the first one was done in a day, the second in two, then four and eight and so on. But you can adjust this as needed to suit the experiments you want to run.

But as we came back through this loop and constantly re-ranked ourselves, what was fascinating was that after only four weeks the answer never changed again. The order of the top six wells was always the same top six. What that shows is that, with some quite simple approaches, you can get to the same decision that you would with a full Frankenstein model. You can get to that recommendation without having to do years’ worth of work.
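For illustration, here is a minimal sketch of that stopping signal: after each sprint, re-rank the candidate targets by economic value and check whether the top six (the number of well slots) is the same set as last time. The well names are invented.

```python
def top_n_stable(previous_ranking: list[str], current_ranking: list[str], n: int = 6) -> bool:
    """True if the top-n targets are unchanged between sprints (order-insensitive)."""
    return set(previous_ranking[:n]) == set(current_ranking[:n])

# Example: rankings (best to worst) from two successive sprints
sprint_3 = ["W12", "W07", "W03", "W09", "W01", "W05", "W11", "W02"]
sprint_4 = ["W07", "W12", "W03", "W01", "W09", "W05", "W02", "W11"]
print(top_n_stable(sprint_3, sprint_4))  # True: same top six, the recommendation has stabilised
```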

So, we were able to make that recommendation. It was an expensive campaign, so we didn’t stop at four weeks; we ended up stopping at about four months. But really importantly, a conventional study would have taken six months, perhaps a year, to get to that kind of stage, and we were already significantly ahead of that at the end of this routine. It’s a method that has really changed the way I do my work, and it’s something I really recommend you give a go. Hopefully you enjoyed this, and I’ll see you back here next time at the Cognitive Whiteboard.

Managing Uncertainty: Robust Ranges Using Trends

Warming to his heretical theme, Luke is back to discuss using trend analysis to drive uncertainty in geostatistical models

TRANSCRIPT

Hello, and welcome back to the Cognitive Whiteboard. My name’s Luke, and wow, did that last video generate a lot of interest in the industry! What we did was we talked about how variograms and trend analysis can work hand in hand to try to investigate how your properties are being distributed in three dimensional space. Today I want to show you how we can use the trend analysis to drive the uncertainty in your models as well. In doing so, I think I’ll officially be promoted from geostatistical heretic to apostate. But let’s see how we go.

What I want to do today is run you through how I used to go about geological uncertainty management and how I do it today. I started by thinking about shifting histograms; I think a lot of us do this. If we wanted a low case, we would ask: what if the data were worse than what we observed? And for a high case, we could shift the histogram upwards in the other direction. I’ve done this many times in the early part of my career, but it’s not a particularly valid way of doing it in many cases. If you just shift the histogram and fit it to the same well data, you’ll generate pimples and dimples around your wells, which is undesirable. And if you shift the observed data as well, by saying, “Well, the petrophysicist has some uncertainty in their observations,” what you’re really invoking is that the greatest degree of uncertainty sits at the wells. I think we can all agree that the greater degree of uncertainty is away from the wells. There are important uncertainties here, but we have bigger ones to deal with up front.

The other way of trying to manage our uncertainty is in the structure of how we distribute that data. Different variogram models are useful for this. We can fairly say that the interpretation of a geological variogram, the experimental data that you get, is usually uncertain, particularly in the horizontal directions. We don’t have enough information to be confident about how that variogram structure should look, so it’s fair to test different models and see what happens. What’s interesting is that if you vary the histogram, you’ll change STOIP with a similar recovery factor, just generally better or worse; whereas if you change the variogram, you’ll vary the connectivity but you won’t really change the STOIP very much. And it’s often difficult to link variogram structure back to a conversation you can have at the outcrop.

So over the last five or six years, I’ve been focusing on addressing uncertainty by saying: actually, the sampling of our data, the biased, directional drilling with which we’ve typically sought out the good spots in the reservoir, is really what we need to investigate. How much does that bias our understanding of what could exist away from well control?

I’ve got an example here: a top-structure map of a real field with five appraisal wells along the structure. The question is, at the two other locations that are going to get drilled, is it going to be similar, different, better, worse? And how could we investigate the uncertainty of the outcome at those locations?

We could, for example, investigate whether the variation we observe in this data set is primarily a function of depth, or perhaps a function of some depositional direction, in this case the Y position, as a theory. We don’t know at this stage which one it is, and depending on which one we fit first, we end up with different correlations. In fact, you can see in this sequence that after taking out the Y position, if I analyze the residual against depth, we end up not with two porosity units of reduction per 100 meters of burial, but only one porosity unit. So, fundamentally, we’re invoking a different relationship with burial as a result.
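As a minimal sketch of that fitting order, assuming a simple linear trend form and invented sample values rather than the field data discussed here: fit porosity against Y first, then fit the residual against depth.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares slope and intercept of y = a*x + b."""
    a, b = np.polyfit(x, y, deg=1)
    return a, b

# Porosity (porosity units), Y position (m) and burial depth (m) at the well samples
poro = np.array([21., 19., 17., 16., 14.])
ypos = np.array([1000., 2500., 4000., 5500., 7000.])
depth = np.array([2100., 2150., 2230., 2300., 2380.])

a_y, b_y = fit_linear(ypos, poro)      # depositional (Y) trend first
resid = poro - (a_y * ypos + b_y)      # what the Y trend cannot explain
a_z, _ = fit_linear(depth, resid)      # burial trend fitted to the residual

print(f"residual depth gradient: {a_z * 100:.2f} porosity units per 100 m")
```

Reversing the order (depth first, then Y) gives a different pair of gradients, which is exactly the point being made above.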

What’s really interesting is that these are highly uncertain interpretations, but they’re valid ones. And they give us different answers, not just at the positions of those two wells; the answer is different for the entire model. That is very representative of your uncertainty in three-dimensional space. The shape of the input histogram has more to do with the sampling of your wells at the particular locations they were drilled, whereas the trend model behind it is helping you understand whether there is any variation going on in three-dimensional space. So we can end up with very different plumbing and in-place volumes by really investigating how these trends can go.

You can do all of this manually, one step after the other, through these routines. Our product Hutton does it for you automatically: we run about 300 different paths through your data set to investigate how that could go, and we find it a very powerful way of developing robust but different geological interpretations to address the uncertainty in your reservoir. If you’re interested, drop me an email and I’ll tell you more about it. But for now, that’s all from the Cognitive Whiteboard. Thank you very much.

Replacing the Variogram

Hello. Welcome back to the Cognitive Whiteboard. My name is Luke, and today I’m nailing my thesis to the door. I am going up against the institution and committing geostatistical heresy by showing you that variograms are not an essential element of our geological modeling processes. In fact, I want to show you that with careful trend analysis you can do a better job of producing geological models that represent your geology in a much more powerful way and give you much more predictive answers.

To do this, I’ve built a data set. It’s a complex, quite geologically realistic data set: some tilted horst blocks into which we have modeled some complex geological properties. We vary those over X and Y by some unknown function, we have some very complex sequence cyclicity of fining-upwards and coarsening-upwards trends, and we’ve overprinted all of this with a burial trend. What we want to do is see how we go about representing that with this perfectly patterned set of drilled wells, either with trends or with variogram analyses, and determine which one we think is better.

So let’s first have a look at the variogram of the raw data set. We can see immediately that some of those structures, some of those trends we imposed in the model, are showing up in our variograms. We have some obviously low sills in some of the properties, with structure, some correlation over distance, before they flatten out into the sill. But we also have some weird cyclicity happening, and we should wonder what’s going on there. In truth, we know this is just the complexity of lots of different processes creating nested variograms and various sequences of cyclicity.

All geologists familiar with this kind of routine will know to try to take some of these trends out. One way to start is by subtracting the stratigraphic trend. This isn’t often done, but it’s very, very powerful. You could take, for example, a type log and remove that from your data set, or you could do what I’ve done here and essentially subtract the midpoint or the mean value from every one of your K layers and see what you get once you take that out. You’re basically representing sequence cyclicity when you do this. You want to keep that trend, this black line here, because you have to add it back into your model afterwards. But when you do it, you see a reduction in the vertical variogram, as you would expect: we have described a lot of the variation in that direction as a function of sequence cyclicity, and it’s not just random. Typically you’ll see a reduction of probably half of the variation in the vertical sense, but it won’t have any impact on the major and minor directions, because the same columns exist everywhere in X and Y space.

Once we take that trend out, we’ll have a new property, porosity given that trend, and we can do a trend analysis on that property. So now we’re doing it against depth. What’s interesting is that as you take out trends progressively, you start to see the second- and third-order effects that might not have been obvious in the raw data set. In this case, it really tightened up our observation of the depth trend. And we can subtract that next: take that trend out, because it’s not random, and see what it does to our variograms.
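Before we look at what that does to the variograms, here is a minimal, hedged numpy sketch of those two detrending steps, assuming a simple layered grid; it is not the workflow of any particular package.

```python
import numpy as np

def detrend(poro: np.ndarray, k_layer: np.ndarray, depth: np.ndarray):
    """Return residual porosity plus the two trends that must be added back later."""
    # 1) stratigraphic trend: the mean porosity of every K layer (sequence cyclicity)
    layer_mean = {k: poro[k_layer == k].mean() for k in np.unique(k_layer)}
    strat_trend = np.array([layer_mean[k] for k in k_layer])
    resid = poro - strat_trend

    # 2) burial trend: a linear depth fit to what is left over
    slope, intercept = np.polyfit(depth, resid, deg=1)
    depth_trend = slope * depth + intercept
    resid = resid - depth_trend

    # If the trends describe the geology well, the residual should look close to
    # uncorrelated noise (nugget ~ sill in its experimental variogram).
    return resid, strat_trend, depth_trend
```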
Now, this step changes the major and minor variograms, not the vertical one, even though that seems counterintuitive. It does that because burial depth varies as a function of map position; that’s why it changes those two variograms. And we can keep diving deeper and deeper into our data set, removing progressive trends, linking each one to a geological process, and pulling it out of the data. In the end, with this perfect data set, if we had described all of those trends, you would see no structure, or next to no structure, left in your data. That’s because you have done a pretty good job of describing geology, not just random processes. Your nugget and your sill are almost identical, which means an observation carries just as much information about a point immediately adjacent to it as it does about one at some great distance. That’s great: you no longer need a variogram. You have done it instead with trends.

Now, this is obviously a perfect data set with an unrealistically well-sampled series of wells. Let’s imagine what we would do with a realistic sample set, much sparser and more biased. Well, most gross geological trends are obvious even in small numbers of samples. But horizontal variograms are something we basically never get to measure in the real world, and so we often spend our time in peer reviews defending whatever settings we have chosen in the major and minor directions, with no outcrop basis we can link them to. If you work in this space instead, you can make your models much more predictive, because you end up driving geology into them and removing the dependence upon your random seed.

You can do all of this in pretty much any commercial package today, but it’s not particularly easy or intuitive. So we’ve gone ahead and built a product that will do it in a much more powerful way. We call it Hutton, named after the father of geology, because with these small observations we can make interpretations that change our understanding of the world. Hutton comes to market in March 2017. It will guide you through this trend analysis process, and it even has some intelligence that can help you automate it. If you’re interested in finding out how to throw away the variogram and bring geology back into your geological models, please drop me an email and I’ll happily show it to you. But for now, that’s all from us, and I’ll see you next time at the Cognitive Whiteboard.

My 24 Million Dollar Mistake

Hello, welcome back to The Cognitive Whiteboard. My name’s Luke and today we’re not going to talk about technical best practices. I’m going to share with you an example of why I think communication is at least half the job that we do. I’m gonna illustrate that with an example from my history where I think I made a 24 million dollar mistake in an appraisal well.

Firstly, the well was drilled safely, with no environmental impact, and we achieved all of our appraisal objectives on time and on budget. So it wasn’t a mistake in that regard, but I will explain why I think it was one. We were drilling for a lowstand sandstone, an unusual target for the region. Typically we were looking for something much deeper, but this lowstand was essentially encased in marine shales. It was in 1,500 meters of water, so quite deep for us to drill from, and it was underlying a very complex overburden of submarine canyons, cuts and fills filled with various claystones and calcilutites, making depth conversion very, very difficult.

The reservoir itself was also quite unusual. What we had in this reservoir was structural clay. If you haven’t seen this before: it’s common for us to see dispersed clays, typically authigenic cements occurring at the grain boundaries, and laminated clays, which are commonly depositional. Structural clays, though, few of us had ever encountered. Here, bioturbation had been so pervasive that these little creatures had essentially concentrated all the clay into fecal pellets, and those pellets were providing framework support for the reservoir. So despite a 30% to 40% clay content, we had fantastic porosity and permeability.

However, those pellets were relatively ductile, and what we observed in the core was a dramatic reduction in porosity and permeability with increasing external stress. The theory, then, was that if we went deeper, particularly deeper below mud line, we would probably expect to see poorer reservoir quality.

And so I went to my mentor and explained what had happened. To my surprise, he put his head in his hands and said, “If you knew the answer, why did you drill that well?” That was a turning point in my career; it really put me back on my heels. And this is where I think my 24 million dollar mistake came in. If we knew this so well and had such good technical justification, then had we worked harder on our “Rosetta Stone” of translating technical jargon into business speak, had we built a bridge between those two worlds, we might have postponed a 24 million dollar well.

Now, this was a major capital project, so the well was always going to get drilled, and it was a data point we needed. But could it have been delayed? I don’t know that for sure. I look back on that and refer to it throughout my career, and it’s the reason I spend so long at these whiteboards: because communication is at least half of the job we should be doing as geologists.

Thanks very much. I’ll see you again here next time.

Geomodel Facies: Methods & Madness

Hello! Welcome back to the Cognitive Whiteboard. My name’s Luke, and today, we’re starting a series of videos around facies modelling.

Why do we do facies modelling?

So when we do facies modelling, we’re normally trying to achieve one or both of these kinds of effects. We’re trying to represent particular rock-flow behaviours, so there might be correlative relationships between things like porosity and permeability, or relative permeability differences that are best handled in a bin-like fashion; or we’re trying to represent geobody shapes. Usually we see an outcrop like this psychedelic representation of a fluvial system: there’s a lot of complexity in those geobodies, and facies are a very useful way of getting that into your model.
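As a small illustration of the first of those effects, here is a hedged sketch of facies-dependent porosity-permeability transforms; the facies names and coefficients are invented for illustration only.

```python
import numpy as np

# Assumed per-facies poro-perm transforms: log10(k_mD) = a * phi + b
PORO_PERM = {
    "channel_sand": (18.0, -1.0),
    "overbank":     (12.0, -2.0),
    "mudstone":     (6.0,  -3.5),
}

def permeability(facies: np.ndarray, poro: np.ndarray) -> np.ndarray:
    """Permeability in mD from porosity (fraction), switched on facies."""
    k = np.full_like(poro, np.nan)          # cells with an unlisted facies stay NaN
    for name, (a, b) in PORO_PERM.items():
        mask = facies == name
        k[mask] = 10 ** (a * poro[mask] + b)
    return k
```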

Different meanings for different folks

Now, when we do our facies modelling, we have to be careful, because there are a lot of cats that like to use that term. The sedimentologist, the petrophysicist and the seismic interpreter may all use the word “facies” with their own meaning. The sedimentologist’s is perhaps the most traditional way of thinking about it: they’re trying to represent depositional systems, and they get to see textural relationships all the way down to the millimetre scale, available essentially only to the naked eye, which lets them see a great deal of character in the rocks.

By the time the petrophysicist gets to see most of the information, their logs are at the 10 centimetre to metre resolution, and many of the older wells in particular lack the image logs that could give them some of the textural information the sedimentologist sees. So, realistically, the petrophysicist is dealing with mineralogical effects. And of course the seismic interpreter does their best to extrapolate all of that in 3D, but they’re working from a metre-plus vertical resolution, and what they get to see in that acoustic response is orders of magnitude different from what the sedimentologist can resolve. So it’s important that we get everyone around the table and understand how these views are linked, because, particularly from seismic all the way down to sedimentology, it is sometimes a pretty difficult task to bring those sciences together.

And when we do, we come into the geomodeller’s realm. The geomodeller gets to choose how to distribute those properties, and none of the options is all that easy. The traditional object-based and pixel-based methods are still out there, still in use, and still add a lot of value. The pixel-based methods are very good at incorporating external trends, say seismic data or map-based behaviours that you want to instil upon your model; that can help you capture those spatial relationships very well, and they are very good at honouring lots of different probabilities.
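To make that concrete, here is a hedged sketch of one way an external trend can be combined with a vertical proportion curve into a 3D sand-probability cube for a pixel-based method to honour. The grid shape, the values and the 0.35 net-to-gross target are assumptions for illustration.

```python
import numpy as np

nx, ny, nz = 50, 50, 20
vertical_proportion = np.linspace(0.6, 0.2, nz)                     # sandier towards the top
areal_trend = np.random.default_rng(0).uniform(0.2, 0.8, (nx, ny))  # stand-in for a seismic/map trend

# Combine the areal trend with the vertical proportion curve, then rescale the
# cube so its mean matches the target net-to-gross.
prob_sand = areal_trend[:, :, None] * vertical_proportion[None, None, :]
prob_sand *= 0.35 / prob_sand.mean()
prob_sand = np.clip(prob_sand, 0.0, 1.0)   # a valid probability cube for the simulation to honour
```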

Object-based vs pixel-based methodologies

But object-based models are perhaps more powerful than pixel-based methods at preserving geobody shapes, which can be particularly useful in, say, channelized bodies. You can also see, though, that some of the choices you get when creating an object-based model don’t reflect very well what we see in outcrop. So it’s important to remember to model what is deposited and preserved, not what’s in an active, modern system.

The two came together with multi-point statistics, where we use an object-based model and a pixel-based methodology to give us both the geobody shapes and the external trends in the one solution. In many regards, it’s probably one of the most powerful methods in the industry today; it’s preferred by a lot of the supermajors. I’ve been using it for a long time. I did feel like a bit of a dunce when I started: it’s complex to do, and it takes a lot of learning. But if you understand the principles of what goes into it, it can be a very powerful tool to add to the arsenal.

We will talk all about these methods in the upcoming videos, so we’ll go into a little bit more detail on how we can get these to sing and dance in the way that you want them to.

But, at the end, it’s important that we have a good set of quality-control checks to make sure we’re getting what we want out of our model. We want to make sure that the consistency of scale agreed in that conversation is preserved inside our modelling methods. We also want to make sure that the spatial relationships we want to instil, the map-based trends and the seismic-type trends, are coming through into our modelling, and that the internal architectures we’re interested in are preserved as well.

So in the coming videos, we’ll talk a little bit more about how we can bring all of those things together.

Thanks very much.

Stochastic vs Scenario-based Uncertainty Management

Hello, and welcome back to the Cognitive Whiteboard – where I practice my art skills, and share my experiences in applying subsurface best practices to oilfield decision making.

My name is Luke and today, I would like to talk about the differences between stochastic and scenario-based uncertainty methods. In doing so, we will explore the reasons why some of the industry’s leading oil companies prefer to use the latter when it comes to characterising the economic risk associated with developing their assets.

Before we get underway let’s quickly revisit the difference between precision and accuracy, as it’s relevant to this discussion.

Precision is the degree of repeatability of an estimate: the size of the cluster on the dartboard, the random error that you cannot avoid.

Accuracy, on the other hand, is the difference between the average of the estimates and the actual answer.

In the oilfield, precisely accurate is usually unobtainable. Precisely wrong must be avoided at all costs: it encourages over-confidence, and leads to economic train wrecks. Approximately accurate, in contrast, is often perfectly suitable for making robust business decisions.
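A tiny numeric illustration of the distinction, with invented numbers:

```python
import numpy as np

truth = 100.0                                   # the actual answer
estimates = np.array([128., 131., 129., 130.])  # a tight cluster in the wrong place

precision = estimates.std()                     # ~1.1: very repeatable, i.e. precise
bias = estimates.mean() - truth                 # ~29.5: a long way from the truth
print(f"spread = {precision:.1f}, bias = {bias:.1f}  (precisely wrong)")
```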

And so, when it comes to managing uncertainty, in geomodelling, which do we tend to focus on: precision or accuracy?

Well, stochastic theory was developed to address random errors: in geomodelling stochastic methods allow you to vary the input coefficients and test their impact on the answer. Programmatically, this is very easy for software engineers to implement. In object modelling, for example, we can easily assign uncertainty to the parameters controlling channel sinuosity and channel size.
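As a hedged sketch of what that looks like in practice, here is how those object parameters might be sampled stochastically; the distributions and ranges are assumptions, and the modelling call itself is left to whatever tool you use.

```python
import numpy as np

rng = np.random.default_rng(42)
n_realisations = 50

# Draw the object parameters from assumed distributions, one set per realisation
channel_width = rng.triangular(80., 150., 300., size=n_realisations)  # metres
sinuosity = rng.uniform(1.1, 1.8, size=n_realisations)

realisations = [
    {"channel_width_m": float(w), "sinuosity": float(s)}
    for w, s in zip(channel_width, sinuosity)
]
# each parameter set would be passed to your modelling tool's object-modelling call
```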

Does this address accuracy? In many ways, it does not.

Changes to these parameters will tend to influence fluid velocity in a simulation model but it may leave the fundamental connectivity of the simulation model broadly unchanged: it explores the precision behind your model concept. As geologists, however, we need the ability to test the economic impact of major assumptions controlling reservoir connectivity.

Was our reservoir deposited in an upper-fan or lower-fan setting? What impact would this have on off-channel sands connectivity?

Stochastic methods are not particularly powerful at exploring these kinds of uncertainties. To investigate high-level assumptions, we need to develop alternative geomodel scenarios and carry these into simulation. We need to see the economic impact of different connectivities between injector-producer pairs.
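A minimal sketch of the scenario-based counterpart, with invented scenario labels and placeholder calls standing in for your own modelling and simulation workflow:

```python
from itertools import product

# Discrete geological assumptions that change connectivity
fan_settings = ["upper_fan", "lower_fan"]
fault_behaviour = ["sealing", "leaking"]

scenarios = [
    {"fan_setting": fan, "faults": faults}
    for fan, faults in product(fan_settings, fault_behaviour)
]

for scenario in scenarios:
    # model = build_geomodel(**scenario)       # placeholder for your own workflow
    # economics = run_simulation(model)        # every scenario is carried into simulation
    print(scenario)
```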

Likening a simulation to a chaotic plumbing diagram: if we just change pipe diameter or adjust flow rate, we do not change which faucets are connected. And with many simulation results showing similar answers, we may begin to believe that the geological uncertainties bear little impact on our field development plan.

And so, whilst stochastic methods help us explore precision, it is critical that you carry scenario-based uncertainties into simulation as well – to investigate the impact of systematic unknowns on field economics – and in doing so improve the accuracy of your predictions and avoid economic train wrecks.

Thank you very much; I hope you enjoyed this video, and I welcome your comments.