Dragon Eggs & Unicorn Tails
Hello and welcome back to the Cognitive Whiteboard. It's been a while, but we have a new cast of characters that we'll be introducing shortly. I'm going to kick off the first of this series of videos with an attack on the supposed hardness of oilfield data sets. To begin with, let's do some mathematics, which is not a place I normally start. If we look at a single grid cell in a geological model, let's consider the reality of how well we've actually sampled that one cell, let alone the rest of the field.
By the time we get down to the reservoir, we are usually drilling with around a seven-inch bit. The exact size doesn't really matter, but let's assume seven inches, and a pretty common grid cell size might be 50 by 50 meters. If we do the mathematics on the sample rate, our wellbore area converts to about 0.02 square meters, and the grid cell covers around two and a half thousand square meters of rock, so the sample rate is about 1 in 125,000. Question: does that wellbore represent the perfect average of that grid cell? Let's just leave that there for now.
Now let's look at a whole oil field. Take Britain's biggest, Forties: 103 wells across 90 km² of area. Do the same mathematics and we are at 1 in 45 million as a sample rate for that field. So even in this well-developed field, we have a pretty big challenge in claiming that we have meaningful statistics; perhaps that's why we use the term 'geostatistics'. The question is whether we want to explicitly honor all the mathematics here, or be a little pragmatic and accept that our sample rates are a bit spurious. I would argue for using a little geological intelligence rather than just mathematics, which is often where we start. But let's look even within a single wellbore at how confident we are that we know where that wellbore actually is.
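The back-of-envelope arithmetic above can be sketched in a few lines. This is only an illustration of the numbers quoted in the text, using the rounded wellbore area of 0.02 m² that the 1-in-125,000 and 1-in-45-million figures imply (the exact area of a seven-inch circle is closer to 0.025 m², which shifts the ratios slightly):

```python
import math

# Seven-inch bit, converted to meters
bit_diameter_m = 7 * 0.0254
wellbore_area = math.pi * (bit_diameter_m / 2) ** 2  # ~0.025 m^2, rounded to 0.02 in the text

# Single grid cell: 50 m x 50 m
cell_area = 50 * 50                        # 2,500 m^2
cell_ratio = cell_area / 0.02              # using the text's rounded 0.02 m^2
print(f"Grid cell sample rate: 1 in {cell_ratio:,.0f}")   # 1 in 125,000

# Whole field: Forties, 103 wells across 90 km^2
field_area = 90e6                          # 90 km^2 in m^2
field_ratio = field_area / (103 * 0.02)
print(f"Field sample rate: 1 in {field_ratio / 1e6:.0f} million")  # ~1 in 44 million
```

With the rounded 0.02 m² per well, the field ratio comes out near 44 million, consistent with the "1 in 45 million" quoted above; either way, the order of magnitude is the point.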
I was involved in a peer review where one of the wells was off by more than 150 meters at the bottom-hole location. We knew something was wrong because the velocity anomaly required to tie that well was just unheard of. It turned out the well was actually on the downthrown side of a fault where it had previously been assumed to be on the upthrown side. That was discovered because we ran gyro surveys over these wells to try to explain some of the issues. We found that about 30% of the wells were off by more than 50 meters, and when we corrected all of those, we added about 90 million barrels of oil back into that field, and suddenly the production history, the general behavior of that field, started making a lot more sense.
Let's talk about that production history, though. On a single-well basis, how confident are we that production is what we say it is? This is probably some of the softest data we have in the oil industry. Production data, particularly downhole zonal allocation, can be very, very subject to uncertainty and inaccuracy. The wellbore itself is often, in practical terms, not perfect: cement bonds can create leakage points behind pipe, the downhole 'jewelry' wears over time, and control of the flow can become problematic. Most of the time, wells are produced through a cluster, so the allocation back to a single well, let alone a zone, can be really problematic.
When we look at these production allocations, it's just worth bearing that in mind. A really hilarious example: we had a 28-day cycle in one oil field that turned out to be due to the hitches of the operational guys. One of the blokes was measuring the production data accurately; the other was just eyeballing it from a distance, and that put a 28-day cycle into our production data that we initially thought was tidal. In reality, it was just inaccuracy in the measurement method.

So when the question comes, 'do I honor all of my data?', I do feel a little like Gandalf going up against the Balrog, because the reality is I can't match all of it. Most of the time there is going to be inaccuracy somewhere in the puzzle, and I can't always be confident where it lies. What I'm always trying to do is develop the most coherent story I can within the realms of uncertainty that these data provide. I hope that story is helpful to you. If you've come across any strangeness in your fields that turned out to be part of this, I'd love to hear about it in the comments below. That's all for now from the Cognitive Whiteboard. I'll see you back here again next time.