Luke Nominated for EY 'Rising Star' Award

More exciting news for Cognitive Geology today: our founder and CEO, Luke Johnson, has been announced as a finalist in the 'Rising Star' category of Scotland's EY Entrepreneur of the Year awards!

 

Luke is quoted as saying “Discovering that I’ve been selected as a finalist in the EY Entrepreneur of the Year awards tops off what has been an amazing first quarter of 2018 – we’ve just moved into our brand new offices, have grown the team to 25 and are working on some really exciting new contracts.”

To read the full article about Luke and the other nominees, click here.

 

 

Eileen Joins the Board

[Photo: Eileen McLaren]

More exciting news here at Cognitive Geology! We are delighted to announce that our VP of Software Engineering, Eileen McLaren, has agreed to take on the role of Chief Operating Officer and has also joined the board as a director.

Here's an excerpt of the article from DIGIT magazine by Brian Baglow. See the original article here.

 

One of the key people in both Skyscanner and FanDuel, Eileen McLaren, joins the ambitious geo-technology company’s board – aiming for a trifecta of tech unicorns.

Eileen McLaren, one of Scotland’s most respected digital technology executives, having been instrumental in scaling tech unicorns Skyscanner and FanDuel, has joined the board of Cognitive Geology.

Having helped Skyscanner and FanDuel scale from start-up to global success, Eileen joined Cognitive Geology as VP of Software Engineering in July 2017. She has now been promoted to Chief Operating Officer (COO) and appointed a director, taking a seat on the company’s board.

Founded in 2014, Cognitive Geology designs and builds specialist software for geoscientists in the oil and gas industry, improving accuracy in finding, appraising and developing oil and gas reserves. In 2017 the company secured £2 million from Maven Capital Partners and Enso Ventures to develop its range of product solutions for the oil and gas industry.

Cognitive Geology’s Hutton software claims to help companies in the oil and gas sector be ‘approximately accurate, rather than precisely wrong’. Hutton reduces oilfield investment uncertainty by extracting progressive trends from complex geological datasets. Geologists can identify and sequentially remove the effects of the various geological processes, resulting in a vastly reduced range of uncertainty.

The geoscience software industry is estimated to be worth an annual $4.5 billion and is forecast to double over the next several years as antiquated software is replaced by next-generation technology solutions. Cognitive Geology has recently secured a seven-figure contract with one of the world’s largest energy companies.

Eileen McLaren, COO, told DIGIT:  “I’m delighted to join the board of Cognitive Geology as I see it as one of the most exciting early stage tech businesses in Scotland at the moment. I’ve been fortunate enough to have worked at some of the most successful tech companies around and the planning, drive and the quality of the execution I see at Cognitive Geology has the same hallmarks I saw in the early days at Skyscanner & FanDuel.”

 

Cognitive Geology Raises Investment To Fuel Growth

(Back): Brian Corcoran, Eileen McLaren, Mark Collins, Kelli Buchan. (Front): Fiona Johnson, Luke Johnson, Alex Shiell.

We interrupt our regular geology programming with a special announcement: we're delighted to report that Cognitive Geology has recently raised a £2 million investment round to help us scale our business and get our first product Hutton into the hands of more geologists around the globe.

As regular visitors to this blog will know, at Cognitive Geology we have been developing the next generation of technical software for geoscientists and reservoir engineers since the beginning of the industry downturn – helping geologists do their jobs more effectively, and thus helping oil companies in their quest for more cost-effective solutions under the new price regime. Now, with several major customers on board, at this point in our journey we're very pleased to announce two investment partners in the form of UK private equity house Maven Capital Partners and Enso Ventures, a London-based early-stage venture capital firm.

Here's what our CEO Luke Johnson had to say about the news: “The oil industry, one of the first to adopt 3D visualisation and high performance computing, once led software innovation but today it has fallen behind gaming & cloud technologies. As geologists and technologists ourselves, our mission is to take advantage of modern software architectures to improve the efficiency of our peers around the globe by giving them the best software possible, for the complex tasks they face each day.

To that end, the feedback from our customers so far has been overwhelmingly positive. Today’s low oil prices increase the requirement for better technology. Enso & Maven, both forward-thinking investors, were quick to recognise the fortuitous timing of our company growth. This investment round will provide us the capital to scale our Edinburgh headquarters and position ourselves as innovators of 3rd generation software solutions for the energy services sector."

David Milroy, Investment Director at Maven said: “Luke is a true entrepreneur having created Hutton to tackle the challenges he encountered himself when working as a geologist responsible for high value upstream oil and gas projects. Cognitive has a differentiated approach that provides geologists with the most robust geological explanations for the data they observe and in doing so helps them make more informed decisions. Luke has assembled a very capable technical team as well as an experienced advisory board and we look forward to helping the team execute their exciting plans.”

Enso's CFO, Oleg Mukhanov, added: “We are excited to support this cutting-edge business, which is set to revolutionise the way geological data is analysed and interpreted. We strongly believe that Cognitive’s high calibre team is capable of delivering on this task, which will put them at the forefront of their industry.”

Thanks to everyone in the geology community who has supported us so much thus far: hugely appreciated - you are the folks we are building for!

Ok: that's all on the investment side for now - back to the geology!

Simulation Sprints: Minimal Cost, Maximum Value

We're at the Cognitive Whiteboard again with Luke, discussing simulation sprints, and how a lean and iterative approach can provide robust business recommendations, in a fraction of the time taken to build a “perfect” model.

Click on the image below to see a larger version.

[Whiteboard image: Simulation Sprints]

Transcription

Simulation Sprints: Minimal Cost, Maximum Value

Hello and welcome back to the Cognitive Whiteboard. My name's Luke and today we're talking about simulation sprints. This isn't a technical workflow, it's a project management one. But it's something that's completely revolutionised the way I do my work and I'd like to share with you how this can help you make much more cost-effective analyses of your reservoirs in a vastly shorter period of time.

Now, the method is not something I claim credit for, because it was put to me as a challenge by a person called Michael Waite at Chevron, who once came to me with a new field to look at - a mature field with a lot of injection and production - and asked me to give him a model by the end of the day. My reaction to that was shock - I guess that would be the polite way of putting it - because that's obviously an impossible ask: to build a reservoir model and understand what's going on in a complex brownfield in a matter of a single day.

But Michael wasn't being silly. He was challenging me to use a methodology that would allow us to make quick and effective decisions when we clearly knew what the upcoming business decision was going to be. What we had in this particular case was a mature field with only about 12 or so locations left for us to drill, and six well slots that we needed to fill. We had wells that were declining in production and we were going to replace them. And so there really wasn't any choice other than optimizing those wells' bottom-hole locations.

And so, in that context, you can come back and say: well, we don't necessarily need to answer the ins and outs of the entire reservoir, but rather rank and understand those locations so that we can drill the optimum ones during the drilling campaign. And he introduced me to this concept of simulation sprinting. What it is, is a quick-loop study that you can repeat numerous times, iterating through progressive cycles of increasing precision until you reach a point of accuracy that allows you to make a valid and robust business recommendation.

In the first sprint - a single day - we were not going to be able to build a realistic reservoir model by any means. What we were able to do in a single day was produce some pretty decent production mapping. So, taking the last six months of production and looking at the water cut, we got together with the production mapping team, and we were able to design a workflow we could complete that day that would give us an idea of how to address this bigger objective - trying to identify which locations would see the lowest water cut, because that's another value measure we could use to understand these wells.

Importantly, because we're gonna do this a lot, even though we call it sprints, the key is to work smart, not hard, because it's gonna keep going on over time. So, you wanna be able to do this within the normal working hours. Don't burn the midnight oil, otherwise, you'll burn out before you get to make your robust business decision.

The really important piece in this cycle, though, is step number five. When we come to assess the outcomes of any one of the experiments that we've done, we need to rank the wells that we had in order of economic value. So, whichever way we were trying to devise it, we needed to have those well targets ranked from best to worst at the end of each one of these simulation sprints. That's what Mike was asking me for at the end of the day.

And when we do this assessment, we also spend time having a look at the weakest part of the technical work that we've done, because that's gonna form the basis of the objective of the next sprint cycle. And we come back and progressively increase the length of this sprint loop. So, the first one was done in a day, the second in two, and then four and eight and so on. But we can adjust this as needed to suit the experiments we have to address.

But as we come back through this loop, and constantly re-rank ourselves, what was fascinating is that after only four weeks, the answer never changed again. The order of the top six wells was always the top six. And what that shows you is that really, with some quite simple approaches you can get to the same decision that you could with a full Frankenstein model. You can get to that recommendation without having to do years' worth of work.
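If it helps to see the shape of that loop written down, here is a minimal Python sketch of the idea. Everything in it is invented for illustration - the target names, the scores and the `run_sprint` stand-in for the real technical work - but it shows the two mechanics Luke describes: sprint lengths that double each cycle, and stopping once the top-six economic ranking stops changing.

```python
import numpy as np

rng = np.random.default_rng(7)
well_targets = [f"Target-{i}" for i in range(1, 13)]   # ~12 candidate locations

def run_sprint(length_days, previous_scores):
    """Hypothetical stand-in for one sprint's technical work.

    In reality this is production mapping, then progressively more
    sophisticated modelling; here we just refine a noisy economic score,
    with the noise shrinking as the sprints get longer (more precise work).
    """
    noise = rng.normal(scale=1.0 / length_days, size=len(previous_scores))
    return previous_scores + noise

scores = rng.uniform(0.0, 10.0, size=len(well_targets))  # first-pass ranking
previous_top6 = None
sprint_length = 1                                        # days: 1, 2, 4, 8, ...

while sprint_length <= 32:
    scores = run_sprint(sprint_length, scores)
    # Step five: rank every candidate target by economic value, best first.
    ranked = [well_targets[i] for i in np.argsort(scores)[::-1]]
    top6 = ranked[:6]
    print(f"{sprint_length:>2}-day sprint -> top six: {top6}")
    # Stop paying for extra precision once the recommendation is stable.
    if top6 == previous_top6:
        print("Top-six ranking unchanged - recommendation is robust.")
        break
    previous_top6 = top6
    sprint_length *= 2                                   # next sprint is twice as long
```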

So, we were able to make that recommendation. It was an expensive campaign, so we didn't stop at four weeks; we ended up stopping at about four months. But really importantly, it would've taken six months, perhaps a year, to get to that kind of stage otherwise, and we were already significantly ahead of that at the end of this routine. So, it's a method that has really changed the way I do my work, and it's something I really recommend you give a go. Hopefully, you enjoyed this, and I'll see you back here next time at the Cognitive Whiteboard.

MANAGING UNCERTAINTY: ROBUST RANGES USING TRENDS

Warming to his heretical theme, Luke is back to discuss using trend analysis to drive uncertainty in geostatistical models.

If you'd like a demo of Hutton - our product that Luke mentioned - just drop him a note: luke@cognitivegeology.com.

I think this is my favourite whiteboard drawing yet - enjoy it in glorious full screen by clicking below!

TRANSCRIPTION

REPLACING THE VARIOGRAM: ROBUST RANGES USING TRENDS.

Hello, and welcome back to the Cognitive Whiteboard. My name's Luke, and wow, did that last video generate a lot of interest in the industry! What we did was we talked about how variograms and trend analysis can work hand in hand to try to investigate how your properties are being distributed in three dimensional space. Today I want to show you how we can use the trend analysis to drive the uncertainty in your models as well. In doing so, I think I'll officially be promoted from geostatistical heretic to apostate. But let's see how we go.

What I want to do today is really run you through how I used to go about doing geological uncertainty management and how I do it today. I started by thinking about shifting histograms. I think a lot of us do this. If we wanted to get a low case, what if the data was worse than what we observed, or for a high case, we could shift it upwards in the other direction? I've done this many times before in the early parts of my career. It's not a particularly valid way of doing it in many examples. When you just shift the histogram and fit to the same well data, you'll generate pimples and dimples around your wells, which is undesirable. But if you shift the observed data as well by saying, "Well, the petrophysicist has some uncertainty in their observations," what we're really beginning to invoke is that the greatest degree of uncertainty is at the wells. And I think we can all agree that the greatest degree of uncertainty is away from the wells. There are important uncertainties here, but we have bigger ones to deal with up front.

The other way of trying to manage our uncertainty is in the structure of how we distribute that data. Different variogram models are useful for doing this. We can fairly say that the interpretation of a geological variogram - the experimental data that you get, particularly in the horizontal direction - is usually uncertain. We don't have enough information there to be confident about how that variogram structure should look, so it's fair to test different geological models and see what will happen. What's interesting is, of course, that if you vary the histogram, you'll change STOIP with a similar recovery factor - just generally better or worse - whereas if you change the variogram, you'll vary the connectivity, but you won't really change the STOIP very much. And it's often difficult to link that variogram structure back to a conversation you can have at the outcrop.

So over the last, I guess, five or six years now, I've been focusing on addressing uncertainty by saying, "Actually, the sampling of our data - the biased, directional drilling where we've gone out and sought the good spots in the reservoir, typically - is really what we need to investigate." How much does that bias our understanding of what could exist away from the well control?

Got an example here, a top structure map of a field, a real field through here with five appraisal wells along the structure, and the question is: in these two other locations that are gonna get drilled, is it gonna be similar, different, better, worse? And how could we investigate the uncertainty of the outcome on those locations?

We could, for example, investigate: is the variation that we observe in this data set primarily a function of depth? Or perhaps it's a function of some depositional direction - in this case, the Y position, as a theory. We don't know at this stage which one it is, and depending on which one we do first, we end up with different correlations. In fact, you can see in this sequence that after taking out the Y position, if I analyze the residual against depth, we end up with not two porosity units per 100 meters of burial, but only one porosity unit being reduced as a function of depth. So you can see that, fundamentally, we're invoking a different relationship with burial as a result.
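To make that order-dependence concrete, here is a small Python sketch on entirely synthetic well data - the coefficients, well count and sample layout are invented, not the field Luke is describing. It fits porosity directly against depth, then takes out a Y-position trend first and fits the residual against depth, and the two routes give different gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from 5 appraisal wells, 40 samples each. Depth and Y position
# are partly correlated (the structure deepens along Y), which is exactly why
# the order in which trends are removed changes the answer.
y = np.repeat(np.linspace(0.0, 4000.0, 5), 40)                       # metres along depositional dip
depth = 2100.0 + 0.05 * y + np.tile(np.linspace(0.0, 150.0, 40), 5)  # burial depth, metres
phi = (0.28
       - 0.010 * (depth - 2100.0) / 100.0        # "true" burial trend: 1 p.u. per 100 m
       - 0.008 * y / 1000.0                      # depositional trend along Y
       + rng.normal(0.0, 0.005, y.size))         # measurement scatter

to_pu_per_100m = 1e4   # fraction per metre -> porosity units per 100 m

# Route 1: porosity against depth alone (depth soaks up part of the Y trend).
naive_slope = np.polyfit(depth, phi, 1)[0] * to_pu_per_100m

# Route 2: remove the Y trend first, then analyse the residual against depth.
residual = phi - np.polyval(np.polyfit(y, phi, 1), y)
sequential_slope = np.polyfit(depth, residual, 1)[0] * to_pu_per_100m

print(f"depth trend, fitted directly        : {naive_slope:+.2f} p.u. per 100 m")
print(f"depth trend, after removing Y trend : {sequential_slope:+.2f} p.u. per 100 m")
print("true burial coefficient used above  : -1.00 p.u. per 100 m")
```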

What's really interesting is that these are highly uncertain interpretations, but they're valid ones. And they give us different answers, not just in terms of the predictions at those two well locations, but for the entire model. And this is very representative of your uncertainty in three-dimensional space. This input histogram with that particular shape is more to do with the sampling of your wells in the particular locations that they were, whereas this trend model behind it is helping you understand whether there is any variation in three-dimensional space going on. So we can end up with very different plumbing and in-place structures by really investigating how these trends can go.


You can do all of this manually one after the other through these routines. Our product Hutton does it for you automatically. We run about 300 different paths through your data set to try to investigate how that could go, and we find it's a very powerful way of really developing robust but different geological interpretations to address your uncertainty in your reservoir. If you're interested, drop me an email: I'll let you know more about it. But for now, that's all from the Cognitive Whiteboard. Thank you very much.

Replacing the Variogram

This week, Luke nails his sacrilegious thesis to the door - that the variogram can be replaced by quality data analysis. 

We're building a product called Hutton to do exactly what has been outlined above. If you'd like to get a sneak peek, drop Luke a line at: luke@cognitivegeology.com

To dig deeper into this week's whiteboard, click on the image below to get a fullscreen version.


TRANSCRIPTION

Replacing The Variogram

Hello. Welcome back to the Cognitive Whiteboard. My name is Luke, and today, I'm nailing my thesis to the door. I am going up against the institute to commit geostatistical heresy by showing you that variograms are not an essential element of our geological modeling processes. In fact, I wanna show you that with careful trend analysis, you can do a better job of producing geological models that represent your geology in a much more powerful way and give you much more predictive answers.

To do this, I've built a data set - a complex data set, quite geologically realistic, with some tilted horst blocks through here into which we have modeled some complex geological properties. We vary those over x and y by some unknown function. We have some very complex sequence cyclicity of fining-upwards and coarsening-upwards trends. And we've overprinted this with a burial trend. What we want to do is see how we go about representing that with this perfectly patterned drilled data set, either with trends or with variogram analyses, and determine which one we think is better.

So, let's first have a look at the variogram of the raw data set, and we can see immediately that some of those structures, some of those trends that we imposed in the model, are showing up in our variograms. We have some obviously low sills in some of the data sets that have some structure - some correlation over distance before they flatten out into our sill. But we do have some weird cyclicity happening, and we should wonder what's going on there. In truth, we know that this is just the complexity of having lots of different processes creating nested variograms and various sequences of cyclicity. And all geologists familiar with this kind of routine will know to try to take some of these trends out.

One way we could start is by subtracting the stratigraphic trend. This isn't often done, but it's very, very powerful. You could take, for example, a type log and remove that from your data set, or you could do what I've done here and essentially subtract the midpoint or the mean value from every one of your K layers and see what you get after you take that out. You're basically representing sequence cyclicity when you do this. You wanna keep that trend, this black line here, because you have to add it back into your model afterwards. But when you do it, you see a reduction in the vertical variogram, as you would expect: we have described a lot of the variation occurring in this direction as a function of sequence cyclicity, and it's not just random. So typically, you'll see a reduction of probably half of the variation in the vertical sense. But it won't have any impact on the major/minor directions, because the same columns exist everywhere in x and y space.

Once we take that trend out, we'll have a new property - it’ll be porosity given that trend - and we can do a trend analysis on that property. So, now we're doing it against depth. And what's interesting is as you take out trends progressively, you start to see the second and third order effects that might not have been obvious in the raw data sets. In this case, it really tightened up our observation of the depth trend. And we can subtract that next. Take that trend out because it's not random and see what it does to our variograms. Now, this one changes the major and minor variograms, not the vertical one, even though that seems counterintuitive, and it's doing that because your burial depth is varying as a function of its map position. So that's why it changes those two variograms.

And again, we can keep diving deeper and deeper into our data sets, removing progressive trends, linking this to geological processes, and pulling it out of your data. In the end, with this perfect data set, if we had described all of those trends, you would see no structure left in your data, or next to no structure. And that's because you have done a pretty good job of describing geology, not just random processes. Your nugget and your sill are almost identical. That means that the observational point has just as much information immediately as it does at some great distance. That's great: you no longer need a variogram. You have done it instead with trends.
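For anyone who wants to play with the idea, here is a minimal numerical sketch of that progressive trend removal. The cube, the trends and the crude vertical semivariogram are all synthetic and invented for illustration - this is not a full geostatistics package - but it shows the mechanics: subtract the K-layer means (and keep them to add back later), take out a map-controlled depth trend, and watch the residual flatten toward a nugget that matches the sill.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nz = 20, 20, 40

# Synthetic "perfectly drilled" porosity cube: sequence cyclicity down the
# K axis, a burial trend that varies only with map position, plus noise.
k = np.arange(nz)
cyclicity = 0.04 * np.sin(2.0 * np.pi * k / 10.0)
jcol = np.arange(ny, dtype=float)[None, :, None]             # map position (J index)
depth = 2000.0 + 8.0 * jcol + np.zeros((nx, ny, nz))         # burial depth, metres
phi = (0.25 + cyclicity[None, None, :]
       - 0.0003 * (depth - 2000.0)
       + rng.normal(0.0, 0.01, (nx, ny, nz)))

def vertical_semivariogram(values, max_lag=8):
    """Crude experimental semivariogram along the K axis."""
    return np.array([0.5 * np.mean((values[:, :, h:] - values[:, :, :-h]) ** 2)
                     for h in range(1, max_lag + 1)])

print("raw variance (sill)           :", round(phi.var(), 5))

# Step 1: subtract the stratigraphic trend - the mean of every K layer.
# Keep layer_means: it has to be added back into the model afterwards.
layer_means = phi.mean(axis=(0, 1))
residual = phi - layer_means[None, None, :]
print("after removing K-layer trend  :", round(residual.var(), 5))

# Step 2: fit and subtract a depth trend from what is left.
slope, intercept = np.polyfit(depth.ravel(), residual.ravel(), 1)
residual = residual - (slope * depth + intercept)
print("after removing depth trend    :", round(residual.var(), 5))

# With both trends described, the vertical semivariogram of the residual is
# essentially flat: nugget and sill are almost identical, i.e. what remains
# is noise, and the trends - not a variogram - carry the geology.
print("residual semivariogram        :", vertical_semivariogram(residual).round(5))
```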

Now, this is obviously a perfect data set with an unrealistically perfectly sampled series of wells. Let's imagine what we would do with a realistic sample set, with much sparser and more biased samples. Well, most gross geological trends are obvious even in small data sets, with small numbers of samples. But these horizontal variograms are something that we basically never get to measure in the real world. And so we often spend our time in peer reviews defending whatever settings we have chosen in these major and minor directions, with no outcrop basis that we can link them to.

So, if you want to do something in this space, you can make your models much more predictive because you can end up driving geology into it and removing the dependence upon your random seed. You can do all of this in pretty much any commercial package today, but it's not particularly easy or intuitive. So, we've gone ahead and built a product for you that will do this in a much more powerful way. We call it Hutton, named after the father of geology, because with these small observations, we can make interpretations that can change our understanding of the world.

Hutton comes to the market in March 2017. It will help guide you through this trend analysis process, it even has some intelligence that can help you automate that. And if you're interested in finding out how to throw away the variogram and bring geology back into your geological models, please drop me an email, and I'll happily show it to you. But for now, in the meanwhile, that's all from us, and I'll see you next time at the Cognitive Whiteboard.

Learning to Speak Cajun: Geomodeling for Well Planning

Luke is back at the Cognitive Whiteboard, looking at the importance of learning the language and processes of your colleagues when planning wells.

Click on the image below for a detailed view of this week's whiteboard.


TRANSCRIPTION

Hello, welcome back to the Cognitive Whiteboard. My name's Luke, and today I'm going to use my outrageous Australian accent to try to speak Cajun. I want to talk you through a drilling history that I've been involved with, a six-well campaign where something went wrong and we used geomodeling methodologies to make sure that it wouldn't happen again. 

So, let me talk you through the field first. It's a salt raft on the West African coast that has a number of different reservoir units. The main productive interval was this yellow one through here, where the lower two units particularly were of much better quality, and we'd been commingling production across those zones. The field had initially been exploited on primary decline and then water injection, so it's quite a complex reservoir management story.

What we needed to do was make a better job of exploiting the shallowest unit of that main reservoir. We had, however, a reservoir above us that had been under production for some time, and that production had caused changes to its pressure regime that we hadn't accounted for properly. So, in the pre-drill pressure prediction, we essentially said that the pressure was going to be relatively consistent across the field. The fracture gradient would therefore also be pretty similar, our mud weight window required only one casing depth, and we would drill through both the shallow unit and the target reservoir with the same open hole section.

When we drilled it, however, we encountered a problem, and the problem was this. Above that target we had the biggest producer in that shallow reservoir, and it had been causing a lot of pressure depletion locally in that area. We had a subsurface blowout at that point. We took a kick that we hadn't accounted for - we didn't anticipate it, it hadn't been observed before - a small kick that would've been easily contained by the well design. However, with the depletion above, its effect on the fracture gradient of the shallow reservoir, combined with that additional kick weight, resulted in a fracture inside here and massive losses of our mud system into that reservoir unit. We even ended up producing quite a lot of that drilling mud back from the reservoir later on.

So, we had a situation where that well design didn't work. The problem was we had five other wells, designed identically, that needed to do the same job, and from the drillers' perspective, they were not going to touch that with a bargepole. We were just post-Macondo, so everyone was very sensitive around our drilling parameters - we didn't want anything going wrong, even more so than normal. So what they wanted to do was redesign with an additional casing string, which would have required us to case off between the two units that are quite close together, preventing us from being able to get a horizontal section into the target reservoir.

That sub-vertical production would have resulted in around 60 million barrels of lost reserves. So we really needed to make sure that change was genuinely necessary. This was a field in late production life, so it was probably the last campaign that would ever exploit that particular reservoir, and we really wanted to be certain that this was what had to be done.

 

We spent some time with the drillers and we realized that they do a lot of their benchmarking in 1D. They're doing that in these kinds of pressure-elevation plots: they essentially dump all of the wells onto these kinds of diagrams, look out for the red spots where there's some crossover, and say, "Well, there's a one-in-X chance of that happening, therefore that's not an acceptable risk."

What we were able to do was build a geomodel that went all the way through to the surface, incorporating all of the basic rock properties of these reservoirs but also copying in the history-matched, current-day pressures from all of our simulation models. We were able to show that the three-dimensional relationships of these pressures today, under the current pore pressure regime, meant that the other wells were going to be safe with that design. In fact, surprisingly, they were all placed underneath injectors, so the effect was actually reversed.

So, by putting all of this together and inverting our model into drilling mud weight, we were able to generate an example, using geomodeling technologies, that communicated very, very clearly to the drillers that the wells in the remaining campaign would be safe. So, we executed the plan as we had initially designed the wells, and they all came in safely, and they're now producing very nicely. Without doing this kind of work and linking it all together into a single story, using those three-dimensional models and the current pore-pressure-matched simulation models, we would've missed some 60 million barrels of additional reserves.

I hope you liked this little example. I would love to hear your stories around similar problems during a production scenario. Thank you very much.

 

 

 

Tetris in the third dimension: object-based modeling

Luke is back at the Cognitive Whiteboard to take a look at how object-based modeling is a lot like playing your old Gameboy favourite in 3D. 

Click on the image below for a detailed view.

[Whiteboard image: Tetris in 3D]

TRANSCRIPTION:

Tetris in the third dimension: object-based modeling

Hello, and welcome back to The Cognitive Whiteboard. My name's Luke, and today we're playing Tetris in 3D.

Geological models from a different perspective

I want to talk today about how I like to use object-based modeling for some of the more complex facies environments that I encounter. I'm going to use as an example this complex fluvial system that Boyan, from the Wave Consortium, recently highlighted in a fantastic LinkedIn post, where he really showed how you can use Google Earth to understand the true complexity that can occur at the geocellular scale of a geological model.

The original article by Boyan is here

So, this trace of his study through here has 50 by 50 meter grid cells overlain on it - a typical size, perhaps, for a geological model. What we can see, when we look at it, is that each individual cell within this system has radically different potential permeability relationships. We might have cells that are mostly sand but, with a mud plug along one edge, would have no permeability in an east-west direction. Likewise, it could be the north-south orientation that gets destroyed by a mud plug. What it means is that when we try to represent the facies inside this system, it's not suitable just to think about rock types at one point, and then continuous properties in isolation: we need to pay attention to the relationships of both the facies and the continuous properties of those facies in 3D.

And so, with object-based modeling we can do this but, importantly, before we get in there, you can't model this system with one of these. So, the fluvial environment models that you see in object-based modeling typically look like these channel systems; often just a simple sinusoid with some levee banks. I can't see that anywhere here. I don't ever see that. In fact, what I usually see in these kinds of systems are things that look more like an aeolian dune or an oxbow lake. So my recommendation: usually I'm using things like the aeolian dunes, one after the other, to represent the development of those point bars that come across these systems.

 

Connecting the pieces: object-based rather than pixel-based modeling

We do have levees. We do have overflow banks. But by far the dominant systems are those depositional elements that are occurring because of channel migration - so we want to represent that in our geology. So, how could we then use a shape like this and still get that kind of behavior in the permeability vector? What we can do with object-based modeling - and it's somewhat unique to object methods, as opposed to pixel-based methods - is preserve individual orientation attributes of each one of the objects. In particular, directional trends, depth trends, and distance and curvature are properties that you can carry out of any one of your objects, and you can use these when you come to do your correlations for porosity and permeability.


So, how could I represent, perhaps, this feature here using those outputs? Well, if I knew that I had an oxbow lake coming around the edge, then I could preserve that permeability in the direction of the object - high along it, but low across it. Likewise, I might want to say that these point bar systems have coarser sediment at the base and fine upwards across them, so that depth trend could come in very, very handy. It's very difficult, when you're doing your data analysis, to have hard data to correlate this against, so you're gonna have to model that based upon your understanding of geology. But this method, together with that kind of third dimension of context, is really the only way that I've ever seen that works to deliver the relationship between rock types and continuous properties - except for one other method in development that I've seen out there, coming out of Geneva, where multi-point statistics are used to simultaneously simulate rock types and continuous properties. As far as I know, that's not particularly commercially available yet. Anyway, I hope you enjoyed this. Thanks very much, and I'll see you again at The Cognitive Whiteboard next time.
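As a footnote to the transcript, here is a toy Python sketch of the idea of letting a preserved object attribute drive directional permeability. The object table, the along/across permeabilities, and the fining-upward factor are all invented for illustration - this is not how any particular commercial package (or Hutton) implements it.

```python
import numpy as np

# Toy "objects": each cell carries attributes preserved from the object that
# deposited it - here the local flow direction of a point bar or oxbow feature
# (degrees from east) and a normalised depth-in-object (0 at base, 1 at top)
# to carry the fining-upward idea.
cells = np.array([
    # direction_deg, depth_in_object
    [  0.0, 0.1],
    [ 45.0, 0.5],
    [ 90.0, 0.9],
])

k_along = 800.0    # mD along the object (assumed, channel-parallel)
k_across = 20.0    # mD across it (assumed, e.g. against a mud plug)

theta = np.radians(cells[:, 0])
# Fining upward: scale permeability down toward the top of each object.
fining = 1.0 - 0.7 * cells[:, 1]

# Project the object-parallel / object-normal permeability onto the grid's
# east-west (kx) and north-south (ky) axes for each cell.
kx = fining * (k_along * np.cos(theta) ** 2 + k_across * np.sin(theta) ** 2)
ky = fining * (k_along * np.sin(theta) ** 2 + k_across * np.cos(theta) ** 2)

for (d, z), px, py in zip(cells, kx, ky):
    print(f"direction {d:5.1f} deg, depth-in-object {z:.1f} -> kx {px:7.1f} mD, ky {py:7.1f} mD")
```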

My 24 Million Dollar Mistake

At the Cognitive Whiteboard this week Luke describes a 24 million dollar mistake that he made, and how it changed the way he thinks as a geologist.

Click on the image below to get a more detailed view of this week's Cognitive Whiteboard.

[Whiteboard image: My 24 Million Dollar Mistake]

TRANSCRIPTION:

My 24 Million Dollar Mistake

Hello, welcome back to The Cognitive Whiteboard. My name's Luke and today we're not going to talk about technical best practices. I'm going to share with you an example of why I think communication is at least half the job that we do. I'm gonna illustrate that with an example from my history where I think I made a 24 million dollar mistake in an appraisal well.

Firstly, the well was drilled safely, with no environmental impacts, and we achieved all of our appraisal objectives on time and on budget. So it wasn't a mistake in that regard - but I will explain to you why I think it was. We had a setting where we were drilling for a lowstand sandstone. It was an unusual target for the region: typically we were looking for something much deeper, but this lowstand was essentially within marine shales. It was in 1,500 meters of water - so quite deep for us to drill from - and it was underlying a very complex overburden of submarine canyons, of cuts and fills filled with various claystones and calcilutites, making depth conversion very, very difficult.


Unusual Reservoir

The reservoir itself was also quite unusual. What we had in this reservoir was structural clay. If you haven't seen this before: it's common for us to see dispersed clays - typically authigenic cements occurring at the grain boundaries - and laminated clays, which are commonly depositional. Structural clays, though, few of us had ever encountered. Here, bioturbation had been so pervasive that these little creatures had essentially concentrated all the clay into fecal pellets, and it was providing framework support for the reservoir. So despite a 30% to 40% clay content, we had fantastic porosity and permeability.

However, those pellets were relatively ductile, and what we observed in the core was a dramatic reduction in porosity and permeability associated with increasing external stress. So the theory was that if we went deeper, particularly deeper below mud line, we would probably expect to see a poorer quality reservoir.

 "What do you mean, you drilled it anyway?"

"What do you mean, you drilled it anyway?"

We Were Right - But We Were Wrong

We drilled the well, and the results confirmed what we had predicted. And so I went to my mentor and explained what had happened. To my surprise, he put his head in his hands and said, "If you knew the answer, why did you drill that well?" And I really... This was a turning point in my career. It really put me back on my heels. And this is where I think my 24 million dollar mistake came. If we knew this so well and we had such good technical justification - had we worked more on our "Rosetta Stone" of translating technical jargon to business speak in that conversation, had we built a bridge between those two divides and managed that - we may have postponed a 24 million dollar drill.

Now, this was a major capital project, so it was always going to get drilled, and it is a data point that we needed - but could it have been delayed? I don't know that for sure, but I look back on that and I reference it throughout my career - and it's the reason why I spend so long on these boards - because communication is at least half of the job that we should be doing as geologists.

Thanks very much. I'll see you again here next time.

Juggling Multiple Vertical Facies Proportional Curves

Responding to feedback from our viewers, this week Luke pulls one of his old tricks out of his sleeve: how he blends multiple vertical curves together to produce a coherent three-dimensional trend model. Enjoy!

If you'd like to explore this week's whiteboard, click the image below to get a closer look.

[Whiteboard image: How to create a three-dimensional facies model]

TRANSCRIPTION: 

Juggling Multiple Vertical Facies Proportional Curves

Hello, and welcome back to The Cognitive Whiteboard. My name is Luke and, today, this is the third video we're filming around facies modelling. I was planning on talking more about methodology at this point, but there was so much interest in how I blend multiple vertical curves together into a coherent three-dimensional trend model, that I thought I'd show you exactly how I do that. So let's get started.

WELL DATA AND VERTICAL CURVES


What we want to do is have a conceptual depositional model in mind. So this is something you have built up from your field data and analogous nearby data sets so that you understand how the facies might be brought together in the overall system of that formation. And we’ll take a look at the wells that we've acquired and see where they cluster inside that depositional model. Because, what we're going to do, is say that these clusters of data give us some observed information, but it's an incomplete set. We need it to represent that depositional suite in a more robust fashion.

So we're going to cluster our wells together. In this case, I've got three particular groups that I've used that represent my depositional model, and I'm also concerned that there might be another set of facies that hasn't been sampled by my wells.

So when I start with my data, I've upscaled the facies logs three different times for the different groups of wells, and now I'm going to build some vertical proportion curves. And I'm going to use some detective work to understand how to take this observed set, which has a terrible sample rate, and construct something that represents that depositional region of the model. We're going to do that for each one of the scenarios we want to carry, including the one we haven't sampled, which we'll base entirely upon depositional concepts. So we build vertical curves four times over; each of these exists everywhere, and every IJ column of your grid will have these percentages of these particular facies types.
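If you want to see the mechanics of that first step, here is a minimal Python sketch of building a raw vertical proportion curve from upscaled facies logs for one cluster of wells. The facies codes, layer count, missing-sample flag and smoothing are all placeholders; the output is only the noisy starting point that you would then reshape by hand against the depositional concept.

```python
import numpy as np

n_layers, n_facies = 50, 4     # K layers in the grid, facies classes 0..3
rng = np.random.default_rng(3)

# Upscaled facies logs for one cluster of wells: one facies code per K layer
# per well, with -1 where the well has no sample in that layer.
facies_logs = rng.integers(0, n_facies, size=(n_layers, 6))
facies_logs[rng.random(facies_logs.shape) < 0.3] = -1          # sparse sampling

# Raw vertical proportion curve: fraction of each facies in every K layer.
vpc = np.zeros((n_layers, n_facies))
for k in range(n_layers):
    samples = facies_logs[k][facies_logs[k] >= 0]
    if samples.size:
        vpc[k] = np.bincount(samples, minlength=n_facies) / samples.size

# The raw curve is noisy because of the terrible sample rate, so smooth it
# vertically before reconciling it (by hand) with the depositional concept.
kernel = np.ones(5) / 5.0
smoothed = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, vpc)
sums = smoothed.sum(axis=1, keepdims=True)
smoothed = np.divide(smoothed, sums, out=np.zeros_like(smoothed), where=sums > 0)

print(np.round(smoothed[:5], 2))   # proportions per facies for the top 5 layers
```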

Now, obviously, this is tremendously uncertain at this stage, so it's well worth considering; does this need to be analysed multiple times so that you're representing your uncertainty appropriately? But let's say for now these are the trends that we want to do. So, essentially, we have picked up the vertical trends out of our depositional model linked to the wells, but that's really where it's coming from, so that we come down and say, "How are we going to distribute this geological behaviour onto that grid?"

MAPS AND EQUATIONS


What I do then is I construct some simple two-dimensional maps that overlay the reservoir. So I can now map out where vertical curve one will exist, where two, where three, and where four should be on a map-based sense. This is simply a surface that goes literally from value one to value four, and I've sculpted that myself using geological intuition.

So for example, I want to say, "What if this carbonate facies exists?" Perhaps it could be in this corner of the map, or the other corner, or perhaps along the entire edge. And I want to see whether this changes the answer for the particular business decision that I'm facing next. I can then simply blend together these four properties - which are the same everywhere in three-dimensional space - into a model that has a different vertical curve at every point in space, by using this equation here.

So you have to increment that for each one of the trends that you're doing and, obviously, you could combine one and four if you wanted to think about how you could do that mathematics; you could obviously change the numbers around a little bit. But in doing so, you can blend these curves together so that if you're 30% of the way between one and two on this map, the vertical curve you'll get will have 70% of this facies assemblage and 30% of that facies assemblage with those same vertical patterns. And in doing so, you can really take control of the way your facies models appear, and I find this a fantastic way to generate very different geological scenarios and test what's going to change my next business decision.
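The blending equation itself lives on the whiteboard rather than in this transcript, so treat the sketch below as just one way to implement the idea: a linear interpolation between adjacent vertical proportion curves, driven by a hand-sculpted map surface that runs from 1 to 4. The array shapes, the random curves and the sculpted surface are purely illustrative.

```python
import numpy as np

n_layers, n_facies, n_curves = 50, 4, 4
rng = np.random.default_rng(4)

# Four conceptual vertical proportion curves (one per depositional region),
# each layer normalised so its facies fractions sum to one.
vpcs = rng.random((n_curves, n_layers, n_facies))
vpcs /= vpcs.sum(axis=2, keepdims=True)

# Hand-sculpted map surface running smoothly from 1.0 to 4.0: which curve
# (or mixture of adjacent curves) applies at each IJ column of the grid.
ni, nj = 60, 80
i, j = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
curve_map = 1.0 + 3.0 * (0.6 * i + 0.4 * j)

# Blend: a column mapped to 2.3 gets 70% of curve 2 and 30% of curve 3.
lower = np.clip(np.floor(curve_map).astype(int), 1, n_curves - 1)   # 1..3
frac = curve_map - lower                                            # 0..1
blended = ((1.0 - frac)[..., None, None] * vpcs[lower - 1]
           + frac[..., None, None] * vpcs[lower])

# Every IJ column now has its own vertical curve, still summing to one.
print(blended.shape, np.allclose(blended.sum(axis=3), 1.0))
```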

I hope this is helpful, and I'd love to hear some critique on the method. So please, feed back in the comments and let me know what you think.

Removing The Blindfold: Taking Control in Facies Analysis

Luke is back at the Cognitive Whiteboard again for the second video in our series on facies - this week he looks at data analysis in facies modelling.

Below is a still image of this week's whiteboard, for your further perusal!

[Whiteboard image: Data analysis in facies modelling]

TRANSCRIPTION

 

Removing The Blindfold: Taking Control in Facies Analysis

 

Hello! Welcome back to The Cognitive Whiteboard. My name's Luke and today, we're filming the second video around a series on facies modelling. In this video we're going to focus in on how we do the data analysis behind the facies model. It's arguably one of the most difficult parts of the workflow and so it's a place to take your time and get it right.

First things first

Really, before we get into any of the analysis of this, we should have had the conversation with the stratigrapher, the geophysicist, and all of the rest of the geologists in your region to understand what your depositional environment is likely to be in your particular area, and to make sure that you understand how best practices reflect that kind of depositional system. And the job that we're going to do as modellers is going to be to implement that theory in three-dimensional space.

A common starting point is to look at the vertical proportions that we see in the wells, and this isn't an easy task by any means. It's not easy for a couple of reasons. Firstly, we're sampling a discrete property, and we're usually sampling it with a handful of wells, so it's quite difficult to see a lot of character in it. We have also essentially thrown away a lot of information: we made the choice of binning a bunch of rock types together into a particular facies class. So it's important that we take that observational data, tie it back into the depositional concept, and produce a conceptual vertical curve that mirrors what you're trying to invoke inside the model.


Worthwhile extra work

Now, I personally am not smart enough to get the right curve for a full field in one go. Quite frankly, I don't know what the right vertical proportion curve would be for that model there in that grid, because it varies across space. Essentially, over here it's 100% of the orange facies, and over here there's none of it. So what's the right vertical proportion curve? It really depends upon the grid structure as well as the depositional model. So I find a much simpler routine is to actually generate more than one vertical curve across my model - I do this manually - and then draw some polylines and use a little bit of mathematics to blend them together and construct a combined vertical proportion concept: essentially a three-dimensional model that blends together both my map-based theory and my vertical proportion curves. It's a little bit more work, but I find it gives me a lot more control.

And then, of course, it's very important that we see if we can drive something out of the seismic to give us an insight into the reservoirs. When we do so, just be aware of the vertical resolution that you get from seismic. There may be a lot of tuning effects: this reservoir is thinning off to the left here, so I would be quite skeptical about what I'm seeing there. And of course, the seismic wavelet may be several times thicker than the reservoir target, so it's important to understand: is this an average effect over the whole reservoir, or a particular stratigraphic zone? Any seismic image sees not only the target of interest but also the rock adjacent to it.

And of course, the seismic interpreter, the geophysicist, is going to be seeing the same rocks binned in a different way. They're going to be binning it by acoustic properties and the sedimentologist might be binning it by other methods. So it's quite important, referring back to the first video, that we have that clear and calm conversation so that we can relate these properties to the facies that we have observed in the wells and bring it all together.


Adding mathematics

And finally, on that point of bringing it together: most tools, when we come to execute the facies modelling routines, allow you to blend more than one trend into essentially a single prior that will go into the geostatistical routine. Most of the time, commercial software will give you slider bars for these various kinds of properties, which essentially say "I love" or "I hate" this property - it works or it doesn't work - and I'm going to try to blend this together. Arguably that's because it's quite easy to control mathematically in an uncertainty workflow at the end, but I'm not a big fan of it, because I don't get to see the outcome in three-dimensional space before it goes into the modelling routine. So again, it's relatively simple mathematics: a couple of scripts, and you can combine these together yourself and have a nice three-dimensional concept that you get to QC yourself before it goes into the modelling routine.
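To make "a couple of scripts" concrete, here is a hedged sketch of combining a map-based trend and a vertical proportion curve into a single 3-D facies prior that you can inspect before it ever reaches the geostatistical routine. The weights, Dirichlet-generated trends and grid dimensions are all invented for illustration; the only point is the weighted combination and the per-cell renormalisation.

```python
import numpy as np

rng = np.random.default_rng(5)
ni, nj, nk, n_facies = 40, 40, 30, 3

# Two independent trends for the same facies set:
#  - a map-based probability (e.g. from a seismic attribute), constant in K
#  - a vertical proportion curve, constant in IJ
map_trend = rng.dirichlet(np.ones(n_facies), size=(ni, nj))    # (ni, nj, n_facies)
vertical_trend = rng.dirichlet(np.ones(n_facies), size=nk)     # (nk, n_facies)

w_map, w_vert = 0.6, 0.4    # "how much I love each trend" - the slider bars

# Combine into a single 3-D prior per cell, then renormalise so the facies
# probabilities in every cell still sum to one. QC this cube in 3-D before
# it goes anywhere near the geostatistical routine.
prior = (w_map * map_trend[:, :, None, :]
         + w_vert * vertical_trend[None, None, :, :])
prior /= prior.sum(axis=-1, keepdims=True)

print(prior.shape, np.allclose(prior.sum(axis=-1), 1.0))
```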

 

I hope this is all helpful. We'll use these kinds of concepts in the next video, when we start talking about executing specific routines.

Geomodel Facies: Methods and Madness

Luke is back to discuss one of the trickiest elements of the geologist's workflow: geomodel facies.

For a detailed view of this week's whiteboard - with added colours! - check out the still shot below.

[Whiteboard image: Geomodel facies]

TRANSCRIPTION

Geomodel Facies: Methods and Madness

 

Hello! Welcome back to the Cognitive Whiteboard. My name's Luke, and today, we're starting a series of videos around facies modelling.

Why do we do facies modelling?

So when we do facies modelling, we're normally trying to achieve one or two of these kinds of effects. We're trying to represent particular rock-flow behaviours - so there might be correlative relationships between things like porosity and permeability, or relative permeability differences that behave in a bin-like way - or we're trying to represent geobody shapes. Usually, we see an outcrop like this psychedelic representation of a fluvial system. Now, there's a lot of complexity in those geobodies out there, and facies are a very useful way of getting that into your model.


Different meanings for different folks


Now, when we do our facies modelling, we have to be careful because there are a lot of cats that like to use that term. The sedimentologist, the petrophysicist, the seismic interpreter, all may use the term "facies" for their own meaning. The sedimentologist is perhaps the most traditional way of thinking about it. They're trying to represent depositional systems, and they get to see all the way down to the millimetre scale textural relationships that are available essentially only to the naked eye and that can help them see quite a lot of character in the rocks.

By the time the petrophysicist gets to see most of the information, most of their logs are at the 10 centimetre to meter type resolution, and a lot of the particularly older wells lack image logs that can give them some of the same textural information that the sedimentologist sees. So, realistically, the petrophysicist is dealing with mineralogical effects. And of course, the seismic interpreter does their best to try to extrapolate that in 3D but they're working from a meter-plus vertical resolution and what they get to see with that acoustic response is orders of magnitude different to what the sedimentologist can do. So it's important that we get everyone around the table and understand how they're linked together because, particularly from seismic all the way down to sedimentology, there is a pretty difficult choice sometimes in trying to bring those two sciences together.

And when we do, we then come into the geomodeller's realm. The geomodeller gets the choice of how they're going to try to distribute those properties, and they're not all that easy to do. The traditional object-based and pixel-based methods are still out there, still in use, and still add lots of value. The pixel-based methods are very good at incorporating external trends - say, seismic data or map-based behaviours that you want to instil upon your model - which can help you get those spatial relationships done very, very well; they're very good at honouring lots of different probabilities.


Object-based vs pixel-based methodologies


But the object-based models are perhaps more powerful than the pixel-based methods at preserving some of those geobody shapes. And that can be particularly useful in say, channelized bodies. But you can also see that some of the choices you get in creating an object-based model don't necessarily very well reflect what we see in the outcrop. So it's important to remember to model what's deposited and preserved, not what's in an active, modern system.

But the two came together with multi-point statistics where we used an object-based model and a pixel-based methodology to try to give us both the geobody shapes and the external trends all in the one kind of a solution. And in many regards, it's probably one of the most powerful methods that's out there in the industry today. It's preferred by a lot of the super majors. I've been using it for a long time. I did feel like a bit of a dunce when I started. It's very complex to do, and it takes a lot of learning. But if you understand the principles of what goes into it, it can be a very powerful tool to add to the arsenal.

We will talk all about these methods in the upcoming videos, so we'll go into a little bit more detail on how we can get these to sing and dance in the way that you want them to.

But, at the end, it's important that we have a good set of quality control checks to make sure we're getting what we want out of our model. We want to make sure that the consistency of scale we agreed in that conversation is preserved inside our modelling methods. We also want to make sure that all of the spatial relationships we want to instil - the map-based trends, the seismic-type trends - are coming into our modelling systems, and that the internal architectures we're interested in are preserved inside them.

So in the coming videos, we'll talk a little bit more about how we can bring all of those things together.

Thanks very much.

Stochastic vs Scenario-based Uncertainty Management

This week, Luke takes a look at the differences between stochastic and scenario-based uncertainty methods, and how the underlying question is one of precision versus accuracy.

Here is a shot of Luke's artwork, without his mug getting in the way!

[Whiteboard image: Stochastic vs scenario-based]

Transcription

Hello, and welcome back to the Cognitive Whiteboard - where I practice my art skills, and share my experiences in applying subsurface best practices to oilfield decision making.

My name is Luke and today, I would like to talk about the differences between stochastic and scenario-based uncertainty methods. In doing so, we will explore the reasons why some of the industry’s leading oil companies prefer to use the latter when it comes to characterising the economic risk associated with developing their assets.

Precision vs. Accuracy Defined

Before we get underway let's quickly revisit the difference between precision and accuracy, as it's relevant to this discussion.

Precision is the degree of repeatability of an estimate: the size of the cluster on the dartboard, the random error that you cannot avoid.

Accuracy, on the other hand, is the difference between the average of the estimates and the actual answer.

In the oilfield, precisely accurate is usually unobtainable. Precisely wrong must be avoided at all costs: it encourages over-confidence, and leads to economic train wrecks. Approximately accurate, in contrast, is often perfectly suitable for making robust business decisions.
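A quick numerical aside on those dartboard definitions: the volumes below are made up, but they show how a tight, biased cluster and a scattered, centred cluster score on spread (precision) versus bias (accuracy).

```python
import numpy as np

rng = np.random.default_rng(6)
true_stoiip = 100.0    # the (unknowable) actual answer, in MMbbl

# Two sets of estimates of the same quantity:
precise_but_wrong = rng.normal(loc=130.0, scale=2.0, size=500)        # tight cluster, biased
approximately_accurate = rng.normal(loc=102.0, scale=15.0, size=500)  # scattered, centred

for name, est in [("precisely wrong", precise_but_wrong),
                  ("approximately accurate", approximately_accurate)]:
    precision = est.std()                            # spread of the cluster (random error)
    accuracy_error = abs(est.mean() - true_stoiip)   # bias of the average estimate
    print(f"{name:>24}: spread +/- {precision:4.1f}, bias {accuracy_error:4.1f} MMbbl")
```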

And so, when it comes to managing uncertainty, in geomodelling, which do we tend to focus on: precision or accuracy?

Which Is Which?

Well, stochastic theory was developed to address random errors: in geomodelling stochastic methods allow you to vary the input coefficients and test their impact on the answer. Programmatically, this is very easy for software engineers to implement. In object modelling, for example, we can easily assign uncertainty to the parameters controlling channel sinuosity and channel size.

Does this address accuracy? In many ways, it does not.

Changes to these parameters will tend to influence fluid velocity in a simulation model but it may leave the fundamental connectivity of the simulation model broadly unchanged: it explores the precision behind your model concept. As geologists, however, we need the ability to test the economic impact of major assumptions controlling reservoir connectivity.

Avoiding Train Wrecks

Was our reservoir deposited in an upper-fan or lower-fan setting? What impact would this have on off-channel sands connectivity?

Stochastic methods are not particularly powerful at exploring these kinds of uncertainties. To investigate high-level assumptions, we need to develop alternative geomodel scenarios and carry these into simulation. We need to see the economic impact of different connectivities between injector-producer pairs.

Likening a simulation to a chaotic plumbing diagram: if we just change pipe diameter, or adjust flow rate, we do not change which faucets are connected. And with many simulation results showing similar answers, we may begin to believe that the geological uncertainties have little impact on our field development plan.

And so, whilst stochastic methods help us explore precision, it is critical that you carry scenario-based uncertainties into simulation as well - to investigate the impact of systematic unknowns on field economics - and in doing so improve the accuracy of your predictions and avoid economic train wrecks.

Thank you very much; I hope you enjoyed this video, and I welcome your comments.

How To Make Geology Magically Appear In Your Models

In this video, Luke explores the subject of getting geology out of the domain of algorithms and into the sphere of the human experts, the geologists! 

Here's a still of this week's whiteboard. Top marks to Luke for artistic impression!

[Whiteboard image: Magic Models]

TRANSCRIPTION:

Hello, and welcome to another edition of Cognitive Geology. My name is Luke, and today we're following up on our previous video, where we talked about some of the challenges that can arise from using defaults in property modelling, by giving some ideas about how we could do better. Particularly what we're trying to do is to take back the geology; get it out of the hands of the algorithms and into the hands of the geologists.  

 

Algorithms don't model; geologists do

I'll start this off by showing you one of the challenges that can arise if you use variograms to try to deliver your geological trends into the model. As an example, we'll use two coarsening-upward sequences: two para-sequences of, say, a shoreface system coming through here, observed by these four wells, A1 to A4. Let's see what would happen if we tried to distribute that particular pattern purely with a variogram.

Variograms can invert real geological trends

So, if we did a variogram analysis and found that at some distance S we no longer have any information in the model from the observed points, what's going to happen in a typical SGS-type implementation is that once you get away from the well control, beyond the range of your variogram, the algorithm is going to search for a value to put there for you.

Now, if you are thinking along a K layer, as the algorithm populates around the wells it consumes the nearby observed values out of the histogram. So, in the areas between the wells, it tends to fill in values from the other end of that histogram. In this particular case, that would result in low values between the wells at that K layer; and vice versa at the bottom of the coarsening-upward sequences, you might start to observe high values at that location.

So, importantly, we can end up inverting that real geological trend if we let the variogram be the only way of distributing that sequence. In doing so, you can end up with some significant problems from a flow perspective: you'd start to send the flow paths in an elongated direction, which would delay water arrival time in your simulation and increase your estimated recovery factor. So, you really could be setting yourself up for some serious hurt when it comes to matching the real field results.
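To see why the inter-well areas can invert the trend, here is a deliberately crude Python caricature of the mechanism. It is not a real sequential Gaussian simulation, and the well positions, values and histogram are all invented, but it shows how honouring a global histogram away from well control can push low values in between high-valued wells:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy K layer at the top of a coarsening-upward para-sequence: all four
# wells (A1-A4) see high porosity here.
well_cells = np.array([5, 18, 31, 44])        # well positions along the layer
well_values = np.array([0.24, 0.25, 0.23, 0.26])

# Global histogram for the whole zone, built from every K layer, so it also
# contains the low values from the base of the para-sequences.
global_histogram = rng.uniform(0.05, 0.27, size=2000)

n_cells = 50
variogram_range = 5                            # range S, in cells

layer = np.full(n_cells, np.nan)
layer[well_cells] = well_values

# Crude stand-in for histogram reproduction: once the high values have been
# "used up" around the wells, what is left to draw from is the low tail.
leftovers = np.sort(global_histogram)[:n_cells]

dist_to_well = np.array([np.abs(well_cells - i).min() for i in range(n_cells)])
for i in range(n_cells):
    if np.isnan(layer[i]):
        if dist_to_well[i] <= variogram_range:
            # Within the variogram range: honour the nearest well value.
            layer[i] = well_values[np.abs(well_cells - i).argmin()]
        else:
            # Beyond the range: no local information, draw from the leftovers.
            layer[i] = rng.choice(leftovers)

far = dist_to_well > variogram_range
print(f"mean porosity at the wells      : {layer[well_cells].mean():.2f}")
print(f"mean porosity between the wells : {layer[far].mean():.2f}")
```

The layer ends up high at the wells and low in between them, which is the inverted trend described above.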

 

"No trend" is still a trend 

So, what should we do instead? We would advocate, here at Cognitive Geology, that you should be describing the trends yourself, explicitly, in the models. This is where the geologist can try to instill their own scenarios of geological behaviours that they want to test, and see what impact that has on the economic result. An important suite that we often look for includes depth trends, map-based trends, sequence-stratigraphic trends, and correlative relationships.

So, when we look at a particular relationship, something like porosity against depth, we don't get away with skipping an explicit analysis of it. We can't be the three wise monkeys and just ignore it, because "no trend" is still a trend in these types of property models: you are simply invoking a zero-gradient relationship. So, if you want to say there is a degradation of porosity with depth, drawing that in is not just a good thing, it's a necessary one.

Invoke your artistic license here, rather than a strict Y = MX + C. We don't get plus or minus infinity in any geological property, so we tend to end up with shapes that are a little less mathematically tidy and a little more organic. Feel free to describe that, particularly when you're looking at correlative relationships. Say this was acoustic impedance against your porosity estimate: you might want to say, "Well, the acoustic impedance is informative within some range, but outside that range it becomes relatively uninformative."
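As a small sketch of what "describing the trend yourself" might look like, here is an illustrative Python snippet; the well picks and the clipping bounds are invented, and the point is simply that "no trend" is the zero-gradient special case of the same relationship:

```python
import numpy as np

# Hypothetical well picks: depth (m TVDSS) vs porosity -- illustrative numbers only.
depth = np.array([2100.0, 2150.0, 2230.0, 2310.0, 2400.0])
porosity = np.array([0.24, 0.23, 0.20, 0.18, 0.15])

# Explicit depth trend, y = m*x + c, fitted by least squares.
m, c = np.polyfit(depth, porosity, deg=1)

def porosity_trend(z):
    """Explicit trend, clipped so it never runs off to plus or minus infinity."""
    return np.clip(m * z + c, 0.02, 0.30)   # bounds are assumed artistic licence

# "No trend" is itself a trend: the zero-gradient special case.
no_trend = np.full_like(depth, porosity.mean())

print(f"fitted gradient m = {m:.2e} porosity units per metre")
print("explicit trend at 2500 m :", porosity_trend(2500.0))
print("'no trend' at 2500 m     :", no_trend[0])
```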

Likewise with stratigraphy, we often describe a type well showing the particular para-sequences that we expect to see in the geology. You can invoke that in the K-layer direction by describing those functions in a way that honours the well data but implies the trends that you expect. Now, these can all be very uncertain, so you have a lot of degrees of freedom to map them out and generate scenarios that defend both the observed data and the geological insight, along with any external conditioning data such as your seismic attributes.

And finally, one last point we should address: in a poorly sampled dataset, taking the example of a reservoir with a cluster of wells along the crest and maybe one well down-dip, the order in which you choose to de-trend one variable after the other can actually result in a different 3D answer for the grid.

So, in this particular case, if we had that one well down-dip with a poorer-quality reservoir, it wouldn't be clear from the observational data whether that was due to the depositional direction or to its burial at a relatively deeper position. So, if we start trying to predict what is in the north of the field, depending on whether we de-trend by depth first and then by depositional direction, or by depositional direction first and then by depth, we can end up with quite different answers.
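Here is a toy numerical sketch of that ordering effect, with invented well values. The only point it makes is that fitting and removing the two trends in a different order attributes the confounded variation differently, and so extrapolates differently at an undrilled northern location:

```python
import numpy as np

# Hypothetical wells: a cluster on the crest plus one well down-dip.
# (northing m, depth m TVDSS, porosity) -- illustrative numbers only.
northing = np.array([0.0, 50.0, 100.0, 150.0, 2000.0])
depth    = np.array([2000.0, 2005.0, 1998.0, 2002.0, 2200.0])
phi      = np.array([0.240, 0.238, 0.242, 0.239, 0.150])

def sequential_detrend(first, second, target_first, target_second):
    """De-trend porosity linearly in `first`, then de-trend the residual in
    `second`, and evaluate the combined trend at the target location."""
    m1, c1 = np.polyfit(first, phi, 1)
    residual = phi - (m1 * first + c1)
    m2, c2 = np.polyfit(second, residual, 1)
    return (m1 * target_first + c1) + (m2 * target_second + c2)

# Target: an undrilled crestal location in the north of the field.
target_northing, target_depth = 3000.0, 2000.0

depth_then_direction = sequential_detrend(depth, northing, target_depth, target_northing)
direction_then_depth = sequential_detrend(northing, depth, target_northing, target_depth)

print(f"de-trend depth first, then direction : {depth_then_direction:.3f}")
print(f"de-trend direction first, then depth : {direction_then_depth:.3f}")
```

Because the down-dip well is both deeper and further along the depositional direction, whichever variable is de-trended first claims that variation, and the two orderings predict noticeably different porosity at the northern target.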

So, these are very powerful ways for you to invoke different property models and test scenarios. And the important point is: get them into simulation as soon as you can, and see what impacts the bottom line and what's controlling the flow behaviour of your reservoir.

I hope this was helpful - please let me know in the comments below. Thank you very much.

Why The Defaults Are Dangerous

This week, Luke takes a look at the dangers of using defaults in geological property modelling, and how the obvious answer isn't always the best one.

For reference, here's a still of this edition's whiteboard!

whiteboard+geology+still.jpeg

Video transcription

Hi. My name is Luke. I'm from Cognitive Geology. Today, we're here to talk about a couple of things that can be quite dangerous about using defaults when it comes to building property models. Before we get into that, I want to pose a question to you and see what you think. Often, when I'm doing a peer review, I see people present me histograms and say, "Here's an example of why this is a good model." Usually, it's one of these two outcomes they are referencing. If you have an input data set like this, with a negative skew, so a tail off towards the low end, and you have one model that matches it very, very accurately, and another that doesn't, something that goes the other direction, which of the two do you think you can assess as the likely better model? I think you can actually make that call, and as we go through this video we'll come back to it. Let's settle on an answer at the end, and see whether, just from that piece of information, we can pick the better model.

When we look at doing geostatistics, there's a really important underlying assumption that almost all geostatistical methods share: whatever you distribute with the geostatistical step at the end must be completely stationary. There are no loaded dice. If you roll the dice on one side of the reservoir, the chance of coming up sixes is the same as on the other side. It's very important that we've taken all those trends out. The first geostat assumption we have to check off is that we have dealt with all of the non-stationary components. Of course, that means we have to account for the geological trends. We do know that geology has trends, a lot of trends; that's what we base our careers upon, picking those trends. Whether it's something like this ExxonMobil slug diagram here with a proximal-to-distal trend, or a coarsening-upwards trend, or any number of other possible trends, we know they exist inside geology.

The question becomes: do our facies models necessarily address all of the non-stationary components that we consider important? What we can probably argue is that that's very rarely the case. When we look at our non-stationary behaviours, we still observe things like porosity-depth trends that over-print any depositional facies. In truth, when it comes down to the way we construct a facies model, we are usually talking about a facies-assemblage model, so there may still be internal non-stationary characteristics, such as this fining-upwards channel trend, that you can reasonably expect within any given facies assemblage. So, there are still a couple of significantly important trends that can exist inside your data.

Let's put this back into the context of how these models are created; let's decode the defaults. If we're using a histogram that matches our well data, what we're really saying is that the well data set has sampled what exists in the geology perfectly: we have a good distribution of random samples from the reservoir, and they all line up. If we then distribute that input distribution, the observed data, using just the variogram, perhaps within facies, we are also invoking that there are no additional geological trends. Those two statements are usually pretty challenging to support. When we decode those defaults and understand what's going on in our systems, I think we can actually answer the opening question, and in my opinion we can determine which one is the better model. If you match the data perfectly, it's unlikely that you've dealt with all of these non-stationary effects. If you don't match it, you've done something to account for them. So, my simple rule of thumb is to favour the model whose histogram differs from the input data set, because the bar for the perfect match to be right is just too high. So I think you can answer which model is best, and I'm curious to hear what you think.
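As a simplified sketch of why a de-trended model should not reproduce the raw well histogram, here is an illustrative Python example with made-up numbers. The wells are biased to the crest, so once the fitted depth trend is carried across the full grid, the model histogram shifts away from the well histogram, which is exactly the behaviour the rule of thumb is looking for:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical field where porosity degrades with depth; the wells are
# biased towards the crest, as they usually are.
grid_depth = np.linspace(1900.0, 2400.0, 500)        # depths across the whole grid
well_depth = rng.uniform(1900.0, 2050.0, size=12)    # crestal well sample

def assumed_trend(z):
    """Invented porosity-depth trend used only to manufacture the toy well data."""
    return 0.30 - 1.0e-4 * (z - 1900.0)

well_phi = assumed_trend(well_depth) + rng.normal(0.0, 0.01, size=12)

# De-trend the wells, treat the residual as the stationary part, then put
# the fitted trend back across the full grid.
m, c = np.polyfit(well_depth, well_phi, 1)
residual_std = (well_phi - (m * well_depth + c)).std()
model_phi = (m * grid_depth + c) + rng.normal(0.0, residual_std, size=grid_depth.size)

print(f"well histogram mean    : {well_phi.mean():.3f}")
print(f"de-trended model mean  : {model_phi.mean():.3f}  (lower, because the trend")
print("                         carries the property down-flank, beyond the wells)")
```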


Thank you very much. My name is Luke from Cognitive Geology.