3.2 Resolving an occasional misunderstanding between geologists and modelers...

Most modelers and geologists understand the benefits of collaborating with one another. Nevertheless, problems between the two groups do surface once in a while. Not often, far from it, but still often enough to make it worth trying to resolve any misunderstanding. This is the focus of this section. Once this has been addressed, we can talk about collaboration.

Some geologists would say that “us geologists, we do geology; you modelers, you do mathematics”, implying that modelers focus too much on mathematics, statistics and geostatistics, and don’t create models that are “geologic enough”. Some modelers have the exact opposite view of geologists: “us modelers, we are more rigorous because we rely on mathematics, while you geologists, your results are too interpretative”.
Of course, these criticisms might occasionally be true, but overall they are largely misconceptions about what each side wants to do and is able to do.
The source of this misconception seems to be rooted in the opposition between hand-contouring and automated contouring in the 1970s and 1980s. Over the decades, this original mutual suspicion has quietly shaped the relationship between geologists and 3D modelers. Briefly revisiting the original questions around contouring will help us get past this misunderstanding.

Before the age of computers, geologists relied heavily on manual contouring techniques to predict rock properties between the locations where samples were available (Tearpock and Bischke, 2003). Manual contouring was applied, and is still applied today, to create everything from structural maps to property maps (porosity maps, net-to-gross maps…). As computers became more powerful and readily available, many experts looked at how they could replace manual contouring with automated interpolation techniques (Watson, 1992). In these automated approaches, contours are no longer modeled per se. Instead, the property is interpolated at each location of a grid, using the data points as input. Then, a set of contours is extracted from the property distribution on the grid, as a visual tool to review the results. Some software packages allow editing the spatial distribution by manually adjusting these contours; otherwise, the maps are edited by changing the input data or the parameters of the mapping algorithm.
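To make this workflow concrete, here is a minimal Python sketch of automated gridding followed by contour extraction. The well coordinates and porosity values are invented for illustration, and SciPy’s griddata routine merely stands in for whatever gridding algorithm a given mapping package actually implements.

```python
# Minimal sketch of automated gridding + contour extraction.
# The well locations and porosity values below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hypothetical input data: well coordinates (m) and measured porosity.
wells_xy = np.array([[120.0, 340.0], [560.0, 910.0], [880.0, 150.0],
                     [400.0, 620.0], [730.0, 480.0], [250.0, 800.0]])
porosity = np.array([0.12, 0.21, 0.08, 0.17, 0.14, 0.19])

# Regular grid covering the area of interest.
gx, gy = np.meshgrid(np.linspace(0.0, 1000.0, 101),
                     np.linspace(0.0, 1000.0, 101))

# Interpolate the property at every grid node from the data points.
# The interpolation method is one of the "gridding parameters".
poro_grid = griddata(wells_xy, porosity, (gx, gy), method="cubic")

# Contours are extracted from the gridded property only as a visual
# aid for reviewing the result; they are not modeled per se.
cs = plt.contour(gx, gy, poro_grid, levels=10)
plt.clabel(cs)
plt.scatter(wells_xy[:, 0], wells_xy[:, 1], c="k", marker="^")
plt.show()
```

In such a workflow, editing the map means changing the input data or the gridding parameters; the contours themselves are only a by-product of the grid.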

While these new techniques became progressively more common, some opposition grew. Some supporters of manual contouring concluded that computers could not be trusted to give a realistic, geological result: hand-drawn contours take into account not only the data but also the experience of the geologist and the local geological context, whereas computer gridding relies only on the data and the algorithm, thereby creating mathematically correct but geologically incorrect maps. Meanwhile, some supporters of automated mapping maintained that only mathematical algorithms ensure “objective” mapping. Gridding algorithms are “free of any geological bias or interpretation” (AAPG Wiki, webpage on “contouring geological data with a computer”), which is considered an improvement over “the subjective nature of manual contouring (which) was inimical to precision in maps” (Watson, 1992, page 40). Given the same set of input data and the same gridding parameters, everyone would get the same output map. Of course, proponents of automated gridding acknowledge that gridding parameters must be selected carefully, otherwise the maps might not make much sense. But at the end of the day, these subtleties were overshadowed by a more general question: whom can we trust, humans and their intuition, or machines and their advanced mathematics? This philosophical question still somehow underlies the opposition between some geologists and some modelers.

We are not suggesting that this debate can be settled in a few lines. Very humbly, we are only suggesting that one should look at the question from a different angle. Ultimately, a map is “good” if it uses the known (or assumed) geological characteristics of the reservoir to transform the input data into geological information (the map), and if it is useful for making predictions. Such a “good” map can be made by hand or by computer, in the same way that a “bad”, non-geological map can be created by hand or by computer. Yes, manual mapping might make it easier to include geological knowledge than mathematical algorithms do. And yes, gridding algorithms are more easily repeatable (a more neutral term than “objective”) than hand-made maps. But at the end of the day, as long as the resulting map is meaningful, it doesn’t really matter how it is created.

Figure 1, Figure 2 and Figure 3 illustrate this point. A sand/shale reservoir has been sampled by approximately 20 vertical wells. The sand proportion in the reservoir varies from location to location (Figure 1) and we are asked to create a map from these data. A grid with a cell size of 100 m × 100 m is created and a simple spline interpolation algorithm is run to interpolate the sand proportion between the wells (Figure 2). The result is “objective” – to use the old-school terminology one last time: all the data points are respected, no geological “bias” has been introduced and the map is repeatable. But is it geologically correct?
A closer inspection shows that most wells have a low sand proportion, between 10% and 30%, except for several wells aligned along an approximately north-south axis in the middle of the map. There, the sand proportion rises to 55%, 70% and even 90%. Upon review of the well data, the reservoir is interpreted as a low-sand plain that was later incised by a single large, sand-rich channel. With this in mind, it is in fact a mistake to interpolate between data points from the plain and from the channel. They must be treated separately to create a more realistic map (Figure 3). First, the geologist drew a general outline for the channel (Figure 3, white lines delimiting its lateral extent). The spline gridding algorithm was then run twice: once to interpolate the sand proportion in the plain and once to interpolate it in the channel. The final map combines these two maps.
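As a rough sketch of this two-domain workflow, the Python snippet below splits hypothetical wells into “channel” and “plain” populations using a channel outline digitized as a polygon, interpolates each population separately, and merges the two grids along the outline. The data, the outline and the choice of nearest-neighbour interpolation (used simply to avoid empty cells in a tiny example) are all illustrative assumptions, not the actual workflow behind Figure 3.

```python
# Hypothetical two-domain mapping: plain and channel interpolated separately.
import numpy as np
from matplotlib.path import Path
from scipy.interpolate import griddata

# Hypothetical wells: x, y (m) and sand proportion (fraction).
wells_xy = np.array([[100, 200], [300, 900], [900, 850], [850, 150],
                     [480, 100], [500, 500], [520, 900], [490, 300]], float)
sand = np.array([0.15, 0.20, 0.25, 0.10, 0.70, 0.90, 0.55, 0.80])

# Channel outline drawn by the geologist, digitized as a closed polygon
# (a stand-in for the white lines of Figure 3).
outline = [(430.0, 0.0), (570.0, 0.0), (580.0, 1000.0), (420.0, 1000.0), (430.0, 0.0)]
channel_poly = Path(outline, closed=True)

# Split the wells into channel and plain populations.
in_channel = channel_poly.contains_points(wells_xy)

# 100 m x 100 m grid over the area of interest.
gx, gy = np.meshgrid(np.arange(0.0, 1001.0, 100.0), np.arange(0.0, 1001.0, 100.0))
nodes = np.column_stack([gx.ravel(), gy.ravel()])
node_in_channel = channel_poly.contains_points(nodes).reshape(gx.shape)

# Interpolate each domain from its own wells only.
plain_map = griddata(wells_xy[~in_channel], sand[~in_channel], (gx, gy), method="nearest")
channel_map = griddata(wells_xy[in_channel], sand[in_channel], (gx, gy), method="nearest")

# The final map keeps channel values inside the outline and plain values outside.
sand_map = np.where(node_in_channel, channel_map, plain_map)
```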

The map could be further refined, but this is not needed to draw our conclusions. First, by “blindly” applying some default gridding algorithm, we generate maps that are visually appealing and mathematically correct, but geologically wrong (Figure 2). Realistically, a less experienced geologist might have created a very similar-looking map by hand-contouring. The final, “good” map required more time and effort, to properly combine our understanding of the reservoir (the concept of the channel) with the data. This “good” map (Figure 3) was created through a more elaborate use of the same gridding algorithm; a geologist experienced in hand-contouring could have drawn something similar with pencil and paper.
Again, the quality of the final product (the map) matters more than the means by which it is created. Reservoir modeling can’t be done by hand, of course. Nevertheless, geologists who doubt geomodeling need to realize that current modeling techniques allow their geological expertise to be integrated with the data. In return, modelers who believe that mathematics is the alpha and omega of their work must meet with geologists, listen to their understanding of the reservoir and then find ways to translate their geological concepts into mathematics. In doing so, humans and computers can work together rather than in opposition.
