International Journal of Earth & Environmental Sciences Volume 1 (2016), Article ID 1:IJEES-106, 3 pages
https://doi.org/10.15344/2456-351X/2016/106
Commentary
Relevant Quality of Digital Elevation Models in Earth and Environmental Studies?

Tomaž Podobnikar

Faculty of Civil and Geodetic Engineering, University of Ljubljana, Kongresni trg 12, 1000 Ljubljana, Slovenia
Dr. Tomaž Podobnikar, Faculty of Civil and Geodetic Engineering, University of Ljubljana, Kongresni trg 12, 1000 Ljubljana, Slovenia; E-mail: tomaz.podobnikar@fgg.uni-lj.si
Received: 27 January 2016; Accepted: 17 April 2016; Published: 19 April 2016
Podobnikar T (2016) Relevant Quality of Digital Elevation Models in Earth and Environmental Studies? Int J Earth Environ Sci 1: 106. doi: https://doi.org/10.15344/2456-351X/2016/106

1. Geospatial Datasets – Digital Elevation Model in Environmental Phenomena Studies

Various earth and environmental studies and applications aim to explain and deepen our understanding of observable (natural) phenomena. Such studies assess, for example, the stability of talus cones with respect to geological, geomorphological, hydrological or extreme weather hazards; classify talus cone types for modelling pika habitat; examine coral reef resilience or landscape and ecological connectivity under climate change; or evaluate local wind and solar conditions in order to reduce carbon footprints. These emerging themes are supported by geoinformatics and other disciplines, such as conservation science; environmental geography, geology and archaeology; palaeoenvironmental, agricultural and ecosystem analysis; and balanced urban development. Many of these approaches handle various kinds of geospatial datasets, such as digital elevation/terrain models, whose derived-information quality needs to be considered comprehensively.

This commentary responds to one of the most systematic and complete articles on the topic, “Causes and consequences of error in digital elevation models” by Fisher and Tate [1], and to related articles on the uncertainties of digital elevation/terrain models (DEMs/DTMs). These models are among the most important datasets in spatially related earth and environmental studies and are foundational components for mapping our world.

There is considerable terminological confusion in the definition and use of these terms. However, I propose – without discussing it in detail here – a solution that is fairly common worldwide: a DEM is a continuous surface model, usually in 2.5D, consisting of elevation values that describe the topographic surface/landform [2]. In contrast to a DEM, a digital surface model (DSM) also includes buildings (e.g. houses, viaducts), vegetation cover [3], and natural terrain features (e.g. temporary snow cover, the 3D surface of caves). A digital terrain model (DTM) is a continuous surface that, besides height values as a grid (known as a digital elevation model – DEM), also contains other elements describing the topographic surface, such as slope or the terrain skeleton [4]. In what follows, the term DEM is used generically for the whole family of landform model datasets [1].

The authors state that elevation models are distinct from any other geographical data for four reasons: they were one of the first forms of digital geographical information to become available; they are now widely used; they are closely associated with the mathematical concepts of surface modelling; and they represent a tangible, directly observable phenomenon of which all people have direct experience: the surface of the earth [1]. The quantity and availability of digital spatial datasets, including DEMs, have increased rapidly since the long-term, exponential explosion of information [5], but this was not always the case for DEMs. One of the main reasons was their evidently low quality, especially their low resolution. It now seems, however, that in contrast to many other geospatial datasets, the usability of DEMs has increased considerably because of higher resolution and accuracy, both of which are now more comparable to other widely applicable earth observation (EO) data provided by series of satellites carrying active and passive sensors with high spatial, spectral and temporal resolution. Many new data acquisition methods that support higher quality have been introduced, especially satellite- and airborne-based ones, e.g. for Lidar Surface Topography, or mixed ones such as EuroDEM. There is also a growing number of application platforms, for example Google Earth and Geopedia.

However, there are still distinct obstacles to using DEMs. Most users evidently believe they know everything about these models because of their direct experience, yet unfortunately many of them cannot recognise or understand most of the errors, or even frauds, introduced during processing [4]. One reason for this situation is that mapping has become simpler and thus more accessible to everyone. Usability then depends on the (lack of) presentation of uncertainty and on users’ understanding of the product’s real quality (ease of learning), in relation to their requirements. The quality of a DEM therefore needs to be appropriate, well known and well presented. Consequently, there are two important interrelated issues: a greater quantity of more detailed DEMs and, potentially, their better quality.

2. Complexity of Digital Elevation Model Quality

A very general convention considers error to be the objective or formal problems with measurement/estimation, and other, less tangible issues to be uncertainty [1]. I think there are many other shades between and beyond these two terms that cannot be named precisely, owing to the complexity of the issue and to differing perceptions across languages, cultures and disciplines – engineering, management, physics, philosophy, etc. Historically and generally, the term quality relates to satisfaction with something, as a relation between user and producer. The complexity of this term keeps increasing as our society develops. In the case of geospatial datasets, quality is considered as fitness for purpose (use) and as a state of being free of errors (with minimised uncertainty).

The standard ISO 9000 defines quality as the degree to which a set of inherent characteristics fulfils requirements. The latest standard for geographic data quality, ISO 19157, includes six elements of data quality: completeness; logical consistency; usability; and three types of accuracy (positional, temporal and thematic). There are some adjustments relative to previous standards, such as the usability element and new data quality measures that improve the estimation of data quality. (In)accuracy, as a specific standardised element, contains a description of measurable errors.

The ISO standards are much appreciated in the geospatial community, but their measures are rather conservative, static and generic with respect to many practical needs and to understanding the quality/inaccuracies/errors of particular spatial datasets, such as the DEM. Many authors emphasise the shortcomings of the widely used root-mean-square error (RMSE) as a measure of a DEM’s vertical accuracy [1,6,7]. This simple analytical error model assumes that errors follow a Gaussian distribution, which is only a very rough approximation of the actual situation. It also assumes a ground truth concept within the conceptual model of spatial data, which can be part of the conceptualisation of the DEM production process. The simplified conceptual model of spatial data assumes that quality is part of this concept, as the difference between the abstracted theoretical model and a spatial dataset. In this way we can verify whether the processed DEM accords with a certain conceptual model. This concept is often problematic because the reference points adopted as ground truth (in relation to the abstracted theoretical model) are insufficient or of inappropriate quality. The other issue is an adequate definition of the conceptual model of the DEM.
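To make this shortcoming concrete, the following minimal sketch (Python with NumPy/SciPy; the function names and the synthetic residuals are my own illustrative assumptions, not material from [1]) computes the RMSE from DEM-minus-reference residuals at check points and, alongside it, statistics that expose the bias, skewness and non-normality that a single RMSE value conceals.

```python
import numpy as np
from scipy import stats

def dem_vertical_errors(dem_z, ref_z):
    """Differences between DEM elevations and reference ('ground truth')
    elevations sampled at the same check-point locations (hypothetical inputs)."""
    return np.asarray(dem_z, float) - np.asarray(ref_z, float)

def rmse(errors):
    """Root-mean-square error: the conventional single-number vertical accuracy measure."""
    return float(np.sqrt(np.mean(errors ** 2)))

def summarize(errors):
    """Report RMSE alongside statistics that reveal non-Gaussian behaviour,
    which the RMSE alone hides (bias, skewness, heavy tails, normality test)."""
    return {
        "rmse": rmse(errors),
        "mean (bias)": float(np.mean(errors)),
        "std": float(np.std(errors, ddof=1)),
        "skewness": float(stats.skew(errors)),
        "kurtosis": float(stats.kurtosis(errors)),
        "shapiro_p": float(stats.shapiro(errors).pvalue),  # small p => not Gaussian
    }

# Example with synthetic, deliberately skewed errors (illustration only)
rng = np.random.default_rng(0)
errs = rng.normal(0.0, 0.5, 500) + rng.exponential(0.3, 500)
print(summarize(errs))
```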

Quantifying errors and uncertainties – their measurement, estimation and assessment – often requires better solutions than a simple RMSE, together with a deeper understanding of the whole complexity. The authors [1] present other analytical models as well as unconditioned/conditioned error simulation models and a fuzzy logic approach, and also discuss error propagation issues and empirical error estimation. A similar problem is defining other quantifiable properties of the quality of data, information, models, etc. For example, it is interesting that the concept of uncertainty as a quantifiable attribute is still relatively new in the history of measurement [8]. Even so, DEM vendors generally provide users with only the RMSE statistic.
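As a hedged illustration of such a simulation approach, the sketch below (all names, parameters and the synthetic terrain are assumptions for demonstration, not the procedure of [1]) runs an unconditioned Monte Carlo experiment: smoothed random error fields are added to a DEM and propagated to slope, giving a per-cell impression of how a derivative responds to elevation uncertainty.

```python
import numpy as np

def slope_deg(dem, cell=10.0):
    """Simple finite-difference slope (degrees) as an example DEM derivative."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def smooth(field, passes=3):
    """Crude repeated cross-shaped averaging to mimic spatially autocorrelated error."""
    f = field.copy()
    for _ in range(passes):
        f = (f
             + np.roll(f, 1, 0) + np.roll(f, -1, 0)
             + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 5.0
    return f

def propagate(dem, sigma=0.5, n_sim=100, cell=10.0, seed=0):
    """Unconditioned Monte Carlo simulation: add random error fields with the
    given standard deviation to the DEM and collect the resulting slopes."""
    rng = np.random.default_rng(seed)
    slopes = []
    for _ in range(n_sim):
        noise = smooth(rng.normal(0.0, sigma, dem.shape))
        noise *= sigma / noise.std()          # rescale after smoothing
        slopes.append(slope_deg(dem + noise, cell))
    slopes = np.stack(slopes)
    return slopes.mean(axis=0), slopes.std(axis=0)   # per-cell mean and spread

# Usage with a synthetic DEM (illustration only)
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
dem = 100 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
mean_slope, slope_uncert = propagate(dem)
```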

There are also qualitative, mostly visual, approaches to error/uncertainty identification and assessment. Being qualitative, visual approaches are generally more neglected than analytical ones, which are considered more objective. Other reasons for the lower acceptance of visual methods lie in the insufficient graphical capabilities of computers in the past and, especially, in the longer tradition of using statistical methods, with the exception of cartography [9].

The commented authors [1] stress that the central question in a modelling process suffused with uncertainty is: are the errors that may be present in one type of data input to the model significant in terms of the sensitivity of the model? In certain situations they may be critical, in others not; both options are possible even with the same DEM. This question is partly answered by the already mentioned ISO 19157, with its implementation of usability as a statement on the general quality of a dataset. The concepts of usability, fitness for use (purpose) and lineage are vague and often misused. The usability concept can include any important information that may affect the purposes for which the data are used. Namely, the user should be aware of whether a particular application (with individual requirements) or a more universal, multi-purpose use is intended. Sensitivity analysis of error and uncertainty propagation is therefore an important step towards robust models that combine different datasets, such as a DEM analysed together with other data. The authors [1] explain the problem of combined data in the contexts of hydrological and diffuse pollution modelling, where the effect of DEM error may be diluted and become unimportant compared with errors in other data and the uncertainty of the model itself. There are also more general situations where different properties of the DEM matter, such as sensitivity to small random or gross errors (blunders) in flat areas in flooding analysis [9]. In this case, a systematic error (e.g. a systematically incorrect altitude of 10 m) does not influence the results.
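The following toy example (my own construction, not a method from [1] or [9]) illustrates this point on nearly flat terrain: when the flood level is referenced to the DEM itself, a systematic 10 m offset leaves the modelled extent unchanged, while modest random noise alters it substantially.

```python
import numpy as np

def flood_extent(dem, outlet_rc, stage):
    """Cells inundated when water rises 'stage' metres above the outlet cell.
    Because the water level is referenced to the DEM itself, a constant
    (systematic) elevation error cancels out, while random noise does not."""
    water_level = dem[outlet_rc] + stage
    return dem <= water_level

rng = np.random.default_rng(1)
# Nearly flat synthetic terrain (gentle 0-2 m gradient across the grid)
x, y = np.meshgrid(np.linspace(0, 1, 300), np.linspace(0, 1, 300))
dem = 2.0 * x + 0.1 * y
outlet = (0, 0)

base = flood_extent(dem, outlet, stage=1.0)
systematic = flood_extent(dem + 10.0, outlet, stage=1.0)                       # 10 m bias
noisy = flood_extent(dem + rng.normal(0, 0.3, dem.shape), outlet, stage=1.0)   # 0.3 m noise

print("changed cells, systematic bias:", int(np.sum(base != systematic)))  # 0
print("changed cells, random noise   :", int(np.sum(base != noisy)))       # many
```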

The authors [1] review a number of articles concerning error reduction and fitness for use. Standardised and other data quality concepts are part of a continuous evolution. Usability can be enhanced in order to realise quality control/assurance, e.g. to verify the data or identify errors. Further, errors and uncertainty can be reduced, corrected or eliminated and the quality of the dataset (in our case the DEM) improved [3]. More comprehensive approaches comprise interoperability, total quality control or management [10], sustainability and other principles within the concept of spatial data quality improvement.

3. More Robust and Comprehensive Solutions

It is clear from the previous sections that there are many practical gaps in understanding the uncertainty of the datasets used in earth and environmental studies. I think we need more research on the different kinds of uncertainty of spatial datasets in relation to target applications, and towards near-universal spatial datasets suitable for as many applications as possible. After all, the DEM is such a model: it has to be a very common dataset, irrespective of the methods used for its processing. A number of particular solutions for quality control are discussed [1], but no unique criteria or single measure for DEM quality exist [11]. DEM errors are also not randomly, not normally, not identically and not stationarily distributed [6]. It is known that the spatial autocorrelation of the error in a detailed DEM results from a complex combination of random and systematic-like components [12].

Core support for better overall DEM quality comes from statistical, empirical and visual quality control/assurance approaches, which can be used for more robust DEM processing. As discussed, the “classical” solutions require that the target dataset (the DEM) be compared with another dataset, typically with high-quality reference points regarded as ground truth. The error/uncertainty can therefore be assessed only where higher-quality reference data are available. Fortunately, there are various alternatives, of which I expose three that consider the spatial dependence of uncertainties in some way: the first is based on analysis of the DEM itself, the second on analysis of several available DEMs, and the third on visual methods.

The analysis of the DEM itself: Most empirical research has observed that DEM errors are in some way correlated with characteristics of the real terrain, the abstraction result, the processing method and the DEM itself. The error/uncertainty field has a specific spatial pattern in terms of geomorphology (e.g. ruggedness, slope, hydrological network), sampling density, scale/resolution, generalisation (downscaling) methods, interpolation/filtering methods, vegetation cover, built-up areas and other anthropogenic impacts, etc. [1,3,13]. Some studies suggest that the method used to generate the DEM, which depends strongly on the method of acquisition, influences the accuracy pattern more than the geomorphological character of the DEM [14]; however, the acquisition influence needs to be eliminated more carefully.
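A sketch of what “analysis of the DEM itself” could look like in practice is given below; the attributes, the linear ruggedness-versus-slope model and the threshold are purely illustrative assumptions on my part, not an established procedure from the cited studies.

```python
import numpy as np

def slope(dem, cell=10.0):
    """Finite-difference slope magnitude (rise over run)."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.hypot(dzdx, dzdy)

def ruggedness(dem):
    """Terrain ruggedness index: mean absolute elevation difference
    to the eight neighbours of each cell."""
    diffs = [np.abs(dem - np.roll(np.roll(dem, dy, 0), dx, 1))
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    return np.mean(diffs, axis=0)

def suspicious_cells(dem, cell=10.0, k=4.0):
    """Flag cells whose ruggedness is far above what their slope suggests;
    on a clean DEM the two attributes are strongly related, so large
    positive residuals often indicate spikes, pits or interpolation artefacts."""
    s, r = slope(dem, cell).ravel(), ruggedness(dem).ravel()
    a, b = np.polyfit(s, r, 1)                  # linear ruggedness-vs-slope model
    resid = r - (a * s + b)
    return (resid > k * resid.std()).reshape(dem.shape)
```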

The analysis of the errors of several available DEMs upgrades the previous approach with the combined analysis of multiple datasets. Basically, we can use a high-quality DEM as a reference and compare it with other available DEMs, which is similar to the classical methods that use reference points. Moreover, we can generally analyse any kind of different overlaid DEMs (e.g. compute their differences) in order to compare them [4,9], similar to the map algebra philosophy in GIS.
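A minimal sketch of such a map-algebra comparison is given below; it assumes two co-registered grids of identical resolution (resampling and registration are not shown), and the input names are hypothetical.

```python
import numpy as np

def dem_difference(dem_a, dem_b):
    """Map-algebra style comparison of two co-registered DEM grids:
    per-cell difference plus simple summary statistics of the disagreement."""
    diff = np.asarray(dem_a, float) - np.asarray(dem_b, float)
    summary = {
        "mean": float(np.nanmean(diff)),        # systematic offset between models
        "std": float(np.nanstd(diff)),          # spread of the disagreement
        "p95_abs": float(np.nanpercentile(np.abs(diff), 95)),
    }
    return diff, summary

# diff, summary = dem_difference(lidar_dem, coarse_dem_resampled)   # hypothetical inputs
```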

The visual methods can reduce some weaknesses of the statistical and empirical methods. In these cognitive methods, the result depends on the expertise and experience of the operator. The most common views of a DEM, as a contour map or as a colour or grey-scale image, are good for detecting the most extreme errors [1]. There are more comprehensive options, such as analytical shading, modulo approaches, multi-scale or profile presentations of the DEMs, or visualisations of the analysed DEMs, for example as differences between two datasets, with point-density fields of the data sources, etc. [3,9]. Besides static presentation, there are also effective possibilities to present the error with animations [1]. The reasoning with visual methods is often based on rules of thumb.
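As a simple example of analytical shading used for visual inspection, the sketch below computes a standard hillshade from a DEM grid; the parameters and usage are illustrative assumptions, not a prescription from [3] or [9]. Interpolation artefacts, stripes and spikes that are invisible in a colour-coded elevation image often stand out clearly under oblique illumination.

```python
import numpy as np

def hillshade(dem, cell=10.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Analytical (hillshade) shading of a DEM grid under a given light direction."""
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# import matplotlib.pyplot as plt
# plt.imshow(hillshade(dem), cmap="gray"); plt.show()   # dem: hypothetical grid
```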

4. Conclusion

Positional accuracy, as a core element of DEM quality in earth and environmental studies, should not be based only on geometrical (vertical and planimetric) accuracy, but especially on geomorphological/topographic accuracy that considers shape and semantics [12], i.e. contextual information around every point of the DEM (as a surface). The assessment of the planimetric (horizontal) accuracy of DEMs is more complicated than the assessment of the vertical accuracy [3]. It is also difficult to separate vertical from planimetric errors, especially on relatively non-rugged terrain. Assessing the geomorphological accuracy is considerably more challenging still.

For the three alternatives that I have exposed, it is important to point out that the DEM is a surface, and knowledge about topography is needed. Furthermore, they can synergistically contribute to multimodal solutions/tools, where different expert interpretations contribute to a better understanding of the nature of uncertainty [15], and/or to integrated solutions/tools in DEM processing [4] that can be used to improve the quality of the processed DEM and of the derived applications. Quality assurance is thus one of the most complex topics in geospatial information applications. Yet it seems that a creative, artistic mode of thinking and reasoning, with some intuition, helps to develop more comprehensive solutions in order to achieve a relevant quality of DEMs and other spatial datasets.


References

  1. Fisher PF, Tate NJ (2006) Causes and consequences of error in digital elevation models. Prog Phys Geog 30: 467-489.
  2. Evans IS (2012) Geomorphometry and landform mapping: What is a landform? Geomorphology 137: 94-106.
  3. Höhle J, Potuckova M (2011) Assessment of the Quality of Digital Terrain Models. EuroSDR.
  4. Podobnikar T (2005) Production of integrated digital terrain model from multiple datasets of different quality. Int J Geogr Inf Sci 19: 69-89.
  5. Price DJ de S (1963) Little science, big science. New York: Columbia University Press.
  6. Liu XH, Hu P, Hu H, Sherba J (2012) Approximation Theory Applied to DEM Vertical Accuracy Assessment. Trans GIS 16: 397-410.
  7. Carrara A, Bitelli G, Carla R (1998) Comparison of techniques for generating digital terrain models from contour lines. Int J Geogr Inf Sci 11: 451-473.
  8. GUM (2008) Evaluation of measurement data – Guide to the expression of uncertainty in measurement. JCGM.
  9. Podobnikar T (2009) Methods for visual quality assessment of a digital terrain model. S.A.P.I.EN.S. 2: 15-24.
  10. Saylam K (2009) Quality assurance of lidar systems – mission planning. ASPRS Annual Conference.
  11. Östman A (1987) Accuracy estimation of digital elevation data banks. Photogramm Eng Remote Sens 53: 425-430.
  12. Oksanen J, Sarjakoski T (2006) Uncovering the statistical and spatial characteristics of fine toposcale DEM error. Int J Geogr Inf Sci 20: 345-369.
  13. Aguilar FJ, Agüera F, Aguilar MA, Carvajal F (2005) Effects of Terrain Morphology, Sampling Density, and Interpolation Methods on Grid DEM Accuracy. Photogramm Eng Remote Sens 71: 805-816.
  14. Carlisle BH (2005) Modelling the Spatial Distribution of DEM Error. Trans GIS 9: 521-540.
  15. Lovatt E (2013) Global flight-path maps: Five interpretations. BBC News.