Recent developments in sensors and related technology have enabled the capture of high resolution geospatial data. The ancillary information or metadata associated with these data sets helps produce quality digital terrain and elevation models, which find applications in decision making, emergency response, and disaster management. Models for three dimensional visualization of geoinformation have also been put forward by various researchers (Zlatanova 2000; Zlatanova and Verbree 2000; Lattuada 2006; Lin and Zhu 2006; van Oosterom, Stoter, and Jansen 2006).
One of the newer technologies that has attained the status of an industry standard for topographic data collection is Light Detection and Ranging (LiDAR). This technology acquires dense, accurate, and precise three dimensional topographic information in the form of data points. Based on the principle of laser ranging, LiDAR is available in terrestrial and airborne modes. In this paper, we discuss LiDAR in the context of the airborne altimetric mode.
LiDAR data sets are large even for very small areas. Systems for data quality assessment, decision making, and similar tasks therefore require a visualization process, or pipeline, through which the LiDAR data, or the terrain they represent, can be seen. For geospatial data sets, a good visualization process is one in which the features on the terrain are perceived as they appear in reality. It is therefore necessary to compare various visualization pipelines on the basis of user feedback and to rank them. We have been working on studying and developing pipelines for LiDAR data visualization aided by georectified aerial images. In this paper, we first design an experiment to compare the various visualization pipelines studied by us and then use statistical tools to rank them.
This paper targets two research objectives: (a) to design an experiment to compare the various visualization methods developed by us (described in section "Visualization pipelines studied by us"), and (b) to describe the statistical methodologies used to draw conclusions from the data obtained through the experiment. The paper thus statistically ranks the various visualization schemes in order of their effectiveness in terms of feature recognition and depth perception.
Data sets and software APIs
In 2004, Optech Inc., Canada conducted a LiDAR flight over the environs of Niagara Falls. The flight was conducted at an average height of 1200 m above ground level. Along with an ALTM 3100 instrument, the airplane was equipped with a high resolution aerial camera to photograph the terrain. Five subsets of 100 m x 100 m each were cut out from the data sets. The number of data points in each subset is given in Table 1. Each subset contains buildings, trees, roads, and vehicles.
The aerial images corresponding to each of the data sets were georectified using the TerraSolid package. In our studies, the OpenSceneGraph display engine, with its C++ based API, is used to visualize the LiDAR data sets and derived products together with the georectified aerial images. This visualization engine can render scenes in mono and stereo modes; for stereoscopic visualization, anaglyph glasses were used. The programming was done on an Ubuntu Linux 11.04 64 bit platform with 4 GB RAM and a 1 TB 5400 rpm HDD.
Visualization pipelines studied by us
MacEachren and Kraak (2001) pointed out four different research challenges in the field of geovisualization. Of these, the second agenda item concerns the development of new methods and tools as well as a "fundamental effort in theory building for representation and management of geographic knowledge gathered from large data sets" (Dykes, MacEachren, and Kraak 2005).
Visualization engines capable of displaying three dimensional data are typically based on OpenGL (Open Graphics Library) or Microsoft DirectX. Advanced visualization engines are based on Graphics Processing Units (GPUs), which have the algorithms for three dimensional display embedded in them. In principle, graphics engines understand the language of points, lines, and triangles, which in the language of mathematics are known as simplices. In addition, these simplices can be given color or texture attributes. To visualize LiDAR data, the data must therefore be translated into the language of the visualization engine, i.e. converted into a combination of points, lines, and triangles, through a certain process. This process is known as a pipeline.
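The idea of such a pipeline, i.e. translating raw elevation points (0-simplices) into triangles (2-simplices) that a graphics engine can draw, can be illustrated with a minimal sketch. The function name and the assumption of a row-major grid of points are ours for illustration; they are not part of any engine API:

```python
def grid_to_triangles(points, ncols):
    """Split each cell of a row-major grid of points into two triangles.

    points : list of (x, y, z) tuples, row-major, ncols per row
    returns: list of index triples into `points`
    """
    nrows = len(points) // ncols
    tris = []
    for r in range(nrows - 1):
        for c in range(ncols - 1):
            i = r * ncols + c                           # top-left corner of the cell
            tris.append((i, i + 1, i + ncols))          # first triangle of the cell
            tris.append((i + 1, i + ncols + 1, i + ncols))  # second triangle
    return tris

# A 2 x 2 grid yields a single cell, i.e. two triangles:
pts = [(0, 0, 1.0), (1, 0, 1.2), (0, 1, 0.9), (1, 1, 1.1)]
print(grid_to_triangles(pts, ncols=2))  # [(0, 1, 2), (1, 3, 2)]
```

Real LiDAR points are of course irregularly spaced, which is why the pipelines below rely on Delaunay triangulation rather than a regular grid.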
Several approaches for processing and visualizing LiDAR data are found in the literature; for preparing 3D maps, they can be grouped into the following classes: (a) direct visualization of the LiDAR point cloud, (b) a manual, time-consuming process of classification, or (c) a process of segmentation. In the cases of segmentation and classification, the extracted features are first generalized and then visualized. Out-of-core computation algorithms that handle and visualize large point cloud data sets have also been reported.
We have been studying and developing systems for visualization of LiDAR data using simplices (points, triangles, and tetrahedrons) (Ghosh and Lohani 2007a, 2007b) and have also developed a heuristic to process, extract, and generalize LiDAR data for quick and effective visualization (Ghosh and Lohani 2011). In the following paragraphs, we present a brief summary of the various visualization pipelines for 3D visualization of LiDAR data aided by georectified aerial images.
0-simplex based visualization
Kreylos, Bawden, and Kellogg (2008) visualized large point data sets in a head-tracked and stereoscopic visualization mode, using a multiresolution rendering scheme supporting billions of 3D points at 48-60 stereoscopic frames per second. When used in a CAVE based immersive environment, this method has been reported to give better results than other analysis methods. Massive point clouds have been rendered with an out-of-core real time visualization method using a spatial data structure, on a GPU-based system with 4 GB main memory and a 5400 rpm hard disk (Richter and Döllner 2010). In our study we name this process PTS. The mono mode of 0-simplex based or point based visualization is named PL-PTS and the corresponding anaglyph mode is termed AN-PTS (Table 2).
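The core idea behind such out-of-core schemes, i.e. a spatial data structure that lets the renderer load only the portions of the point cloud intersecting the current view, can be sketched as follows. The flat tile grid, tile size, and function names are illustrative assumptions on our part; the cited systems use more sophisticated hierarchical structures:

```python
from collections import defaultdict

def build_tiles(points, tile_size):
    """Bucket (x, y, z) points into square tiles keyed by (col, row)."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[(int(x // tile_size), int(y // tile_size))].append((x, y, z))
    return tiles

def tiles_in_view(tiles, xmin, ymin, xmax, ymax, tile_size):
    """Return only the tile keys whose extent overlaps the view rectangle."""
    c0, c1 = int(xmin // tile_size), int(xmax // tile_size)
    r0, r1 = int(ymin // tile_size), int(ymax // tile_size)
    return [k for k in tiles if c0 <= k[0] <= c1 and r0 <= k[1] <= r1]

pts = [(5, 5, 1), (15, 5, 2), (95, 95, 3)]
tiles = build_tiles(pts, tile_size=10)
# Only the two tiles under the 20 m x 10 m view rectangle are selected:
print(sorted(tiles_in_view(tiles, 0, 0, 20, 10, 10)))  # [(0, 0), (1, 0)]
```

Tiles outside the view never need to be resident in main memory, which is what makes rendering data sets larger than RAM feasible.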
2-simplex based visualization
2-simplices, or triangles, can be generated from the data points either by using a triangulation algorithm or by generating tetrahedra and then exploding them into their respective facets. We generated Delaunay triangulations and tetrahedralizations using the Quickhull algorithm (Barber, Dobkin, and Huhdanpaa 1996) as provided in the QHULL tool (http://www.qhull.org).
We first generated a TIN from the LiDAR data and then draped it with the georectified texture (Ghosh and Lohani 2007a). The LiDAR data, the TIN, and the georectified texture were sent to the visualization engine. We name this process DTRI. The mono mode of visualization is named PL-DTRI and the anaglyph mode is termed AN-DTRI (Table 2).
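The draping step amounts to assigning each TIN vertex a texture coordinate in the georectified image. A minimal sketch, assuming a north-up orthoimage of known origin and extent; the function and its parameters are illustrative, not the TerraSolid or OpenSceneGraph API:

```python
def world_to_uv(x, y, origin_x, origin_y, width_m, height_m):
    """Map a world coordinate to (u, v) texture space in [0, 1].

    Assumes a north-up georectified image whose lower-left corner lies at
    (origin_x, origin_y) and which covers width_m x height_m on the ground.
    """
    return (x - origin_x) / width_m, (y - origin_y) / height_m

# Draping a 100 m x 100 m orthoimage: a vertex at (50, 25) maps to the
# middle of the image horizontally and a quarter of the way up vertically.
print(world_to_uv(50.0, 25.0, 0.0, 0.0, 100.0, 100.0))  # (0.5, 0.25)
```

In the actual pipeline, these (u, v) pairs accompany the vertex coordinates sent to the visualization engine, which interpolates the texture across each triangle.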
The TIN generated from the LiDAR data through Delaunay triangulation was found to contain triangles with small areas but very long edges. Such triangles are "culled" using a certain threshold. The remaining triangles, along with the point data and the georectified texture, are sent to the visualization engine (Ghosh and Lohani 2007b). We name this process TDTRI. The mono mode of visualization is named PL-TDTRI and the anaglyph mode is termed AN-TDTRI (Table 2).
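The culling step can be sketched as below. The exact threshold criterion of Ghosh and Lohani (2007b) is not reproduced here; testing the longest 3D edge of each triangle is our illustrative reading:

```python
from math import dist  # Euclidean distance, Python 3.8+

def cull_long_triangles(points, triangles, max_edge):
    """Keep only triangles whose longest 3D edge is at most max_edge.

    points    : list of (x, y, z) tuples
    triangles : list of index triples into `points`
    """
    kept = []
    for a, b, c in triangles:
        pa, pb, pc = points[a], points[b], points[c]
        if max(dist(pa, pb), dist(pb, pc), dist(pc, pa)) <= max_edge:
            kept.append((a, b, c))
    return kept

# The second triangle spans a distant point, so it is culled:
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (50, 50, 0)]
tris = [(0, 1, 2), (1, 2, 3)]
print(cull_long_triangles(pts, tris, max_edge=2.0))  # [(0, 1, 2)]
```

Such long, sliver-like triangles typically bridge unrelated features (e.g. a rooftop and the ground), so removing them improves the realism of the rendered surface.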
A Delaunay tetrahedralization is generated from the LiDAR data points. The tetrahedra generated by the process are exploded into their respective triangular facets (Ghosh and Lohani 2007b). The triangular facets containing long edges are removed using a threshold. The remaining triangles, the georectified texture, and the points are sent to the visualization engine. We name this process TDTET. The mono mode of visualization is named PL-TDTET and the anaglyph mode is termed AN-TDTET (Table 2).
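The explosion of tetrahedra into facets can be sketched with a hypothetical helper; whether facets shared between neighbouring tetrahedra are deduplicated is our assumption for illustration:

```python
from itertools import combinations

def explode_tetrahedra(tetrahedra):
    """Explode each tetrahedron (a 4-tuple of vertex indices) into its four
    triangular facets, deduplicating facets shared between neighbours."""
    facets = set()
    for tet in tetrahedra:
        # Every 3-subset of a tetrahedron's vertices is one triangular facet.
        for facet in combinations(sorted(tet), 3):
            facets.add(facet)
    return sorted(facets)

# Two tetrahedra sharing facet (1, 2, 3) yield 7 unique facets, not 8:
print(len(explode_tetrahedra([(0, 1, 2, 3), (1, 2, 3, 4)])))  # 7
```

The resulting facets are then subjected to the same long-edge threshold as in the TDTRI process before being sent to the visualization engine.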
Heuristic based visualization
In the case of Delaunay triangulation and subsequent trimming, we noted that certain dome-shaped terrain features, e.g. very sparse trees, get deleted, whereas planar features are well represented. In the case of Delaunay tetrahedralization, the sparse trees were better represented, but owing to the complexity of the tetrahedralization process, the planar features are over-represented and the rendering is comparatively slow. It is therefore clear that the LiDAR point cloud needs a hybrid mode of processing in which both triangulation and tetrahedralization can be used. We therefore developed a heuristic-based method.
Ghosh and Lohani (2013) identified that density-based methods are suitable for extracting features from LiDAR data sets, and concluded that Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al. 1996) is well suited to extracting terrain features from such data sets. We first extract clusters of points using the DBSCAN algorithm and observe that there are four kinds of clusters, namely sparse, flat-and-wide, dome-shaped, and those potentially containing planes. We developed strategies for treating each of these cluster types separately using heuristics (Ghosh and Lohani 2011). In this paper, we processed the data using two variants of the algorithm presented in Ghosh and Lohani (2011). In the first variant, which we name MDL, the dome-shaped clusters are generalized as described in Ghosh and Lohani (2011). In the second variant, which we name PMDL, the dome-shaped clusters are displayed as points. The mono modes of visualization for the two processes are named PL-MDL and PL-PMDL, and the stereoscopic modes are named AN-MDL and AN-PMDL (Table 2).
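The density-based grouping step can be illustrated with a minimal, quadratic-time DBSCAN sketch. This is not the implementation used in our pipeline, and the eps and min_pts values below are purely illustrative:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal O(n^2) DBSCAN sketch (after Ester et al. 1996). Returns one
    cluster id per point, or -1 for noise."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1                     # noise (may later become a border point)
            continue
        cluster += 1                           # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster            # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours(j)) >= min_pts:  # j is also a core point: keep expanding
                queue.extend(neighbours(j))
    return labels

# Two dense groups and one isolated (noise) point:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=1.5, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
```

In our pipeline, each recovered cluster is then inspected and assigned to one of the four types (sparse, flat-and-wide, dome-shaped, or potentially containing planes) before being triangulated, tetrahedralized, or left as points.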