Abstract
The 4D Cultural Heritage (4D-CH) project takes advantage of imagery from the wild (the Internet) and from existing repositories to reconstruct 3D models of monuments, places and landscapes. This method, however, cannot always deliver a complete dataset of an object, since the available views depend on the prominent positions from which the photographs were taken. We present two different approaches to a workflow which overcomes this issue: a semi-automated reconstruction as well as simulation with grammar-based modelling.
Video: CalwTrailer1 from 7reasons Medien GmbH on Vimeo.
Keywords
CH VR, VR, AR, Mobile Applications, Reconstruction, Photogrammetry, Historic Reconstructions, Virtual Archaeology
Introduction to the 4DCH Project
The advent of technology in digital cameras and their incorporation into virtually any mobile device has led to an explosion in the number of photographs taken every day. Today, the number of photographs stored online and freely available has reached unprecedented levels. Advances in the fields of Photogrammetry and Computer Vision have led to significant breakthroughs such as Structure from Motion (SfM), which creates three-dimensional models of objects from their two-dimensional photographs. The availability of powerful and affordable computational machinery now makes it possible to reconstruct not only complex structures but also entire cities.
The Case Study of the City of Calw
Within the 4DCH project, the city of Calw was chosen as a test site for 3D reconstruction, and different approaches were applied to model the central square of the settlement in different time settings (see Fritsch and Klein, 2014). According to the project plan, not only the central square of the town but also the somewhat larger historical settlement core was modelled.
The results will soon be made accessible in an interactive virtual 3D real-time environment powered by a game engine, allowing the user to walk or fly through the 3D model of the settlement and retrieve additional information on places and buildings on demand. Furthermore, switching the environment or a single building into different time periods will be possible, depending on the accuracy and quantity of the source material needed to model out the complete set of structures.
Production Pipeline: Modelling for 3D Realtime and VR Environments
The result of a computer-generated 3D model obtained through SfM and/or dense matching is bound to the quality and quantity of the available source images, and it reflects the epoch at which those images were taken. Missing features as well as anomalies in the resulting 3D model can be substituted through a camera-matched, manual modelling approach. This technique also allows 3D reconstructions from historic photographs. However, the results obtained may be subjective, as they depend on the operator's personal perception and 3D modelling skills. Nevertheless, a comparison between an SfM-calculated structure and the parametric model can lead to an adequate accuracy level. Paul Debevec (1996), an expert in computer vision, once defined this interpretation of photogrammetry as follows: “A method for interactively recovering 3D models and camera positions from photographs.”
The modern digital breakthrough work done by Debevec, then part of the Computer Vision Group at the Computer Science Division of UC Berkeley, was the first research to raise interest in photogrammetry as a tool for architectural reconstruction from single-view sources.
Through the manual alignment of the vanishing points in an image, its distortion and perspective can be used to calculate the position of the camera that captured it. Given the recovered camera alignment, the geometry can then be adjusted to fit the camera-matched image in the background through orthogonal translations and extrusions that respect the coordinate system of the modelling environment, thus modelling out the features of the desired structure.
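As a minimal sketch of the underlying geometry (not part of the actual Calw pipeline), the snippet below estimates a focal length and a camera rotation from two vanishing points of orthogonal scene directions, assuming square pixels and a principal point at the image centre; all pixel coordinates are hypothetical values chosen for illustration.

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Estimate the focal length (in pixels) from two vanishing points of
    orthogonal scene directions, assuming square pixels and a known
    principal point (typically the image centre)."""
    d1 = np.asarray(v1, float) - principal_point
    d2 = np.asarray(v2, float) - principal_point
    # Orthogonality of the two directions implies (v1 - p).(v2 - p) + f^2 = 0
    prod = -np.dot(d1, d2)
    if prod <= 0:
        raise ValueError("vanishing points not consistent with orthogonal directions")
    return np.sqrt(prod)

def rotation_from_vanishing_points(v1, v2, f, principal_point):
    """Recover a camera rotation whose first two columns point towards the
    two orthogonal vanishing directions (third column from the cross product)."""
    K = np.array([[f, 0.0, principal_point[0]],
                  [0.0, f, principal_point[1]],
                  [0.0, 0.0, 1.0]])
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([v1[0], v1[1], 1.0]); r1 /= np.linalg.norm(r1)
    r2 = Kinv @ np.array([v2[0], v2[1], 1.0]); r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)
    return np.column_stack([r1, r2, r3])

# Hypothetical pixel coordinates of two vanishing points in a facade photo
p = np.array([2000.0, 1500.0])            # assumed principal point (image centre)
v_horizontal = np.array([6500.0, 1500.0]) # vanishing point of the facade's horizontal edges
v_vertical   = np.array([0.0, 13500.0])   # vanishing point of the vertical edges
f = focal_from_vanishing_points(v_horizontal, v_vertical, p)
R = rotation_from_vanishing_points(v_horizontal, v_vertical, f, p)
print(f"focal length ≈ {f:.1f} px")
```

With such an orientation the photograph can be projected behind the modelling viewport, so that extrusions along the recovered axes line up with the depicted facades.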
In order to automate the process of image rectification, one could also use the camera positions calculated by an SfM process and thereby bypass the manual alignment of the vanishing points described above, eliminating subjective factors from the rectification. This, however, requires more than one view, so that the image data can be processed simultaneously in a bundle block adjustment.
Figure 1: Historical image and the resulting 3D model of the City Hall of Calw
To illustrate this process, we set up a test scene consisting of a 3D model of the Hermann Hesse birthplace building, previously constructed through a manual camera-match approach, from which several rendered images from different viewpoints were taken as source data for the SfM model generation. Any SfM software, such as VisualSfM, will deliver the camera positions, orientations and a sparse 3D model, which can be re-imported into a 3D modelling program (e.g. Autodesk 3ds Max). Inside the modelling program, one of the reconstructed camera views was chosen to verify that its perspective matched the background image, which served as a template for newly constructed geometry representing the outlines of the object (the house). Since the constructed geometry matched the vanishing points of the projected background image seamlessly, it can be confirmed that the camera match calculated from the SfM procedure is adequate for this purpose.
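To indicate how such reconstructed cameras can be carried over into a modelling package, the sketch below reads the camera block of a VisualSfM NVM export, assuming the plain NVM_V3 layout (image name, focal length, rotation quaternion WXYZ, camera centre XYZ, radial distortion) without a fixed-calibration header; the file name is hypothetical, and a real pipeline would additionally convert these poses into the coordinate conventions of the target program.

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def read_nvm_cameras(path):
    """Read the camera block of a VisualSfM NVM_V3 file and return, per image,
    the focal length, rotation matrix and projection centre.
    Assumes image file names without spaces."""
    with open(path) as fh:
        tokens = fh.read().split()
    assert tokens[0].startswith("NVM_V3"), "not an NVM_V3 file"
    n_cams = int(tokens[1])
    cams, i = [], 2
    for _ in range(n_cams):
        name = tokens[i]
        focal = float(tokens[i + 1])
        qw, qx, qy, qz = map(float, tokens[i + 2:i + 6])  # rotation as quaternion
        cx, cy, cz = map(float, tokens[i + 6:i + 9])      # camera centre
        # tokens[i + 9] is the radial distortion, tokens[i + 10] a reserved zero
        cams.append({"image": name,
                     "focal": focal,
                     "R": quat_to_matrix(qw, qx, qy, qz),
                     "centre": np.array([cx, cy, cz])})
        i += 11
    return cams

# Hypothetical usage: list the matched cameras exported from VisualSfM
for cam in read_nvm_cameras("hesse_birthplace.nvm"):
    print(cam["image"], "focal:", cam["focal"], "centre:", cam["centre"])
```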
The accuracy of the models generated through this procedure is highly dependent on the comprehension, skill and precision of the modeller in charge. Using this modelling technique, historical images or even illustrations can be used to recreate the periodic morphology of a chosen monument and to compare the changes occurring over that time range. Combining the manual 4D modelling approach with other automated modelling techniques such as laser scanning, SfM and dense image matching leads to an optimized model of a monument, which can then be produced and delivered for dissemination.
Figure 2: Set-up with the calculated camera position and the 3D model of Hermann Hesse's birthplace
The CALW VR mobile application
With this production pipeline, almost 70% of the building structures of the city of Calw were processed and passed on to a 3D real-time editor in order to be integrated into the pre-made framework for dissemination to mobile devices. The functions of the application include an interactive overview of the town from different camera angles as well as a walkthrough of the present-day structures. While the visitor can view the buildings as they stand today, they can switch to a mode showing a view of former times, e.g. the 19th century. Information about places and houses can be retrieved as audio, text and images, triggered when the visitor stands near the corresponding piece of architecture. In addition, a 3D inspection of several places is available for a detailed view of architectural structures and objects. The application will be published later this year (2019) together with an exhibition informing the public about the technological achievements and the semantic relations of the existing data, in order to motivate citizens to supply more material (historic photos, plans and oral information) for the ongoing 4D modelling.
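The real application runs inside a game engine, so the following Python sketch only illustrates the proximity-trigger logic described above; the class, station names, coordinates and media file names are purely hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class InfoPoint:
    name: str
    position: tuple   # (x, z) in scene coordinates
    radius: float     # trigger distance in metres
    media: dict       # e.g. {"text": ..., "audio": ...}

def active_info(player_xz, points):
    """Return the info points whose trigger radius contains the visitor's position."""
    px, pz = player_xz
    return [p for p in points
            if math.hypot(px - p.position[0], pz - p.position[1]) <= p.radius]

# Hypothetical content for two stations on the market square
points = [
    InfoPoint("Hermann Hesse Birthplace", (12.0, -4.5), 6.0,
              {"text": "hesse_birthplace.txt", "audio": "hesse_birthplace.ogg"}),
    InfoPoint("City Hall", (-8.0, 10.0), 8.0,
              {"text": "city_hall.txt", "audio": "city_hall.ogg"}),
]
for hit in active_info((11.0, -2.0), points):
    print("show:", hit.name, hit.media)
```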
Figure 3: VR overview of the City of Calw with various navigation options
Figure 5: Screenshot from the walkthrough position with accompanying text and map
Outlook
When published, the mobile application will be freely available and will additionally be offered for VR headsets. The aim is to complete the remaining building structures and to develop views of past periods that reach back even further than the 19th century. To do so, the participation of the citizens has to be encouraged.
Authors
Dieter Fritsch¹, Patrick Tutzauer ¹, Michael Klein²
¹ Universität Stuttgart, Institut für Photogrammetrie.
² 7Reasons Medien GmbH, Austria.