When I look ahead to the future of geospatial infrastructure and 3D, the words that come to mind are: more, faster, and … more! Humans are poor prognosticators when confronted with exponentially increasing or improving factors, and today we see an explosion of capability and content in the geospatial world that will lead to new solutions and problems that we can’t possibly predict.
Ten years ago, a few of us considered the possibility of generating textured 3D mesh models of whole cities on a continuously updated basis, at a level of resolution that could be used for asset inspection and inventory of individual building elements. Back then, the concept seemed interesting, yet far-fetched.
Today, we know that companies such as Aerometrex Ltd. and Vricon are collecting large swathes of cities and the globe in 3D at resolutions ranging from 2 cm to 50 cm. In 10 years, the entire globe will be collected at medium resolution, and major urban centers will likely have high-resolution models that are updated every few months, giving them a dynamic 3D basemap fabric to use for planning, disaster response and damage assessment, policing, and much more.
About 25 years ago, game-engine technology produced low-resolution, jerky visual experiences. Today's game engines render at 120 frames per second or faster, support photo-quality graphics and realistic, physics-based effects, and are starting to consume geospatial and building information modeling (BIM) data.
Tools such as VectorZero Inc.'s RoadRunner are being used to generate thousands of photorealistic simulation environments to test autonomous vehicle systems in virtual space before they go on the road. In 10 years, will we be able to taste and smell the salt air around a new proposed bridge over San Francisco Bay in a virtual experience? Probably not.
But will we be able to rapidly aggregate GIS, BIM, and other spatial content in game-like engines and display it with haptic sensing devices in extended reality (XR) experiences, so we can drive, walk, and even learn how to repair the bridge before it's built? Almost certainly.
VisiCalc, first released by Software Arts for Apple II computers in 1979, is credited as the software application that started the digitization of spreadsheet workflows, eventually changing the role of office workers and transforming analysis and reporting in industries as diverse as finance and medicine. In 2019, machine learning (ML) and artificial intelligence (AI) applications are emerging that are far more advanced than spreadsheet applications yet can be just as easy to use.
Tools such as ArcGIS Notebook Server are enabling anyone who can code a little Python or manipulate a graph-based scripting tool to apply ML and AI techniques for classification and knowledge extraction from spatial and non-spatial data.
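To make that concrete, here is a minimal sketch, written in plain Python with scikit-learn rather than any particular notebook product, of the kind of classification workflow such tools put within reach of a casual coder. The parcel features, class labels, and thresholds are hypothetical stand-ins for attributes that would normally come from a real GIS layer.

```python
# Minimal sketch: classify land parcels from a few spatial attributes.
# All data here is synthetic; in practice these features would be read
# from a feature layer or attribute table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000

# Synthetic features: parcel area (m^2), mean building height (m),
# distance to nearest road (m), impervious-surface fraction (0-1).
X = np.column_stack([
    rng.uniform(200, 5000, n),   # parcel_area
    rng.uniform(0, 60, n),       # mean_height
    rng.uniform(0, 500, n),      # road_distance
    rng.uniform(0, 1, n),        # impervious_fraction
])

# Hypothetical labels: 0 = residential, 1 = commercial, derived from a
# simple rule plus 5% label noise so the model has something to learn.
y = ((X[:, 1] > 20) & (X[:, 3] > 0.5)).astype(int)
flip = rng.random(n) < 0.05
y = np.where(flip, 1 - y, y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("Hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a hosted notebook environment, the same few lines would run against real feature layers instead of the synthetic arrays above, which is exactly the low barrier to entry these tools are creating.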
If we can classify items, track them, and predict activity or conditions from moving imagery or tabular information today, we will surely be able to go far beyond that in the next 10 years, after refining AI and ML techniques on the petabytes of multidimensional IoT sensor data, 3D models, imagery, and other data that will be collected over that decade.
The way we manage, plan, engineer, and change the world around us is going to change drastically in the next decade. We are likely to encounter unexpected roadblocks, especially when it comes to cataloging, streaming, and processing the petabytes (or more) of data that will be needed to drive the innovative design and intelligent response we'll need to mitigate the pressures of human population growth and resource consumption. Legal and social frameworks may need to change to accommodate new patterns of data analysis that allow problem solving while respecting individual privacy and fostering economic growth.
In 10 years, we will have built entirely new geospatially enabled infrastructures to consume, manage, analyze, and distribute a rich fabric of data describing the world around us, and we will be looking forward to a future with even more exponential change and growth.