Green Urban Scenarios: Reality vs Models
This is the third of a series of posts on a framework and software we have built within the TreesAI project. In these posts, we would like to highlight our underlying thinking. In a nutshell, we aim to create a set of tools for practitioners and researchers so that they can design a new urban forest or explore the impact of an existing one with the ability to estimate each ecosystem service, such as air-pollution removal or flood risk reduction.
How to address uncertainties and maintain transparency
In the first post, we explained how urban forests are a critical part of urban infrastructure, presenting our model of change in complex systems that combines policy intervention, planning, impact forecasting, and monitoring. In the second post, on our Green Urban Scenarios (GUS) model, we looked into the process of creating a digital twin of urban forests to estimate their systemic benefits to a city. Since models are abstractions of reality, we addressed critical questions such as how granular GUS can be and how it aims to capture complexity.
This post discusses the inherent problems of such models and how we address them. In a way, we are talking about how to break out of the dilemma captured by two famous aphorisms: “All models are wrong, but some are useful,” coined by G. Box, the statistician, and “Everything simple is false. Everything which is complex is unusable,” by P. Valéry, the philosopher-poet.
Limitations and errors are inevitable in any modelling exercise on complex systems. In all cases, we must be transparent about the sources of uncertainty and how they affect measurements. This holds especially true in our impact modelling of urban forests, since these uncertainties have further implications for outcome-driven valuations and financing strategies in planning and maintaining green infrastructure. Here are three potential sources of uncertainty that make the model diverge from actuality.
First, we must remember that all models inevitably diverge from reality. To reduce the reality gap, or at least make the divergence explicit and quantifiable, models should be constantly and iteratively verified, validated, and calibrated. This involves reexamining and rethinking the way we collect and monitor data, build scenarios out of such data, and assess impacts based on simulations.
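The calibration step can be illustrated with a minimal sketch. Here a toy exponential-growth function stands in for a full GUS run, and a single growth-rate parameter is tuned by grid search until the modelled canopy cover best matches field observations. The observed values, the parameter range, and the model form are all illustrative assumptions, not TreesAI data.

```python
# Minimal calibration sketch: fit a single growth-rate parameter so a
# toy canopy-growth model matches hypothetical observed measurements.

def simulate_canopy(initial_cover, growth_rate, years):
    """Toy stand-in for a full simulation run: exponential canopy growth."""
    return [initial_cover * (1 + growth_rate) ** t for t in range(years)]

def rmse(modelled, observed):
    """Root-mean-square error between modelled and observed series."""
    return (sum((m - o) ** 2 for m, o in zip(modelled, observed))
            / len(observed)) ** 0.5

# Hypothetical field observations of canopy cover (hectares) over 5 years.
observed = [10.0, 10.4, 10.9, 11.3, 11.8]

# Grid search over candidate growth rates; keep the best-fitting one.
best_rate, best_error = None, float("inf")
for rate in [i / 1000 for i in range(0, 101)]:  # 0.0% to 10.0%
    error = rmse(simulate_canopy(10.0, rate, 5), observed)
    if error < best_error:
        best_rate, best_error = rate, error

print(f"calibrated growth rate: {best_rate:.3f}, RMSE: {best_error:.3f}")
```

In practice, each new round of monitoring data would re-run this loop, so the calibrated parameters track reality rather than drifting away from it.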
Next, the collected data itself may not be accurate enough to represent reality. It is a common fallacy to assume that data always tells the truth; in fact, data collection and processing are prone to errors. For example, data collected by humans can have lower accuracy than a remote sensor, yet in some cases remote sensing devices also fail to capture accurate estimates or introduce additional errors. Hence, the collected data should be constantly monitored through recurrent data validation practices.
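Such recurrent validation can be as simple as range and consistency checks run on every incoming record. The sketch below applies plausibility bounds to a hypothetical tree inventory record; the field names and ranges are illustrative assumptions, not a prescribed schema.

```python
# Sketch of recurrent validation checks for a hypothetical tree inventory
# record, flagging values that fall outside plausible physical ranges.

PLAUSIBLE_RANGES = {
    "dbh_cm": (1.0, 300.0),     # diameter at breast height, in centimetres
    "height_m": (0.5, 60.0),    # total tree height, in metres
    "crown_width_m": (0.2, 40.0),
}

def validate_record(record):
    """Return a list of human-readable issues; an empty list means the record passed."""
    issues = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"missing {field}")
        elif not low <= value <= high:
            issues.append(f"{field}={value} outside [{low}, {high}]")
    # Cross-field sanity check: a very tall tree with a tiny trunk is suspect.
    if record.get("height_m", 0) > 30 and record.get("dbh_cm", 0) < 10:
        issues.append("height/dbh combination implausible")
    return issues

record = {"dbh_cm": 450.0, "height_m": 12.3, "crown_width_m": 5.1}
print(validate_record(record))  # flags the out-of-range trunk diameter
```

Running checks like these on every data delivery, rather than once at ingestion, is what makes the validation recurrent: sensor drift or changed collection practices show up as a rising rate of flagged records.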
Lastly, when we translate the results of the simulations into more comprehensible forms of impact, this process of association entails a reality gap of its own. Since the impacts we want to create also sprout from complex interdependencies, translating numeric simulation outputs into ecosystem benefits is much harder for some outcomes, particularly those that cannot be fully quantified. For example, compared to an urban forest’s contribution to the removal of carbon dioxide, its contribution to mental health cannot easily be isolated from other socio-economic factors. To what extent can mental health outcomes be attributed solely to residential proximity to parks? Decoupling the contribution of other underlying sociodemographic factors requires extensive and detailed empirical studies.
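The attribution problem can be made concrete with synthetic data. In the sketch below, a confounder (income) influences both park proximity and a mental-health score, so a naive regression of health on proximity alone overstates the park effect, while a regression that also controls for income recovers something close to the true value. All variable names and effect sizes are illustrative assumptions, not TreesAI results.

```python
import numpy as np

# Synthetic illustration of confounding: income drives both park proximity
# and a mental-health score, so the naive proximity effect is inflated.
rng = np.random.default_rng(0)
n = 2000
income = rng.normal(0, 1, n)                     # the confounder
proximity = 0.8 * income + rng.normal(0, 1, n)   # wealthier areas sit nearer parks
true_park_effect = 0.3
health = true_park_effect * proximity + 0.6 * income + rng.normal(0, 1, n)

# Naive estimate: regress health on proximity alone (slope of a line fit).
naive = np.polyfit(proximity, health, 1)[0]

# Adjusted estimate: include income as a covariate via least squares.
X = np.column_stack([proximity, income, np.ones(n)])
adjusted = np.linalg.lstsq(X, health, rcond=None)[0][0]

print(f"naive park effect: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Here the confounder is observed and the adjustment is a one-liner; in real studies the hard part is identifying and measuring the confounders in the first place, which is exactly why extensive empirical work is needed.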
In the future, we aim to reduce this gap between reality and the model by turning GUS into a computational laboratory for experimenting with and observing such sociodemographic effects. We plan to scale its complexity by adding human agents, modelling their mobility and housing choices in relation to their sociodemographic profiles.