The Future of Flood Modelling: Speed, Scale, and Smart Choices
- Oliver Ashton
- Dec 15, 2025
- 3 min read
A couple of decades ago, running 10 or 20 simulations on a local PC was enough to satisfy most projects. Today, the game has changed. We’re running hundreds or thousands of simulations for individual projects, and even hundreds of thousands for the most complex strategies and mega projects.
What is driving this? Advances in science, growing regulatory requirements, and rising client expectations have raised the bar, and advances in technology now let us evaluate things we simply couldn't before.
For the Oxford-Cambridge Arc project, 45,000 simulations were needed to assess flood risk over a century, and the Thames breach risk mapping study required a staggering 250,000 simulations. These aren’t outliers; they’re the new normal.
With this surge in demand come two critical questions. Are more simulations actually adding value and insight? And if so, where and how do we deliver more to programme and budget?
Step One: Challenge the Need
Just because we can do something doesn't mean we should, especially given the monetary and programme costs involved. In modelling, it is tempting to believe that more is better, but extra runs often bring diminishing returns in both reduced uncertainty and added understanding. Clients and suppliers should critically challenge the detail of the modelling requested and how it supports the project's objectives. Can we achieve the same for less?
Step Two: Optimise Before You Compute
Before you throw hardware or cloud resources at the problem, optimise your models. Can you reduce the active area? Adjust cell sizes? Shorten simulation durations? Often, the peak occurs within a few hours. Do you really need a 36-hour run? Subtle tweaks, such as refining sub-grid sampling or improving model stability, can slash run times without compromising accuracy.
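To see how much those tweaks are worth, here is a rough back-of-envelope sketch. The scaling rule and every figure in it are illustrative assumptions rather than properties of any particular solver: it treats run time as proportional to active cell count times timestep count, with the stable timestep shrinking in proportion to cell size (a CFL-type limit).

```python
# Rough proxy for 2D run time: cost ~ active cells x number of timesteps.
# Assumes an explicit scheme whose stable timestep scales with cell size
# (a CFL-type limit); real solvers and GPU behaviour will differ.

def relative_cost(area_km2: float, cell_size_m: float, duration_h: float) -> float:
    cells = area_km2 * 1_000_000 / cell_size_m ** 2   # active 2D cells
    timesteps = duration_h * 3600 / cell_size_m       # dt proportional to cell size (proxy)
    return cells * timesteps                          # unitless cost proxy

baseline = relative_cost(area_km2=50, cell_size_m=5, duration_h=36)   # illustrative model
trimmed  = relative_cost(area_km2=35, cell_size_m=5, duration_h=12)   # tighter area, shorter run

print(f"Trimmed model: ~{baseline / trimmed:.1f}x faster")                          # ~4.3x
print(f"Halving cell size: ~{relative_cost(50, 2.5, 36) / baseline:.1f}x slower")   # ~8.0x
```

On these assumed numbers, trimming the active area and the run duration buys a fourfold speed-up for free, while refining the grid everywhere is the single most expensive choice you can make. Optimisation first, hardware second.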
Step Three: Embrace the Cloud - but Wisely
Local hardware might seem appealing, but a high-end GPU such as NVIDIA's RTX 5090 can cost thousands and still take days to churn through a large batch of simulations. Enter the cloud. Platforms such as Jacobs' Flood Platform, built on Microsoft Azure, offer scalable compute power with automated setup and preferential rates. Rather than wrestling with file paths and solver versions, you upload your models, hit "go," and run hundreds of simulations in parallel. This can cut project timelines from weeks to hours.
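For readers who like to see the mechanics, the pattern the platform automates is straightforward batch parallelism. The sketch below is illustrative only and is not Flood Platform's interface: it pushes a folder of TUFLOW-style control files through a local solver using Python's standard library, with the solver path, folder layout, and worker count as hypothetical stand-ins.

```python
# Illustrative local batch runner: the pattern a cloud worker pool scales out.
# Solver path, folder layout, and worker count are hypothetical stand-ins.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SOLVER = r"C:\TUFLOW\TUFLOW_iSP_w64.exe"    # hypothetical install location
RUNS = sorted(Path("runs").glob("*.tcf"))   # one control file per simulation

def run(tcf: Path) -> int:
    # "-b" runs TUFLOW in batch mode (no interactive prompts); check the
    # switches against your own solver's documentation.
    return subprocess.run([SOLVER, "-b", str(tcf)]).returncode

# A local workstation caps out at one or two concurrent GPU runs;
# a cloud pool can take the whole list in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    exit_codes = list(pool.map(run, RUNS))

print(f"{exit_codes.count(0)}/{len(RUNS)} simulations finished cleanly")
```

None of this scripting is needed on the platform itself; the point is simply that wall-clock time falls in proportion to how many runs you can keep going at once.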
Flood Platform doesn’t just give you raw compute; it gives you efficiency. It handles licensing, solver versions, model consistency and results processing. Results are accessible in-browser, enabling seamless collaboration with clients and stakeholders. No more hours spent disentangling models and shepherding remote computers; what used to take days now takes minutes.
Fastest Isn't Always Best
Speed costs money. High-performance cards like the H200 deliver lightning-fast results, but at a premium. For many projects, mid-tier options such as the T4 strike the right balance between cost and performance. The sweet spot depends on your deadlines, budget, and model size. Flood Platform even gives you the flexibility to choose, so you can tailor your approach to each project.
For example, a typical appraisal project might require 144 simulations. Running locally on a top-tier GPU could take seven days and cost thousands in hardware and setup. In the cloud, you could run them all in parallel and finish in a day, or even two hours if you opt for the fastest cards. And thanks to Jacobs’ preferential Azure rates, you’ll pay less than if you went direct.
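To put rough numbers on that trade-off, here is a back-of-envelope comparison. Every figure in it, the per-simulation run times, the concurrency, and the GPU-hour rates, is an assumption chosen to echo the ballpark above, not a quote from Jacobs or Azure pricing.

```python
# Back-of-envelope comparison for a 144-run appraisal batch.
# All run times and rates are illustrative assumptions, not price quotes.
SIMS = 144

# name: (hours per simulation, concurrent runs, cost per GPU-hour in GBP)
options = {
    "Local top-tier GPU (owned)":  (1.2, 1, 0.00),   # ignores the hardware and setup cost noted above
    "Cloud mid-tier (T4-class)":   (22.0, 144, 0.30),
    "Cloud top-tier (H200-class)": (2.0, 144, 6.00),
}

for name, (hours_per_sim, concurrency, rate_gbp) in options.items():
    wall_clock_h = hours_per_sim * SIMS / concurrency   # how long you wait
    gpu_hours = hours_per_sim * SIMS                     # what you pay for
    print(f"{name:30s} wall clock {wall_clock_h:6.1f} h   compute ~£{gpu_hours * rate_gbp:,.0f}")
```

On these assumed figures the mid-tier batch finishes overnight for roughly half the compute cost of the fastest cards, which is why "fastest" is rarely the default answer: the right tier is the one that meets the deadline you actually have.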

The Bottom Line
Flood modelling is evolving, and so must we. The future isn’t about brute force; it’s about smart choices. Optimise your models, question your assumptions, and harness the power of the cloud. With the right strategy, you can deliver faster, collaborate better, and keep costs under control. In flood risk management, efficiency isn’t just a nice-to-have; it’s a game-changer.
Watch How It's Done
Whether you're running Flood Modeller or TUFLOW, watch this webinar, in which Adam Parkes, Technical Director at Jacobs, explains how to get the most out of Flood Platform's high-performance computing options.
