Predicting wettability of mineral/CO2/brine systems via data-driven machine learning modeling: Implications for carbon geo-sequestration

by Zeeshan Tariq, Muhammad Ali, Aliakbar Hassanpouryouzband, Bicheng Yan, Shuyu Sun, Hussein Hoteit
Year: 2023
DOI: https://doi.org/10.1016/j.chemosphere.2023.140469

Extra Information

Chemosphere, Volume 345, 140469 (2023)

Abstract

Effectively storing carbon dioxide (CO2) in geological formations complements algal-based carbon removal technology: it enhances carbon capture efficiency and leverages biological processes for sustainable, long-term sequestration while aiding ecosystem restoration. At the same time, the effectiveness of geological carbon storage depends on the interactions and wettability of the rock/CO2/brine system. Rock wettability during storage determines the CO2/brine distribution, the maximum storage capacity, and the trapping potential. Because of the high reactivity of CO2 and the associated risk of sample damage, experimental assessment of CO2 wettability on storage rocks and caprocks is challenging. Data-driven machine learning (ML) models provide an efficient and less labor-intensive alternative, enabling studies at geological storage conditions that are impossible or hazardous to reproduce in the laboratory. This study used robust ML models, including fully connected feedforward neural networks (FCFNNs), extreme gradient boosting, k-nearest neighbors, decision trees, adaptive boosting, and random forest, to model the wettability of CO2/brine and rock minerals (quartz and mica) in a ternary system under varying conditions. Exploratory data analysis methods were used to examine the experimental data, and GridSearchCV with K-fold cross-validation was applied to tune the ML models and improve their performance. In addition, sensitivity plots were generated to study the influence of individual input parameters on model performance. The results indicated that the applied ML models accurately predicted the wettability behavior of the mineral/CO2/brine system under various operating conditions, with the FCFNN outperforming the other ML techniques, achieving an R2 above 0.98 and an error of less than 3%.
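
The abstract describes a workflow of tuning several regressors with GridSearchCV and K-fold cross-validation to predict wettability (contact angle) from operating conditions. The sketch below illustrates that general pattern with scikit-learn only; the feature names (pressure, temperature, salinity), the synthetic data, and the candidate hyperparameter grids are illustrative assumptions and do not reproduce the paper's dataset, models, or settings.

```python
# Minimal sketch of GridSearchCV + K-fold model selection for a wettability-style
# regression task. All data below are synthetic placeholders, NOT the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(0)
# Hypothetical inputs: pressure (MPa), temperature (K), salinity (mol/kg);
# hypothetical target: contact angle (degrees).
X = rng.uniform([5.0, 300.0, 0.0], [25.0, 350.0, 5.0], size=(200, 3))
y = 30.0 + 2.0 * X[:, 0] - 0.05 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0.0, 2.0, 200)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
candidates = {
    "random_forest": (
        RandomForestRegressor(random_state=0),
        {"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    ),
    "adaboost": (
        AdaBoostRegressor(random_state=0),
        {"n_estimators": [50, 200], "learning_rate": [0.05, 0.1, 1.0]},
    ),
    "knn": (
        KNeighborsRegressor(),
        {"n_neighbors": [3, 5, 9], "weights": ["uniform", "distance"]},
    ),
}

# Exhaustive grid search scored by cross-validated R2, as in the paper's tuning step.
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=cv, scoring="r2")
    search.fit(X, y)
    print(f"{name}: best CV R2 = {search.best_score_:.3f}, params = {search.best_params_}")
```

The same loop structure could be extended to the other learners mentioned in the abstract (e.g., decision trees, extreme gradient boosting, or an FCFNN), each with its own hyperparameter grid, with the best cross-validated model then evaluated on a held-out test set.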