Journal Article

2D-to-3D image translation of complex nanoporous volumes using generative networks

Abstract

Image-based characterization offers a powerful approach to studying geological porous media at the nanoscale, and such images are critical to understanding reactive transport mechanisms in reservoirs relevant to energy and sustainability technologies such as carbon sequestration, subsurface hydrogen storage, and natural gas recovery. Nanoimaging, however, presents a trade-off between higher-contrast sample-destructive and lower-contrast sample-preserving imaging modalities. Furthermore, high-contrast imaging modalities often acquire only 2D images, while 3D volumes are needed to fully characterize a source rock sample. In this work, we present deep learning image translation models to predict high-contrast focused ion beam-scanning electron microscopy (FIB-SEM) image volumes from transmission X-ray microscopy (TXM) images when only paired 2D training data are available. We introduce a regularization method for improving 3D volume generation from 2D-to-2D deep learning image models and apply this approach to translate 3D TXM volumes to FIB-SEM fidelity. We then segment a predicted FIB-SEM volume into a flow simulation domain and calculate the sample's apparent permeability using a lattice Boltzmann method (LBM) technique. Results show that our image translation approach produces simulation domains suitable for flow visualization and allows for accurate characterization of petrophysical properties from non-destructive imaging data.
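To illustrate the 2D-to-3D workflow described above, the following is a minimal sketch of applying a trained 2D-to-2D translation generator slice-by-slice to a 3D TXM volume and stacking the predicted slices into a FIB-SEM-fidelity volume. The `ToyGenerator` below is a hypothetical stand-in, not the published architecture, and the paper's cross-slice regularization method is not reproduced here.

```python
# Sketch only: slice-wise 2D-to-2D translation of a 3D volume (PyTorch).
# The generator is a placeholder; the authors' model and regularization
# are not reproduced.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    """Placeholder 2D generator (assumption, not the published model)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def translate_volume(generator: nn.Module, txm_volume: torch.Tensor) -> torch.Tensor:
    """Translate a (D, H, W) TXM volume slice-wise into a predicted volume."""
    generator.eval()
    slices = []
    with torch.no_grad():
        for z in range(txm_volume.shape[0]):
            sl = txm_volume[z].unsqueeze(0).unsqueeze(0)  # shape (1, 1, H, W)
            slices.append(generator(sl).squeeze(0).squeeze(0))
    return torch.stack(slices, dim=0)  # shape (D, H, W)


if __name__ == "__main__":
    vol = torch.rand(8, 64, 64)                  # synthetic TXM stack for illustration
    pred = translate_volume(ToyGenerator(), vol)
    print(pred.shape)                            # torch.Size([8, 64, 64])
```

In practice, the predicted volume would then be segmented into pore and solid phases to define the flow simulation domain for the LBM permeability calculation described in the abstract.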

Author(s)
Timothy I. Anderson
Bolivia Vega
Jesse McKinzie
Saman A. Aryana
Anthony R. Kovscek
Journal Name
Scientific Reports
Publication Date
October 21, 2021
DOI
10.1038/s41598-021-00080-5