Subtopic Deep Dive
Fully Convolutional Networks for Building Extraction
Research Guide
What Are Fully Convolutional Networks for Building Extraction?
Fully Convolutional Networks (FCNs) for Building Extraction apply end-to-end pixel-wise segmentation models to detect and delineate building footprints from high-resolution aerial and satellite imagery.
FCNs and variants such as U-Net enable semantic segmentation of buildings despite occlusions and varying resolutions in multisource data. Ji et al. (2018) introduced a high-quality open dataset and an FCN model that achieved top accuracy; the paper has since drawn 1662 citations. More than ten key papers since 2018 demonstrate progressive improvements through fused data and advanced architectures.
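The pixel-wise idea can be sketched in a few lines of NumPy. This is a minimal illustration of an FCN forward pass, not Ji et al.'s architecture: a 3x3 convolution extracts features and a 1x1 convolution head emits one building logit per pixel, so the output mask keeps the input's spatial size. All layer shapes and weights here are illustrative.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded convolution. x: (H, W, Cin), w: (k, k, Cin, Cout), b: (Cout,)."""
    H, W, _ = x.shape
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))             # toy RGB tile
w1 = rng.normal(0, 0.1, (3, 3, 3, 8))     # 3x3 conv, 8 feature maps
b1 = np.zeros(8)
w2 = rng.normal(0, 0.1, (1, 1, 8, 1))     # 1x1 conv head -> 1 logit per pixel
b2 = np.zeros(1)

feat = np.maximum(conv2d(img, w1, b1), 0.0)    # ReLU
prob = sigmoid(conv2d(feat, w2, b2))[..., 0]   # per-pixel building probability
mask = (prob > 0.5).astype(np.uint8)           # binary footprint mask
print(mask.shape)  # same spatial size as the input: (16, 16)
```

A real model stacks many such layers with downsampling and learned upsampling, but the end-to-end mapping from image to same-sized mask is exactly this pattern.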
Why It Matters
FCNs automate building extraction for urban mapping in disaster response, enabling rapid damage assessment (Ji et al., 2018). City planners use these models for population estimation and infrastructure monitoring from VHR satellite images (Li et al., 2019; Yi et al., 2019). Applications extend to land use classification supporting sustainable development (Zhang et al., 2018).
Key Research Challenges
Occlusions from Shadows and Vegetation
Buildings are often obscured by trees or shadows in aerial imagery, which reduces segmentation accuracy. Ji et al. (2018) highlight the need for multisource data fusion. Hosseinpour et al. (2021) address color-intensity variations with cross-modal gating.
Varying Spatial Resolutions
Inconsistent resolutions across satellite and aerial sources challenge FCN feature extraction. Bittner et al. (2018) fuse DSMs with imagery to improve delineation. Liu et al. (2019) use spatial residual inception to handle scale variations.
Complex Urban Backgrounds
Dense urban scenes contain non-building objects with similar textures, which blurs building boundaries. Yi et al. (2019) tackle this with deep CNNs on VHR data. Chen et al. (2021) propose sparse token transformers as an alternative to CNNs.
Essential Papers
Fully Convolutional Networks for Multisource Building Extraction From an Open Aerial and Satellite Imagery Data Set
Shunping Ji, Shiqing Wei, Meng Lü · 2018 · IEEE Transactions on Geoscience and Remote Sensing · 1.7K citations
The application of the convolutional neural network has shown to greatly improve the accuracy of building extraction from remote sensing imagery. In this paper, we created and made open a high-qual...
Semantic Segmentation-Based Building Footprint Extraction Using Very High-Resolution Satellite Images and Multi-Source GIS Data
Weijia Li, Conghui He, Jiarui Fang et al. · 2019 · Remote Sensing · 237 citations
Automatic extraction of building footprints from high-resolution satellite imagery has become an important and challenging research issue receiving greater attention. Many recent studies have explo...
Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network
Yaning Yi, Zhijie Zhang, Wanchang Zhang et al. · 2019 · Remote Sensing · 214 citations
Urban building segmentation is a prevalent research domain for very high resolution (VHR) remote sensing; however, various appearances and complicated background of VHR remote sensing imagery make ...
Urban Land Use and Land Cover Classification Using Novel Deep Learning Models Based on High Spatial Resolution Satellite Imagery
Pengbin Zhang, Yinghai Ke, Zhenxin Zhang et al. · 2018 · Sensors · 199 citations
Urban land cover and land use mapping plays an important role in urban planning and management. In this paper, novel multi-scale deep learning models, namely ASPP-Unet and ResASPP-Unet are proposed...
Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network
Penghua Liu, Xiaoping Liu, Mengxi Liu et al. · 2019 · Remote Sensing · 193 citations
The rapid development in deep learning and computer vision has introduced new opportunities and paradigms for building extraction from remote sensing images. In this paper, we propose a novel fully...
CMGFNet: A deep cross-modal gated fusion network for building extraction from very high-resolution remote sensing images
Hamidreza Hosseinpour, Farhad Samadzadegan, Farzaneh Dadrass Javan · 2021 · ISPRS Journal of Photogrammetry and Remote Sensing · 178 citations
The extraction of urban structures such as buildings from very high-resolution (VHR) remote sensing imagery has improved dramatically, thanks to recent developments in deep multimodal fusion models...
Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part II: Applications
Thorsten Hoeser, Felix Bachofer, Claudia Kuenzer · 2020 · Remote Sensing · 175 citations
In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with a very high spatial resolution enables investi...
Reading Guide
Foundational Papers
Start with Ji et al. (2018) for its open multisource dataset and baseline FCN; cited 1662 times, it sets the accuracy standard for the field.
Recent Advances
Study Hosseinpour et al. (2021) for gated cross-modal fusion and Chen et al. (2021) for the shift from pure FCNs toward transformers.
Core Methods
Core techniques: end-to-end FCN segmentation, DSM-image fusion (Bittner et al., 2018), residual inception blocks (Liu et al., 2019), and multi-scale ASPP-Unet (Zhang et al., 2018).
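The simplest form of DSM-image fusion (early fusion, a common baseline rather than Bittner et al.'s exact network) is to normalize the height raster and stack it as an extra input channel, so the FCN's first layer learns joint spectral-elevation features. The tile sizes and value ranges below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
rgb = rng.random((64, 64, 3))         # orthophoto tile, reflectance in [0, 1]
dsm = rng.random((64, 64)) * 40.0     # toy digital surface model, heights in metres

# Min-max normalize the DSM so its scale matches the reflectance channels.
dsm_n = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-8)

# Early fusion: stack height as a fourth channel; downstream convolutions
# then see spectral and elevation cues jointly at every pixel.
fused = np.concatenate([rgb, dsm_n[..., None]], axis=-1)
print(fused.shape)  # (64, 64, 4)
```

Gated fusion schemes such as CMGFNet instead merge the modalities deeper in the network with learned weights, but the channel-stacking baseline is the usual starting point for comparison.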
How PapersFlow Helps You Research Fully Convolutional Networks for Building Extraction
Discover & Search
Research Agent uses searchPapers and exaSearch to find top FCN papers like Ji et al. (2018), then citationGraph reveals 1662 citing works and findSimilarPapers uncovers variants like Liu et al. (2019).
Analyze & Verify
Analysis Agent applies readPaperContent to extract Ji et al. (2018) dataset details, verifyResponse with CoVe checks segmentation metrics against claims, and runPythonAnalysis reimplements FCN residuals from Liu et al. (2019) with NumPy for IoU verification; GRADE scores evidence strength on occlusion handling.
Synthesize & Write
Synthesis Agent detects gaps in occlusion fusion post-Ji et al. (2018), while Writing Agent uses latexEditText for methodology sections, latexSyncCitations for 10+ papers, latexCompile for full reports, and exportMermaid diagrams FCN architectures.
Use Cases
"Reproduce IoU metrics from Liu et al. 2019 building extraction FCN on sample imagery"
Research Agent → searchPapers(Liu 2019) → Analysis Agent → readPaperContent → runPythonAnalysis(residual inception NumPy code) → matplotlib IoU plots and statistical verification.
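The IoU verification step in this workflow reduces to a few lines of NumPy. The masks below are toy data, not outputs of Liu et al.'s model; they just show the metric that would be computed on real predictions.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1        # 4x4 ground-truth footprint (16 px)
pred = np.zeros((8, 8), dtype=np.uint8)
pred[3:7, 3:7] = 1      # predicted footprint shifted by one pixel

print(iou(pred, gt))    # 9 / 23 ≈ 0.391
```

On real imagery the same function is applied per tile, and the per-tile scores are aggregated or plotted for comparison against the values reported in the paper.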
"Write LaTeX review comparing FCNs in Ji 2018 vs Hosseinpour 2021 for VHR building extraction"
Research Agent → citationGraph(Ji 2018) → Synthesis Agent → gap detection → Writing Agent → latexEditText(intro) → latexSyncCitations(10 papers) → latexCompile(PDF review).
"Find GitHub repos implementing FCNs from Chen et al. 2021 sparse transformers for buildings"
Research Agent → searchPapers(Chen 2021) → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect(transformer code) → exportCsv(repos with stars).
Automated Workflows
Deep Research workflow scans 50+ FCN papers via searchPapers, structures reports with GRADE-verified metrics from Ji et al. (2018). DeepScan's 7-step chain analyzes occlusions: readPaperContent → runPythonAnalysis → CoVe verification. Theorizer generates hypotheses on DSM fusion post-Bittner et al. (2018).
Frequently Asked Questions
What defines Fully Convolutional Networks for Building Extraction?
FCNs perform pixel-wise semantic segmentation on aerial/satellite images to output building masks without fully connected layers, as in Ji et al. (2018).
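The practical consequence of having no fully connected layers is that the same trained weights apply to any input size. A minimal sketch, assuming a single 1x1 convolution head over an 8-channel feature map (shapes and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(0, 0.1, (8, 1))   # 1x1 conv head: 8 feature channels -> 1 logit
b = np.zeros(1)

def head(features):
    """Apply a 1x1 convolution (a per-pixel linear map) to a (H, W, 8) feature map."""
    return features @ w + b      # broadcasting yields one logit per pixel

small = head(rng.random((16, 16, 8)))
large = head(rng.random((256, 512, 8)))   # same weights, different spatial size
print(small.shape, large.shape)  # (16, 16, 1) (256, 512, 1)
```

A fully connected layer would fix the input dimensions at training time; convolution-only heads are what let FCNs slide over arbitrarily sized satellite scenes.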
What are common methods in this subtopic?
Methods include U-Net variants, spatial residual inception (Liu et al., 2019), and cross-modal fusion (Hosseinpour et al., 2021) on VHR data.
What are key papers?
Ji et al. (2018, 1662 citations) provides the benchmark FCN dataset; Li et al. (2019, 237 citations) adds GIS fusion; Chen et al. (2021, 159 citations) uses transformers.
What open problems remain?
Challenges persist in real-time extraction under heavy occlusions and in multi-resolution fusion beyond the DSM methods of Bittner et al. (2018).
Research Automated Road and Building Extraction with AI
PapersFlow provides specialized AI tools for Engineering researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Paper Summarizer
Get structured summaries of any paper in seconds
Code & Data Discovery
Find datasets, code repositories, and computational tools
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Engineering use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Fully Convolutional Networks for Building Extraction with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Engineering researchers