TerraMosaic Daily Digest: Feb 8, 2026
Daily Summary
This digest synthesizes 128 selected papers and focuses on flood generation, routing, and hydroclimatic forcing; seismic source-to-ground response pathways; and high-resolution remote-sensing monitoring workflows. Top-ranked studies examine earthquake-triggered slope response and liquefaction; risk, fragility, and resilience assessment; and coastal and submarine hazard coupling.
Across the full set, evidence converges on mechanism-constrained analysis with operational relevance, especially for landslide process mechanics, slope evolution, and coastal and submarine hydro-geomechanics. The strongest contributions pair interpretable process evidence with monitoring or forecasting workflows that support warning design and risk prioritization.
Key Trends
- Flood analyses are becoming event-specific and process-based: Papers emphasize precipitation structure, antecedent wetness, and catchment controls rather than static hazard descriptors.
- Seismic hazard research links source behavior to ground response: Recurring topics connect rupture or loading conditions with geotechnical performance and consequence assessment.
- Monitoring workflows rely on integrated remote-sensing products: Multi-source satellite and airborne observations are used for deformation retrieval, change detection, and rapid post-event mapping.
- Landslide studies increasingly resolve process chains: Contributions connect triggering conditions, slope deformation, and mobility outcomes, improving the basis for warning thresholds and scenario testing.
- Coastal and submarine hazards are treated as coupled systems: Wave, mass-transport, and shoreline processes are analyzed together with engineering implications.
Selected Papers
This digest features 128 papers selected from the 1013 analyzed (2700 raw papers scanned; 1017 new after deduplication) across multiple journals. Each paper has been evaluated for its relevance to landslide and broader geohazard research and includes a link to the original publication.
1. A rate-and-state friction based criterion for the probability of earthquake fault jumps
Core Problem: Geometrical complexities in natural fault zones (steps and gaps) pose a challenge in seismic hazard studies as they can act as obstacles to seismic ruptures, making it difficult to predict if and how ruptures will jump between disconnected faults.
Key Innovation: Proposes a rate-and-state friction based criterion to estimate the efficiency of an earthquake rupture to jump between two spatially disconnected faults, which successfully predicts jumps where simpler Coulomb stress change calculations fail, and further introduces a probabilistic framework to account for uncertainties.
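As context for the comparison the paper draws, the simpler Coulomb failure stress change it outperforms is a one-line calculation. A minimal illustrative sketch (function name, friction coefficient, and stress values are my own, not from the paper):

```python
# Static Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# with d_tau the shear stress change in the slip direction and
# d_sigma_n the normal stress change (positive = unclamping).

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """Return dCFS in MPa; positive values bring the fault closer to failure."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# A rupture that raises shear stress by 0.5 MPa but clamps the
# receiver fault by 2 MPa yields a negative (stabilizing) dCFS:
dcfs = coulomb_stress_change(0.5, -2.0)
```

Threshold criteria built on this scalar are what the paper's rate-and-state, probabilistic treatment is designed to improve upon.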
2. Resilience assessment of geohazard emergency system based on the ‘hazard–impact–drive’ model
Core Problem: Objectively reflecting historical patterns, future trends, and internal drivers of geohazard risk for effective resilience assessment of emergency systems.
Key Innovation: Proposing the ‘hazard–impact–drive’ (HID) model for a comprehensive resilience assessment of geohazard emergency systems.
3. Coulomb Pre‐Stress Changes Modulate Coseismic Rupture Kinematics of the 2025 Mw7.7 Myanmar Earthquake Revealed by Space Geodesy
Core Problem: The influence of cumulative Coulomb pre-stress changes, resulting from historical earthquakes and interseismic tectonic loading, on the coseismic rupture kinematics and propagation of major strike-slip earthquakes remains unresolved.
Key Innovation: Integration of multi-source satellite observations (optical, SAR) to derive a detailed finite-fault slip model of the 2025 Myanmar earthquake, combined with long-term Coulomb pre-stress evolution modeling (1839–2025) to reveal how these changes modulated the rupture behavior and inhibited southward propagation.
4. Urban Spatio-Temporal Foundation Models for Climate-Resilient Housing: Scaling Diffusion Transformers for Disaster Risk Prediction
Core Problem: Climate hazards increasingly disrupt urban transportation and emergency-response operations by damaging housing stock, degrading infrastructure, and reducing network accessibility, requiring robust forecasting of building-level climate-risk indicators.
Key Innovation: Presents Skjold-DiT, a diffusion-transformer framework that integrates heterogeneous spatio-temporal urban data to forecast building-level climate-risk indicators. It explicitly incorporates transportation-network structure and accessibility signals, enabling hazard-conditioned routing constraints and producing calibrated, uncertainty-aware accessibility layers for intelligent vehicles, and introduces the multi-hazard BCUR dataset.
5. Theoretical constraints on tidal triggering of slow earthquakes
Core Problem: Understanding how small tidal stresses can trigger seismic events, particularly slow earthquakes, is essential for constraining tectonic environments sensitive to such perturbations, but the key parameters controlling this triggering are not fully characterized.
Key Innovation: Employs a spring-block model with rate-and-state friction to investigate tidal triggering on stable sliding faults, identifying the normalized perturbation period and amplitude as primary controls, and providing theoretical constraints on frictional strength and characteristic slip distance for triggered events.
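The spring-block analysis builds on the standard rate-and-state friction framework, whose steady-state limit is compact enough to sketch. The parameter values below (a, b, reference velocity) are illustrative placeholders, not the paper's:

```python
import math

def mu_steady_state(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction coefficient at slip rate v (m/s):
    mu_ss = mu0 + (a - b) * ln(v / v0)."""
    return mu0 + (a - b) * math.log(v / v0)

# With a - b < 0 the interface is velocity-weakening: friction drops as
# slip accelerates, the basic ingredient for stick-slip instability that
# small periodic (e.g. tidal) stress perturbations can modulate.
slow, fast = mu_steady_state(1e-6), mu_steady_state(1e-3)
```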
6. Downscaling Neural Network for Coastal Simulations
Core Problem: Learning fine-scale details of coastal ocean simulations from coarse representations is challenging, yet high-resolution simulations are necessary for accurately predicting flooding from tsunamis and storm surges.
Key Innovation: Proposes a Downscaling Neural Network for Coastal Simulation (DNNCS) for spatiotemporal enhancement, which uses grid-aware spatiotemporal attention, positional encoding, and spatiotemporal bilinear operations for reconstruction, augmented with a physics-informed loss that enforces gradient consistency and momentum constraints.
7. Mamba-FCS: Joint Spatio-Frequency Feature Fusion, Change-Guided Attention, and SeK Loss for Enhanced Semantic Change Detection in Remote Sensing
Core Problem: Semantic Change Detection (SCD) in remote sensing imagery requires models that can balance extensive spatial context, computational efficiency, and sensitivity to class-imbalanced land-cover transitions, with existing methods (Convolutional Neural Networks, Transformers) having limitations.
Key Innovation: Mamba-FCS, an SCD framework built on a Visual State Space Model backbone, incorporating a Joint Spatio-Frequency Fusion block for edge clarity and artifact suppression, a Change-Guided Attention (CGA) module linking binary change detection (BCD) and SCD tasks, and a Separated Kappa (SeK) loss for class-imbalanced optimization, achieving state-of-the-art metrics on SCD datasets.
8. Rapid flood mapping from aerial imagery using fine-tuned SAM and ResNet-backboned U-Net
Core Problem: The need for efficient and rapid models to identify flood-affected areas from aerial imagery to minimize loss of life and property and facilitate damage assessment during flood events.
Key Innovation: Developed and compared two segmentation approaches for rapid flood mapping: a fine-tuned Segment Anything Model (SAM) and a U-Net model with ResNet-50/101 backbones. Found that fine-tuned SAM with point prompts achieved the best performance (Accuracy: 0.96, IoU: 0.90), providing valuable tools for emergency response, damage assessment, and generating training data for further flood analysis.
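The reported metrics can be computed for any pair of binary flood masks; a minimal sketch (the arrays are toy data, not from the paper's dataset):

```python
import numpy as np

def iou_and_accuracy(pred, truth):
    """Pixel-wise IoU and accuracy for binary flood masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    acc = (pred == truth).mean()
    return iou, acc

truth = np.array([[1, 1, 0], [0, 1, 0]])   # reference flood extent
pred  = np.array([[1, 1, 0], [0, 0, 0]])   # model output (misses one pixel)
iou, acc = iou_and_accuracy(pred, truth)
```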
9. Assessing the Value of Information in pricing insurance against multiple hazards: the case of earthquake and liquefaction
Core Problem: The need for robust estimates of potential losses from multiple interacting hazards (earthquake and liquefaction) for natural hazard insurance pricing, especially under significant uncertainty in key risk parameters.
Key Innovation: A novel framework for quantifying the Value of Information (VoI) from targeted, site-specific data to reduce uncertainty in multi-hazard earthquake and liquefaction risk assessments, integrating loss assessment, expected utility theory, and probabilistic graphical models to identify optimal insurance pricing and demonstrate its benefits for both clients and insurers.
10. Seasonal variations in proglacial lake area revealed by high spatial resolution planetscope satellite imagery
Core Problem: Long-term records of proglacial lake behavior often omit crucial seasonal variations due to trade-offs in satellite imagery resolution, leading to underestimation of lake area and misrepresentation of growth patterns, which impacts the accurate prediction and risk assessment of glacial lake outburst floods (GLOFs).
Key Innovation: Utilized high spatial resolution (3.7 m) PlanetScope imagery to accurately detect short-term (seasonal) variations in proglacial lake area, demonstrating that moderate resolution imagery and single-season observations significantly underestimate these fluctuations, thereby improving the understanding of lake growth and downstream flood risk from GLOFs.
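The resolution effect the paper documents can be illustrated with a toy mask: coarsening a fine water mask by per-block majority vote drops narrow lake arms, understating area. Grid size, block size, and threshold below are illustrative assumptions:

```python
import numpy as np

def block_majority(mask, f):
    """Downsample a binary water mask by factor f via per-block majority vote."""
    h, w = mask.shape
    return mask.reshape(h // f, f, w // f, f).mean(axis=(1, 3)) > 0.5

fine = np.zeros((12, 12), dtype=bool)
fine[0:8, 0:8] = True        # main lake body (aligned with coarse blocks)
fine[8:12, 2] = True         # narrow arm, one fine pixel wide

area_fine = fine.sum() * 1.0           # in fine-pixel units
coarse = block_majority(fine, 4)
area_coarse = coarse.sum() * 16.0      # each coarse pixel covers 4x4 fine pixels
# The arm falls below the majority threshold in its block, so the
# coarse map misses it and total lake area is underestimated.
```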
11. AdaptOVCD: Training-Free Open-Vocabulary Remote Sensing Change Detection via Adaptive Information Fusion
Core Problem: Existing remote sensing change detection methods rely on predefined categories and large-scale pixel-level annotations, limiting their generalization and applicability in open-world scenarios for detecting arbitrary changes, which is crucial for environmental monitoring and disaster assessment.
Key Innovation: AdaptOVCD, a training-free Open-Vocabulary Change Detection (OVCD) architecture based on dual-dimensional multi-level information fusion, integrating multi-level information across data, feature, and decision levels with targeted adaptive designs to achieve deep synergy among heterogeneous pre-trained models for zero-shot detection of arbitrary category changes.
12. Physics-informed extreme learning machine for Terzaghi consolidation problems and interpretation of coefficient of consolidation based on CPTu data
Core Problem: Solving the Terzaghi consolidation equation and interpreting the coefficient of consolidation from CPTu data is challenging, especially when initial excess pore-water pressure distributions are unknown, and traditional PINNs are computationally intensive.
Key Innovation: Introduction of a Physics-informed Extreme Learning Machine (PIELM) that efficiently solves the Terzaghi consolidation equation and interprets the coefficient of consolidation from CPTu data by integrating physical laws and measured data into a least-squares minimized loss function, improving training efficiency and handling unknown initial conditions.
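For background, the Terzaghi problem the PIELM solves has a classical series solution for uniform initial excess pore pressure; a short sketch of the textbook result (not the paper's method, which also handles unknown initial conditions):

```python
import math

def degree_of_consolidation(Tv, n_terms=100):
    """Average degree of consolidation U(Tv) from the Terzaghi series solution,
    uniform initial excess pore pressure, double drainage:
    U = 1 - sum_m (2 / M^2) * exp(-M^2 * Tv), with M = (2m + 1) * pi / 2."""
    s = 0.0
    for m in range(n_terms):
        M = (2 * m + 1) * math.pi / 2
        s += (2 / M**2) * math.exp(-M**2 * Tv)
    return 1.0 - s

# U grows monotonically with the time factor Tv = c_v * t / H^2;
# the classic landmark U(Tv = 0.197) ~ 0.50 is recovered.
u50 = degree_of_consolidation(0.197)
```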
13. Multi-Sensor Attention Networks for Automated Subsurface Delamination Detection in Concrete Bridge Decks
Core Problem: Subsurface delaminations in concrete bridge decks are undetectable by visual inspection, requiring automated non-destructive evaluation methods that can effectively integrate multi-sensor data and quantify uncertainty.
Key Innovation: A deep learning framework that integrates Ground Penetrating Radar (GPR) and Infrared Thermography (IRT) using hierarchical attention mechanisms (temporal, spatial, and cross-modal) and Monte Carlo dropout-based uncertainty quantification, demonstrating substantial performance gains for automated subsurface delamination detection in bridge decks.
14. Ensemble Transport Filter via Optimized Maximum Mean Discrepancy
Core Problem: Existing ensemble-based filters, like particle filters, can struggle with high-dimensional data assimilation problems, and robustly matching posterior distributions remains a challenge.
Key Innovation: Proposes an Ensemble Transport Filter that uses an optimized Maximum Mean Discrepancy loss function with a variance penalty term to construct a transport map, enabling accurate and robust estimation of posterior distributions in high-dimensional assimilation problems, outperforming ensemble Kalman filters.
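The MMD loss at the heart of the method compares two samples through a kernel; a minimal biased estimator with an RBF kernel (bandwidth, seed, and sample sizes are illustrative):

```python
import numpy as np

def mmd2_rbf(x, y, bandwidth=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel.
    x, y: arrays of shape (n, d) and (m, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
shifted = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
# Samples from the same distribution give near-zero MMD; shifted samples
# give a clearly larger value, which the learned transport map drives down.
```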
15. Antarctica’s uncertain future: global sea-level rise from oceanic and atmospheric forcing, with a focus on atmospheric rivers
Core Problem: Substantial uncertainties in Antarctica's future contribution to global sea-level rise due to limited understanding of atmospheric and oceanic forcings (e.g., atmospheric rivers) and their impact on ice mass balance.
Key Innovation: Assesses Antarctica's future contribution to sea-level rise, highlighting how uncertainties in atmospheric and oceanic forcings (including atmospheric rivers) propagate into projections of sea-level-rise-related hazards.
16. Distinguishing Single and Linked Ruptures in the Laboratory and Nature
Core Problem: Distinguishing between single and linked earthquake ruptures and identifying the fault conditions that promote linked ruptures, a distinction with direct implications for earthquake magnitude predictability.
Key Innovation: Laboratory experiments show that higher concentrations of normal stress (due to increased normal force, applied stress asperities, or larger heterogeneity) significantly increase the likelihood of linked ruptures, and that the mean radiated energy enhancement factor (REEF) is an excellent proxy for identifying these events.
17. Remote Sensing Data Assimilation With a Chained Hydrologic‐Hydraulic Model for Flood Forecasting
Core Problem: The need for reliable flood forecasts with extended lead times for effective flood risk management, which requires reducing uncertainties in hydrological forcing and friction parameters in modeling frameworks.
Key Innovation: Developed a chained hydrologic-hydraulic modeling framework (ISBA-CTRIP to TELEMAC-2D) for near-real-time flood forecasting, integrating an Ensemble Kalman Filter to jointly assimilate in-situ water level measurements and satellite-derived flood maps. This framework improves forecast accuracy, especially when using observed discharge during reanalysis combined with CTRIP-predicted runoff for forecasting.
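A single stochastic EnKF analysis step of the kind used for such joint assimilation can be sketched in a few lines (state dimensions, observation operator, and error levels below are illustrative, not the ISBA-CTRIP/TELEMAC-2D configuration):

```python
import numpy as np

def enkf_update(X, y, r_std, H, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n_members, n_state) forecast ensemble; H: (n_obs, n_state)."""
    n = X.shape[0]
    Y = X @ H.T                                   # ensemble in observation space
    A = X - X.mean(axis=0)
    B = Y - Y.mean(axis=0)
    Pxy = A.T @ B / (n - 1)                       # state-observation covariance
    Pyy = B.T @ B / (n - 1) + np.eye(len(y)) * r_std**2
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    y_pert = y + rng.normal(0.0, r_std, size=(n, len(y)))
    return X + (y_pert - Y) @ K.T

rng = np.random.default_rng(42)
prior = rng.normal(0.0, 1.0, size=(500, 2))      # 2-state ensemble, observe state 0
H = np.array([[1.0, 0.0]])
post = enkf_update(prior, np.array([2.0]), 0.5, H, rng)
# The observed component is pulled toward the observation and its
# ensemble spread shrinks.
```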
18. AnyThermal: Towards Learning Universal Representations for Thermal Perception
Core Problem: Existing thermal backbones are task-specific and limited by small-scale data, restricting their utility and generalizability across diverse environments and tasks in thermal perception.
Key Innovation: AnyThermal, a thermal backbone that learns robust task-agnostic thermal features by distilling representations from visual foundation models into a thermal encoder, supported by the new TartanRGBT data collection platform and dataset, achieving state-of-the-art results across diverse thermal perception tasks and environments.
19. MMEarth-Bench: Global Model Adaptation via Multimodal Test-Time Training
Core Problem: Existing geospatial benchmark datasets lack multimodal data and global representation, limiting the evaluation and adaptation of pretrained models for Earth observation tasks, especially regarding geographic generalization.
Key Innovation: Introducing MMEarth-Bench, a new multimodal, globally distributed benchmark for environmental tasks, and proposing Test-Time Training with Multimodal Reconstruction (TTT-MMR) to improve model adaptation and generalization across new tasks and geographic domains using all available modalities.
20. The effect of recycled rubber energy-absorbing grids on the cyclic shear response of ballast
Core Problem: Ballasted railway tracks face accelerated degradation, settlement, and maintenance needs due to increasing axle loads and speeds, with traditional reinforcement solutions lacking combined energy absorption and lateral confinement.
Key Innovation: Introduces Recycled Rubber Energy Absorbing Grids (REAGs) that combine damping and interlocking effects, demonstrating significant enhancement in interface shear performance, reduced deformation, and minimized ballast breakage under cyclic loading compared to conventional geogrids.
21. Enrichment and release mechanisms of geogenic fluoride in multi-layered clayey sediments of coastal aquifers under groundwater overexploitation
Core Problem: The formation mechanisms of fluoride in clay porewater during compression, leading to high-fluoride groundwater and waterborne fluorosis, remain unknown, especially under groundwater overexploitation and associated subsidence.
Key Innovation: Elucidated the enrichment and release mechanisms of geogenic fluoride in clayey sediments, showing that clay compression due to overexploitation intensifies the dissolution and desorption of F-bearing minerals, quantitatively assessing its impact on deep groundwater F⁻ concentration, and providing a basis for mitigation.
22. Multivariate analysis of drought and hot events using vine copula in a tropical humid region (Kerala) of India
Core Problem: Univariate analysis is inadequate for accurate risk assessment of interrelated drought and hot event characteristics in tropical humid regions.
Key Innovation: A multivariate joint dependence model using C-vine copulas was developed to analyze drought (meteorological, agricultural) and hot event characteristics, outperforming traditional copulas and highlighting the importance of incorporating non-stationarity for accurate extreme event assessment.
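Copula-based joint analysis differs from univariate analysis precisely in the tail: a sketch with a bivariate Gumbel copula (a common upper-tail-dependent choice; the paper uses C-vine constructions, and the θ value here is illustrative):

```python
import math

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula CDF C(u, v); theta >= 1, theta = 1 is independence."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1 / theta)))

def joint_exceedance(u, v, theta=2.0):
    """P(U > u, V > v) by inclusion-exclusion."""
    return 1 - u - v + gumbel_copula(u, v, theta)

# For two marginal 10-year events (u = v = 0.9), upper-tail dependence
# makes the joint drought-and-hot exceedance far larger than the
# independence value (1 - 0.9)^2 = 0.01:
p_dep = joint_exceedance(0.9, 0.9, theta=2.0)
p_ind = joint_exceedance(0.9, 0.9, theta=1.0)
```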
23. An Effective Monitoring of Evolving Groundwater Drought via Multivariate Data Assimilation and Machine Learning
Core Problem: Monitoring groundwater drought with high spatial and temporal resolution remains challenging due to limited in-situ observations, coarse satellite data, and model uncertainties.
Key Innovation: Developed an observation-informed approach combining multivariate data assimilation (SMAP, GRACE-FO into Noah-MP) and machine learning (Growing Neural Gas) to produce high-resolution daily groundwater drought maps, identifying evolving regional drought patterns.
24. A Fast and Generalizable Fourier Neural Operator-Based Surrogate for Melt-Pool Prediction in Laser Processing
Core Problem: High-fidelity simulations of complex thermo-fluid phenomena (e.g., laser welding) are computationally expensive, limiting large-scale process exploration and real-time use.
Key Innovation: LP-FNO, a Fourier Neural Operator (FNO) based surrogate model that learns the parametric solution operator for laser processes, reformulating the transient problem into a quasi-steady state to achieve fast, accurate, and generalizable prediction of 3D temperature fields and melt-pool boundaries.
25. Structural barriers to complete homogenization and wormholing in dissolving porous and fractured rocks
Core Problem: Understanding how dissolution processes in porous and fractured rocks lead to different flow patterns (uniform, channeling, wormholing) and how initial structural heterogeneity influences these processes.
Key Innovation: Quantifying differences in dissolution patterns across various network models (pore, fractured) using a unified flow focusing profile metric, demonstrating that structural heterogeneity fundamentally limits flow homogenization and must be accounted for in upscaling dissolution kinetics.
26. Toward generative machine learning for boosting ensembles of climate simulations
Core Problem: The computational constraints of physics-based climate models limit the generation of large ensembles needed for robust uncertainty quantification in climate predictions, forcing a trade-off with model resolution.
Key Innovation: Developing a conditional Variational Autoencoder (cVAE) trained on limited climate simulations to generate arbitrarily large, physically consistent ensembles that reproduce realistic statistics and teleconnection patterns, offering a computationally efficient framework for boosting climate simulation ensembles.
27. Taming SAM3 in the Wild: A Concept Bank for Open-Vocabulary Segmentation
Core Problem: The reliance of SAM3 on pre-defined concepts makes it vulnerable to performance degradation in Open-Vocabulary Segmentation (OVS) when visual distributions (data drift) or conditional label distributions (concept drift) shift in the target domain, breaking the alignment between visual evidence and prompts.
Key Innovation: Presents ConceptBank, a parameter-free calibration framework that restores alignment on the fly by constructing a dataset-specific concept bank from target statistics, anchoring target-domain evidence via class-wise visual prototypes, mining representative supports to suppress outliers, and fusing candidate concepts to rectify concept drift, demonstrating effectiveness in remote-sensing scenarios.
28. TFusionOcc: Student's t-Distribution Based Object-Centric Multi-Sensor Fusion Framework for 3D Occupancy Prediction
Core Problem: Existing 3D semantic occupancy prediction methods for autonomous vehicles rely on intermediate representations (3D voxel volumes or Gaussians) that hinder efficient and effective capture of fine-grained geometric details in complex 3D driving environments.
Key Innovation: TFusionOcc, a novel object-centric multi-sensor fusion framework for 3D semantic occupancy prediction, which leverages multi-stage fusion, Student's t-distribution, T-Mixture model, and deformable superquadric primitives to achieve state-of-the-art performance and robustness to sensor corruption.
29. Reclaiming First Principles: A Differentiable Framework for Conceptual Hydrologic Models
Core Problem: Calibration of conceptual hydrologic models is slow and numerically fragile due to reliance on computationally demanding and error-prone numerical differentiation methods (finite-difference or autodiff frameworks).
Key Innovation: Introduces a fully analytic and computationally efficient differentiable framework for hydrologic modeling based on exact parameter sensitivities, jointly evolving model states and the Jacobian matrix. This provides analytic gradients, eliminating numerical issues and enabling rapid, stable, and transparent gradient-based calibration.
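The core idea, evolving exact parameter sensitivities alongside the state, can be sketched for a one-parameter linear reservoir (the model, forcing, and parameter values are illustrative, not the paper's framework):

```python
def simulate_outflow(precip, k, dt=1.0, s0=0.0):
    """Linear reservoir dS/dt = P - k*S (explicit Euler), jointly evolving
    the storage S and its exact discrete sensitivity dS/dk."""
    s, ds = s0, 0.0
    for p in precip:
        ds = (1.0 - k * dt) * ds - dt * s   # sensitivity (Jacobian) step
        s = s + dt * (p - k * s)            # state step
    q = k * s                               # outflow at the final step
    dq_dk = s + k * ds                      # analytic gradient dQ/dk
    return q, dq_dk

precip = [10.0, 0.0, 5.0, 0.0, 0.0]
q, g_analytic = simulate_outflow(precip, k=0.3)

# The analytic gradient matches a central finite difference of the same
# discrete model, with no step-size tuning required.
eps = 1e-6
g_fd = (simulate_outflow(precip, 0.3 + eps)[0]
        - simulate_outflow(precip, 0.3 - eps)[0]) / (2 * eps)
```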
30. Rebenchmarking Unsupervised Monocular 3D Occupancy Prediction
Core Problem: Existing unsupervised monocular 3D occupancy prediction methods have inconsistencies between training/evaluation protocols and fail to address ambiguity in occluded regions due to insufficient geometric constraints.
Key Innovation: Presents a reformulated benchmark and an improved approach that interprets variables for physically consistent occupancy probability, aligns evaluation with 3D ground truth, and introduces an occlusion-aware polarization mechanism to enhance discrimination in occluded regions, significantly outperforming existing methods.
31. Forest canopy height estimation from satellite RGB imagery using large-scale airborne LiDAR-derived training data and monocular depth estimation
Core Problem: Global forest canopy height observations from spaceborne LiDAR are spatially sparse and uncertain, necessitating high-resolution, spatially continuous mapping.
Key Innovation: Develops Depth2CHM, a monocular depth estimation model trained with large-scale airborne LiDAR-derived CHMs and satellite RGB imagery, enabling accurate and spatially continuous forest canopy height estimation directly from PlanetScope RGB imagery, outperforming existing global products.
32. Gold Exploration using Representations from a Multispectral Autoencoder
Core Problem: Traditional mineral exploration is costly and limited by on-site data availability, making large-scale prospectivity mapping challenging.
Key Innovation: A proof-of-concept framework that leverages generative representations learned from multispectral Sentinel-2 imagery using a pretrained autoencoder (Isometric) to identify gold-bearing regions, significantly improving accuracy and demonstrating the potential of foundation-model representations for efficient, scalable mineral exploration.
33. FlowDA: Accurate, Low-Latency Weather Data Assimilation via Flow Matching
Core Problem: Data assimilation (DA) is a major computational bottleneck in machine learning-based weather forecasting pipelines, with existing generative ML-based DA methods suffering from many sampling steps and error accumulation.
Key Innovation: Proposes FlowDA, a low-latency weather-scale generative DA framework based on flow matching, which conditions on observations and fine-tunes a foundation model to deliver accurate, efficient, and robust analyses, demonstrating superior performance and stability in long-horizon cycling DA.
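For the common linear interpolant, the flow-matching recipe underlying such frameworks reduces to a simple regression target; a generic sketch (not FlowDA's architecture or conditioning):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear-interpolant conditional flow matching: the point on the
    probability path at time t and the constant target velocity."""
    x_t = (1 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# Training regresses a network v_theta(x_t, t | observations) onto
# v_target; sampling then integrates dx/dt = v_theta from noise (t = 0)
# to an analysis state (t = 1) in few steps, hence the low latency.
rng = np.random.default_rng(0)
x0, x1 = rng.normal(size=3), rng.normal(size=3)
x_t, v = flow_matching_pair(x0, x1, 0.5)
```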
34. Extracting Manifold Information from Point Clouds
Core Problem: Effectively interpolating and analyzing point clouds to extract geometric information like dimension, normal, and curvatures, especially when data is noisy or unstructured, remains a challenge.
Key Innovation: Proposes a kernel-based method for constructing signature functions of subsets of R^d (including point clouds), which can estimate manifold dimension, normal, and curvatures, offering a global approach that handles unstructured and noisy data with a variational formulation.
35. Forecasting with Hyper-Trees
Core Problem: Conventional tree-based models struggle to effectively forecast time series data directly, lacking the inductive bias of classical time series models and facing scaling limitations for high-dimensional parameter estimation.
Key Innovation: Introduces Hyper-Trees, a novel framework that uses gradient boosted trees to learn the parameters of target time series models (e.g., ARIMA, Exponential Smoothing) as functions of features, combining tree effectiveness with time series inductive bias and using a hybrid tree-neural network approach for scalability.
36. M4-SAR: A Multi-Resolution, Multi-Polarization, Multi-Scene, Multi-Source Dataset and Benchmark for Optical-SAR Fusion Object Detection
Core Problem: Single-source remote sensing object detection (optical or SAR) struggles in complex environments due to individual limitations (e.g., clouds for optical, speckle noise for SAR), hindering accurate detection.
Key Innovation: Proposes M4-SAR, the first comprehensive multi-source dataset (112,184 aligned image pairs, nearly 1 million labeled instances) for optical-SAR fusion object detection, along with a unified benchmarking toolkit and a novel E2E-OSDet framework, demonstrating significant mAP improvement (5.7%) over single-source inputs.
37. SPARK: Scalable Real-Time Point Cloud Aggregation with Multi-View Self-Calibration
Core Problem: Existing real-time multi-camera 3D reconstruction methods struggle with effective multi-view fusion, accurate handling of camera extrinsic uncertainty, and scalability for large camera setups, which are crucial for 3D perception and robotics.
Key Innovation: Proposing SPARK, a self-calibrating real-time multi-camera point cloud reconstruction framework that jointly handles point cloud fusion and extrinsic uncertainty through a geometry-aware online extrinsic estimation module and a confidence-driven point cloud fusion strategy, achieving superior accuracy, consistency, stability, and real-time performance.
38. Science-Informed Design of Deep Learning With Applications to Wireless Systems: A Tutorial
Core Problem: Conventional deep learning models often lack transparency, exhibit weak generalization, and lack a principled framework for parameter tuning, undermining trust and limiting their application in scientific and engineering domains.
Key Innovation: Presents a structured tutorial and taxonomy for Science-informed deep learning (ScIDL), demonstrating how integrating scientific knowledge into DL pipelines can address transparency, generalization, and parameter tuning challenges, with applications in wireless systems.
39. Better continental-scale streamflow predictions for Australia: LSTM as a land surface model post-processor and standalone hydrological model
Core Problem: Many land surface models struggle to accurately capture streamflow timing and magnitudes at continental scales, especially in large catchments and when calibrated broadly.
Key Innovation: Demonstrated that two LSTM-based approaches (standalone LSTM-C and LSTM-QC as a post-processor for AWRA-L) consistently outperform traditional land surface models and conceptual models for continental-scale streamflow predictions in Australia, showing robustness across various cross-validation strategies and highlighting the LSTM's ability to correct systematic biases and enhance routing signals.
40. Mild-to-wild plasticity of Earth’s upper mantle
Core Problem: Traditional models of Earth's upper mantle flow assume slow, continuous creep, but high-resolution experiments reveal a spectrum of 'mild-to-wild plasticity' with intermittent fluctuations. The problem is to determine if this 'wildness' applies to olivine and Earth's mantle, and its implications for geodynamic models.
Key Innovation: Nanoindentation experiments on olivine single crystals show measurable 'wildness' (intermittent bursts of displacement) even under mild conditions, accounting for ~8 ± 6% of plastic strain. This suggests upper mantle flow may involve intermittent wild fluctuations, increasing with depth, and provides new constraints on dislocation-mediated flow and transient instabilities like deep earthquakes and slow-slip events.
41. Fully coupled conditional simulation of geological and geotechnical variabilities for sparse geotechnical data in three dimensions
Core Problem: Accurately characterizing geological uncertainty and geotechnical variability jointly, especially with sparse site investigation data, where conventional methods decouple soil-category and soil-property simulations.
Key Innovation: Proposing a fully coupled conditional simulation framework that jointly conditions on categorical and continuous soil data, using a modified hierarchical Bayesian model to learn and transfer correlation characteristics from a global soil database, enabling efficient 3D applications.
42. Understanding the limits of two lumped hydrological models through divergences between daily and sub-daily projections
Core Problem: Divergences between daily and sub-daily projections of flow metrics from lumped conceptual hydrological models lead to uncertainties in their application for engineering and climate change contexts.
Key Innovation: Four common reasons for divergences between daily and sub-daily hydrological model projections were identified, revealing compensatory mechanisms in parameter sets and emphasizing that time-step dependent projections are crucial for assessing model validity and behavior under non-stationary climate conditions.
43. Quantitative study on the effect of moisture content and external induced load characteristics on the dynamic separation of bauxite
Core Problem: The idealized treatment of moisture content and external induced load characteristics in theoretical and experimental studies leads to inaccurate risk assessments for cargo fluidization, specifically for bauxite.
Key Innovation: A quantitative study using shaking table tests and regression analysis reveals that increasing load frequency, peak acceleration, and moisture content accelerate bauxite dynamic separation. It quantifies their relative impact weights (47.11%, 38.62%, and 14.27% respectively) and describes the complex liquid migration and interface morphology, providing a more accurate basis for fluidization risk assessment.
44. Modeling Interactions and Dynamic Saturation Processes in Karst Media
Core Problem: Simulating complex conduit-matrix interactions and dynamic saturation processes in karst aquifers is highly challenging for existing models.
Key Innovation: Developed KarstFOAM, a high-fidelity, physics-based numerical model that accurately simulates conduit-matrix interface velocity, transition zones, dynamic saturation, conduit drying, and matrix water retention effects in karst media.
45. MGP-KAD: Multimodal Geometric Priors and Kolmogorov-Arnold Decoder for Single-View 3D Reconstruction in Complex Scenes
Core Problem: Single-view 3D reconstruction in complex real-world scenes is challenging due to noise, object diversity, and limited dataset availability, leading to difficulties in achieving accurate geometric integrity, smoothness, and detail preservation.
Key Innovation: MGP-KAD, a novel multimodal feature fusion framework that integrates RGB and dynamically adjusting geometric priors (class-level features) with a hybrid Kolmogorov-Arnold Networks (KAN) based decoder, achieving state-of-the-art performance in single-view 3D reconstruction by significantly improving geometric integrity, smoothness, and detail preservation.
46. ATEX-CF: Attack-Informed Counterfactual Explanations for Graph Neural Networks
Core Problem: Interpreting Graph Neural Networks (GNNs) requires identifying minimal changes that alter a model's prediction, yet traditional approaches treat adversarial attacks and counterfactual explanations as separate problems.
Key Innovation: ATEX-CF, a novel framework that unifies adversarial attack techniques with counterfactual explanation generation for GNNs, efficiently integrating both edge additions and deletions to produce faithful, concise, and plausible instance-level explanations.
47. SPDA-SAM: A Self-prompted Depth-Aware Segment Anything Model for Instance Segmentation
Core Problem: The performance of the Segment Anything Model (SAM) in instance segmentation depends heavily on manual prompt quality, and the lack of depth information in RGB images hinders its ability to perceive spatial structures and delineate object boundaries.
Key Innovation: Proposes SPDA-SAM, a Self-prompted Depth-Aware SAM for instance segmentation, featuring a Semantic-Spatial Self-prompt Module (SSSPM) to extract semantic and spatial prompts, and a Coarse-to-Fine RGB-D Fusion Module (C2FFM) that fuses features from monocular RGB images and estimated depth maps to provide structural guidance and compensate for spatial information loss.
48. Robust Pedestrian Detection with Uncertain Modality
Core Problem: Existing cross-modal pedestrian detection methods struggle to extract robust information and maintain performance when input data has unpredictable combinations of available modalities (e.g., missing RGB, NIR, or TIR), leading to significant degradation.
Key Innovation: Proposes the Adaptive Uncertainty-aware Network (AUNet) with Unified Modality Validation Refinement (UMVR) and Modality-Aware Interaction (MAI) to accurately discriminate modal availability and effectively fuse information from available modalities, along with a new Triplet RGB-NIR-TIR (TRNT) dataset.
49. Point Virtual Transformer
Core Problem: LiDAR-based 3D object detectors struggle with far-field objects due to point cloud sparsity, and existing augmentation methods with virtual points increase computational cost and fusion challenges.
Key Innovation: Point Virtual Transformer (PointViT), a transformer-based 3D object detection framework that jointly reasons over raw LiDAR and selectively sampled virtual points, examining multiple fusion strategies to improve accuracy and efficiency, achieving state-of-the-art performance on the KITTI benchmark.
50. The Window Dilemma: Why Concept Drift Detection is Ill-Posed
Core Problem: Concept drift detection in data streams is fundamentally ill-posed: perceived drift is often a product of the windowing method used, and verifying actual drift events in practice is implausible, which calls into question the utility of drift detectors relative to traditional adaptation strategies.
Key Innovation: Introduces the 'Window Dilemma' to highlight that perceived drift is often an artifact of windowing. It empirically demonstrates that traditional batch learning techniques can often outperform drift-aware counterparts in stream classification, challenging the necessity and efficacy of explicit drift detection.
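The windowing artifact at the heart of the "Window Dilemma" can be illustrated with a toy detector (an illustrative assumption, not the paper's experiment): on the same slowly drifting stream, a naive mean-shift test fires or stays silent depending solely on the window size chosen.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stream with slow, continuous drift: the mean rises linearly.
t = np.arange(1000)
stream = t / 1000.0 + 0.01 * rng.standard_normal(1000)

def mean_shift_flag(x, w, thresh=0.1):
    """Naive drift detector: compare the means of two adjacent windows."""
    ref, cur = x[-2 * w:-w], x[-w:]
    return abs(cur.mean() - ref.mean()) > thresh

small = mean_shift_flag(stream, w=10)    # short windows: drift invisible
large = mean_shift_flag(stream, w=200)   # long windows: drift "detected"
print(small, large)
```

Identical data, contradictory verdicts: the "drift event" is a property of the window, not of the stream.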
51. DiTS: Multimodal Diffusion Transformers Are Time Series Forecasters
Core Problem: Existing generative time series models, particularly Diffusion Transformers, do not adequately address the multi-dimensional properties of time series data and tend to underutilize cross-variate dependencies in covariate-aware probabilistic forecasting.
Key Innovation: Proposed Diffusion Transformers for Time Series (DiTS), a general-purpose architecture that frames endogenous and exogenous variates as distinct modalities and employs a dual-stream Transformer block (Time Attention and Variate Attention) to better capture both inter-variate and intra-variate dependencies, achieving state-of-the-art performance in generative time series forecasting.
52. Memory-Conditioned Flow-Matching for Stable Autoregressive PDE Rollouts
Core Problem: Autoregressive generative PDE solvers, while accurate for single steps, suffer from drift over long rollouts, especially in coarse-to-fine regimes where unresolved fine scales must be regenerated, due to structural limitations of memoryless closures.
Key Innovation: Introduces memory-conditioned diffusion/flow-matching, which injects a compact online state into denoising via latent features, leading to improved accuracy and markedly more stable long-horizon rollouts for PDEs, with better fine-scale spectral and statistical fidelity.
53. Sample Complexity of Causal Identification with Temporal Heterogeneity
Core Problem: Recovering a unique causal graph from observational data is an ill-posed problem, and existing methods separately utilize time-series dynamics or multi-environment heterogeneity, without fully integrating them or analyzing their statistical limits under various noise conditions.
Key Innovation: Integration of time-series dynamics and multi-environment heterogeneity for causal identification, yielding unified identifiability conditions and a rigorous analysis of statistical recovery limits under thin vs. heavy-tailed noise, demonstrating that temporal structure can compensate for missing environmental diversity and quantifying the sample complexity cost of robustness.
54. Supercharging Simulation-Based Inference for Bayesian Optimal Experimental Design
Core Problem: Bayesian optimal experimental design (BOED) is limited by the intractability of likelihood estimates for maximizing expected information gain (EIG), and existing simulation-based inference (SBI) connections are restricted to a single EIG bound, with optimization being a key bottleneck.
Key Innovation: Demonstrating that EIG admits multiple formulations leveraging modern SBI density estimators (neural posterior, likelihood, ratio estimation), defining a novel EIG estimator using neural likelihood estimation, and improving optimization with a multi-start parallel gradient ascent, leading to significant performance improvements in SBI-based BOED.
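A standard baseline that the SBI-based estimators above compete with is the nested Monte Carlo (NMC) EIG estimator. The sketch below implements NMC on a toy linear-Gaussian model whose EIG has a closed form (the model, design value, and sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lik(y, theta, d):
    # Gaussian log-likelihood for the toy model y = d*theta + eps, eps ~ N(0, 1).
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - d * theta) ** 2

def nmc_eig(d, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of expected information gain at design d."""
    theta = rng.standard_normal(n_outer)            # theta_n ~ prior N(0, 1)
    y = d * theta + rng.standard_normal(n_outer)    # y_n ~ p(y | theta_n, d)
    ll = log_lik(y, theta, d)                       # log p(y_n | theta_n, d)
    theta_in = rng.standard_normal(n_inner)
    # Inner marginal: log p(y_n | d) ~= log mean_m p(y_n | theta_m, d)
    log_marg = np.log(np.mean(np.exp(log_lik(y[:, None], theta_in[None, :], d)), axis=1))
    return float(np.mean(ll - log_marg))

d = 1.5
est = nmc_eig(d)
analytic = 0.5 * np.log(1.0 + d ** 2)   # closed form for this conjugate model
print(round(est, 3), round(analytic, 3))
```

The inner marginalization is the expensive step that neural likelihood or ratio estimators replace with a single amortized density evaluation.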
55. Time-uniform conformal and PAC prediction
Core Problem: Traditional uncertainty quantification methods like conformal prediction lack guarantees in sequential settings with streaming data and unfixed sample sizes, and cannot cope with sequentially updated predictions.
Key Innovation: Developing an extension of conformal and PAC prediction frameworks for sequential settings, providing 'anytime-valid' prediction sets whose expected coverage is maintained at the required level at any chosen time, even if the choice depends on the data.
56. 3D Object Detection for Autonomous Driving: A Survey
Core Problem: 3D object detection for autonomous driving faces challenges in visual appearance recovery, representation learning from occluded point clouds, and semantic alignment of heterogeneous features, with a lack of structured knowledge synthesis.
Key Innovation: Provides a comprehensive survey of 3D object detection for autonomous driving, structuring existing knowledge on sensors, datasets, metrics, state-of-the-art methods, and offering quantitative comparisons, runtime, error, and robustness analyses, along with future directions.
57. Continual-MEGA: A Large-scale Benchmark for Generalizable Continual Anomaly Detection
Core Problem: Existing benchmarks for continual anomaly detection do not adequately reflect real-world deployment scenarios, lacking in scale, diversity, and the ability to measure zero-shot generalization to unseen anomaly classes.
Key Innovation: Introduction of Continual-MEGA, a large-scale benchmark for generalizable continual anomaly detection, which includes a diverse dataset and a novel scenario for zero-shot generalization, along with a unified baseline algorithm that improves robustness and generalization.
58. Physics vs Distributions: Pareto Optimal Flow Matching with Physics Constraints
Core Problem: Physics-constrained generative modeling faces a fundamental trade-off between achieving physical consistency and distributional accuracy, as these objectives often conflict, leading to degraded generative fidelity or costly inference-time corrections.
Key Innovation: Introduction of Physics-Based Flow Matching (PBFM), a method that enforces physical constraints at training time using conflict-free gradient updates and unrolling, enabling simultaneous optimization of generative and physical objectives to achieve a Pareto-optimal trade-off without impeding inference performance.
59. WAFT: Warping-Alone Field Transforms for Optical Flow
Core Problem: Existing optical flow methods often rely on computationally expensive and memory-intensive cost volumes, limiting accuracy and efficiency.
Key Innovation: WAFT (Warping-Alone Field Transforms), a simple and effective optical flow method that replaces cost volume with high-resolution warping, achieving superior accuracy and speed (1.3-4.1x faster) with lower memory cost, challenging conventional wisdom.
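The core operation that replaces the cost volume is backward warping: sampling the source image at positions displaced by the current flow estimate. A minimal bilinear version (a stand-in sketch, not WAFT's actual implementation) looks like this:

```python
import numpy as np

def backward_warp(img, flow):
    """Bilinearly sample img at (x + dx, y + dy) for each target pixel.

    img: (H, W) array; flow: (H, W, 2) array of per-pixel displacements.
    Coordinates are clamped at the image border.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bot = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bot

# A constant flow of (+1, 0): each output pixel samples one pixel to
# its right, so content appears shifted left (with border clamping).
img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
warped = backward_warp(img, flow)
print(warped)
```

Unlike a cost volume, which stores a similarity score for every candidate displacement, warping touches only one sampled location per pixel, which is where the memory savings come from.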
60. MATTER: Multiscale Attention for Registration Error Regression
Core Problem: Existing point cloud registration (PCR) quality validation methods treat it as a classification task, limiting the fine-grained quantification of registration quality, and often struggle with diverse datasets and heterogeneous spatial densities.
Key Innovation: MATTER (Multiscale Attention for Registration Error Regression), a regression-based approach for PCR validation that uses multiscale feature extraction and attention-based aggregation to accurately and robustly estimate registration errors, significantly improving mapping quality in downstream tasks.
61. Spectral Compressive Imaging via Chromaticity-Intensity Decomposition
Core Problem: Hyperspectral image (HSI) reconstruction from coded aperture snapshot spectral imaging (CASSI) is a severely ill-posed inverse problem, and recovering intrinsic spectral reflectance that is invariant to lighting conditions is difficult because spatial and spectral information are entangled.
Key Innovation: A chromaticity-intensity decomposition framework that disentangles HSI into a spatially smooth intensity map and a spectrally variant chromaticity cube, and CIDNet, an unfolding network within a dual-camera CASSI system, which integrates a hybrid spatial-spectral Transformer and a degradation-aware noise estimation module to achieve superior performance in HSI reconstruction.
62. Adaptive Regime-Switching Forecasts with Distribution-Free Uncertainty: Deep Switching State-Space Models Meet Conformal Prediction
Core Problem: Regime transitions in time series routinely break stationarity, making it challenging to produce calibrated uncertainty alongside point accuracy in forecasts, especially under nonstationarity and model misspecification.
Key Innovation: Couples Deep Switching State Space Models with Adaptive Conformal Inference (ACI) and its aggregated variant (AgACI), introducing a unified conformal wrapper to produce online predictive bands with finite-sample marginal guarantees for regime-switching forecasts.
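The ACI component referenced above rests on a one-line online update (Gibbs and Candès): the working miscoverage level is nudged after each observation depending on whether the interval covered it. A toy illustration on synthetic Gaussian data (the scores, interval form, and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# ACI update: after each step, move the working miscoverage level
# alpha_t toward the target depending on whether the current interval
# covered the observation. Toy setting: intervals of the form |y| <= q.
alpha_target, gamma = 0.1, 0.01
alpha_t = alpha_target
scores = np.abs(rng.standard_normal(5000))   # held-out conformity scores
errs = []
for y in rng.standard_normal(5000):          # the online stream
    a = float(np.clip(alpha_t, 0.001, 0.999))
    q = np.quantile(scores, 1.0 - a)         # current interval half-width
    err = 1.0 if abs(y) > q else 0.0         # 1 = interval missed y
    alpha_t += gamma * (alpha_target - err)  # the ACI update rule
    errs.append(err)

miscoverage = float(np.mean(errs))
print(miscoverage)  # hovers near the 10% target
```

Because the update is distribution-free, the same rule keeps long-run coverage near the target even when the score distribution shifts, which is what makes it a natural wrapper for regime-switching forecasters.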
63. Inverse problems with diffusion models: MAP estimation via mode-seeking loss
Core Problem: Solving inverse problems with pre-trained unconditional diffusion models often relies on existing posterior sampling or MAP estimation methods that involve modeling approximations and can be computationally demanding.
Key Innovation: Proposes a new MAP estimation strategy using the variational mode-seeking loss (VML), which minimizes the Kullback-Leibler divergence between the diffusion posterior and the measurement posterior, guiding generated samples towards MAP estimates, and introduces the VML-MAP algorithm for efficient inverse problem solving.
64. DarkEQA: Benchmarking Vision-Language Models for Embodied Question Answering in Low-Light Indoor Environments
Core Problem: Existing benchmarks for Vision Language Models (VLMs) in embodied agents overlook their performance under challenging visual degradations, particularly low-light conditions, which are critical for robust 24/7 operation.
Key Innovation: DarkEQA, an open-source benchmark, evaluates EQA-relevant perceptual primitives of VLMs under multi-level low-light conditions, modeling physical fidelity of degradations in linear RAW space, thereby systematically revealing VLMs' limitations and enabling attributable robustness analysis in challenging visual environments.
65. Adaptive Attention Distillation for Robust Few-Shot Segmentation under Environmental Perturbations
Core Problem: Existing few-shot segmentation (FSS) models lack robustness to complex environmental factors (e.g., illumination, background, viewpoint, motion blur, small objects, camouflaged targets) encountered in real-world scenarios, leading to poor performance outside laboratory conditions.
Key Innovation: Introducing an environment-robust FSS setting and benchmark (ER-FSS) and proposing Adaptive Attention Distillation (AAD) method, which repeatedly contrasts and distills key shared semantics between known and unknown images to derive class-specific attention, significantly improving segmentation robustness under diverse environmental perturbations.
66. FreDN: Spectral Disentanglement for Time Series Forecasting via Learnable Frequency Decomposition
Core Problem: Frequency-domain methods for non-stationary time series forecasting suffer from spectral entanglement (the overlap of trends, periodicities, and noise) and from the computational burden of complex-valued learning.
Key Innovation: Proposing the Frequency Decomposition Network (FreDN) which introduces a learnable Frequency Disentangler module to separate trend and periodic components directly in the frequency domain; developing a theoretically supported ReIm Block to reduce the complexity of complex-valued operations; outperforming state-of-the-art methods on long-term forecasting benchmarks while reducing parameter count and computational cost.
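A fixed-mask version of the trend/periodic split conveys the idea that FreDN makes learnable: low rFFT bins are read as trend, the rest as periodic structure (the cutoff and signal here are illustrative assumptions, and FreDN learns the separation rather than hard-coding it):

```python
import numpy as np

rng = np.random.default_rng(4)

# Signal = slow trend + fast periodicity + noise.
t = np.arange(512)
trend = 0.005 * t
periodic = np.sin(2 * np.pi * t / 16)
x = trend + periodic + 0.1 * rng.standard_normal(t.size)

# Hard split in the frequency domain: the lowest rFFT bins are taken as
# trend, everything above as periodic structure.
X = np.fft.rfft(x)
low = np.arange(X.size) < 8
trend_hat = np.fft.irfft(np.where(low, X, 0), n=x.size)
period_hat = np.fft.irfft(np.where(low, 0, X), n=x.size)

c_trend = np.corrcoef(trend_hat, trend)[0, 1]
c_period = np.corrcoef(period_hat, periodic)[0, 1]
print(round(c_trend, 3), round(c_period, 3))
```

Both branches correlate strongly with their ground-truth components; a learnable disentangler replaces the hard cutoff with weights fitted to the data.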
67. Improved USV formation control with adaptive disturbance compensation and dynamic artificial potential field
Core Problem: Challenges in achieving precise formation control and autonomous obstacle avoidance for underactuated unmanned surface vehicles (USVs) in dynamic marine environments due to limitations of fixed guidance parameters and rigid potential fields.
Key Innovation: Proposed a multi-layer cooperative control framework featuring a Disturbance-Mediated Adaptive Artificial Potential Field (DMA-APF) integrated with an adaptive LOS guidance law and an adaptive sliding mode control (ASMC) scheme, significantly improving tracking accuracy, actuator efficiency, and safe navigation.
68. Energy consumption prediction for electric tugboats using GA-BiLSTM and adaptive operational mode recognition
Core Problem: Difficulty in accurately predicting energy consumption for electric tugboats due to frequent operational mode switching, leading to highly fluctuating patterns that general-purpose models struggle to capture, impacting efficiency and range anxiety.
Key Innovation: Proposed an operational-adaptive energy consumption prediction method for electric tugboats based on a genetic algorithm-optimized bidirectional long short-term memory (GA-BiLSTM) network, which effectively recognizes operational modes and achieves significantly higher prediction accuracy (MSE as low as 0.0897).
69. Integrating global optimum into learning-based energy management: a hybrid DRL-ECMS with behavioral cloning training and coordinated feedforward-feedback control
Core Problem: Limitations of conventional Deep Reinforcement Learning-based Equivalent Consumption Minimization Strategy (DRL-ECMS) for ship hybrid power systems, specifically poor training efficiency and suboptimal real-time optimality.
Key Innovation: Proposed a novel hierarchical framework integrating offline training (DPMP for global optimum, behavioral cloning for pre-training, adaptive policy entropy) and online feedforward-feedback control (imitation reinforcement learning, dual-state feedback), significantly improving DRL training efficiency, convergence speed, and overall performance.
70. An enhanced numerical model for predicting higher-harmonic wave loads based on weak-scatterer theory
Core Problem: Accurate and computationally efficient prediction of higher-harmonic wave loads on offshore structures under extreme wave conditions, as existing high-fidelity models are expensive and weak-scatterer theory has limitations in steep waves and numerical stability.
Key Innovation: An enhanced numerical implementation of weak-scatterer (WS) theory for predicting higher-harmonic wave loads, featuring a nonlinear waterline correction, a tailored weighted least-squares low-pass filter for robust stability, and a Morison-drag model, enabling stable and accurate simulations at high wave steepness for offshore structures.
71. Runoff evaluation in an Earth System Land Model for permafrost regions in Alaska
Core Problem: Substantial uncertainties in terrestrial runoff parameterization schemes in Earth system and land surface models, particularly in heterogeneous permafrost regions with scarce observational data.
Key Innovation: Developed a framework leveraging physics-based ATS simulations to evaluate and improve ELM's runoff parameterization, showing that minor adjustments to coefficients significantly improve runoff predictions in permafrost regions and better match streamflow observations.
72. Knowledge-guided graph machine learning improves corn yield mapping in the U.S. Midwest
Core Problem: Temporal deep learning models for large-scale crop yield mapping often fail to adequately capture crucial spatial dependencies, such as yield spatial autocorrelations and the influence of time-invariant variables (e.g., soil properties and topography).
Key Innovation: Proposes KGML-Graph, a knowledge-guided graph machine learning framework that integrates spatial learning with temporal structures and incorporates knowledge-guided edge weights (from historical yield correlations) to explicitly capture spatial dependencies, significantly improving corn yield mapping accuracy and transferability across years and unseen regions.
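One plausible construction of correlation-derived edge weights (an assumption for illustration; the paper's exact procedure may differ) is to correlate counties' historical yield series, keep only strong positive links, and row-normalize:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical historical yields: 6 counties x 40 years, where counties
# 0-2 share one synthetic weather regime and 3-5 another.
regime = rng.standard_normal((2, 40))
yields = np.vstack([regime[0] + 0.3 * rng.standard_normal((3, 40)),
                    regime[1] + 0.3 * rng.standard_normal((3, 40))])

C = np.corrcoef(yields)                  # county-by-county correlation
A = np.where(C > 0.5, C, 0.0)            # keep only strong positive links
np.fill_diagonal(A, 0.0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-normalize

print(W.round(2))
```

The resulting weighted adjacency encodes the prior that counties with historically correlated yields should exchange information during message passing.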
73. Improved prediction of winter wheat yield at regional scale with limited ground samples by unmanned aerial vehicle and satellite synergy
Core Problem: Traditional data-driven methods for large-scale winter wheat yield prediction face challenges due to limited ground sampling data, hindering model training and accuracy.
Key Innovation: Proposes a novel framework integrating ground, UAV, and satellite data with data-driven algorithms to improve regional-scale yield prediction by augmenting samples and fusing cross-scale information, achieving significantly higher accuracy and transferability compared to using satellite data alone, especially with an optimized UAV-derived upscaled sample strategy.
74. Imprints of terrestrial water fluxes on tropospheric stable water isotopes revealed by satellite observations and complex network analysis
Core Problem: Improving predictions of the hydrological cycle requires a better understanding of the interaction between terrestrial and atmospheric water fluxes.
Key Innovation: Strong positive correlations between satellite-observed atmospheric water vapor isotopes (δD) and surface water balance (ET-P) were identified, and a complex network analysis revealed short- and long-range teleconnections, offering a novel diagnostic tool for climate and hydrological models.
75. Land cover influences microclimate and non-rainfall water inputs in temperate agricultural environment
Core Problem: The individual contribution of non-rainfall water inputs (NRWI) to the terrestrial water cycle is unclear due to a lack of suitable methods for identification and quantification.
Key Innovation: A refined method using weighing lysimeters with leaf-wetness and air visibility devices was developed to quantify and partition NRWI, demonstrating that land cover type significantly influences microclimate and dew formation, providing a more accurate assessment than established methods.
76. Small- and Medium-Wavelength Dynamic Topography and Active Mantle Flow in Eastern China
Core Problem: The geodynamic mechanisms driving significant intracontinental extension and intraplate magmatism in Eastern China since the Cenozoic remain controversial.
Key Innovation: Used calculations of Residual Topography and Dynamic Topography, combined with 3D spherical mantle convection experiments, to reveal a low-west and high-east pattern of active mantle flow in Eastern China, linking positive DT to Quaternary intraplate volcanism and mantle upwelling from Pacific Plate subduction, and negative DT to mantle downwelling and lithospheric delamination.
77. Understanding Multiscale Hydrological Interactions From Spectral Perspective: A Large Sample Investigation Across the United States
Core Problem: A comprehensive understanding of multiscale hydrological interactions and their drivers (climate, landscape factors) is lacking, hindering improved hydrological modeling.
Key Innovation: Quantified multiscale hydrological interactions using spectral analysis and explored their relationship with climate/landscape factors using machine learning, revealing the joint influence of aridity and snow fraction, and providing insights for hydrological modeling.
78. Groundwater Age and Nonpoint Source Pollutant Mixing in Alluvial Aquifer Wells: Comparing the Role of Diffusion, Dispersion, Aquifer Heterogeneity, and Well Screen Length
Core Problem: Interpreting groundwater age tracer and nonpoint source pollutant data from production wells is challenging due to multiple mixing processes (diffusion, dispersion, heterogeneity, in-well mixing).
Key Innovation: Quantified the relative contributions of diffusion, mechanical dispersion, aquifer heterogeneity, and in-well mixing to groundwater age and pollutant mixing in alluvial aquifer wells using a Monte Carlo framework, identifying dominant processes for different well types.
79. COSMOS: Coherent Supergaussian Modeling with Spatial Priors for Sparse-View 3D Splatting
Core Problem: 3D Gaussian Splatting (3DGS) suffers from overfitting and structural degradation when trained with sparse input views, leading to poor generalization on novel views.
Key Innovation: Introduced COSMOS, a method that enhances 3D Gaussian Splatting for sparse-view 3D reconstruction by incorporating 3D structure priors through supergaussian groupings, global/local attention mechanisms, and intra-group positional regularization, leading to more consistent and stable results.
80. Pragmatic Curiosity: A Hybrid Learning-Optimization Paradigm via Active Inference
Core Problem: Many engineering and scientific workflows depend on expensive black-box evaluations, requiring decision-making that simultaneously improves performance and reduces uncertainty, but existing Bayesian optimization and experimental design methods treat goal-seeking and information-seeking separately.
Key Innovation: Proposes 'pragmatic curiosity,' a hybrid learning-optimization paradigm derived from active inference, where actions are selected by minimizing the expected free energy. This single objective couples pragmatic utility with epistemic information gain, consistently outperforming baselines on various real-world hybrid tasks.
81. MetaSSP: Enhancing Semi-supervised Implicit 3D Reconstruction through Meta-adaptive EMA and SDF-aware Pseudo-label Evaluation
Core Problem: Implicit SDF-based methods for single-view 3D reconstruction require large labeled datasets, limiting their scalability and practical application.
Key Innovation: MetaSSP, a novel semi-supervised framework that leverages unlabeled images through gradient-based parameter importance estimation for adaptive EMA updates and an SDF-aware pseudo-label weighting mechanism, achieving state-of-the-art 3D reconstruction performance on the Pix3D benchmark.
82. DroneKey++: A Size Prior-free Method and New Benchmark for Drone 3D Pose Estimation from Sequential Images
Core Problem: Existing methods for drone 3D pose estimation rely on prior information (e.g., physical sizes, 3D meshes) and current datasets are small and limited, hindering generalization.
Key Innovation: Proposes DroneKey++, a prior-free framework for joint keypoint detection, drone classification, and 3D pose estimation using ray-based geometric reasoning, and introduces 6DroneSyn, a large-scale synthetic benchmark.
83. POPL-KF: A Pose-Only Geometric Representation-Based Kalman Filter for Point-Line-Based Visual-Inertial Odometry
Core Problem: Mainstream Visual-Inertial Odometry (VIO) systems, relying on point features, degrade in challenging scenarios, and MSCKF-based systems suffer from linearization errors and delayed measurement updates, limiting localization accuracy.
Key Innovation: POPL-KF, a Kalman filter-based VIO system that employs a novel pose-only geometric representation for both point and line features, mitigating linearization errors and enabling immediate visual measurement updates, while outperforming state-of-the-art filter and optimization-based methods.
84. Principle-Evolvable Scientific Discovery via Uncertainty Minimization
Core Problem: LLM-based scientific agents are inefficient and restricted in discovering novel phenomena due to adherence to fixed initial priors and operating within a static hypothesis space.
Key Innovation: Proposes PiEvo, a principle-evolvable framework that treats scientific discovery as Bayesian optimization over an expanding principle space. It integrates Information-Directed Hypothesis Selection and an anomaly-driven augmentation mechanism, enabling agents to autonomously refine their theoretical worldview, leading to improved solution quality and speedup.
85. BrokenBind: Universal Modality Exploration beyond Dataset Boundaries
Core Problem: Existing multi-modal learning methods are restricted to modalities present within a single dataset, limiting their generalization to unpresented modalities and hindering their viability due to the high cost of acquiring comprehensive multi-modal datasets.
Key Innovation: Introduces BrokenBind, a method that binds modalities from different datasets by simultaneously leveraging multiple datasets with a shared modality. It captures relationships to generate pseudo embeddings for missing modalities, enabling flexible and generalized multi-modal learning beyond dataset boundaries.
86. LAB-Det: Language as a Domain-Invariant Bridge for Training-Free One-Shot Domain Generalization in Object Detection
Core Problem: Foundation object detectors degrade in specialized, data-scarce domains, and traditional fine-tuning is costly and prone to overfitting.
Key Innovation: Introduces LAB-Det, a training-free one-shot domain generalization method that uses linguistic conditioning to adapt a frozen detector to new domains with only one exemplar per class, achieving robust generalization without weight updates.
87. Efficient-LVSM: Faster, Cheaper, and Better Large View Synthesis Model via Decoupled Co-Refinement Attention
Core Problem: Existing transformer-based novel view synthesis models suffer from quadratic complexity and rigid parameter sharing due to full self-attention.
Key Innovation: Proposes Efficient-LVSM, a dual-stream architecture with a decoupled co-refinement mechanism that uses intra-view self-attention for input views and self-then-cross attention for target views, achieving faster training, inference, and better performance in novel view synthesis.
88. Instance-Free Domain Adaptive Object Detection
Core Problem: Most Domain Adaptive Object Detection (DAOD) methods assume sufficient foreground instances in unlabeled target data, making adaptation difficult when only background-only data is available.
Key Innovation: Introduces the Relational and Structural Consistency Network (RSCN), which pioneers an alignment strategy based on background feature prototypes and encourages consistency in the relationship between source foreground and background features, enabling robust adaptation without target instances.
89. Refining the Information Bottleneck via Adversarial Information Separation
Core Problem: Generalizing from limited data is critical, especially in domains like material science where task-relevant features are confounded by measurement noise and experimental artifacts, and standard regularization or existing adversarial adaptation methods fail to precisely separate meaningful features from noise without explicit labels.
Key Innovation: The Adversarial Information Separation Framework (AdverISF), which isolates task-relevant features from noise without requiring explicit supervision by introducing a self-supervised adversarial mechanism to enforce statistical independence between task-relevant features and noise representations, and employs a multi-layer separation architecture to progressively recycle noise information.
90. Which Graph Shift Operator? A Spectral Answer to an Empirical Question
Core Problem: Selecting the optimal Graph Shift Operator (GSO) for Graph Neural Networks (GNNs) remains largely empirical, lacking a principled criterion for selection prior to training.
Key Innovation: Introduces a novel alignment gain metric that quantifies the geometric distortion between input signal and label subspaces, theoretically connecting this alignment to generalization bounds via a spectral proxy for the Lipschitz constant, providing a principled criterion to select the optimal GSO.
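A standard building block for quantifying the geometric relation between two subspaces, in the spirit of the alignment gain above (the metric here is the classic principal-angle construction, not necessarily the paper's exact definition), is the SVD of the product of their orthonormal bases:

```python
import numpy as np

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Perfectly aligned subspaces give cosines of 1; orthogonal ones give 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
aligned = principal_angle_cosines(A, A)
orthogonal = principal_angle_cosines(A, B)
print(aligned, orthogonal)
```

Applied before training, such a measure lets one compare how different GSOs rotate the signal subspace relative to the label subspace without fitting any model.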
91. Target noise: A pre-training based neural network initialization for efficient high resolution learning
Core Problem: Most existing neural network initialization methods rely on random sampling and do not exploit information from the optimization process, leading to slower convergence, especially for high-frequency components in implicit neural representations (INRs) and Deep Image Prior (DIP)-style networks.
Key Innovation: Proposes a simple, yet effective, initialization strategy based on self-supervised pre-training using random noise as the target, which leads to a structured and non-random parameter configuration that significantly improves convergence speed and enables earlier capture of high-frequency components in subsequent tasks.
92. An Integer Linear Programming Approach to Geometrically Consistent Partial-Partial Shape Matching
Core Problem: Establishing accurate correspondences between two 3D shapes, particularly in the challenging partial-partial setting where the overlapping region is unknown, remains a significant challenge in computer vision.
Key Innovation: Introduced the first integer linear programming approach for partial-partial 3D shape matching, leveraging geometric consistency to robustly estimate the overlapping region and compute neighborhood-preserving correspondences, demonstrating high-quality matching results and improved scalability.
93. Revisiting the Generic Transformer: Deconstructing a Strong Baseline for Time Series Foundation Models
Core Problem: The rapid advancement in Time Series Foundation Models makes it difficult to attribute performance improvements to architectural innovations versus data engineering due to heterogeneous training setups.
Key Innovation: Demonstrates that a standard patch Transformer, with a straightforward training protocol, achieves state-of-the-art zero-shot forecasting performance. It identifies key drivers of performance through a comprehensive ablation study and provides a transparent, reproducible baseline.
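The "patching" in a patch Transformer is a simple slicing of the series into window tokens. A minimal sketch (patch length and stride follow common PatchTST-style defaults, an assumption here):

```python
import numpy as np

def patchify(x, patch_len, stride):
    """Slice a 1-D series into (possibly overlapping) patch tokens."""
    n = (len(x) - patch_len) // stride + 1
    idx = np.arange(patch_len)[None, :] + stride * np.arange(n)[:, None]
    return x[idx]                                  # (num_patches, patch_len)

x = np.arange(96, dtype=float)
tokens = patchify(x, patch_len=16, stride=8)       # PatchTST-style patching
print(tokens.shape)
```

Each row then becomes one input token for a standard Transformer encoder, which is essentially the entire architectural content of the baseline the paper deconstructs.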
94. PANC: Prior-Aware Normalized Cut for Object Segmentation
Core Problem: Fully unsupervised segmentation pipelines often produce non-deterministic partitions sensitive to initialization and heuristics, and struggle in domains where dense labels are costly or intra-class differences are subtle.
Key Innovation: Proposes PANC, a weakly supervised spectral segmentation framework that uses a minimal set of annotated visual tokens and prior-coupled anchor nodes to produce stable, controllable, and reproducible object masks. It achieves state-of-the-art performance on various benchmarks, especially in fine-grained and texture-limited domains.
95. Inheritance Between Feedforward and Convolutional Networks via Model Projection
Core Problem: Efficiently transferring knowledge from pre-trained convolutional networks (CNNs) to downstream tasks while reducing the number of trained parameters, and clarifying the relationship between feedforward and convolutional network architectures.
Key Innovation: Introduces model projection, a parameter-efficient transfer learning method for CNNs that freezes per-input-channel filters and learns scalar gates, enabling CNNs to inherit feedforward techniques and achieve strong transfer learning performance with fewer trained parameters.
96. Are Deep Learning Based Hybrid PDE Solvers Reliable? Why Training Paradigms and Update Strategies Matter
Core Problem: Deep learning-based hybrid PDE solvers often stagnate at false fixed points, leading to unreliable convergence and large physical residuals, questioning their utility in scientific computing.
Key Innovation: Demonstrates that the reliability of these hybrid solvers is highly sensitive to training paradigms and update strategies, and introduces physics-aware Anderson acceleration (PA-AA), which minimizes physical residuals to restore reliable convergence in fewer iterations for nonlinear neural operators.
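Anderson acceleration itself is easy to state: combine the last few fixed-point iterates so that the residual is minimized in a least-squares sense. The sketch below is plain depth-1 Anderson mixing on a scalar toy problem, not the paper's physics-aware PA-AA variant; the contraction `g` is an arbitrary stand-in for one hybrid-solver update.

```python
import math

def g(x):                                  # toy contraction: fixed point of x = cos(x)
    return math.cos(x)

def picard(x0, tol=1e-10, max_iter=500):
    """Plain fixed-point iteration x_{k+1} = g(x_k)."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def anderson_m1(x0, tol=1e-10, max_iter=500):
    """Depth-1 Anderson mixing: blend the two latest iterates so that the
    residual f(x) = g(x) - x is minimized in a least-squares sense."""
    x_prev, x = x0, g(x0)
    f_prev = x - x_prev
    for k in range(max_iter):
        gx = g(x)
        f = gx - x
        if abs(f) < tol:
            return gx, k + 2
        df = f - f_prev
        gamma = f * df / (df * df)         # 1-D least-squares mixing weight
        x_prev, f_prev, x = x, f, gx - gamma * (gx - g(x_prev))
    return x, max_iter

x_aa, it_aa = anderson_m1(1.0)
x_fp, it_fp = picard(1.0)
print(f"Anderson: {it_aa} iterations, Picard: {it_fp} iterations")
```

On this toy problem the accelerated iteration reaches the same fixed point in far fewer steps; the paper's contribution is choosing the residual being minimized (the physical one) and showing that this choice governs reliability.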
97. Multi-Order Wavelet Derivative Transform for Deep Time Series Forecasting
Core Problem: Existing frequency representation learning methods like Fourier Transform and standard Wavelet Transform in deep time series forecasting struggle to capture multi-scale, time-sensitive patterns and abrupt regime shifts effectively.
Key Innovation: Introduces the multi-order Wavelet Derivative Transform (WDT), which operates on the derivative of time series to magnify rate-of-change cues and expose abrupt regime shifts, embedding it into a multi-branch framework (WaveTS) to achieve state-of-the-art forecasting accuracy and efficiency on ten benchmark datasets.
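The core transform is compact: differentiate the series, then take a wavelet transform of the derivative so that rate-of-change cues dominate the coefficients. This is a minimal sketch assuming a one-level Haar wavelet and a synthetic step series; the paper's WaveTS framework and its multi-branch design are not reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])        # pad to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_derivative_transform(x, order=1):
    """Haar transform of the order-th difference of the series, so abrupt
    rate-of-change events dominate the detail coefficients."""
    return haar_dwt(np.diff(x, n=order))

# A slowly trending series with an abrupt regime shift at t = 50.
t = np.arange(100)
x = np.where(t < 50, 0.01 * t, 0.01 * t + 2.0)

_, d_plain = haar_dwt(x)
_, d_deriv = wavelet_derivative_transform(x, order=1)
print(np.max(np.abs(d_plain)), np.max(np.abs(d_deriv)))
```

In this example the jump falls between Haar pairs, so the plain detail coefficients barely register it, while the derivative-domain details localize it sharply, which is the magnification effect the summary describes.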
98. STFlow: Data-Coupled Flow Matching for Geometric Trajectory Simulation
Core Problem: Simulating trajectories of dynamical N-body systems is challenging due to high sensitivity to perturbations, bifurcations, and multi-scale temporal/spatial correlations, making probabilistic simulation difficult.
Key Innovation: Introduction of STFlow, a generative model based on graph neural networks and hierarchical convolutions, which uses data-dependent couplings within the Flow Matching framework to denoise from conditioned random-walks, simplifying the learning task and improving simulation efficiency and accuracy.
99. Are Time-Indexed Foundation Models the Future of Time Series Imputation?
Core Problem: Missing values in time series data are a common problem, and existing imputation methods often require retraining or fine-tuning for new datasets, limiting their general applicability.
Key Innovation: The first large-scale empirical study demonstrating that time-indexed foundation models (TabPFN-TS and MoTM) are powerful and practical for general-purpose, zero-shot time series imputation, capable of integrating covariates without fine-tuning.
100. CompEvent: Complex-valued Event-RGB Fusion for Low-light Video Enhancement and Deblurring
Core Problem: Low-light video deblurring is challenging due to dim lighting and long exposures, and existing event-RGB fusion methods often use staged strategies, limiting their effectiveness against combined degradations.
Key Innovation: CompEvent, a complex-valued neural network framework that enables holistic full-process fusion of event data and RGB frames using complex-valued convolutions and a complex space-frequency learning module, significantly outperforming SOTA methods in low-light video enhancement and deblurring.
101. Always Keep Your Promises: A Model-Agnostic Attribution Algorithm for Neural Networks
Core Problem: Existing Layer-wise Relevance Propagation (LRP) implementations are module-level, requiring architecture-specific rules and model modifications, which limits their generality and sustainability as neural network architectures continuously evolve.
Key Innovation: Introduces DynamicLRP, a model-agnostic LRP framework operating at the tensor operation level within computation graphs, utilizing a novel 'Promise System' for deferred activation resolution to achieve true architecture agnosticity and theoretical guarantees without model modification.
102. MRD: Using Physically Based Differentiable Rendering to Probe Vision Models for 3D Scene Understanding
Core Problem: It remains difficult to understand and explain the representations and decisions of vision models, particularly their implicit understanding of underlying 3D scene properties, despite being trained on 2D inputs.
Key Innovation: Introduces MRD (metamers rendered differentiably), an approach that uses physically based differentiable rendering to probe vision models' implicit understanding of 3D scene properties by finding 3D scene parameters that are physically different but produce the same model activation (model metamers).
103. FLAME: Flow Enhanced Legendre Memory Models for General Time Series Forecasting
Core Problem: Existing time series forecasting models may lack efficiency, robustness, or strong generalization capabilities for both deterministic and probabilistic forecasting.
Key Innovation: FLAME, a lightweight Time Series Foundation Model, utilizes Legendre Memory variants and a normalizing-flow-based forecasting head to achieve efficient, robust, and state-of-the-art zero-shot performance in both deterministic and probabilistic time series forecasting.
104. Preserving Spectral Structure and Statistics in Diffusion Models
Core Problem: Standard diffusion models are computationally intensive for image generation because their forward process reduces data to unstructured white noise, forcing the backward process to denoise from scratch.
Key Innovation: PreSS (Preserving Spectral Structure and Statistics) introduces a novel forward and backward process in spectral space, converging to an informative Gaussian prior that preserves spectral structure, leading to significant reductions in computational complexity and improved image generation quality and diversity.
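The contrast with white noise can be illustrated in a few lines: instead of sampling a flat-spectrum Gaussian, sample a Gaussian whose power spectrum decays with frequency, so coarse structure survives in the prior. This is an illustrative 1-D sketch only; the `1/f^alpha` profile is a generic assumption, not PreSS's specific spectral prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def structured_gaussian(n, alpha=1.0):
    """Gaussian sample whose power spectrum falls off as 1/f^alpha instead of
    the flat spectrum of white noise -- a toy 'informative' prior that keeps
    coarse (low-frequency) structure for the backward process to start from."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2.0)
    spec = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    x = np.fft.irfft(spec, n)
    return x / x.std()

x = structured_gaussian(4096)
power = np.abs(np.fft.rfft(x)) ** 2
half = power.size // 2
low, high = power[1:half].mean(), power[half:].mean()
print(f"low-frequency / high-frequency mean power: {low / high:.1f}")
```

A backward process that starts from such a sample already agrees with natural-image statistics at coarse scales, which is the intuition behind the reported reduction in computational effort.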
105. DRMOT: A Dataset and Framework for RGBD Referring Multi-Object Tracking
Core Problem: Existing Referring Multi-Object Tracking (RMOT) models rely solely on 2D RGB data, making it challenging to accurately detect and associate targets characterized by complex spatial semantics and to maintain reliable identities under severe occlusion due to the absence of explicit 3D spatial information.
Key Innovation: Proposes DRMOT, a novel task for RGBD Referring Multi-Object Tracking; creates DRSet, a tailored RGBD dataset with depth-related language descriptions; and develops DRTrack, an MLLM-guided depth-referring tracking framework that fuses RGB, Depth, and Language modalities for robust 3D-aware target grounding and trajectory association.
106. Predicting the fatigue life of asphalt concrete using neural networks
Core Problem: Traditional methods for determining the fatigue life of asphalt concrete are resource-intensive and time-consuming, hindering efficient durability assessment.
Key Innovation: Employs artificial neural networks with a mean square logarithmic error loss function to accurately predict asphalt concrete fatigue life based on strain level, binder content, and air-void content, demonstrating the potential of ANNs for complex material modeling.
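The mean squared logarithmic error loss mentioned here is worth spelling out, since it is what makes the approach suit fatigue-life targets spanning orders of magnitude. A minimal sketch of the loss alone (the paper's network architecture and features are not reproduced):

```python
import numpy as np

def msle(y_true, y_pred):
    """Mean squared logarithmic error: penalizes relative (ratio) error rather
    than absolute error, which suits fatigue-life targets spanning several
    orders of magnitude."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# A 2x over-prediction is penalized almost identically at 10^3 and 10^6
# load cycles, whereas plain MSE would differ by a factor of ~10^6.
print(round(msle([1e3], [2e3]), 3), round(msle([1e6], [2e6]), 3))
```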
107. Scalable In-Context Q-Learning
Core Problem: Existing in-context reinforcement learning (ICRL) approaches face challenges in learning effectively from suboptimal trajectories and achieving precise in-context inference due to complex dynamics and temporal correlations.
Key Innovation: S-ICQL (Scalable In-Context Q-Learning), an innovative framework that leverages dynamic programming and world modeling with a prompt-based multi-head transformer architecture to steer ICRL towards efficient reward maximization and task generalization, especially when learning from suboptimal data.
108. FuSeFL: Fully Secure and Scalable Federated Learning
Core Problem: Existing secure Federated Learning (FL) approaches suffer from high computational and memory overheads and often overlook the confidentiality of the global model, limiting their practicality in privacy-sensitive domains.
Key Innovation: FuSeFL, a fully secure and scalable FL scheme that decentralizes training across client pairs using lightweight Multi-Party Computation (MPC), eliminating server bottlenecks and preserving full confidentiality of data, model, and updates, while achieving significant speedup and lower memory usage.
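The "lightweight MPC" building block underlying such schemes is additive secret sharing: each value is split into random shares so that no single party learns anything, yet sums of shares reconstruct sums of values. The sketch below shows only this generic primitive with hypothetical names; FuSeFL's actual protocol is considerably more involved.

```python
import secrets

P = 2 ** 61 - 1          # prime modulus; all arithmetic is mod P

def share(x):
    """Split an integer into two additive shares; each share alone is a
    uniformly random value and reveals nothing about x."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(a, b):
    return (a + b) % P

# Two clients secret-share their (integer-encoded) model updates; each party
# sums only the shares it holds, so just the aggregate is ever reconstructed.
u1, u2 = 42, 58
a1, b1 = share(u1)
a2, b2 = share(u2)
agg = reconstruct((a1 + a2) % P, (b1 + b2) % P)
print(agg)   # prints 100: the sum of updates, with individual updates hidden
```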
109. Reservoir Predictive Path Integral Control for Unknown Nonlinear Dynamics
Core Problem: The challenge of achieving fast online identification and robust control of unknown nonlinear dynamical systems using neural networks.
Key Innovation: RPPI (Reservoir Predictive Path Integral), which integrates echo-state networks (ESNs) and model predictive path integral (MPPI) control for fast learning and direct exploitation of nonlinearities, and URPPI (Uncertainty-aware RPPI) for robust stochastic control by accounting for identification errors, demonstrating improved control performance.
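The "fast learning" half of this pairing comes from the echo-state network recipe: a fixed random recurrent reservoir plus a linear readout fitted in closed form. The sketch below shows only that identification stage on a toy signal (the MPPI control half is omitted); reservoir size, leak rate, and scalings are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (echo state property)

def run_reservoir(u_seq, leak=0.3):
    """Leaky-integrator ESN state update; only the linear readout is trained."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Fast identification: one-step-ahead prediction of a toy signal via a
# ridge-regression readout, fitted in closed form (no gradient descent).
u = np.sin(0.2 * np.arange(400))
states = run_reservoir(u[:-1])
washout = 50                               # discard the initial transient
S, y = states[washout:], u[1:][washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
rmse = float(np.sqrt(np.mean((S @ W_out - y) ** 2)))
print(f"one-step training RMSE: {rmse:.4f}")
```

Because only `W_out` is trained, identification amounts to one linear solve, which is what makes the online setting tractable.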
110. Estimating Semantic Alphabet Size for LLM Uncertainty Quantification
Core Problem: Underestimation of "true" semantic entropy by canonical discrete semantic entropy (DSE) in LLM uncertainty quantification, leading to less accurate estimation from few samples.
Key Innovation: Proposes a modified semantic alphabet size estimator that adjusts DSE for sample coverage, yielding more accurate semantic entropy estimation, and demonstrates that two semantic alphabet size estimators (including the proposed one) effectively flag incorrect LLM responses.
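The bias being corrected is concrete: with few samples, the plug-in entropy over observed semantic clusters misses unseen meanings. As a generic illustration (not the paper's estimator), a Chao1-style richness estimate from singleton and doubleton counts shows how coverage information enters; in practice the cluster labels would come from semantic-equivalence clustering of sampled answers.

```python
import math
from collections import Counter

def discrete_semantic_entropy(labels):
    """Plug-in (maximum-likelihood) entropy over semantic clusters of sampled
    answers; biased low when few samples cover many possible meanings."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def chao1_alphabet_size(labels):
    """Chao1-style lower bound on the true number of semantic clusters, based
    on how many clusters were seen once (f1) or twice (f2)."""
    counts = list(Counter(labels).values())
    f1 = counts.count(1)
    f2 = counts.count(2)
    return len(counts) + f1 * (f1 - 1) / (2 * (f2 + 1))

# 7 sampled answers falling into 4 observed meaning-clusters; the two
# singletons suggest additional unseen meanings beyond the 4 observed.
labels = ["A", "A", "A", "B", "B", "C", "D"]
print(round(discrete_semantic_entropy(labels), 3),
      round(chao1_alphabet_size(labels), 1))
```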
111. Simulating the Visual World with Artificial Intelligence: A Roadmap
Core Problem: The need for video generation models to evolve beyond visually appealing clips to become implicit world models that simulate physical dynamics, agent-environment interactions, and task planning for coherent visual reasoning and long-term temporal consistency.
Key Innovation: Provides a systematic overview of the evolution of video generation towards implicit world models, conceptualizing them as a combination of a world model (encoding structured knowledge) and a video renderer; traces the progression through four generations and discusses applications in robotics, autonomous driving, and interactive gaming, while outlining open challenges for next-generation world models.
112. Deep learning methods for inverse problems using connections between proximal operators and Hamilton-Jacobi equations
Core Problem: Inverse problems are often ill-posed and require regularization or incorporation of prior information, and existing deep learning methods for learning priors can be complex or require inverting the prior after training.
Key Innovation: Novel deep learning architectures that leverage connections between proximal operators and Hamilton-Jacobi partial differential equations (HJ PDEs) to directly learn the prior for inverse problems, without needing to invert the prior after training, demonstrating efficiency in high dimensions.
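The proximal-operator/HJ connection can be made concrete with the simplest prior, the L1 norm: its proximal operator is soft-thresholding, and its Moreau envelope is the Hopf-Lax solution of an HJ equation, with the prox recoverable from the envelope's spatial gradient. This worked example verifies that identity numerically; it illustrates the mathematical connection only, not the paper's learned architectures.

```python
import numpy as np

def prox_l1(x, t):
    """prox_{t|.|_1}(x) = argmin_u t*||u||_1 + 0.5*||u - x||^2 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def moreau_env_l1(x, t):
    """Moreau envelope min_u ||u||_1 + ||u - x||^2 / (2t). As a function of
    (x, t) it is the Hopf-Lax solution of the HJ equation
    S_t + 0.5*||grad S||^2 = 0 with initial data S(x, 0) = ||x||_1."""
    u = prox_l1(x, t)
    return float(np.sum(np.abs(u)) + np.sum((u - x) ** 2) / (2.0 * t))

# Identity linking the two: prox_{t f}(x) = x - t * grad_x env(x), i.e. the
# prior's prox can be read off the spatial gradient of an HJ solution.
x, t, h = np.array([1.5, -0.2, 0.7]), 0.5, 1e-6
grad = np.array([(moreau_env_l1(x + h * e, t) - moreau_env_l1(x - h * e, t)) / (2 * h)
                 for e in np.eye(3)])
print(np.allclose(prox_l1(x, t), x - t * grad, atol=1e-4))  # True
```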
113. Maneuvering motion prediction algorithm for AUV based on the HAM–BiTL combined network model
Core Problem: Effectively decoupling and predicting the complex, coupled motion data of Autonomous Underwater Vehicles (AUVs) to improve the safety and performance of maneuvering experiments, especially when vehicle mathematical models are uncertain.
Key Innovation: Development of HAM-BiTL, a novel multistate prediction model integrating BiTCN, BiLSTM, and a hybrid attention mechanism (HAM), which demonstrates outstanding prediction performance for AUV maneuvering motion, even under ocean current disturbance, by efficiently extracting spatial and temporal features.
114. Performance evaluation of near-field localization for artificial lateral line based on theoretical analysis and orthogonal test
Core Problem: Evaluating and optimizing the near-field localization performance of sensor arrays for miniaturized underwater vehicles, particularly for artificial lateral line systems, to overcome challenges faced by conventional acoustic and optical methods in complex environments.
Key Innovation: A quantitative evaluation and optimization framework integrating the artificial lateral line system, Cramer–Rao Lower Bound (CRLB) theory, and Multi-Island Genetic Algorithm (MIGA) to optimize sensor array configuration, revealing that increasing the interval between body sensors can compensate for reduced sensor numbers while maintaining localization accuracy.
115. AUV path planning based on crested porcupine optimizer and improved fuzzy DWA with collision risk assessment
Core Problem: Ensuring safe and intelligent path planning for Autonomous Underwater Vehicles (AUVs) in unstructured underwater environments with unknown static and dynamic obstacles, while adhering to kinematic constraints.
Key Innovation: A hybrid path planning algorithm (DFDWA) combining crested porcupine optimization (CPO) for global planning with an improved 3D dynamic window approach (DWA) that incorporates a 3D distance field, collision risk assessment (DCPA), and fuzzy logic for adaptive parameter tuning, demonstrating effective obstacle avoidance and reduced detours.
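The DCPA risk term mentioned here is standard closest-point-of-approach geometry and is easy to sketch: from the relative position and velocity of AUV and obstacle, compute the time to closest approach and the miss distance at that time. A minimal 3-D sketch (the fuzzy tuning and CPO global planner are not reproduced; the numbers are illustrative):

```python
import numpy as np

def cpa(p_auv, v_auv, p_obs, v_obs):
    """Closest point of approach between AUV and a moving obstacle.
    Returns (t_cpa, d_cpa): time to CPA and the distance at CPA."""
    p = np.asarray(p_obs, float) - np.asarray(p_auv, float)   # relative position
    v = np.asarray(v_obs, float) - np.asarray(v_auv, float)   # relative velocity
    vv = float(v @ v)
    if vv < 1e-12:                       # no relative motion
        return 0.0, float(np.linalg.norm(p))
    t = max(0.0, -float(p @ v) / vv)     # CPA in the past -> evaluate now
    return t, float(np.linalg.norm(p + t * v))

# Head-on approach: obstacle 100 m ahead, closing at 3 m/s, offset 2 m laterally.
t_cpa, d_cpa = cpa([0, 0, 0], [2, 0, 0], [100, 2, 0], [-1, 0, 0])
print(f"t_cpa = {t_cpa:.1f} s, DCPA = {d_cpa:.1f} m")
```

A small DCPA combined with a short time to CPA flags a high-risk candidate velocity, which is the quantity the improved DWA's risk assessment scores.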
116. Finer-Resolution Long-Term Mapping of Plant Functional Types at 30-m Resolution and Corresponding Leaf Area Index for Earth System Modeling
Core Problem: Earth system models require long-term, high-resolution land surface data (Plant Functional Type (PFT) and PFT-specific Leaf Area Index (LAI)) to quantify the impact of land use and land cover change (LULCC) on climate, but such datasets are scarce and current derivation methods have uncertainties.
Key Innovation: Developed a global 30m PFT map (PFT30) for 1985–2020 and a monthly 500m PFT LAI dataset for the same period. This was achieved by integrating multiple high-resolution remote sensing products and using a remote-sensing-derived phenology scheme, resulting in finer representation of land surface characteristics and better distinction of short vegetation types compared to empirical approaches.
117. Annual carbon emissions from land-use change in China from 1000 to 2019
Core Problem: Estimates of China's historical carbon emissions induced by land-use change vary widely, and previous long-term estimates may have significantly underestimated the cumulative emissions, limiting accurate assessment of past and future terrestrial ecosystem carbon budgets.
Key Innovation: Quantified China's annual carbon budget from land-use change (1000–2019) using a bookkeeping method, driven by a millennial provincial-level land-use change dataset, comprehensive soil/vegetation carbon density datasets (from over 10,000 field samples), and updated disturbance-response curves. This approach revealed cumulative emissions of 19.61 Pg C, significantly higher than previous estimates, and identified critical turning points in carbon source/sink dynamics.
118. Super-sniffer aeroplane finds oil fields’ hidden emissions
Core Problem: Methane emissions from oil and gas producing areas are significantly underreported.
Key Innovation: Development and application of an airborne sensor ('super-sniffer aeroplane') capable of accurately detecting and quantifying methane emissions, revealing them to be up to five times higher than reported.
119. Atmospheric H2 variability over the past 1,100 years
Core Problem: Predicting the atmospheric response to anthropogenic H2 perturbations and understanding H2 biogeochemistry over long timescales, given the limited modern instrumental record.
Key Innovation: An ice core record of atmospheric H2 spanning the past millennium, revealing a significant rise in H2 from pre-industrial to modern eras and indicating sensitivity of H2 levels to climate change.
120. Discovery Learning predicts battery cycle life from minimal experiments
Core Problem: The high time and energy costs of evaluating the lifetime of new battery designs, and the inability of existing forecasting methods to make reliable predictions before prototyping.
Key Innovation: Discovery Learning, a scientific machine learning approach integrating active learning, physics-guided learning, and zero-shot learning, which predicts battery cycle life from minimal experiments, significantly reducing prototyping requirements and saving time/energy.
121. Spatial variability of wave signatures and profile morphotypes of beaches in estuaries and bays across different tidal regimes
Core Problem: Understanding the morphodynamic processes governing changes in beach profile shapes in estuaries and bays under different tidal regimes and wave conditions.
Key Innovation: Demonstrates how estuarine geomorphology, fetch, tidal range, and exposure to offshore swell waves collectively drive the morphodynamics of beaches in estuaries and bays (BEBs), and establishes a relationship between wave signatures (swell-dominated vs. sheltered) and distinct beach profile morphotypes (Convex vs. Concave) and sediment characteristics.
122. Harnessing the integrated statistical machine learning for traffic crash injury-severity modeling
Core Problem: Modeling traffic crash injury severity is complex due to inherent uncertainty, heterogeneity, and the inability of traditional statistical models to capture interactions and structural dependencies, or for machine learning methods to capture spatial and temporal dynamics effectively.
Key Innovation: The Latent Gaussian Process with Tree-Boosting Model (LGPBoost), an integrated statistical machine learning framework that accounts for spatial, temporal, and grouped dependencies while capturing nonlinear feature–outcome relationships, providing a rigorous benchmark for reliability-oriented crash severity modeling.
123. Multi-dimensional sequence embedding and improved Informer for prediction of industrial alarm events
Core Problem: Predicting industrial alarm events for early warning is challenging due to real-time fluctuations in alarm rates caused by varying operating conditions and abnormal states in continuous industrial processes, hindering satisfactory prediction performance.
Key Innovation: A new alarm event prediction method that adapts to variable alarm rates using a multi-dimensional sequence embedding (based on alarm tags and time intervals) and an improved Informer model, enabling precise and early alarm event prediction under both alarm flood and non-flood periods.
124. Testing the accuracy and transferability of remotely sensed biomass models across heterogeneous grasslands
Core Problem: Remotely sensed grassland biomass models show varying performance and transferability across heterogeneous sites with different management and ecological conditions, making scalable, site-agnostic inference challenging due to domain shift.
Key Innovation: Compares empirical, physically-based, and hybrid models for grassland biomass estimation from Sentinel-2 data, finding physically-based models show highest transferability, but no single model consistently outperforms others for domain generalization. Highlights the importance of transferability in performance assessment for scalable monitoring.
125. Scattering-feature-driven change matrix for crop growth monitoring based on the co-polar complex coherence
Core Problem: Isolating changes related to crop growth from external factors (e.g., precipitation, irrigation) that modify soil and vegetation moisture and consequently affect SAR backscattered power, making accurate crop growth monitoring challenging.
Key Innovation: Proposes a new method for extracting scattering features from the co-polar complex coherence based on distances on the complex plane, and defines a novel change matrix to characterize the magnitude, direction, and type of change between acquisition dates, leading to improved phenological clustering less affected by environmental fluctuations.
126. Predicting earthworm biogeography and climate-driven shifts on China's Loess Plateau
Core Problem: Poor understanding of earthworm distribution and underlying mechanisms in arid and semiarid regions, specifically on China's Loess Plateau, and their potential changes under future climate scenarios.
Key Innovation: Conducted a systematic field survey and employed a random forest model to map earthworm distribution on the Loess Plateau, quantifying their abundance and biomass, and predicting significant increases under future climate change, highlighting their widespread presence in ecologically fragile areas.
127. Transformer-based surrogate models for predicting flow velocities in downstream approach channels of multi-line ship locks
Core Problem: The need for accurate and efficient prediction of flow velocities in downstream approach channels (DACs) of multi-line ship locks to ensure navigation safety, without relying on historical velocity measurements which are often inaccessible.
Key Innovation: Developed Transformer-based surrogate models (Informer and Flowformer) that accurately predict both transverse and longitudinal flow velocities in DACs using only boundary conditions (upstream discharge and downstream water levels), demonstrating high R2 values and engineering applicability for navigation safety assessment.
128. Influence of variable-speed traffic loads on dynamic response and long-term performance of layered pavements on saturated subgrade
Core Problem: Previous studies on pavement systems under aircraft loads typically neglect the critical influence of variable-speed traffic (acceleration/deceleration), leading to underestimation of dynamic responses and premature pavement failure, especially for pavements on soft soil foundations.
Key Innovation: A semi-analytical Green's function framework is developed to explicitly incorporate variable-speed loads, integrated with a dynamic shakedown theorem. This reveals that braking and acceleration significantly amplify dynamic shear stress (by over 30%), contributing to premature pavement failure, and that neglecting these effects overestimates the shakedown limit, compromising long-term service performance.