Autonomous Driving & AI Research

Autonomous Driving Research Overview

This research focuses on vision-based perception for autonomous driving, with a particular emphasis on visibility degradation and occlusion-aware understanding in complex traffic environments. The goal is to enhance the reliability of perception systems under challenging conditions such as visual obstruction, limited field of view, dynamic occlusions, and adverse weather or lighting.

A central challenge addressed in this work is reasoning beyond direct visibility. Real-world driving scenarios frequently involve pedestrians, cyclists, or vehicles that are partially or fully occluded by other objects, infrastructure, or environmental factors. To tackle this, the research investigates occlusion-aware perception models that infer hidden agents and scene structure by leveraging temporal continuity, multi-view geometry, and contextual cues.
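As a minimal sketch of how temporal continuity can keep an occluded agent "alive" in a tracker, the toy example below (hypothetical, not the actual model used in this research) coasts a constant-velocity track through frames where the detector returns nothing, instead of dropping the agent the moment it disappears behind an obstacle. The fixed blending gain stands in for a full Kalman update.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float          # position along the lane (m) -- illustrative 1-D state
    v: float          # estimated velocity (m/s)
    missed: int = 0   # consecutive frames with no detection

    def predict(self, dt: float) -> float:
        """Propagate the constant-velocity motion model one step."""
        self.x += self.v * dt
        return self.x

    def update(self, z: Optional[float], dt: float, gain: float = 0.5) -> float:
        """Predict, then correct with a detection if one is available.

        z is None while the agent is occluded; the track keeps coasting
        on its motion model rather than being deleted immediately.
        """
        pred = self.predict(dt)
        if z is None:
            self.missed += 1
            return pred
        self.missed = 0
        # Blend prediction and measurement (fixed-gain stand-in for a Kalman update).
        self.x = pred + gain * (z - pred)
        return self.x

# A pedestrian walking at ~1.5 m/s vanishes behind a parked truck for two frames:
track = Track(x=0.0, v=1.5)
positions = [track.update(z, dt=1.0) for z in [1.4, 3.1, None, None, 6.2]]
```

During the two occluded frames the estimate keeps advancing at the learned velocity, so when the pedestrian reappears the measurement can be re-associated with the same track rather than spawning a new one.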

Technically, the research explores multi-modal and multi-view fusion frameworks, integrating camera-based vision with depth cues, motion dynamics, and spatial priors. Temporal modeling is employed to maintain consistent scene understanding over time, enabling more robust detection, tracking, and prediction even when visual information is incomplete or uncertain.
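One common way to realize multi-view fusion is to project per-camera detections onto a shared bird's-eye-view (BEV) grid on the ego vehicle's ground plane. The sketch below is an illustrative assumption, not the framework described here: each camera's 2-D detections are rotated and translated into the ego frame using a known pose, then fused per cell by maximum confidence.

```python
import numpy as np

def to_ego(points, yaw, t):
    """Rotate camera-frame (x, y) points by yaw and translate into the ego frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + t

def fuse_bev(camera_dets, grid_size=20, cell=1.0):
    """camera_dets: list of (points Nx2, confidences N, yaw, translation) tuples."""
    bev = np.zeros((grid_size, grid_size))
    origin = grid_size * cell / 2.0  # ego vehicle sits at the grid centre
    for pts, conf, yaw, t in camera_dets:
        ego = to_ego(np.asarray(pts, float), yaw, np.asarray(t, float))
        ix = ((ego[:, 0] + origin) / cell).astype(int)
        iy = ((ego[:, 1] + origin) / cell).astype(int)
        ok = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
        # Max-fusion: keep the most confident view of each occupied cell.
        np.maximum.at(bev, (ix[ok], iy[ok]), np.asarray(conf, float)[ok])
    return bev

# Two cameras observe the same pedestrian ~3 m ahead from different viewpoints;
# poses and confidences are made-up numbers for illustration.
front = ([[3.0, 0.0]], [0.6], 0.0, [0.0, 0.0])
side = ([[0.0, -3.0]], [0.9], np.pi / 2, [0.0, 0.0])  # rotated 90 deg into ego frame
bev = fuse_bev([front, side])
```

Both detections land in the same BEV cell after being mapped into the common frame, so the fused grid records one agent with the higher of the two confidences. Max-fusion is only one design choice; probabilistic occupancy updates or learned fusion heads are typical alternatives.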

The ultimate objective is to develop robust and interpretable autonomous driving perception systems that can operate safely under visibility-limited conditions, contributing to improved decision-making, risk assessment, and overall driving safety.