Autoregressive End-to-End Planning with Time-Invariant Spatial Alignment and Multi-Objective Policy Refinement

Jianbo Zhao1,2,*, Taiyu Ban1,2,*, Xiangjie Li2,*, Xingtai Gui2, Hangning Zhou2, Lei Liu2, Hongwei Zhao2, Bin Li1
1University of Science and Technology of China, 2Mach Drive

Abstract

The inherent sequential modeling capabilities of autoregressive models make them a formidable baseline for end-to-end planning in autonomous driving. Nevertheless, their performance is constrained by a spatio-temporal misalignment: the planner must condition future actions on past sensory data, which creates an inconsistent worldview and limits the upper bound of an otherwise powerful approach. To address this, we propose a Time-Invariant Spatial Alignment (TISA) module that learns to project the initial environmental features into a consistent ego-centric frame for each future time step, effectively correcting the agent's worldview without explicit future scene prediction. In addition, we employ a kinematic action prediction head (i.e., acceleration and yaw rate) to ensure physically feasible trajectories. Finally, we introduce a multi-objective post-training stage based on Direct Preference Optimization (DPO) to move beyond pure imitation. Our approach provides targeted feedback on specific driving behaviors, offering a more fine-grained learning signal than the single, overall objective used in standard DPO. Our model achieves a state-of-the-art PDMS of 89.8 among autoregressive models on the NAVSIM dataset.
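
The exact kinematic formulation is not given on this page; as an illustration only, the sketch below (in Python, assuming a simple unicycle model, an illustrative step size dt, and a hypothetical rollout_unicycle helper) shows how per-step acceleration and yaw-rate actions can be integrated into a physically feasible pose sequence.

    # Hedged sketch, not the paper's implementation: integrate predicted
    # (acceleration, yaw rate) actions with a unicycle model so that every
    # waypoint follows from physically consistent motion. The step size dt,
    # state layout, and non-negative-speed clamp are illustrative assumptions.
    import math

    def rollout_unicycle(x, y, heading, speed, actions, dt=0.5):
        """Turn per-step (acceleration, yaw_rate) actions into (x, y, heading) poses."""
        poses = []
        for accel, yaw_rate in actions:
            speed = max(speed + accel * dt, 0.0)   # clamp: no reversing in this sketch
            heading += yaw_rate * dt
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            poses.append((x, y, heading))
        return poses

    # Example: a 4 s horizon at 0.5 s steps, gentle acceleration with a slight left turn.
    trajectory = rollout_unicycle(0.0, 0.0, 0.0, 5.0, [(0.5, 0.05)] * 8)

Because each pose is obtained by integrating the predicted accelerations and yaw rates, the rolled-out trajectory respects basic vehicle kinematics by construction, which is the motivation for predicting kinematic actions rather than raw waypoints.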

BibTeX

@article{zhao2025autoregressiveendtoendplanningtimeinvariant,
      title={Autoregressive End-to-End Planning with Time-Invariant Spatial Alignment and Multi-Objective Policy Refinement},
      author={Jianbo Zhao and Taiyu Ban and Xiangjie Li and Xingtai Gui and Hangning Zhou and Lei Liu and Hongwei Zhao and Bin Li},
      journal={arXiv preprint arXiv:2509.20938},
      year={2025}
}

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.