I am currently a Tech Lead Manager at Goertek Alpha Labs, where I lead research and development in XR (AR/VR/MR) and AI, with a particular focus on Spatial Content Generation and Spatial Intelligence technologies. My research interests lie in deep learning, with applications in 3D vision, robotic vision, and spatial AI.
Previously, I was a Staff Research Engineer/Scientist at InnoPeak Technology, Inc. (also known as OPPO US Research Center), working on advanced XR-related R&D projects. Prior to that, I served as a Postdoctoral Researcher at the University of Adelaide, collaborating with Prof. Ian Reid and Dr. Hamid Rezatofighi at Monash University. During this time, I was affiliated with both the Australian Institute for Machine Learning (AIML) and the Vision & Learning for Autonomous AI Lab (VL4AI).
I received my Ph.D. from the University of Adelaide, where I was part of the Australian Centre for Robotic Vision and was advised by Prof. Ian Reid. Earlier in my academic journey, I earned a B.Eng. in Electronic Engineering (First Class Honours) from The Chinese University of Hong Kong (CUHK), advised by Prof. Xiaogang Wang. I was also a visiting researcher in the Unmanned Systems Research Group at The National University of Singapore, working under Prof. Ben M. Chen.
NEWS
* 08/2025: One preprint is online: **RLGS: Reinforcement Learning-Based Adaptive Hyperparameter Tuning for Gaussian Splatting** [[arxiv]](https://www.arxiv.org/abs/2508.04078)
* 06/2025: One preprint is online: **Understanding while Exploring: Semantics-driven Active Mapping** [[arxiv]](https://arxiv.org/abs/2506.00225)
* 03/2025: One preprint is online: **Semantic Consistent Language Gaussian Splatting for Point-Level Open-vocabulary Querying** [[arxiv]](https://arxiv.org/abs/2503.21767)
* 03/2025: One paper is accepted to LNCS 2025: **Dynamic Voxel Grid Optimization for High-fidelity RGB-D Supervised Surface Reconstruction** [[arxiv]](https://arxiv.org/pdf/2304.06178)
* 02/2025: One paper is accepted to CVPR 2025: **ActiveGAMER: Active GAussian Mapping through Efficient Rendering** [[project page]](https://oppo-us-research.github.io/ActiveGAMER-website/) [[arxiv]](https://arxiv.org/pdf/2501.06897) [[code]](https://github.com/oppo-us-research/ActiveGAMER) [[video]](https://www.youtube.com/watch?v=2sfVMuZq92Y)
* 01/2025: Start a new role at [Goertek Alpha Labs](https://www.goertek.com/en/).
* 01/2025: One paper is accepted to ICRA 2025: **PlanarNeRF: Online Learning of Planar Primitives with Neural Radiance Fields** [[arxiv]](https://arxiv.org/pdf/2401.00871)
* 03/2024: One paper is accepted to CVPR 2024: **NARUTO: Neural Active Reconstruction from Uncertain Target Observations** [[project page]](https://oppo-us-research.github.io/NARUTO-website/) [[paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_NARUTO_Neural_Active_Reconstruction_from_Uncertain_Target_Observations_CVPR_2024_paper.pdf) [[arxiv]](https://arxiv.org/abs/2402.18771) [[code]](https://github.com/oppo-us-research/NARUTO) [[video]](https://www.youtube.com/watch?v=SsWdB-_5XM0)
* 12/2023: ~~__We are hiring [2024 research interns (US)](https://apply.workable.com/innopeaktech/j/BC7EE44D37/) and 2024 research interns (China) to work on research projects related to 3D vision! Feel free to drop me an email if you're interested.__~~
* 10/2023: One paper is accepted to T-PAMI: **SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes** [[paper]](https://ieeexplore.ieee.org/document/10273446) [[arxiv]](https://arxiv.org/abs/2211.03660) [[code]](https://github.com/JiawangBian/sc_depth_pl)
* 02/2023: ~~We are hiring 2023 summer interns to work on research projects related to 3D vision! Feel free to drop me an email if you're interested.~~
* 12/2022: It is my pleasure to be recognised as an "Outstanding Reviewer" for ACCV 2022 for providing helpful, high-quality reviews.
* 11/2022: Four preprints are online: **SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes** [[arxiv]](https://arxiv.org/abs/2211.03660), **ActiveRMAP: Radiance Field for Active Mapping And Planning** [[arxiv]](https://arxiv.org/abs/2211.12656), **Predicting Topological Maps for Visual Navigation in Unexplored Environments** [[arxiv]](https://arxiv.org/abs/2211.12649), and **What Images are More Memorable to Machines?** [[arxiv]](https://arxiv.org/abs/2211.07625)
* 10/2022: Move to California, USA, and start my new position at InnoPeak Technology, Inc. (a.k.a. OPPO US Research).
* 12/2021: One paper is accepted to 3DV: **NVSS: High-quality Novel View Selfie Synthesis** [[paper]](https://ieeexplore.ieee.org/document/9665938)
* 12/2021: One paper is accepted to TPAMI: **Auto-Rectify Network for Unsupervised Indoor Depth Estimation** [[paper]](https://ieeexplore.ieee.org/document/9655489) [[arxiv]](https://arxiv.org/abs/2006.02708) [[code]](https://github.com/JiawangBian/sc_depth_pl)
* 05/2021: One paper is accepted to IJCV: **Unsupervised Scale-consistent Depth Learning from Video** [[paper]](https://link.springer.com/article/10.1007/s11263-021-01484-6) [[arxiv]](https://arxiv.org/abs/2105.11610) [[code]](https://github.com/JiawangBian/sc_depth_pl)
* 03/2021: The extended report for our ICRA 2020 paper (DF-VO) is online: **DF-VO: What Should Be Learnt for Visual Odometry?** [[arxiv]](https://arxiv.org/abs/2103.00933) [[code]](https://github.com/Huangying-Zhan/DF-VO)
* 08/2020: Start my Postdoc position at The University of Adelaide.
* 06/2020: One preprint is online: **Auto-Rectify Network for Unsupervised Indoor Depth Estimation** [[arxiv]](https://arxiv.org/abs/2006.02708)
* 01/2020: One paper accepted to ICRA 2020: **Visual Odometry Revisited: What Should Be Learnt?** [[paper]](https://ieeexplore.ieee.org/abstract/document/9197374) [[arxiv]](https://arxiv.org/abs/1909.09803) [[code]](https://github.com/Huangying-Zhan/DF-VO) [[video]](https://www.youtube.com/watch?v=Nl8mFU4SJKY)
* 10/2019: One paper accepted to the ICCV 2019 Workshop on Deep Learning for Visual SLAM: **Camera Relocalization by Exploiting Multi-View Constraints** [[paper]](https://openaccess.thecvf.com/content_ICCVW_2019/html/DL4VSLAM/Cai_Camera_Relocalization_by_Exploiting_Multi-View_Constraints_for_Scene_Coordinates_Regression_ICCVW_2019_paper.html)
* 09/2019: One paper accepted to NeurIPS 2019: **Scale-consistent Depth and Ego-motion Learning** [[paper]](https://papers.neurips.cc/paper_files/paper/2019/hash/6364d3f0f495b6ab9dcf8d3b5c6e0b01-Abstract.html) [[arxiv]](https://arxiv.org/abs/1908.10553) [[code]](https://github.com/JiawangBian/sc_depth_pl)
* 05/2019: Attend ICRA 2019 @ Montreal, Canada
* 01/2019: One paper accepted to ICRA 2019: **Self-supervised Depth and Surface Normal Learning** [[paper]](https://ieeexplore.ieee.org/abstract/document/8793984) [[arxiv]](https://arxiv.org/abs/1903.00112)
* 07/2018: Join the HoloLens team @ Microsoft Redmond as a Research Intern
* 07/2018: One paper accepted to ECCV 2018: **Efficient Dense Point Cloud Object Reconstruction** [[paper]](https://openaccess.thecvf.com/content_ECCV_2018/html/Kejie_Li_Efficient_Dense_Point_ECCV_2018_paper.html)
* 06/2018: Attend CVPR 2018 @ Salt Lake City, USA
* 02/2018: One paper accepted to CVPR 2018: **Unsupervised Monocular Depth and Visual Odometry Learning** [[paper]](https://openaccess.thecvf.com/content_cvpr_2018/html/Zhan_Unsupervised_Learning_of_CVPR_2018_paper.html)
* 06/2017: One paper accepted to IROS 2017: **Deep Learning for 2D Scan Matching and Loop Closure** [[paper]](https://ieeexplore.ieee.org/abstract/document/8202236)
* 02/2017: Start my Ph.D. at The University of Adelaide