Job Responsibilities:
• Develop high-precision maps, positioning, and perception systems using multi-sensor data from LiDAR, cameras, GPS, IMU, and radar.
• Design and implement multi-sensor fusion algorithms for mapping, localization, and object tracking.
• Develop and optimize real-time multi-sensor data processing and filtering techniques (e.g., EKF, UKF, Particle Filters).
• Implement and improve object detection, classification, and tracking algorithms using machine learning and deep learning techniques.
• Work on large-scale data processing, storage, and 3D visualization.
• Perform sensor calibration, data association, and matching to improve system accuracy.
• Conduct debugging, testing, and performance optimization of perception and mapping algorithms.
• Perform code reviews, participate in pair programming, and prepare technical documentation.
Job Requirements:
• Master’s degree or above in Computer Science, Robotics, Electrical Engineering, or related fields.
• Strong foundation in mathematics, algorithms, and statistical methods.
• Proficient in C++ and familiar with Python/MATLAB for algorithm development.
• Experience with Linux systems; familiarity with ROS is preferred.
• Hands-on experience in one or more of the following areas:
  • SLAM, VIO, and 3D computer vision
  • Multi-sensor fusion and real-time data processing
  • State estimation techniques (EKF, UKF, PF, Bayesian filtering, data association algorithms)