- Powered by NVIDIA Jetson Nano and based on the Robot Operating System (ROS)
- Supports Lidar and a depth camera for mapping and navigation
- Optional 7-inch touchscreen for parameter monitoring and debugging
- Optional 6-microphone array for voice interaction
- Three versions available: Starter Kit (Lidar), Standard Kit (Lidar and depth camera), and Advanced Kit (Lidar, depth camera, LCD screen, and 6-microphone array)
The Hiwonder JetAuto ROS Robot Car w/ Jetson Nano, Lidar, Depth Camera (Standard Kit) is equipped with an NVIDIA Jetson Nano, high-performance encoder motors, a rotatable pan-tilt, Lidar, a 3D depth camera and a 7-inch screen, which together open up a wide range of functionality. It is capable of robot motion control, mapping and navigation, path planning, tracking and obstacle avoidance, autonomous driving, human feature recognition, somatosensory interaction and voice interaction.
This combination of hardware makes JetAuto an ideal platform for learning and verifying robotic SLAM functions, and a complete solution for ROS development. It comes with extensive ROS learning materials and tutorials to help you get started quickly.
Jetson Nano Control System:
NVIDIA Jetson Nano can run mainstream deep learning frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet, providing powerful computing for demanding AI projects. Powered by Jetson Nano, JetAuto can implement image recognition, object detection and positioning, pose estimation, semantic segmentation, intelligent analysis and other powerful functions.
2D Lidar Mapping and Navigation: JetAuto is equipped with a high-performance Lidar that supports mapping with diverse algorithms including Gmapping, Hector, Karto and Cartographer. It is also capable of path planning, fixed-point navigation and obstacle avoidance during navigation.
Single-Point and Multi-Point Navigation: JetAuto uses its Lidar to detect the surroundings in real time, enabling both single-point and multi-point navigation.
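The multi-point navigation described above is, at its core, a queue of goals visited in order. A minimal pure-Python sketch of that goal-queue logic (no ROS dependencies; the function name and step sizes are illustrative assumptions — on the real robot each waypoint would be sent to the navigation stack, which also plans around obstacles):

```python
import math

def drive_to_waypoints(start, waypoints, step=0.05, tol=0.05):
    """Visit each (x, y) waypoint in order by stepping straight toward it.

    A stand-in for sending sequential navigation goals: the real robot
    plans around obstacles, but the goal-queue logic is the same.
    Returns the traversed path as a list of (x, y) points.
    """
    x, y = start
    path = [(x, y)]
    for gx, gy in waypoints:
        while math.hypot(gx - x, gy - y) > tol:
            d = math.hypot(gx - x, gy - y)
            # move at most `step` meters toward the current goal
            x += (gx - x) / d * min(step, d)
            y += (gy - y) / d * min(step, d)
            path.append((x, y))
    return path
```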
TEB Path Planning and Obstacle Avoidance: It supports TEB (Timed Elastic Band) path planning and monitors obstacles in real time during navigation, allowing it to replan its route to avoid them and continue moving.
RRT Autonomous Exploration Mapping: Using the RRT (Rapidly-exploring Random Tree) algorithm, JetAuto can complete exploration mapping, save the map, and drive back to the starting point autonomously, without manual control.
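To make the RRT idea concrete, here is a toy tree grower on a 10 x 10 m plane. This is a hedged sketch of the general algorithm, not JetAuto's actual exploration code; the bounds, step size and `is_free` collision check are illustrative assumptions:

```python
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=2000, seed=0):
    """Grow a rapidly-exploring random tree from start toward goal.

    Repeatedly sample a random point, extend the nearest tree node a
    fixed step toward it, and stop once a node lands within goal_tol
    of the goal. Returns the path start..goal, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        # index of the tree node nearest to the sample
        i = min(range(len(nodes)),
                key=lambda k: (nodes[k][0] - sample[0]) ** 2
                            + (nodes[k][1] - sample[1]) ** 2)
        nx, ny = nodes[i]
        dx, dy = sample[0] - nx, sample[1] - ny
        d = (dx * dx + dy * dy) ** 0.5
        if d == 0:
            continue
        new = (nx + dx / d * step, ny + dy / d * step)
        if not is_free(new):          # skip nodes inside obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if (new[0] - goal[0]) ** 2 + (new[1] - goal[1]) ** 2 <= goal_tol ** 2:
            path, j = [], len(nodes) - 1   # backtrack to the root
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```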
Lidar Tracking: By scanning a moving object in front of it, the Lidar enables the robot to track a target.
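A simple way to realize this kind of Lidar following is to pick the nearest return in the scan and steer toward it. A minimal sketch assuming a LaserScan-style array of per-beam ranges (the function name, gains and follow distance are illustrative assumptions, not JetAuto's actual parameters):

```python
import math

def track_nearest(ranges, angle_min=-math.pi, angle_increment=None,
                  max_range=6.0, follow_dist=0.5, k=1.0):
    """Pick the nearest return in a laser scan and drive toward it.

    ranges: one distance (meters) per beam, as in a sensor_msgs/LaserScan;
    invalid beams use float('inf'). Returns (linear, angular) velocities
    from a simple proportional law.
    """
    if angle_increment is None:
        angle_increment = 2 * math.pi / len(ranges)
    valid = [(r, i) for i, r in enumerate(ranges) if r < max_range]
    if not valid:
        return 0.0, 0.0                  # nothing in range: stop
    r, i = min(valid)                    # nearest return wins
    bearing = angle_min + i * angle_increment
    linear = k * (r - follow_dist)       # close the gap to follow_dist
    angular = k * bearing                # turn toward the target
    return linear, angular
```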
Lidar Guarding: JetAuto guards its surroundings and sounds an alarm when an intruder is detected.
RTAB-Map VSLAM 3D Mapping and Navigation: The depth camera supports 3D mapping in two ways, pure RTAB-Map vision or a fusion of vision and Lidar, which allows JetAuto to navigate and avoid obstacles in the 3D map, as well as re-localize globally.
ORB-SLAM2 + ORB-SLAM3: ORB-SLAM is an open-source SLAM framework for monocular, stereo and RGB-D cameras that can compute the camera trajectory in real time and reconstruct the 3D surroundings. In RGB-D mode, the real dimensions of objects can also be acquired.
Depth Map Data and Point Cloud: Through the corresponding API, JetAuto can obtain the depth map, color image and point cloud from the camera.
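The point cloud mentioned above is obtained from the depth map by back-projecting each pixel through the standard pinhole camera model, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. A minimal sketch (pure Python; the intrinsics fx, fy, cx, cy would come from the camera's calibration, and the real driver publishes this as a ready-made point cloud):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters, row-major 2-D list) into 3-D
    camera-frame points using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v][u]
    Zero or negative depth marks invalid pixels and is skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```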
Deep Learning and Autonomous Driving:
With JetAuto, you can design an autonomous driving scenario to put ROS into practice and better understand the core functions of autonomous driving, including road sign detection, lane keeping, automatic parking, and turning decision-making.
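Lane keeping in such a scenario commonly reduces to steering toward the midpoint of the detected lane edges. A minimal proportional-control sketch (the `kp` gain, pixel inputs and sign convention are illustrative assumptions, not JetAuto's actual parameters):

```python
def lane_keep_steering(left_x, right_x, image_width, kp=0.004):
    """Proportional lane keeping from detected lane-edge pixel columns.

    left_x, right_x: x-coordinates (pixels) of the left and right lane
    edges in the camera image. Returns a steering command proportional
    to how far the lane center sits from the image center (assumed
    convention: positive = steer left).
    """
    lane_center = (left_x + right_x) / 2.0
    error = image_width / 2.0 - lane_center   # pixels off-center
    return kp * error
```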
MediaPipe Development and Upgraded AI Interaction:
Based on the MediaPipe framework, JetAuto can carry out fingertip trajectory recognition, human body recognition, face detection, 3D object detection, 3D face detection and more.
AI Deep Learning Framework: JetAuto utilizes the YOLO network and a deep-learning model library to recognize objects.
KCF Target Tracking: Using the KCF (Kernelized Correlation Filter) algorithm, the robot can track a selected target.
Color/Tag Recognition and Tracking: JetAuto can recognize and track a designated color, and can recognize multiple AprilTags and their coordinates. It also supports augmented reality (AR): patterns selected in the app can be overlaid on an AprilTag.
Far-Field Microphone Array:
This 6-microphone array excels at far-field sound source localization, voice recognition and voice interaction. Compared with an ordinary microphone module, it enables more advanced functions such as sound source localization, voice interaction, and voice navigation.
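The core computation behind sound source localization on such arrays is estimating the time difference of arrival (TDOA) between microphone pairs, typically via cross-correlation; the delay, together with the speed of sound and the microphone spacing, then yields the arrival angle. A brute-force pure-Python sketch of the delay estimate (illustrative only; real arrays use frequency-domain methods such as GCC-PHAT):

```python
def tdoa_delay(sig_a, sig_b, max_lag):
    """Estimate the delay (in samples) of sig_b relative to sig_a by
    brute-force cross-correlation over lags in [-max_lag, max_lag].

    A positive result means the sound reached microphone A first.
    """
    best_lag, best_score = 0, float('-inf')
    n = len(sig_a)
    for lag in range(-max_lag, max_lag + 1):
        # correlation of sig_a with sig_b shifted by `lag` samples
        score = sum(sig_a[i] * sig_b[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```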
Using multi-machine communication, JetAuto can achieve multi-vehicle navigation, path planning and smart obstacle avoidance. This includes intelligent formation, where a batch of JetAuto cars maintains a formation, such as a horizontal line, vertical line or triangle, while moving, and group control, where a group of JetAuto cars can be controlled by a single wireless handle to perform actions uniformly and simultaneously.
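One simple way to express the formations mentioned above is as fixed (x, y) offsets that each follower holds relative to the leader. A pure-Python sketch (the spacing, axis conventions and triangle layout are illustrative assumptions, not JetAuto's actual formation geometry):

```python
def formation_offsets(shape, n, spacing=0.5):
    """Offsets (meters) from the leader for n followers.

    shape: "horizontal" (side by side), "vertical" (single column
    behind the leader), or "triangle" (one follower pair per row
    behind the leader, fanning out to the sides).
    """
    if shape == "horizontal":
        return [((i + 1) * spacing, 0.0) for i in range(n)]
    if shape == "vertical":
        return [(0.0, -(i + 1) * spacing) for i in range(n)]
    if shape == "triangle":
        offs = []
        for i in range(n):
            row = i // 2 + 1               # row index behind the leader
            side = -1 if i % 2 == 0 else 1  # alternate left/right
            offs.append((side * row * spacing, -row * spacing))
        return offs
    raise ValueError("unknown formation: " + shape)
```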
ROS Robot Operating System:
ROS is an open-source meta-operating system for robots. It provides basic services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message passing between processes, and package management. It also offers the tools and library functions needed to obtain, compile, write, and run code across computers. It aims to provide code-reuse support for robotics research and development.
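The message passing mentioned above follows a publish/subscribe pattern: nodes publish typed messages on named topics, and any subscriber to a topic receives them. A tiny in-process stand-in sketching that pattern (this is not the rospy API; real ROS routes messages across processes and machines via a master and network transports):

```python
class TopicBus:
    """A minimal in-process illustration of ROS-style topics:
    callbacks subscribe to named topics, and a publish on a topic
    fans the message out to every subscriber of that topic."""

    def __init__(self):
        self._subs = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self._subs.get(topic, []):
            cb(msg)
```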
JetAuto employs the ROS framework and supports Gazebo simulation. Gazebo offers a fresh approach to controlling JetAuto and verifying algorithms in a simulated environment, which reduces experimental requirements and improves efficiency. This includes simulated control, where the kinematics algorithm can be verified in simulation to speed up algorithm iteration and reduce experiment cost, and data visualization, where RViz visualizes mapping and navigation results to facilitate debugging and improving algorithms.
Various Control Methods:
JetAuto can be controlled via the WonderAi app, the Map Nav app (Android only), or a wireless handle.
1x JetAuto (with Lidar)
1x Astra Pro Depth Camera
1x Camera Bracket
1x PS2 Wireless Handle
1x 12.6V 2A Charger
1x Card Reader
3x Tag (6.5 x 6.5)
3x Block (
1x Screw Bag
1) JetAuto Robot Car:
Material: Full metal hard aluminum alloy bracket (anodized)
Battery: 11.1V 6000mAh Lipo battery
Continuous runtime: 60 minutes
Hardware: ROS controller and ROS expansion board
Operating system: Ubuntu 18.04 LTS + ROS Melodic
Software: iOS/Android APP
Storage: 32GB TF card
Servo: HTS-20H serial bus servo
Control method: Phone/handle control
Package size (advanced kit): 335*320*225mm
Package weight (advanced kit): Approximately 4.5kg
2) Hall Encoder Geared Motor:
Rated voltage: 12V
Rated power consumption: 2.4W
Motor type: Permanent magnet brush
Stall current: 3A
Stall torque: 15kgf.cm
Rated current: 0.2A
Rated torque: 10kgf.cm
Reduction ratio: 1:90
Rotation speed (before reduction):
Rotation speed (after reduction): 110 ± 10 rpm
Output shaft: D type eccentric shaft of 6 mm diameter
Encoder type: Hall encoder
Interface: PH2.0 6P
Power supply: 3.3-5V
3) Lipo Battery:
Plug: DC 5.5*2.5 female / SM-2P male