JetAcker ROS Education Robot Car with Ackerman Structure Support SLAM Mapping Navigation Learning (Ultimate Kit/Raspberry Pi 5 8GB/EA1 G4 Lidar)

Hiwonder SKU: RM-HIWO-07G
Manufacturer #: JetAcker Ultimate Kit/Raspberry Pi 5 8GB/G4 Lidar

Price: Sale price $1,209.99

Shipping calculated at checkout

Stock: In stock (100 units), ready to be shipped


Description

  • Built on an Ackermann chassis, ideal for learning and validating steering-based robots
  • Powered by Raspberry Pi 5, JetAcker supports ROS, deep learning, MediaPipe, YOLO, and TensorRT for advanced 3D vision tasks
  • Equipped with a 3D depth camera and Lidar, JetAcker enables remote control, 2D mapping, TEB path planning, and dynamic obstacle avoidance
  • Features include an aluminum alloy frame, CNC steering, 100mm wheels, Hall encoder motors, and a 6000mAh battery
  • Control JetAcker via the WonderAi app (iOS/Android), wireless handle, ROS, or keyboard

Product Description:

JetAcker is powered by the Raspberry Pi 5 and supports the Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration. This combination delivers a diverse range of 3D machine vision applications, including autonomous driving, somatosensory interaction, and KCF target tracking. Moreover, with JetAcker, you can learn and validate various robotic SLAM algorithms.

1) Ackermann Steering Structure

The rear wheels of the chassis always remain parallel. When turning, the inner front wheel turns through a greater angle than the outer front wheel; steering by means of this angle difference is known as Ackermann steering.
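
The inner/outer angle relationship follows directly from the turning geometry. A minimal Python sketch, assuming a hypothetical wheelbase and track width (the real chassis dimensions are not stated in this listing):

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Ideal Ackermann front-wheel angles (radians) for a turn about a
    point on the rear-axle line, `turn_radius` from the chassis centerline.
    The inner wheel must turn more sharply than the outer wheel."""
    inner = math.atan(wheelbase / (turn_radius - track / 2))
    outer = math.atan(wheelbase / (turn_radius + track / 2))
    return inner, outer

# Hypothetical dimensions: 0.21 m wheelbase, 0.17 m track, 1 m turn radius
inner, outer = ackermann_angles(0.21, 0.17, 1.0)
```

For this geometry the inner wheel steers roughly two degrees more than the outer wheel, which is exactly the angle difference the section describes.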

2) Equipped with Lidar & Supports SLAM Mapping Navigation

JetAcker is equipped with lidar, enabling SLAM mapping and navigation with support for path planning, fixed-point navigation, and dynamic obstacle avoidance.

3) CNC Steering System

Full-metal CNC high-precision components, combined with a high-load intelligent servo, provide exceptional steering force.

4) High-density Solid Wheel

The high-density solid wheels offer strong payload capacity and deformation resistance, with a reduced friction coefficient and minimized mechanical wear for an extended lifespan.

5) Pendulum Suspension Structure

The high-precision pendulum suspension balances the load, adapting well to uneven surfaces while shielding the motors from impact.

6) 240° High-performance Pan-tilt

It is driven by a serial bus servo with over-temperature protection. Its rotation range of up to 240° extends JetAcker's field of exploration.

1. Dual-Controller Design for Efficient Collaboration

1) Host Controller

- ROS Controller (JETSON, Raspberry Pi, etc.)

- Simultaneous Localization and Mapping (SLAM)

- Human-Machine Voice Interaction

- Advanced AI Algorithms

- Deep Neural Networks

- AI Visual Image Processing

2) Sub Controller

- STM32 Robot Controller

- High-Frequency PID Control

- Motor Closed-Loop Control

- Servo Control and Feedback

- IMU Data Acquisition

- Power Status Monitoring
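
The high-frequency PID and motor closed-loop items above can be sketched as a discrete PID loop driving a toy motor model. This is illustrative only; the gains, timestep, and plant below are assumptions, not Hiwonder's STM32 firmware values:

```python
class PID:
    """Minimal discrete PID controller of the kind used for
    closed-loop motor speed control (hypothetical gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order motor model: speed relaxes toward the applied command
pid = PID(kp=0.8, ki=2.0, kd=0.0, dt=0.01)
speed = 0.0
for _ in range(500):
    u = pid.update(target=1.0, measured=speed)
    speed += (u - speed) * 0.05   # crude plant response per control tick
```

The integral term is what removes steady-state error: with a purely proportional controller the toy motor would settle short of the 1.0 target.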

2. Provide ROS1 & ROS2 System Image

ROS2, the upgraded version of ROS1, retains all of its functions while supporting more operating systems and compilation environments. It offers real-time control, enhanced modular development, and testing, delivering more powerful features and broader applications than ROS1.

3. Lidar Mapping Navigation

JetAcker is equipped with lidar and supports path planning, fixed-point navigation, obstacle avoidance during navigation, and mapping with multiple algorithms, as well as lidar guarding and lidar tracking functions.

1) Lidar Positioning

By fusing Lidar data with a self-developed high-precision encoder and IMU accelerometer data, JetAcker can achieve accurate mapping and navigation.

2) Various 2D Lidar Mapping Methods

JetAcker utilizes various mapping algorithms such as Gmapping, Hector, Karto, and Cartographer. In addition, it supports path planning, fixed-point navigation, and obstacle avoidance during navigation.

3) Multi-point Navigation, TEB Path Planning

JetAcker employs Lidar to detect its surroundings and supports fixed-point navigation, multi-point continuous navigation, and other robot applications.

4) RRT Autonomous Exploration Mapping

Adopting the RRT algorithm, JetAcker can complete exploration mapping, save the map, and drive back to the starting point autonomously, with no need for manual control.
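
The core RRT idea can be sketched in a few lines: grow a tree of reachable points by repeatedly sampling and stepping toward the sample. This toy version assumes an empty 10 m x 10 m workspace with no obstacle checks, unlike the robot's real exploration stack, which layers frontier detection and collision checking on top:

```python
import math
import random

def rrt(start, goal, step=0.5, iters=300, goal_bias=0.2, seed=1):
    """Bare-bones RRT in an empty 2D workspace (illustrative sketch)."""
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random point, biased toward the goal
        if random.random() < goal_bias:
            sample = goal
        else:
            sample = (random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node one step toward the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:       # close enough: trace path back
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (8.0, 8.0))
```

Because extension is always from the nearest node, the tree naturally spreads into unexplored space, which is what makes RRT useful for autonomous exploration mapping.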

5) Dynamic Obstacle Avoidance

JetAcker can detect obstacles in real time during navigation and re-plan the path to avoid them.

6) Lidar Tracking

By scanning a moving object in front of it, the Lidar makes the robot capable of target tracking.

4. 3D Vision AI Upgraded Interaction

JetAcker is equipped with a 3D depth camera, supports 3D vision mapping and navigation, and can obtain 3D point cloud images. Through deep learning, it can realize more AI vision interactive gameplay.

1) 3D Depth Camera

Equipped with an Astra Pro Plus depth camera, JetAcker can effectively perceive environmental changes, allowing for intelligent AI interaction with humans.

2) RTAB-VSLAM 3D Vision Mapping and Navigation

Using the RTAB SLAM algorithm, JetAcker creates a 3D colored map, enabling navigation and obstacle avoidance in a 3D environment. Furthermore, it supports global localization within the map.

3) ORBSLAM2+ORBSLAM3

ORB-SLAM is an open-source SLAM framework for monocular, stereo, and RGB-D cameras, able to compute the camera trajectory in real time and reconstruct the 3D surroundings. In RGB-D mode, the real dimensions of objects can also be acquired.

4) Depth Map Data, Point Cloud

Through the corresponding API, JetAcker can obtain the camera's depth map, color image, and point cloud.
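
Converting a depth map into a point cloud is a back-projection through the pinhole camera model. A pure-Python sketch with hypothetical intrinsics (the real values come from the camera's calibration API):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters, nested lists) into 3D
    camera-frame points using the pinhole model. fx/fy are focal
    lengths in pixels; (cx, cy) is the principal point."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:                      # skip invalid (zero) depth pixels
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points

# Tiny 2x2 "depth image" with one invalid pixel, hypothetical intrinsics
pts = depth_to_points([[1.0, 0.0], [2.0, 1.0]], fx=500, fy=500, cx=0.5, cy=0.5)
```

Real pipelines vectorize this with NumPy over the full frame, but the per-pixel math is identical.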

5. Deep Learning, Autonomous Driving

Through deep learning, JetAcker can implement autonomous driving functions, which is a perfect platform to learn core features of autonomous driving.

1) Road Sign Detection

By training models from the deep learning model library, JetAcker can realize autonomous driving with AI vision.

2) Lane Keeping

JetAcker is capable of recognizing the lanes on both sides and maintaining a safe distance from them.

3) Automatic Parking

Combined with a deep learning algorithm, JetAcker can recognize the parking sign and steer itself into the slot automatically.

4) Turning Decision Making

Based on the lanes, road signs, and traffic lights, JetAcker estimates the traffic conditions and decides whether to turn.

6. MediaPipe Development, Upgraded AI Interaction

JetAcker utilizes MediaPipe development framework to accomplish various functions, such as human body recognition, fingertip recognition, face detection, and 3D detection.

1) Fingertip Trajectory Recognition

2) Human Body Recognition

3) 3D Detection

4) 3D Face Detection

7. AI Vision Interaction

By incorporating artificial intelligence, JetAcker can implement KCF target tracking, line following, color/tag recognition and tracking, YOLO object recognition, and more.

1) KCF Target Tracking:

Relying on KCF filtering algorithm, the robot can track the selected target.

2) Vision Line Following:

JetAcker supports custom color selection, and the robot can identify color lines and follow them.
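The line-following loop reduces to finding the line's centroid in a binarized image row and steering proportionally toward it. A toy sketch of that idea (the robot's own pipeline thresholds in HSV on the full frame; the gain here is a made-up value):

```python
def line_offset(mask_row):
    """Given one binarized image row (1 = pixel matches the selected
    line color), return the line centroid's offset from image center,
    normalized to [-1, 1], or None if the line is lost."""
    cols = [i for i, p in enumerate(mask_row) if p]
    if not cols:
        return None
    center = (len(mask_row) - 1) / 2
    centroid = sum(cols) / len(cols)
    return (centroid - center) / center

def steering_command(offset, gain=0.6):
    # Simple proportional controller: steer toward the line, clipped to [-1, 1]
    return 0.0 if offset is None else max(-1.0, min(1.0, gain * offset))

row = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]   # detected line slightly left of center
```

Here the centroid sits left of the image center, so the command steers left; a lost line yields a neutral command rather than a random one.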

3) Color/ Tag Recognition and Tracking

JetAcker is able to recognize and track a designated color, and can recognize multiple AprilTags and their coordinates at the same time.

4) YOLO Object Recognition

Utilizing the YOLO network algorithm and the deep learning model library, JetAcker can recognize objects.

8. 6CH Far-field Microphone Array

This 6CH far-field microphone array excels at far-field sound source localization, voice recognition, and voice interaction. Compared to an ordinary microphone module, it can implement more advanced functions.

1) Sound Source Localization:

Through the 6-microphone array, high-precision sound source localization with noise reduction is achieved. Combined with lidar distance recognition, JetAcker can be summoned from any location.

2) TTS Voice Broadcast

The text content published by ROS can be directly converted into voice broadcast to facilitate interactive design.

3) Voice Interaction

Speech recognition and TTS voice broadcast are combined to realize voice interaction and support the expansion of iFlytek's online voice conversation function.

4) Voice Navigation

Use voice commands to direct JetAcker to any designated location on the map, similar to the voice-control scenario of a food delivery robot.

9. Interconnected Formation

Through multi-vehicle communication and navigation technology, JetAcker can realize multi-vehicle formation performances and artificial intelligence games.

1) Multi-vehicle Navigation

Relying on multi-machine communication, JetAcker can achieve multi-vehicle navigation, path planning, and smart obstacle avoidance.

2) Intelligent Formation

A group of JetAckers can maintain a formation, including horizontal line, vertical line, and triangle, while moving.
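
Keeping such a formation amounts to each follower tracking a goal pose defined by a fixed offset in the leader's body frame, rotated into the world frame. A small sketch with a hypothetical 0.5 m spacing (the actual formation geometry and spacing are not specified in this listing):

```python
import math

def formation_targets(leader, shape, spacing=0.5):
    """Follower goal positions for a leader pose (x, y, heading in
    radians). Offsets are defined in the leader's body frame
    (dx forward, dy left), then rotated into the world frame."""
    offsets = {
        "vertical": [(-spacing, 0.0), (-2 * spacing, 0.0)],    # column behind leader
        "horizontal": [(0.0, spacing), (0.0, -spacing)],       # abreast of leader
        "triangle": [(-spacing, spacing), (-spacing, -spacing)],
    }[shape]
    x, y, th = leader
    c, s = math.cos(th), math.sin(th)
    return [(x + c * dx - s * dy, y + s * dx + c * dy) for dx, dy in offsets]

# Leader at (1, 2) facing +y; followers form the triangle's rear corners
targets = formation_targets((1.0, 2.0, math.pi / 2), "triangle")
```

Each follower then feeds its target into its own navigation stack, which is how multi-vehicle navigation and formation keeping compose.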

3) Group Control

A group of JetAckers can be controlled by a single wireless handle to perform actions uniformly and simultaneously.

10. ROS Robot Operating System

ROS is an open-source meta operating system for robots. It provides basic services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message passing between processes, and package management. It also offers the tools and library functions needed to obtain, compile, write, and run code across computers, aiming to provide code reuse support for robotics research and development.

11. Gazebo Simulation

JetAcker is built on the Robot Operating System (ROS) and integrates with Gazebo simulation. This enables effortless control of the robot in a simulated environment, facilitating algorithm prevalidation to prevent potential errors. Gazebo provides visual data, allowing you to observe the motion trajectories of each endpoint and center. This visual feedback facilitates algorithm enhancement.

1) Simulation Control

Through robot simulation control, algorithm verification of mapping navigation can be carried out to improve the iteration speed of the algorithm and reduce the cost of trial and error.

2) URDF Model

Provide an accurate URDF model, and observe the mapping navigation effect through the Rviz visualization tool to facilitate debugging and improving algorithms.

12. Various Control Methods

1) Python Programming

2) WonderAi APP

3) Map Nav APP (Android Only)

4) Wireless Handle

Advanced Kit Packing List:

1* JetAcker robot car (includes EA1 G4 Lidar, Raspberry Pi 5 8GB, and microphone array; assembled)

1* 12.6V 2A charger

1* Card reader

1* Wireless handle

1* 3D Depth Camera

1* Screwdriver

1 set* Tags (6.5*6.5cm) & Blocks (3*3cm)

1* Screw bag

1* 7-inch LCD screen

1* Sound card and speaker (installed)

1* Manual

Size: 302*260*256mm

Product weight: 3500g

Material: Full-metal hard aluminum alloy bracket (anodized)

Battery: 11.1V 6000mAh lithium battery

Continuous working life: 60min

Hardware: ROS controller and ROS expansion board

Operating system: Ubuntu 18.04 LTS + ROS Melodic

Software: iOS/ Android app

Communication: USB/ WiFi / Ethernet

Programming language: Python/ C/ C++/ JavaScript

Storage: 32GB TF card

Servo: HTS-20H serial bus servo

Control method: Phone/ Handle control

Package size (advanced kit): 375* 305* 230mm

Package weight (advanced kit): About 5kg
