Let's Make Robots! | RobotShop

Drawing a Bot's Path

I recently got the idea that we could draw the path that a bot would take in an area.

This is just the start of a piece of software and is still quite rough.

The software will give you the ability to:

1) Draw the path that you want the bot to take

2) While drawing, it records the X and Y coordinates into a database

3) Load the data back into the software to play it back.

Currently it allows you to draw a line on the screen and, while drawing, save the X and Y coordinates.
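The record/erase/playback idea above can be sketched in a few lines. This is just an illustrative Python sketch, not the actual .NET software; the `PathRecorder` class name and the SQLite schema are my own assumptions about how the X/Y samples might be stored.

```python
import sqlite3

class PathRecorder:
    """Records (x, y) points while drawing and plays them back in order."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS path (seq INTEGER PRIMARY KEY, x REAL, y REAL)"
        )

    def record(self, x, y):
        # Called on every mouse-move event while drawing.
        self.conn.execute("INSERT INTO path (x, y) VALUES (?, ?)", (x, y))

    def erase(self):
        # Erase all the data and start fresh.
        self.conn.execute("DELETE FROM path")

    def playback(self):
        # Load the points back in the order they were drawn.
        return [(x, y) for x, y in self.conn.execute(
            "SELECT x, y FROM path ORDER BY seq")]
```

Saving several named paths, as planned below, would just mean adding a `name` column and filtering on it.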

You can erase all the data and start fresh.

It will have the ability to save several different paths that you can name for loading back into the software.

It will also have the ability to scale the path to fit the area you are in.
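Scaling the path to fit an area amounts to normalizing the path's bounding box and multiplying by one uniform factor. A minimal sketch, assuming the area is given as width and height in whatever units the bot uses (the function name is mine, not from the actual software):

```python
def scale_path(points, area_width, area_height):
    """Scale a recorded path so its bounding box fits the given area."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span_x = max(xs) - min_x or 1.0   # avoid divide-by-zero for straight lines
    span_y = max(ys) - min_y or 1.0
    # One uniform factor on both axes so the path's shape is not distorted.
    factor = min(area_width / span_x, area_height / span_y)
    return [((x - min_x) * factor, (y - min_y) * factor) for x, y in points]
```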

I have made some changes to the software.

1) Added the ability to change the color that is drawn on the screen.

2) Added the ability to repeat the path

3) Changed from drawing dots on the screen to drawing small ellipses, which are easier to see.

4) Added the ability for it to determine which direction you are moving. (Up/Down & Left/Right)
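Item 4 can be sketched by comparing consecutive samples against a small dead band so pixel jitter doesn't flip the reported direction. This is my own guess at the logic, written in Python rather than the actual .NET code; note that in screen coordinates Y grows downward:

```python
def direction(prev, curr, deadband=2):
    """Classify movement between two samples as Up/Down and Left/Right.

    `deadband` (pixels) filters out tiny jitters; movement smaller than
    it is reported as no change on that axis.
    """
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    horiz = "Right" if dx > deadband else "Left" if dx < -deadband else ""
    vert = "Down" if dy > deadband else "Up" if dy < -deadband else ""
    return horiz, vert
```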

Next is to implement the servo function to control the bot's movements.


Here are some images of the wheeled test base I will be using for this.

Drive Train: 2 MG995 servos converted to continuous rotation, purchased on eBay for $10.25

Tires: Foam rubber tires designed for model planes, purchased from a local hobby shop for $6.00

Chassis: 1/8" PVC plastic, cut, heated, and bent to fit.

Servo Controller: Veyron 24-port servo control board, purchased from DFRobot for $29.00

Power Supply: 3.7 V 2800 mAh flat LiPo battery taken out of an old Android tablet.

Computer: Dell 8" tablet running Windows 8.1, bought on eBay with a cracked screen for under $50.00.

Pivot Wheel: Caster wheel purchased from Harbor Freight for under $3.00


I have implemented the servo portion of the software. When the play button is pressed, the data is loaded and the line is drawn onto the screen, and the servos respond to the line as it is drawn. Next is adding a tolerance for how much variance from a straight line should count before it becomes a left or right turn.
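One common way to add that tolerance is to compare each drawn segment's angle against the bot's current heading and treat anything inside a tolerance band as "straight". A hedged Python sketch (the function and the 15° default are my assumptions, and left/right here follow screen coordinates with Y pointing down):

```python
import math

def classify_segment(p0, p1, heading_deg, tolerance_deg=15):
    """Compare a drawn segment's angle against the bot's heading.

    Returns "forward", "left", or "right". `tolerance_deg` is the band
    within which deviation still counts as a straight line.
    """
    seg_deg = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))
    # Normalize the difference into (-180, 180].
    diff = (seg_deg - heading_deg + 180) % 360 - 180
    if abs(diff) <= tolerance_deg:
        return "forward"
    # With screen Y pointing down, a positive difference is clockwise.
    return "right" if diff > 0 else "left"
```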


So, moving along, I am now working with the ultrasonic sensor attached to an Arduino Nano, which sends a signal to my .NET software. The software is quite crude at this point but is getting the job done.

It detects the distance, and I can adjust the distance threshold using a slider. When the bot is at or closer than that distance, it first stops, then backs up and turns to the right, at which point it checks for any objects in the way. If no object exists, it goes forward again. If an object does exist, it turns left and once again checks for an object; if nothing is in the way, it goes forward.
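The avoidance sequence described above is essentially a small decision routine. A Python sketch of one pass of it, under the assumption that the caller supplies the results of the right and left clearance checks (the function name and action strings are mine, not from the actual software):

```python
def avoid_step(distance_cm, threshold_cm, clear_right, clear_left):
    """One decision pass of the stop/backup/turn avoidance routine.

    Returns the ordered list of actions the bot should perform.
    """
    if distance_cm > threshold_cm:
        return ["forward"]
    # At or inside the threshold: stop, back up, try the right first.
    actions = ["stop", "backup", "turn_right"]
    if clear_right:
        return actions + ["forward"]
    actions.append("turn_left")
    if clear_left:
        return actions + ["forward"]
    return actions + ["stop"]
```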

Here is a screen capture of the software.


Some time back I made a toolpath tracer for my machines, so when you moved the axis it was traced onto a TFT display.


Not exactly what you are talking about, but you may be able to make it really small and portable. Add a transmitter/Bluetooth etc. and you could use it in controller mode, load and store destinations, etc.

Just a thought.

Great idea there in making it portable.

I have a bluetooth module for the Veyron Servo Control board which would be easy to incorporate into this.

Thank you for the idea, Mark.

I would run a bot while it recorded PWM, time, and direction from an RC controller. This could be played back, and the bot would attempt to stay on the same path. It would get off course fast, due to having no encoders on the bot. It would do a "close but no cigar" run, but in the end it was way off course, especially on longer paths. I think if I had encoders and code to keep the motors normalized, it would have done a lot better. A stepper motor bot could do better than plain ole DC motors as well, methinks.

I can see the bot going off course without some way to detect its position.

Not sure what I am going to do there yet I have several ideas just not sure if they will work.

I do have a wheeled platform that uses continuous servos for the drive that I might try it with first before moving on to the walking bots.

Any input on this will be greatly appreciated.

Thank you


Though, they can lead and lag just like normal DC motors.

I have been thinking about this a lot as of late, oddly enough. My current setup is a subsumption-based navigation system. This is for an outside bot with GPS and a compass (well, an Android phone), but I want it to work indoors as well. I do have the outside bot navigating around obstacles and still making it to a target location, though not the same way twice :) It does work most of the time. Inside navigation is a mess, but it's my ultimate goal.

For inside, I am going to set up two bots: one a "path-finder" and the other a "drone-follower". The path-finder will try to find the best path through a room and send that info to the follower after the path is found and traveled by the leader. I am thinking of ways to let the bots know where they are in a room. That may be by IR beacons on each side of the room and/or RFID checkpoints.

The RFID marking I did do last year. I marked one door in a hallway with an RFID tag. The bot followed the wall using ultrasonic sensors on both sides, and when a door was found it tried to get an RFID scan on either side of that door. If the tag was not found, it moved on to the next door, and when the tag was found, it entered the door.

I would like to have the rooms mapped out in some way and have the maps sent to the bot on entering the room. This way the bot can be light on data and only needs to know the room it's in. For mapping I have been thinking of a camera reference-object method along with IR and sonic maps. E.g. the reference object is the end of a bed, the IR/sonic map has an object on the opposite wall (a desk), and there is an opening between the two. That opening is the path to take. That is simplified, I know, but it's a goal of my home nav bot.





This is a tough problem to solve.

One idea to consider is looking at the Robot Operating System (ROS.ORG). It has all the capabilities you are looking for and then some. There is some learning curve, and the documentation could be better. There are many high-level robots which use ROS and many libraries that you can leverage on a robot you are building. It is best supported on Linux and can be programmed in many languages, including C++ and Python. I have played with this and got some of it working, but there is definitely a lot to learn and never enough time to learn it. You can also run it on a Raspberry Pi pretty easily, which really gives you a lot of control and power. There have been a number of projects people have done with it that have been posted on this forum, so those might be worth checking out as well.

Basically, you have to describe your robot in an XML format. Then, every time it moves, you have to send a message to ROS to let it know it has moved and by how much. ROS then has mapping software available that will show the robot's path or whatever you decide you want to see. The reason it would be worthwhile is that there are a lot of other things ROS can do for you. You can tell it where to go and it will go there using the MoveIt package, which picks the best path based on the map it is given.

This also assumes that you know pretty accurately where you are. Encoders are really a necessary part of this. Even then, you are still going to get inefficiencies and inaccuracies that add up over time. You will probably need some kind of centering function so it will know where it definitely is at some point and can reset itself.
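For context on why the encoder errors add up: the usual way to turn encoder counts into a position is dead reckoning for a differential-drive base, where each update integrates small wheel movements, so any per-step error accumulates over the run. A minimal sketch (my own illustration, not from ROS):

```python
import math

def update_pose(x, y, theta, left_m, right_m, track_width_m):
    """Dead-reckoning pose update for a differential-drive base.

    left_m / right_m are the distances each wheel travelled since the
    last update (encoder ticks times metres-per-tick); theta is radians.
    """
    d = (left_m + right_m) / 2.0              # distance of the robot centre
    dtheta = (right_m - left_m) / track_width_m
    # Integrate along the average heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Since every call compounds the previous estimate, a periodic "reset" against a known landmark is what keeps the pose honest, exactly the centering idea above.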

Anyways, something to think about. Have fun no matter what you do!

Thank you much for the insight on ROS, I have heard of it before but never ventured into learning anything about it.

My thought is to use a Nano and an ultrasonic sensor connected to the PC to give it feedback on the distance to an object.
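Assuming an HC-SR04-style ultrasonic sensor (the exact module isn't stated above), the Nano would measure the width of the echo pulse and either it or the PC converts that to a distance. The conversion is simple enough to sketch; the function name is mine:

```python
def echo_to_cm(pulse_us, speed_of_sound_m_s=343.0):
    """Convert an HC-SR04-style echo pulse width (microseconds) to cm.

    The pulse covers the round trip out to the object and back,
    so the time is halved before multiplying by the speed of sound.
    """
    one_way_s = (pulse_us / 1_000_000.0) / 2.0
    return one_way_s * speed_of_sound_m_s * 100.0
```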

I also plan on using a webcam connected to the PC to detect objects based on color, then compute a distance to them and compare that against the ultrasonic readings to reach a happy medium.

For starters I am going to put this on a wheeled bot that I have made for testing. Once I am able to get some reliable results from it, I will move the software to the walking bot.

Thanks much for the helpful advice; I would appreciate any other information you can think of that would help.

Since you are going to use a webcam anyway, how much do you know about OpenCV? I know I've seen an example somewhere where they used the difference in relative position of things to determine how far away objects are. This is based on the idea that when you move through a room by a certain amount, every object that is closer appears to move more than objects that are farther away. Since you know how much you moved since the last picture frame, you can calculate, with some geometry, the distance to objects within the field of view.
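The geometry described above is the same as a stereo pair, with the camera's movement acting as the baseline: distance Z = f·B/d, where f is the focal length in pixels, B the sideways movement, and d the pixel shift (disparity). A small sketch of that formula, as an illustration rather than an OpenCV recipe:

```python
def distance_from_parallax(baseline_m, focal_px, disparity_px):
    """Estimate distance from parallax between two camera frames.

    baseline_m:   how far the camera moved sideways between frames.
    focal_px:     camera focal length expressed in pixels.
    disparity_px: how far the object shifted in the image.
    Nearer objects shift more (bigger disparity), so distance falls
    as disparity grows: Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("object must shift between frames")
    return focal_px * baseline_m / disparity_px
```

In practice the camera has to move roughly perpendicular to the viewing direction for this simple form to hold, and the focal length in pixels comes from a camera calibration step.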

The problem with using the ultrasonic sensor is that you may not be sure exactly which target you are getting a return value from. Also, readings differ based on the material and shape of the object the sensor is actually pointed at. For instance, an ultrasonic sensor pointing at an angle to an object will likely give you an erroneous return value. Some materials are better at absorbing sound, while others give a stronger return. All of these contribute to inaccurate readings to some degree. It probably makes sense to combine the values you're getting with values from another sensor so you get a better feel for exactly what you're seeing. Also, at certain ranges some sensors are pretty accurate while others are not, so you really have to have a sliding scale as to how much you trust a particular sensor and its input. That is kind of how driverless cars work.
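One standard way to implement that "sliding scale of trust" is inverse-variance weighting: each sensor's estimate is weighted by how accurate it is at the current range, so a sensor with a small error variance dominates. A hedged sketch (the function and the variance numbers in the test are illustrative):

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of distance estimates.

    `estimates` is a list of (distance, variance) pairs, one per
    sensor. A sensor that is accurate at the current range gets a
    small variance and therefore more weight.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(d * w for (d, _), w in zip(estimates, weights)) / total
```

The variances themselves could come from each sensor's datasheet accuracy at a given range, or from bench measurements against a tape measure.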

Anyways, maybe some more food for thought. This is something I put quite a bit of time and thought into but never really solved to my satisfaction. I hope you will do much better than I did!

The Intel RealSense depth camera might also be an option. I don't have any practical experience with it yet, but it is documented to support ROS, and there is a robot developer kit available for it.

I would recommend at least waiting for the ZR300 to become available, though, because there are no software updates for the older model anymore.

Mouser mentions some other options, including a 400-series camera and a Euclid development kit, but none of them seem available right now. Overall the offering seems a bit chaotic, and it is not clear where exactly this will go or which model would be the best choice. http://www.mouser.fr/new/Intel/intel-realsense/

But the hardware definitely looks powerful and capable.


Turns out Euclid is going to be an all-in-one system and the easiest to use of them all. It will contain the ZR300 depth camera. The 400 series will not be in Euclid and will require a separate processor/board.

Since most of the processing is done on the camera module itself, within a custom chip, it is not entirely clear why another board is necessary. Probably the processing of the 3D data to detect faces, people, etc. is not done on the camera module.

Found an interesting video: https://www.youtube.com/watch?v=pvXJSn22ujU

And a presentation: http://roscon.ros.org/2016/presentations/ROSCon2016_Intel_RealSense.pdf

Dear Sir,


"4) Added the ability for it to determine which direction you are moving. (Up/Down & Left/Right)

Next is to implement the servo function to control the bots movements"


To implement this, is the servo used for steering?

By the way, can the software be adapted to this kind of steering?


It uses only one servo in front for the steering, while the back wheels are fixed, non-steering wheels. How would you implement it for this kind of steering? Is it possible?