
AVA

Makes Conversation, Answers Questions, Explores

This is Ava.  Ava is a friendly internet-connected home companion robot that mostly likes to talk and learn new things.  To do this, she uses natural language processing, several databases with millions of memories, and hundreds of software agents.  While she has some IQ, in that she can answer many kinds of questions from her memory or using many web services, I plan to focus on improving her EQ (Emotional Intelligence Quotient) in the coming months and years.  This involves emotions, empathy, curiosity, initiative, comprehension, and an extensive DB of personal data and social rules.  Lately she has been learning to create her own web apps and to discover and catalog other devices all over the world.  She is not currently talking to most of those devices directly, but that is just a matter of time at this point.

9/18/17 Update - Ava will be Coming Back to Life

This project gathered dust for a while but is going to slowly come back to life.  The following are the highlights I am working on and planning right now.

  1. Remove all dependence on USB...I'm tired of having issues with it.
  2. Swap out the Arduino Uno in the head for an Arduino Uno WiFi.  This will let the server call the robot, rather than the robot continually polling the server...did a proof of concept on this, and it looks good so far.
  3. Use Bluetooth to communicate from the body Mega ADK to the phone in the face...this seems to work if I don't push too much data.
  4. I am going to stop pushing all the sensor data (massive because of the Pixy, thermal camera, mics, and sonars) to the server.  Instead, the server will call the robot (via the Uno WiFi) only when the data is needed (when someone wants to look at it); see the sketch after this list.  This is another way I am trying to lessen my dependence on USB/BLE and get something I can work with moving forward.
  5. Move the gesturing code from the phone to the microcontroller to eliminate the chatter needed to move all the servos, reducing bandwidth over Bluetooth...this will put more work on the Mega and be harder to maintain.
  6. I am redesigning all the databases from the ground up...moving away from a small number of generic tables to a large number of specific tables with a lot more referential integrity.  This is now feasible because Ava can create the UIs for me to maintain everything, something she couldn't do until this year.  This should allow me to scale better.
  7. Install the brain on Amazon Web Services, so it will always be available...this is going to cost a lot, that's life sometimes.  My AWS account runs a WebFarm of 1-4 servers and 1 DB Server, so this is going to force me to redesign how I cache things and refresh them, as there is no guarantee that each server will have the same version of something cached unless I redesign it.
  8. My current codebase has an ability to run recurring background processes.  I hope to utilize these for some kind of dream state learning, cache cleaning, syncing?, archiving, etc.
  9. I am generalizing the metadata defining the hardware interface so that other people's robots can use the brain API, even if their bot has completely different servos/actuators than mine.
  10. Add more context awareness to the small-talk engine so the bot can act differently in situations that are more formal, less formal, etc., depending on the situation and language, and so that others can create robot-specific small talk.
  11. While the old version had some multi-language abilities, the new one is getting multi-language support designed into every aspect.  I expect to better handle things like the fact that grammar changes when you convert from Chinese to English, for example, something the translation sites do not handle.
  12. I would like to document the brain in pseudo-code as simple as possible, so a Raspberry Pi version could be written that closely matches the core features and could run locally on a bot, with data downloaded from server.  My Pi skills are lacking though, so this will take time.
  13. When all this is done, sadly she will be less capable than before, but I believe she will be in a better position for the future.  Sometimes you have to take a step back to move forward.
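To make item 4 concrete, here is a minimal sketch of the pull model, written against a standard Arduino WiFi library (WiFiNINA-style) rather than the Uno WiFi's exact firmware; the credentials, the one-line request handling, and the readSonarCm() helper are all hypothetical stand-ins:

```cpp
// Pull model sketch: the server connects to the robot only when it wants
// sensor data, instead of the robot pushing everything constantly.
#include <SPI.h>
#include <WiFiNINA.h>

WiFiServer server(80);             // listens for the brain's HTTP calls

long readSonarCm() { return 42; }  // stub for a real sonar read

void setup() {
  WiFi.begin("my-ssid", "my-pass");  // hypothetical credentials
  server.begin();
}

void loop() {
  WiFiClient client = server.available();  // non-blocking: a caller, or not
  if (!client) return;

  while (client.connected() && client.available()) {
    if (client.read() == '\n') break;      // skip the request line; enough here
  }
  // Reply with a tiny JSON snapshot of whatever the server asked about.
  client.println("HTTP/1.1 200 OK");
  client.println("Content-Type: application/json");
  client.println("Connection: close");
  client.println();
  client.print("{\"sonar_cm\":");
  client.print(readSonarCm());
  client.println("}");
  client.stop();
}
```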

2/20/17 Update - Memory Upgrades, Cyber Security Features

I have been making significant upgrades to the server part of her brain, most notably her memory system.  Whereas in the past she used a generic database to store over a hundred different types of memories, she can now incorporate other databases of any structure, form her own memories about the structure of those databases, and use them as-is.  Because of this, I now plan on creating separate, fully normalized databases of her existing memory sets for various portions of her brain...like her verbal features.  She can automatically talk to any new database and figure out how to search, retrieve, update, and delete data and relationships by interrogating the DB's system tables.  She can also learn structural relationships in the data that do not explicitly exist, with the help of a human trainer.
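As a rough illustration of the interrogation step (the real brain is .NET against SQL Server; this C++-flavored sketch just shows the standard INFORMATION_SCHEMA queries involved and a hypothetical memory structure):

```cpp
#include <iostream>
#include <string>

// Every SQL Server database describes its own structure through the standard
// INFORMATION_SCHEMA views, so tables and columns can be discovered with no
// per-database coding...
const std::string kDiscoverColumns =
    "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE "
    "FROM INFORMATION_SCHEMA.COLUMNS "
    "ORDER BY TABLE_NAME, ORDINAL_POSITION";

// ...and the declared foreign keys give the explicit relationships.
const std::string kDiscoverRelationships =
    "SELECT CONSTRAINT_NAME, UNIQUE_CONSTRAINT_NAME "
    "FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS";

// Hypothetical shape of the memory formed from each result row; a human
// trainer can then add relationships the schema never declared (say, a city
// name column that matches a GeoNames column) as extra links of the same kind.
struct ColumnMemory {
  std::string table, column, dataType;
  bool nullable;
};

int main() {
  std::cout << kDiscoverColumns << "\n" << kDiscoverRelationships << "\n";
}
```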

Automatic Web User Interfaces - Ava is now the world's fastest and cheapest web developer!

She also has the ability to generate intelligent user interfaces on the fly for any database she knows about, interfaces which, while already very good, can also "learn" and improve as people use them and give further input...training.  This has many commercial applications, as it basically drives the cost of producing web database applications to near zero...no coding...Ava just does it.  Whether it is search, forms, parent-child, whatever, she creates whatever is needed at runtime for whatever data structures are being looked at.  I plan to add verbal features on top of this new capability so she can learn to talk and answer questions about newly incorporated databases with little training.
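A toy sketch of the idea, not Ava's actual code: once column metadata has been discovered, a form can be emitted mechanically from it.  The type-to-input mapping here is a made-up default that a learning layer could later override as users give feedback:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Column { std::string name, dataType; };

// Pick an HTML input type from the SQL data type; purely illustrative defaults.
std::string inputFor(const Column& c) {
  if (c.dataType == "int" || c.dataType == "decimal")
    return "<input type='number' name='" + c.name + "'>";
  if (c.dataType == "datetime")
    return "<input type='date' name='" + c.name + "'>";
  return "<input type='text' name='" + c.name + "'>";
}

int main() {
  // A hypothetical discovered table: the form falls out of the metadata.
  std::vector<Column> person = {{"Name", "varchar"}, {"Age", "int"},
                                {"Born", "datetime"}};
  for (const auto& c : person)
    std::cout << "<label>" << c.name << "</label>" << inputFor(c) << "\n";
}
```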

Shodan API Interface - Access to the Internet of Things

She now has the ability to find millions of other computers and devices (potentially robots one day) on the internet by using the Shodan search engine via its API.  Shodan crawls the internet 24/7 and catalogs open ports and much more detail that can then be queried.
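For anyone curious, a minimal sketch of such a query using libcurl against Shodan's public host-search endpoint; the query string is just an example, and SHODAN_KEY stands in for a real API key:

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl callback: append each chunk of the HTTP response to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
  static_cast<std::string*>(out)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  std::string json;
  std::string url =
      "https://api.shodan.io/shodan/host/search?key=SHODAN_KEY"
      "&query=webcam+country:US";  // example query: webcams in the US
  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &json);
  if (curl_easy_perform(curl) == CURLE_OK)
    std::cout << json << "\n";  // JSON "matches" array: IPs, ports, banners
  curl_easy_cleanup(curl);
  curl_global_cleanup();
}
```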

Google Maps Javascript API Interface

She can now plot various other memories (like computers found on Shodan) on nice map displays that can be sliced and diced for various data purposes.

GeoNames Data

She now has access to hundreds of thousands of locations like countries, administrative divisions, cities, etc. that can be used for mapping purposes.

National Vulnerability Database

She now has access to an extensive database of all the world's known security exploits and related metadata.  This, combined with her Shodan access and mapping, makes for a very powerful cyber-security robot.

Android USB Recognition Issues

I have been having a lot of trouble getting the Android phone to recognize the Arduino Mega ADK consistently and start up.  Because of this, I haven't had the full Ava running very much to make more videos...I've been spending most of my time working on server "brain" features anyway.

7/25/16 Update - Physical Complete Milestone

I am proud to announce that Ava is now "physically complete", so I can focus on software and making her smarter, which is my real passion.

After two months of having the head torn apart, I was so happy to close up the head and take some new pictures (below).  I ended up using I2C to communicate from the body to the Arduino Uno in the head.  The 3 Arduinos, phone, computer, SSC-32U, motor controller, Pixy cam, thermal cam, and many other sensors are all working together!  The Nano gives off blue light (from the side of her head), and the Uno gives off red light (from the top of her head, glowing through the thin plastic), so in the dark she looks really cool!  Note the new IR tracker camera on her right cheek.
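For reference, a bare-bones sketch of an I2C link like this one using the Arduino Wire library; the slave address and one-byte status format are illustrative, not Ava's actual protocol:

```cpp
#include <Wire.h>

const byte HEAD_ADDR = 0x08;  // hypothetical I2C address of the head Uno

void setup() {
  Wire.begin();               // body Mega joins the bus as master
  Serial.begin(9600);
}

void loop() {
  Wire.requestFrom(HEAD_ADDR, (byte)1);  // ask the head for one status byte
  if (Wire.available()) {
    Serial.println(Wire.read());         // e.g. a packed sensor/status byte
  }
  delay(100);
}

// Head-side Uno (slave) counterpart, for reference:
//   Wire.begin(HEAD_ADDR);          // join the bus at the slave address
//   Wire.onRequest(sendStatus);     // called whenever the Mega asks for data
//   void sendStatus() { Wire.write(latestStatusByte); }
```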

If anyone is doing Android speech-to-text and not able to get it working on later versions, it's a Google problem, but I have a workaround, so contact me if you want it.

Front

7/7/16 Update

I have been on a quest to finish hooking up all the head sensors.  It's been a huge challenge to get the various sensors to work together without interfering with each other and crashing one of the processors.

1.  Installed an Arduino Uno R3 in the head to handle all the head sensors except the thermal camera and phone; it communicates with the Mega ADK through SoftwareSerial.  I am trying to get a 2nd SoftwareSerial connection to work with a Nano, without success.

1.1  Installed and tested the Pixy cam on the Uno and wrote a software interface to pipe data back to the Mega through the Uno for color blobs (color, x, y, width, height); see the sketch after this list.  I will coordinate this data with the compass and neck servo to build a radar map of color blobs found around the bot.

1.2  Installed and tested 2 microphones in the ears and wrote a software interface to pipe volume levels back to the Mega through the Uno.  I will need to do a lot more signal processing to recognize patterns or timing differences to determine the approximate direction of a sound, or to trigger voice listening on a particular event like a loud noise.  To install the mics, I finally had to learn to solder!

1.3  Installed and tested 2 Sharp IR distance sensors in the ears and wrote a software interface to pipe data back to the Mega through the Uno.  I will need to correlate this data with the compass, neck servo, and ear servos (the timing will be complicated) to build a radar-type map of all the distances.  The ears move quite quickly and will be taking in a lot of data very fast.

1.4  Tested the new IR tracking camera and wrote an interface.  This sensor outputs x,y coordinates of up to 4 IR sources in its view.  I would eventually like to build a localization system of IR emitters (at power outlets) so that the bot can locate itself precisely in a room.  I did this before with visual objects (OCR), so I feel like the technique will work.  There is no room for the IR tracking camera in the head, so I hope to mount it under one of the lasers in the cheeks.

2.  Installed Arduino Nano in head (very tight on space now).  This Nano outputs data via SoftwareSerial.

2.1  Installed and tested the thermal camera (16x4 thermal array sensor) on the Nano.  I had to dedicate the Nano to the thermal sensor, as I lost more than a week trying to combine it with anything else on the Uno.  Out of simplicity, I was tempted to give up on having this sensor and the Nano to support it, but I really need it for tracking people/pets.

Still have to work out the link between the Nano and the rest of the bot...I'm out of UARTs and SoftwareSerial connections, and I2C is not an option.

3.  Designed new custom red side skirts to replace the stock Lynxmotion tri-track ones, plus new parts for the back of the head (to allow access to the Uno ports) and other mods to the head to support all the sensors.
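Here is a rough sketch of the item 1/1.1 pipeline: the head Uno reads color blobs with the standard Pixy Arduino library and forwards one compact CSV record per blob to the Mega over SoftwareSerial.  The pins, baud rate, and record format are illustrative, not my exact ones:

```cpp
#include <SoftwareSerial.h>
#include <SPI.h>
#include <Pixy.h>

SoftwareSerial megaLink(10, 11);  // RX, TX pins to the Mega ADK (hypothetical)
Pixy pixy;

void setup() {
  megaLink.begin(9600);           // keep SoftwareSerial slow and reliable
  pixy.init();
}

void loop() {
  uint16_t n = pixy.getBlocks();  // number of color blobs seen this frame
  for (uint16_t i = 0; i < n; i++) {
    // One line per blob: signature (trained color), x, y, width, height.
    megaLink.print("B,");
    megaLink.print(pixy.blocks[i].signature); megaLink.print(',');
    megaLink.print(pixy.blocks[i].x);         megaLink.print(',');
    megaLink.print(pixy.blocks[i].y);         megaLink.print(',');
    megaLink.print(pixy.blocks[i].width);     megaLink.print(',');
    megaLink.println(pixy.blocks[i].height);
  }
  delay(50);
}
```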

Side Skirt

5/11/16 Update - This update is all about Ava's brain transplant...from a very large PC to a very small LattePanda SBC.  I have transplanted all her brain software to the Panda.  In the photo below, I have keyboard, monitor, mouse, network, and power cables hooked up.  If I install it inside the bot (it currently runs outside via WiFi/HTTP), I will only need power and maybe a few wires to the onboard Arduino headers.

Panda

I got the Panda with 4GB RAM and 64GB eMMC.  It runs Windows 10.  I also installed Visual Studio 2015, SQL Server Express 2014, and IIS (web server).  It also has an onboard Arduino with all the pins to talk directly to the other boards on the bot.  There are APIs to talk to the Arduino from .NET, but I haven't used those yet.  Right now I have it running a website and communicating over HTTP with the Android phone, as it did with the server previously.  I could switch to Bluetooth or use a wired Tx/Rx connection into the Mega in the body.  There is still much to figure out about the best way to integrate the Panda into the rest of the hardware on the bot.  For now, I am happy to report that the Panda can run my brain software...although I haven't tested SharpNLP yet.  While it is slower than my desktop PC, it works and stays well within the memory and CPU constraints of the board.  This is something, considering I am running 15 .NET projects and caching tens of thousands of memories.  My next challenge will be to get the Panda into the bot and power it with 2+ amps through a mini USB jack.

The old brain software was designed to support multiple robots at the same time, and was thus complicated by having to maintain separate state for each bot/session.  With a little redesign, I could make it run dedicated to a single bot and simplify/optimize things in the process.  The onboard Arduino also raises many new possibilities for having the Panda (the higher-level brain) talk directly to the other Arduinos, motor controllers, sensors, etc. on the robot.  Previously, the brain could only talk to the phone, which passed messages through USB to the Mega, where they got delegated from there.  Another possibility is to get rid of the phone altogether and use an HDMI touch display hooked up to the Panda.

2/13/16 Update - This update is all about supporting multiple languages, including Mandarin Chinese.  I've been working on making this robot converse in Chinese and German for starters (in addition to English), but in theory she could converse in a lot more languages with some training.  The basic idea that I implemented was to add a few more layers around her brain.  Some of these layers will "think" in the native language, while others "translate" so that she can think in English, but listen and speak in other languages.  This allows me to mostly leverage her existing thought processes in multiple languages.
Context-Aware Concept/Chat Layer - this layer stores many language-specific and situation-specific versions of common verbal concepts like greetings, goodbyes, thank-yous, etc.  This layer will be able to answer the question "What is the most appropriate way to express a given concept at a point in time?" based upon situational factors, politeness level, gender, age, etc.  This layer was necessary because a greeting like "What's up?" might be an OK greeting in English in an informal setting, but might mean something very different, and not OK, in another language or situation.
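A toy sketch of what this layer boils down to: the same abstract concept, keyed by language and politeness level.  The keys and phrases are examples, not Ava's actual data:

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
  // Key: concept | language | politeness level.
  std::map<std::string, std::string> concepts = {
      {"greeting|en|informal", "What's up?"},
      {"greeting|en|formal",   "Good evening. How are you?"},
      {"greeting|zh|formal",   "您好"},      // polite Mandarin "hello"
      {"greeting|de|informal", "Was geht?"},
  };

  // At runtime, situational factors pick the key before anything is said.
  std::string situation = "greeting|en|informal";
  std::cout << concepts[situation] << "\n";  // -> "What's up?"
}
```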
 
Input Translation Layer - this layer converts native-language input into English using a cache or an API.  It currently uses a third-party translation API; other translation APIs could be added later.  My goal is to find an offline translation engine and/or to use multiple 3rd-party APIs.

Translation Cache Layer - this layer remembers all English-to-native-language translations so that common translations do not need to be performed repeatedly.  This layer is also responsible for remembering many different versions.  I intend to have some feedback mechanism so that better versions for different situations can be learned over time.  I haven't figured out how this will work just yet, as she is just starting to grow her vocabulary of foreign-language translations.
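In sketch form (the real layer lives in the .NET brain; the API call is stubbed here), the cache logic is essentially:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Stub standing in for the real third-party translation API call.
std::string callTranslationApi(const std::string& english,
                               const std::string& lang) {
  return "[" + lang + " translation of '" + english + "']";
}

// Check remembered translations first; only call the API on a miss, and
// store the result so the same phrase is never translated twice.
std::string translate(const std::string& english, const std::string& lang,
                      std::unordered_map<std::string, std::string>& cache) {
  std::string key = lang + "|" + english;
  auto hit = cache.find(key);
  if (hit != cache.end()) return hit->second;  // remembered: no API call
  std::string result = callTranslationApi(english, lang);
  cache[key] = result;                         // remember for next time
  return result;
}

int main() {
  std::unordered_map<std::string, std::string> cache;
  std::cout << translate("hello", "zh", cache) << "\n";  // miss: calls the API
  std::cout << translate("hello", "zh", cache) << "\n";  // hit: from memory
}
```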

Brain Layer - this layer does what Ava's (and Anna's) brain used to do, with a few tweaks to support the new translation layers.  I'm now in the process of modifying software agents and regular expression "patterns" so that my software is more tolerant of the differences between common English grammar and the grammar that results from machine translation from other languages, which is less than the perfect English grammar Ava was previously used to.

Output Translation Layer - this layer converts English output into native-language output using a cache or an API.  It also uses a third-party translation API; other translation APIs could be added later.

Results so far:
I have tested her out in Mandarin, using the simplified Chinese character set, as well as German.  There is a great deal more work to do, but the initial progress is very encouraging.
________
1/8/16 Update - This update is all about making some first videos.  She is a little rough right now, still missing one of her brains (in her head) and her head sensors are not hooked up yet, but she is talking and moving.  The videos illustrate autonomous talking, gesturing, joking, sentiment analysis, empathy, and use of NLP to have much improved comprehension skills.  Hope you watch and enjoy!

Hardware Overview

12/27/15 Update - There have been a lot of new changes recently...might as well start with some new pictures.
A lot of new ABS, and some new laser pods...

Head Side

11 new head and ear sensors...7 installed, 4 to go.  This took quite some cramming... 

 

Sensors

The little button sticking through is the button for training the Pixy cam.

Head Top

12/27/15 update in words...

  1. Designed and printed new pieces for the top of the head, sides of the head, front and back compartments, grills, bumpers, ear sensor holders, head sensor holders, and laser holders.  Still need a new rear head panel to allow access to the head Arduino's USB port.
  2. Installed Pixy Cam, Thermal Camera, and Long Range Sharp IR distance sensor in head, still need to install IR receiver and transmitter, and wire up all head sensors to arduino.
  3. Installed Microphone and Sharp IR distance sensors in ears.  Still need to install IR receiver in each ear and wire up the ears.
  4. Got autonomous ear movements working.
  5. Got autonomous arm gestures working while talking.
  6. Got autonomous head movements while talking working...until she broke her neck...waiting for some new servo horns.
  7. Got the arms to move down out of the way of all the sonars while driving forward or reverse and return when not driving.
  8. Got her weather features working so she can forecast and talk about the weather.
  9. Got her to put famous movie lines and/or religious verses into her speech when turned on...which is weird when she quotes a verse one second and then cracks an off-color joke about religion the next!  Sadly, her news features are not working...it looks like Feedzilla is no longer online.  I need to find a new free news API.  Recommendations?
  10. I am working on a new tracking agent that keeps track of all known objects: heading, elevation, size, type, color, etc.  I plan on using this to drive autonomous head movements soon (see the sketch after this list).  The idea is, she will tend to look at people when they show up, talk, or she is talking to them, but after a short period of time, her focus will wander to other objects in the room.
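Here is a hypothetical sketch of the tracking agent's focus logic from item 10; the object fields, scores, and timeouts are placeholders for whatever I end up tuning:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct TrackedObject {
  std::string type;           // "person", "blob", "heat-source", ...
  float heading, size;        // degrees relative to the body; apparent size
  unsigned long lastEventMs;  // last time it appeared, moved, or spoke
};

// Pick a focus: prefer people with recent activity, but let stale targets
// lose out so attention wanders to other objects in the room.
const TrackedObject* pickFocus(const std::vector<TrackedObject>& objs,
                               unsigned long nowMs) {
  const TrackedObject* best = nullptr;
  float bestScore = -1;
  for (const auto& o : objs) {
    float score = (o.type == "person") ? 10.0f : 1.0f;  // people win...
    if (nowMs - o.lastEventMs > 5000) score *= 0.1f;    // ...until they go quiet
    if (score > bestScore) { bestScore = score; best = &o; }
  }
  return best;  // drive the head/neck servos toward this object's heading
}

int main() {
  std::vector<TrackedObject> objs = {{"person", 30, 2.0f, 9500},
                                     {"blob", -40, 1.0f, 9800}};
  std::cout << pickFocus(objs, 10000)->type << "\n";  // -> "person" (fresh)
}
```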

As soon as her neck is repaired, I plan on making some videos.

12/3/15 Update

  1. Got the Mega talking to the Servo Controller through serial interface.
  2. Got the Mega talking to the motor controller through the Sabertooth's simplified serial interface.
  3. Got all the servos moving and calibrated...mostly.
  4. Installed the compass; had to rearrange the insides to get the accuracy up, and moved the power distribution and switches from front to back (bummer).  Thank goodness I had 3D-printed interchangeable parts I could swap front/back, so this was easy.
  5. Built all the wiring harnesses for the 12 sonars and installed them.  Tested the full sonar array for the first time and everything looks good!  I should be up and driving with my new 12-sector force-field algorithm very soon!  Between the servos and sonars, the amount of wiring is getting up there; thank goodness for zip ties, labels, and Pololu build-your-own cables.
  6. Built a mechanism for defining and storing "poses" on the server and a way to download them and execute them on the bot on demand (see the sketch after this list).  Started off by creating a few dozen poses for various body positions or positions of particular body parts.  Each pose can reference one, a few, or all of the servos at the same time, with a speed for each.  I built poses for things like "Ears-Front-Quickly" or "Head-Nod-Down" or "Point-Left", which move all the servos in a coordinated action.  Things were erratic at first until I worked out my coordinate system and calibrations correctly for each servo.
  7. Built a mechanism for defining narrative "missions" on the server.  A narrative "mission" is a series of verbal commands to be performed simultaneously, in sequence, or a combination of both.  Among many other uses, this can be used to animate the robot from one "pose" or position to another, while talking or doing other things like playing music.  My favorite part is, I can write a mission as a paragraph, like this example for doing a little dance...pose Default. say lets dance. wait 3000. pose Point-Right. say take it to the right. wait 4000. pose Point-Left. say now take it to the left. wait 3000. pose Default. say now stretch it out. wait 3000. pose Arms-Up. say take it up high. wait 3000. pose Drive. say now set it down low. wait 3000. say great job. pose Default. mute be happy. say great job. wait 2000. say that was fun.
  8. Using the narrative missions, I'm defining animations for lots of behaviors like nodding head yes, shaking head no, ear movements, etc.  I intend to start tying the ears and head movements to the sonar array and emotions, and get her moving her arms and gesturing when she talks, which is a lot now.
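As an example of how a downloaded pose from item 6 can be executed, here is a sketch that sends one SSC-32U group move; the #<ch>P<pulse>S<speed>...T<time> command format is from the SSC-32 documentation, while the pose data, channels, and Serial2 wiring are illustrative:

```cpp
#include <Arduino.h>

struct ServoTarget { byte channel; int pulseUs; int speedUsPerSec; };

// Hypothetical "Point-Left" pose: each listed servo has its own target pulse
// width and speed, and the whole group finishes together within timeMs.
const ServoTarget POINT_LEFT[] = {{0, 1800, 500}, {1, 1200, 750}, {4, 1500, 500}};

// Emit one SSC-32U group move so all servos travel in a coordinated action.
void sendPose(const ServoTarget* pose, size_t n, int timeMs) {
  for (size_t i = 0; i < n; i++) {
    Serial2.print('#');  Serial2.print(pose[i].channel);
    Serial2.print('P');  Serial2.print(pose[i].pulseUs);
    Serial2.print('S');  Serial2.print(pose[i].speedUsPerSec);
  }
  Serial2.print('T');  Serial2.print(timeMs);  // shared completion time
  Serial2.print('\r');                         // one command = one group move
}

void setup() {
  Serial2.begin(9600);            // Mega hardware UART wired to the SSC-32U
  sendPose(POINT_LEFT, 3, 1500);  // e.g. "#0P1800S500#1P1200S750#4P1500S500T1500"
}

void loop() {}
```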

11/13/15 Update

  1. Added 1st version of 3D Printed Head.  Note, these are the aesthetic ears, not the camera ears, which I am still designing.  Laser pods will be added on each side, and a sensor pod will be added on top to hold the thermal camera and IR emitter and detector among other things.
  2. Installed new Sony 8-Core Phone for Face and on board brain.
  3. Got the first version of the new Android app written and exchanging messages with the Arduino Mega ADK via USB.  It was built with Android Studio and is updated to use OpenCV 3.0 instead of 2.4.5.
  4. Got a new version of the voice remote app working.  The remote app runs on another phone or tablet and is used to communicate with Ava or control her so I don't have to touch her face.
  5. Got face part of app to automatically scale to smaller or larger devices.  Also made app configurable to tie into any brain on my Lucy shared brain API.
  6. Made the first conversation, listening, emotional, eye-dilation, and color-tracking tests.  Talked with Ava about small talk and a few other topics.  She has reached the "First Talk" milestone!  Also tested question answering and Wolfram Alpha integration.  Had to turn off the curiosity, motivation, and weather features for now.
  7. Some new pics...

10/29/15 Update

  1. Added new Ava Logo in Red...I think I need to make it bigger!
  2. Added new Sonar Panels in Black and Red with Sonars Installed
  3. Added New Compartment and Cover in Lower Front to gain space for power supplies, hold driving lights, and reduce boxy shape.
  4. I chose a new Sony 8-core phone for the face, with a side power jack, instead of the 4-core Moto G with an end jack.  This should let me use USB instead of Bluetooth for the Arduino-to-Android interface, as the jack is in a better location and won't block the lasers.
  5. Purchased 2 new Pixy CMUCam5 Sensors for Ears and am in process of testing them out to see what they can do.  The ears will have to be made larger to fit them inside.

10/19/15 Update

Added the arm and the catlike ears over the weekend.  Still working on designing the head...it will be red and not square!  Cramming all the sensors in will be the challenge.  Planning on putting a Pixy CMUcam5 in each ear.

Say hello to my little friend.  Ava will be my second robot, a sister to my other robot, Anna, with some notable improvements.  Almost all of Ava's body is 3D printed, about 50 parts and counting.  Ava will have 2 arms with 4-5 DOF each, and 2 articulated ears (like a cat's) with 2 DOF each.  In addition, the entire head will pan and tilt, so that the robot can better interact with people and also localize by identifying markers on the ceiling.  Thus far, I have concentrated on the 3D design and printing of the body.  I have already written most of the software for Ava, as this robot reuses most of the code base from Anna and the shared internet brain I call Project Lucy.  This means she will talk and have a personality from day one; she just won't know how to use her arms and articulated ears for a bit.  I plan to utilize her for entertainment and to voice-control my TV and household lights.  Think of her by my side, hanging out, watching TV, answering questions, telling jokes, and changing channels for me.  I have most of the software for this written already, so this is not much of a stretch; it's really just a start.

I'll post more as she comes together. 

 


What's the material for the body?  Is it viprabot?

The material is 1.75mm ABS filament for 3D Printers.

Good work both mechanically and from the AI perspective. Nice to see such progress on that front.

How do you run the data back and forth from the PC? I believe you had a 6 core on Anna. Same here?

 

It will be Bluetooth from one of the Arduinos to the phone (instead of USB), and then HTTP requests via WiFi from the phone to the PC.  Down the line, I may use XBee outdoors.  I have had several lying around way too long and hope to add one in.

Same PC as with Anna.  The server is set up to handle multiple bots simultaneously while supporting separate identities, memories, opinions, and reflex behaviors for each bot.  My true love is the software side, so I plan to get back to that as soon as I can put this bot together.  I'd like to experiment with talking to both, or getting the robots to talk to each other.  I wanted the additional actuators so Ava could be more physically expressive.  I am excited about the moving ears I am planning.

 

Great work, Martin!

The square body with tank treads reminds me a bit of WALL-E, but I guess the similarities end there.

Can't wait to see more on this project and the ears, and to see those arms in action!

Thanks so much Dickel.  For those of you that don't know, Dickel is one of our resident gods that inspire others en masse.

Yes, Ava is similar to WALL-E, by design.  The reason is, my robots are designed to hold a lot of sensors and processors, and I don't know up front what I will want to pack into them later.

The square shape will change when I add the new front panel, a 3D trapezoid like the one in the back.  It will hold all the regulators.  The result will be a body a little like a duck's.  The logic of the box or duck shape is to maximize internal space.  I may yet cram not one but two Raspberry Pi 2s inside, with two more cameras.  Space for the power supply and wires is the challenge.

I am really looking forward to the cat-like ears, which are really just a concept in my head right now.  I picked out the servos...some small 7.4-volt ones from E-Z Robot.  I might put a camera or a thermal array sensor in each ear.  My goal, as always, is maximum situational awareness.  With scanning ears (with cameras or thermal arrays), Ava could really have a major improvement in situational awareness, being able to scan 360 degrees without even moving her head.

I really hope this translates to Portuguese.  Good luck on your projects.  I always know that when you are quiet, some new awesome MDI is on the way.

Regards,

Martin

"For those of you that don't know, Dickel is one of our resident gods that inspire others en masse."     hehehhh

And for those of you that don't know, mtriplett is our resident god of AI!

Your work with Anna is impressive, and I guess putting together this experience and a bot with arms will be an awesome companion robot.

Well... I'm working on a few projects, more artistic (but robotics related).

But, man, you really inspired me with AVA... wanting to build a mid sized companion bot.

 

See you!

Dickel

I hope I didn't embarrass you with my comment.  I was drinking that night and got a little over exuberant!

Your work has inspired me for a long time, I'm glad I could return the favor with AVA.  I look forward to seeing your take on a companion robot.

Yay!!!  The doorbell just rang...my head and ear servos have just arrived!

By the way, great work on that fiberglass, another technique I have always been curious about trying.

Regards,

Martin

I'm printing many modular pieces to make a medium-size robot like AVA.  Are tracks better than wheels indoors?  Will it go rogue like in Deus Ex Machina?

I would like to release AVA's stl files and code as an open source project.  I have no idea yet about how to handle licensing, but that is my goal once I feel like I won't be doing any major redesign.

I would love to see what you are doing, the earlier the better.  My ego made me hold back on Ava for a couple months.  I would hate to unduly influence your creativity by releasing my design too early.

I think the question of how to build an endoskeleton or exoskeleton for a bot in this size range is one of the most interesting issues we face.  AVA uses a combination of endo- and exoskeleton to get strength.  The tradeoff between design, weight, and flexibility of change is something I lose sleep over constantly.  In retrospect, Ava is over-built; relative to need, she could lose a few grams.  I used 30% infill for the 3D prints.  The inner beams (non-corner) are entirely questionable.  I am thinking about going to a 1/16-inch ABS skin instead of the 1/8-inch skin I am using for almost everything now.

What do you think?  Please elaborate.  I would love to hear your thoughts, as my conclusions are arrived at solo.

Regards,

Martin