DIY: A Robot That Can Dream!

A first-of-its-kind robot that can recognise faces and dream – like a human.

We humans have the capability to speak, recognise feelings and emotions, and even dream, which makes us different from machines. But does it make us more intelligent than them? Can machines like robots also acquire these capabilities? If yes, then this can blur the line between how humans and robots think.

These intelligent robots can have multiple uses: as artificial assistants in the workplace, as companions for the elderly and children, and for welcoming guests in hotels.

Very few present-day robots are emotionally intelligent, and the already available ones are quite expensive, costing up to several lakh rupees (INR).

Keeping this in mind, I decided to make an intelligent robot using several open-source tools and libraries. And so can you. I have used an InMoov robot face mask; you can use this robotic face or any other of your choice.

Using Google’s open-source DeepDream library, the robot takes a picture with its camera and then creates ‘dreams’ based on it.

Next, I have used AI and Python to make the robot recognise faces and talk to you.

Our project includes two different features, for which we need to create a basic setup for both DeepDream and face recognition.

For emotion recognition, we need to set up the following libraries on the Raspberry Pi.

To install the above libraries, open the terminal window on the Raspberry Pi and then run the corresponding install commands.

After this, we need to increase the swap file size, as face recognition needs more memory than the default swap provides. On Raspberry Pi OS, increase the CONF_SWAPSIZE value in /etc/dphys-swapfile and then restart the swap service with the following command:

 sudo /etc/init.d/dphys-swapfile restart

Now, we will install the face_recognition library. To do so, run the following command in the terminal:

 sudo pip3 install face_recognition

Run the sudo apt-get update command again. Now, our basic setup for emotion recognition is done. 

DeepDream Requirements/Setup

For DeepDream, we need to install the following modules in Python:

  • Caffe
  • Protobuf (also known as Protocol Buffers)
  • PsyCam

Open your Linux terminal and then run the commands shown in the figures to install the modules (refer Fig. 5 to Fig. 11).

You can also follow this link for DeepDream installation on Raspberry Pi.

Fig. 5

Fig. 6

Fig. 7

Fig. 8

Fig. 9

Fig. 10

Fig. 11

Fig. 12

We have now completed the basic setup of the libraries, so let’s write the code for real-time face recognition.


First, we need to download OpenCV, the OpenCV contrib modules, and the face recognition library. To do so, open the terminal window and then run the following commands:

 git clone https://github.com/opencv/opencv.git

 git clone https://github.com/opencv/opencv_contrib.git

This is to recognise the person in front of the robot (known or unknown). In this code, we will import three modules: face_recognition, cv2, and numpy. We will create separate arrays for the known face encodings and the corresponding names. Make sure to use the correct image file name for each member so that face recognition works properly.

In the next part of the code, we will try to match the face captured by the camera against the array of known faces. If the face matches, the code will run the eSpeak synthesizer to speak the person’s name using espeak.synth() (see Fig. 12).
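A minimal sketch of this recognise-and-speak flow, assuming the face_recognition library and the python-espeak bindings are installed; the member name, the image file name (member1.jpg), and the camera index are illustrative placeholders, not the article’s exact code:

```python
# Sketch: recognise the person in front of the camera and speak their name.
# Member names and image file names below are hypothetical placeholders.

def pick_name(matches, known_names, unknown="Unknown"):
    """Return the first matching known name, else 'Unknown'."""
    for matched, name in zip(matches, known_names):
        if matched:
            return name
    return unknown

def main():
    import cv2
    import face_recognition
    from espeak import espeak  # python-espeak text-to-speech bindings

    # Arrays of known face encodings and the corresponding names
    known_names = ["Member 1"]  # hypothetical member
    photo = face_recognition.load_image_file("member1.jpg")  # hypothetical file
    known_encodings = [face_recognition.face_encodings(photo)[0]]

    video = cv2.VideoCapture(0)  # Raspberry Pi camera / first webcam
    while True:
        ok, frame = video.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frames are BGR
        for encoding in face_recognition.face_encodings(rgb):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            name = pick_name(matches, known_names)
            espeak.synth("Hello " + name)  # speak the person's name
        cv2.imshow("Robot view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()

if __name__ == "__main__":
    main()
```

The matching logic itself is kept in the small pick_name() helper so it can be checked without a camera attached.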

You can refer to the Face Recognition Robot project to set up the robot head and understand the face recognition system in a better way.

Preparing Dreaming Code

Now, let’s set up the DeepDream code.

Download the DeepDream PsyCam library using the following command:

git clone

Then open the PsyCam folder and change the file path for the dream-source image to the same path where we save the captured camera frame. Now, in the PsyCam folder, create a new Python file and write a script that runs the dream generation at certain intervals of time and then goes into sleep mode in between, i.e., switches off and on.
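The interval script described above can be sketched as a simple loop: wait while the robot is “awake”, run PsyCam once to generate a dream, then “sleep”. The script name psycam.py and the timings here are assumptions, not PsyCam’s documented interface:

```python
# Sketch: run one dream cycle at intervals, with a "sleep" pause in between.
# The PsyCam entry-point name "psycam.py" is a hypothetical placeholder.
import subprocess
import time

def dream_command(script="psycam.py"):
    # Command used to launch one dream-generation run
    return ["python3", script]

def dream_forever(awake_seconds=600, sleep_seconds=300):
    while True:
        time.sleep(awake_seconds)          # stay "awake" between dreams
        subprocess.run(dream_command())    # generate one dream image
        time.sleep(sleep_seconds)          # "sleep mode" (off), then on again

if __name__ == "__main__":
    dream_forever()
```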

To see the dream image created by the robot, open the dreams folder situated inside the PsyCam folder.
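If you prefer to fetch the newest dream image from a script instead of browsing the folder, a small helper can do it; the default folder path and the .jpg extension are assumptions about where PsyCam saves its output:

```python
# Sketch: return the most recently written dream image from the dreams folder.
# The default folder path is an assumption about PsyCam's layout.
import glob
import os

def latest_dream(folder="psycam/dreams"):
    images = glob.glob(os.path.join(folder, "*.jpg"))
    if not images:
        return None
    return max(images, key=os.path.getmtime)  # newest by modification time
```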

Our coding is now done.

Connect the camera to Raspberry Pi and the OLED display with Arduino for eye motion, as given in Part 1 of this project. 


Connect the Raspberry Pi’s USB Type-C port to a 5V power supply. After that, connect the Raspberry Pi camera to the camera port using its ribbon cable.

Fig. 13


After setting and saving all the code, it is time to test the features of our robot. Run the face recognition and dream scripts, and wait a few seconds. The screen will then start showing the face, and the robot will start talking according to the face it detects (refer Fig. 14).

Fig. 14

Testing Dreams 

To check what our robot has dreamt of, open the dreams folder and see the saved pictures of the robot’s dreams (see Fig. 15 to Fig. 21).

Fig. 15: Random robot dream – 1

Fig. 16: Random robot dream – 2

Fig. 17: Robot’s dream generated from that pic – 2

Fig. 18: Original pic

Fig. 19: Robot’s dream generated from that pic – 1

Fig. 20: Original pic

Fig. 21: Original pic
