Interactive Wall Using Servo Motors and Kinect Sensor

First Internship Project: This summer holiday, I worked on developing an Interactive Wall using a Kinect camera sensor and servo motors. An interactive wall consists of several servo motors (around 400) embedded in a wall and connected to a microcontroller, plus an RGB camera with a depth sensor; the whole setup is driven by a microprocessor. When an object or a person is detected within the threshold distance by the RGB camera, the servos in front of it are actuated. The servos update as the captured image changes, creating a beautiful pattern.

Electronics related to the project included –

  1. Microsoft Kinect
  2. SG90 Servo Motors
  3. Arduino UNO or Mega
  4. Raspberry Pi 3 Model B
  5. 16 Channel PWM/Servo Driver

Microsoft Kinect Sensor – The Kinect sensor features an RGB camera, a depth sensor consisting of an infrared laser projector and an infrared CMOS sensor, and a multi-array microphone. It also contains an LED, a three-axis accelerometer, and a small servo that controls the tilt of the device.


Kinect Theory – Kinect’s IR projector sends out a pattern of infrared light beams, which bounce off objects and are captured by the standard CMOS image sensor. This captured image is passed on to the onboard PrimeSense PS1080 chip to be translated into a VGA-sized depth image.

Servo Motors – Servos are DC motors with a feedback mechanism that keeps track of the motor's position at every moment. You control a servo by sending it pulses of a specific duration. Generally, a servo's range runs from a 1 millisecond pulse for 0 degrees to a 2 millisecond pulse for 180 degrees. You need to refresh the control signal every 20 ms (50 times per second), because that is how often the servo expects an update.
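The angle-to-pulse mapping above is linear, so it can be sketched as a small helper (a hypothetical function for illustration, not taken from the project code; it assumes the common 1 ms–2 ms range):

```python
# Hypothetical helper: map a servo angle to a pulse width, assuming a
# linear 1 ms (0 degrees) to 2 ms (180 degrees) range as described above.
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Convert an angle in [0, 180] degrees to a pulse width in microseconds."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo angle must be between 0 and 180 degrees")
    return min_us + (max_us - min_us) * angle_deg / 180.0

print(angle_to_pulse_us(0))    # 1000.0 us -> 0 degrees
print(angle_to_pulse_us(90))   # 1500.0 us -> 90 degrees
print(angle_to_pulse_us(180))  # 2000.0 us -> 180 degrees
```

Note that cheap SG90s vary; some need roughly 0.5 ms–2.5 ms for the full sweep, which is why the endpoints are left as parameters.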

16 Channel PWM/Servo Controller – An I2C-controlled PWM driver with a built-in clock, used to control 16 free-running PWM outputs! You can even chain up to 62 breakouts to control up to 992 PWM outputs.
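This class of driver (presumably the PCA9685-based Adafruit board, given the 62-board/992-output chaining figure) has 12-bit resolution: each PWM period is divided into 4096 ticks. A rough sketch of converting a pulse width into the tick count such a chip expects, assuming a 50 Hz servo update rate:

```python
# Sketch assuming a PCA9685-style driver: 12-bit resolution (4096 ticks)
# spread over one PWM period (20000 us at 50 Hz).
def pulse_us_to_ticks(pulse_us, freq_hz=50, resolution=4096):
    """Convert a pulse width in microseconds to an on-time tick count."""
    period_us = 1_000_000 / freq_hz   # 20000 us at 50 Hz
    return round(pulse_us / period_us * resolution)

print(pulse_us_to_ticks(1000))  # ~205 ticks -> 0 degrees
print(pulse_us_to_ticks(1500))  # ~307 ticks -> 90 degrees
print(pulse_us_to_ticks(2000))  # ~410 ticks -> 180 degrees
```

With roughly 205 ticks spanning the whole 180-degree sweep, each tick moves the servo a bit under a degree, which is plenty for this kind of installation.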

Software Implementation –

Setting Up Raspberry Pi 3 for Kinect Support: Install Raspbian Jessie, boot up the Raspberry Pi, and follow these steps to set up the libfreenect open-source driver for Kinect interfacing –

#Installing Dependencies:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install cmake libudev0 libudev-dev freeglut3 freeglut3-dev libxmu6 libxmu-dev libxi6 libxi-dev

#Setting Up libfreenect Driver
git clone git://
cd libfreenect
mkdir build
cd build
cmake -L ..
sudo make install
sudo ldconfig /usr/local/lib64/

#To use Kinect as a non-root user
sudo adduser $USER video
sudo adduser $USER plugdev
sudo nano /etc/udev/rules.d/51-kinect.rules

#Paste the following lines in the newly created file
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
# ATTR{product}=="Xbox NUI Audio"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
# ATTR{product}=="Xbox NUI Camera"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
# Kinect for Windows devices
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02c2", MODE="0666"
# ATTR{product}=="Xbox NUI Audio"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02be", MODE="0666"
# ATTR{product}=="Xbox NUI Camera"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02bf", MODE="0666"

#Log out and log back in, then open a terminal and test the Kinect (e.g. with the freenect-glview demo bundled with libfreenect)


Approach 1 – Python + Raspberry Pi 3B + 16 Channel PWM Servo Driver

Approach 2 – Processing IDE + Raspberry Pi 3B + 16 Channel PWM Servo Driver

Approach 3 – Processing IDE + Raspberry Pi 3B + Arduino(Serial Comm.) + 16 Channel PWM Servo Driver

Approach 4 – Processing IDE + Raspberry Pi 3B + Arduino Firmata + 16 Channel PWM Servo Driver

Repo –

References –

Persistence of Vision Display

A POV display spins through 360 degrees. The goal of the POV display project is to build a small apparatus that creates a visual using only a handful of LEDs as they spin in a circle. When the LEDs sweep around a point several times per second, the human eye reaches its limit of motion perception, producing the illusion of a continuous image.

The pictures generated by the spinning LEDs are coordinated by an Arduino UNO microcontroller. A Hall effect sensor is used in conjunction with a strong magnet so that the microcontroller gets a reference point for when to start outputting the visual during each rotation. The motor's RPM and the spacing of the LEDs determine whether the POV effect appears.
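The timing budget here is tight, which a quick back-of-the-envelope sketch makes clear (the 60-column figure is illustrative, not taken from the project code):

```python
# Illustrative POV timing math: how long each angular "column" of the
# image gets at a given motor speed (column count is hypothetical).
def revolution_ms(rpm):
    """Time for one full revolution, in milliseconds."""
    return 60_000 / rpm

def column_delay_ms(rpm, columns_per_rev):
    """How long the LEDs hold each angular column of the image."""
    return revolution_ms(rpm) / columns_per_rev

print(revolution_ms(1000))        # 60.0 ms per revolution at 1000 RPM
print(column_delay_ms(1000, 60))  # 1.0 ms per column with 60 columns
```

So at the 1000 RPM of the motor listed below, the Arduino has about a millisecond per column, which is why the Hall effect sensor's once-per-revolution pulse is needed to keep the columns aligned.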

  1. Arduino UNO
  2. Hall Effect Sensor
  3. Magnet Piece
  4. 8 Single Color LEDs
  5. 9V Battery with Arduino Cap
  6. 12V 1000RPM motor
  7. 10k ohm resistor
  8. 8 * 330 ohm resistors

Arduino Connections :

My Model:

How does it produce letters?

To display a particular character, each LED is lit at the right instant in the rotation, so that the successive columns of light combine into the letter.
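In other words, each character is stored as column bitmaps: one byte per angular column, one bit per LED. A small sketch of the idea, using a made-up 5-column pattern for the letter 'E' (not the project's actual font data):

```python
# Hypothetical 5-column, 8-LED font pattern for 'E': bit i of each byte
# says whether LED i is on while that column passes the viewer.
FONT_E = [0xFF, 0x89, 0x89, 0x89, 0x89]

def render(columns, rows=8):
    """Show the pattern the spinning LEDs would trace out ('#' = LED on)."""
    lines = []
    for row in range(rows):
        lines.append("".join("#" if col >> row & 1 else "." for col in columns))
    return "\n".join(lines)

print(render(FONT_E))
# #####
# #....
# #....
# #####
# #....
# #....
# #....
# #####
```

On the real hardware, the Arduino walks through these bytes one per column delay, writing each byte to the eight LED pins as the arm sweeps past.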

Have a look at my Propeller Clock :

Python Script to print E-Lab Reports

Being a Python enthusiast, and a bit too lazy to click the Evaluate button and then the Print Report button for every one of my E-Lab reports, I had the idea of using the Selenium WebDriver API to automate it. The Python script asks for the Maths Lab number, register number, and password through a small Python GUI, then prints all the reports to a folder created at the given PATH.

What we need :

1. Tkinter – Python's built-in GUI module
2. Selenium WebDriver API
3. ChromeDriver or PhantomJS on your PATH
4. The IDs, XPaths, and class names of the relevant web elements

How to install the Python modules?
Tkinter ships with Python itself (on Raspbian/Debian: sudo apt-get install python-tk); Selenium is installed with pip:
pip install -U selenium

How to download ChromeDriver?
Choose your OS, download it, and paste its local path into the script.

Steps to run the script –

1. Save the script to the desktop and open it in IDLE or your preferred Python editor.
2. Replace the ChromeDriver path in the script with the path to your ChromeDriver.
3. Make a folder and copy its path into the script. This is where all the printed reports will be stored.
4. Run the script. 😀
5. 1 min 35 sec and it's done 😀
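The folder-and-path part of step 3 can be sketched with the standard library (the folder name and file naming scheme below are hypothetical, not the script's actual ones):

```python
from pathlib import Path

# Hypothetical output location and naming scheme for the saved reports.
reports_dir = Path("elab_reports")
reports_dir.mkdir(exist_ok=True)

def report_path(register_no, report_no):
    """Build the destination path for one printed report."""
    return reports_dir / f"{register_no}_report_{report_no:02d}.pdf"

print(report_path("RA0000000000000", 1))
# elab_reports/RA0000000000000_report_01.pdf
```

Using `pathlib` with `mkdir(exist_ok=True)` means the script can be re-run without failing when the folder already exists.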

Code –

My Printed Reports 😀

Watch it work 😀

Codechef Question Forwarded to Slack

Competitive programming is a cornerstone of computer science practice, as it measures the efficiency of code combined with algorithms and an applied mind.

To build the habit of solving one competitive programming question every day, I wrote this Python script to send a question from CodeChef to my Slack account each day.

Installation :

1. Selenium WebDriver API
2. ChromeDriver or PhantomJS on your PATH
3. The IDs, XPaths, and class names of the relevant web elements

How to run the script?

Link To Script.

1. Save the script to the desktop and open it in IDLE.
2. Replace the ChromeDriver path in the script with the path to your ChromeDriver.
3. Enter your Slack API token.
4. Run the script. 😀
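A minimal sketch of the Slack side, assuming the Web API's chat.postMessage method is what the token is for (the helper only builds the request and doesn't send it; the token, channel, and question text below are placeholders):

```python
import json
from urllib.request import Request

def build_slack_request(token, channel, text):
    """Build (but don't send) a chat.postMessage request for Slack's Web API."""
    payload = {"channel": channel, "text": text}
    return Request(
        "https://slack.com/api/chat.postMessage",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_slack_request("xoxb-placeholder", "#daily-question",
                          "Today's CodeChef problem: ...")
print(req.get_full_url())  # https://slack.com/api/chat.postMessage
```

Sending it is then a single `urllib.request.urlopen(req)` call, or the official `slackclient` package can be used instead if installing a dependency is acceptable.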