Ambient Light TV with a Raspberry Pi and a Webcam (Part 2)

In this post I’ll go into more detail about how I built the ambient light rig. Here is the previous part in case you missed it.

I’m not the first one to make one of these, so this is not going to be a perfect instructable. Instead, I’ll go deeper into the things I did differently (like the camera), and link to the resources I used for all the other stuff. Here we go!

Most rigs out there use an Arduino linked to a PC, which has the benefit of being relatively easy to implement, but the downside is that it only works with a video stream coming directly from the PC’s graphics card (for most people, this is acceptable). You can buy ready-made systems that work in a similar way, like LiveLight.

I decided to go all out. I wanted every feed to work: PC, cable TV, Apple TV and my gaming consoles.

For this to work, I would need access to all the feeds going into my TV. Intercepting an HDMI feed before it reaches the TV is technically possible, but the gear I needed would have been way too expensive. The only other cheap alternative was to use a webcam, knowing that frame rate, lag and color inaccuracies were going to be an issue.

The camera is best placed pointing directly at the screen to avoid glare and viewing angle color inaccuracies. In my case, I only had two viable options:

As you can see in the second picture, I had to close the curtains because at that angle the TV reflects everything from the window adjacent to it. No biggie, I prefer to close the curtains when watching a movie or playing games anyway.

Pure blacks tend to be more bluish too, but I can live with that. It actually has a nice effect when the screen goes dark: the whole rig glows blue, providing some light in the room.

I used a Logitech C250 (because I already owned one). It can capture at different resolutions and has a manual focus ring, which is great: you can intentionally set it out of focus to get color averaging for free!
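If your webcam’s focus is fixed, you can get a similar averaging effect in software. Here’s a minimal OpenCV sketch of the idea (the kernel size is an arbitrary starting point; none of this is needed if you can defocus the lens):

#include <opencv2/opencv.hpp>

// Software alternative to defocusing the lens: blur the captured frame so
// each sampled pixel already represents an average of its neighborhood.
cv::Mat averagedFrame(const cv::Mat& frame)
{
    cv::Mat blurred;
    // A large kernel relative to a small capture size smooths out fine
    // detail, much like an out-of-focus lens does optically.
    cv::GaussianBlur(frame, blurred, cv::Size(15, 15), 0);
    return blurred;
}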

The Adalight tutorial was a good guide for building the frame. This is what I used:
– 1x Raspberry Pi (any version works)
– 2x 12mm Diffused Flat Digital RGB LED Pixels (Strand of 25)
– 1x Stontronics 5V DC 4A Power Supply Unit T2502ST (Powers both the LEDs and the Pi)
– 1x 100x75x40mm Housing for the Pi
– 1x 2.1mm Power Socket (Rated 5A)
– 1x Rocker Switch
– 1x Micro USB
– Some grommets
– 4x Angled aluminium profiles
– Small nuts and bolts to attach the profiles

At this point, I started working on the server code for the Raspberry Pi. The first implementation was a Python program running in Debian Linux’s LXDE GUI mode. This approach failed for two reasons:
– The GUI mode was using too many resources on the Pi, so the experience was quite sluggish
– The Python GPIO library (wiringPi) I was using wasn’t fast enough to update 50 LEDs.

So I decided to go for a full-on C++ implementation using the wiringPi library. I wrote all the code on my PC and pushed it to the Pi using git and an SSH client. The source code for the server is hosted on GitHub. I’m working on a readme, but the code should be pretty comprehensible if you’re familiar with C++.
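To give a taste of what the LED side involves: these strands are clocked pixels, so each color byte gets shifted out MSB-first on the data line while the clock line toggles. Here’s a minimal bit-banging sketch with wiringPi, assuming WS2801-style pixels (the pin numbers and latch delay are illustrative, not necessarily what the server uses):

#include <wiringPi.h>

const int kClockPin = 0; // wiringPi pin numbers, adjust to your wiring
const int kDataPin  = 1;

// Shift one byte out MSB-first: WS2801-style pixels sample the data line
// on each rising clock edge.
void shiftByte(unsigned char b)
{
    for (int i = 7; i >= 0; i--)
    {
        digitalWrite(kClockPin, LOW);
        digitalWrite(kDataPin, (b >> i) & 1);
        digitalWrite(kClockPin, HIGH);
    }
}

// Push a full frame of colors (3 bytes per LED), then latch by holding
// the clock low for more than 500 microseconds.
void showFrame(const unsigned char* rgb, int numLeds)
{
    for (int i = 0; i < numLeds * 3; i++)
        shiftByte(rgb[i]);
    digitalWrite(kClockPin, LOW);
    delayMicroseconds(600);
}

Before the render loop you’d call wiringPiSetup() once and set both pins to OUTPUT with pinMode().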

I used OpenCV to capture frames from the webcam, jansson as a JSON library for saving settings, Log4CPlus as a logging library, and Boost for network communication with a client program running on my PC.
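As an example of the settings side, writing preferences out with jansson only takes a few calls (a sketch; the key names are borrowed from the server’s log output further down, but the actual schema may differ):

#include <jansson.h>

// Serialize a couple of settings to a JSON file.
void savePreferences(const char* path)
{
    json_t* root = json_object();
    json_object_set_new(root, "totalFadeTimeMS", json_integer(200));
    json_object_set_new(root, "camBrightness", json_integer(30));
    json_dump_file(root, path, JSON_INDENT(2));
    json_decref(root);
}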

The client program is used to tweak the camera’s color settings and to set the capture bounds of the frame. It communicates directly with the server on the Raspberry Pi. I’ll go into more detail on the client app in Part 3. Here’s how it looks:

At this point I had everything pretty much working, and the main task now was to clean everything up and attach the Pi to the frame.

Capturing at the full 640×480 resolution was really slow; I expect this is due to the limited USB bandwidth on the Pi. So I had to go all the way down to the lowest capture size: 160×120. This gives me a relatively stable capture framerate of 20 FPS.
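For reference, requesting the low resolution from OpenCV looks roughly like this (a sketch using the OpenCV 2.4-era C++ API; newer versions spell the property constants cv::CAP_PROP_*):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // /dev/video0
    if (!cap.isOpened())
        return 1;

    // Request the smallest capture size to stay within the Pi's USB bandwidth.
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);

    cv::Mat frame;
    while (cap.read(frame))
    {
        // ...sample colors and push them to the LEDs...
    }
    return 0;
}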

The LED frame lags around 2 frames behind the TV. This is noticeable, but I’ve found that you eventually get used to it. Blending previous frames together with the current one also helps a lot, giving smoother color transitions on the LEDs.
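The blending can be as simple as an exponential moving average per pixel. A minimal sketch (the 0.5 weight is illustrative; tune it to taste):

#include <opencv2/opencv.hpp>

// Blend the new frame into a running average so the LED colors ease
// toward their new values instead of snapping.
void blendFrame(const cv::Mat& current, cv::Mat& smoothed, double alpha = 0.5)
{
    if (smoothed.empty())
        current.copyTo(smoothed);
    else // smoothed = alpha*current + (1-alpha)*smoothed
        cv::addWeighted(current, alpha, smoothed, 1.0 - alpha, 0.0, smoothed);
}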

The other reason for the slow framerate is the Auto Exposure setting on the webcam. Basically, when the image on screen is dark, the webcam takes longer to capture the frame (longer exposure). The driver I’m currently using on the Pi (using Debian Linux) doesn’t allow me to set this to a fixed exposure. I’m still trying alternatives and will post once I have a solution.

All in all, I’m pretty pleased with the results. If I manage to turn off Auto Exposure, I think I might be able to hit 30 FPS.
Ninja Edit: I finally managed to set a fixed exposure, and I’m getting a smooth 30 FPS now!
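For anyone fighting the same problem: on drivers that expose the controls, fixing the exposure comes down to two V4L2 ioctls. A sketch (whether your webcam’s driver accepts these controls varies, so treat it as a starting point rather than my exact fix):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Switch the camera to manual exposure and pin the exposure time.
bool setFixedExposure(const char* device, int exposure)
{
    int fd = open(device, O_RDWR);
    if (fd < 0)
        return false;

    v4l2_control ctrl = {};
    ctrl.id = V4L2_CID_EXPOSURE_AUTO;
    ctrl.value = V4L2_EXPOSURE_MANUAL;
    bool ok = ioctl(fd, VIDIOC_S_CTRL, &ctrl) >= 0;

    ctrl.id = V4L2_CID_EXPOSURE_ABSOLUTE;
    ctrl.value = exposure; // UVC cameras count this in 100us units
    ok = ok && ioctl(fd, VIDIOC_S_CTRL, &ctrl) >= 0;

    close(fd);
    return ok;
}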

I’m eventually going to write Part 3, which is more about the client. All the code is on GitHub, so don’t wait up.

If you’re interested in building one of these, here are some resources I used to get me started:
http://siliconrepublic.blogspot.de/2011/02/arduino-based-pc-ambient-lighting.html
http://learn.adafruit.com/adalight-diy-ambient-tv-lighting/overview

  1. Hi Waldo! I just want to say thanks for posting this! I have been waiting for this 2nd tutorial. Like I mentioned, I pretty much checked your site every day! I actually ‘pinned’ this website in my Chrome browser.

    I found out about your method during the summer break, and I’m currently taking several classes this semester, so I can’t jump on this project right away! But I promise I will follow your instructions and make one like this soon!

    Thank you very much again! Keep up the good work and have an enjoyable day!

    Best,
    Chen

      1. Has there been any progress using the Raspberry Pi Camera Module?

        I’ve really enjoyed reading your progress in these two articles.

        1. Thanks Alan, I haven’t really had the time to get back into it yet. I’m currently just enjoying it as-is, since I got it to run at 30fps smoothly.

  2. Waldo, this is awesome! I’m new to all of this, but I’m looking to follow your process. I ran into some problems trying to turn the GitHub repo for the PyClient into an executable to run. Do you have any tips on how to best get this client running? Something like py2exe?

    Thanks! Keep up the awesome projects.

  3. I just got all the HW for the project, but since I’m not very good at Linux, it would be highly appreciated if you could make an image of your Pi SW and put it somewhere for download.

    I know that’s a lot to ask, but it would certainly be a shortcut that makes the project come true for me.

    Otherwise I will try to do it the hard way, and once in a while ask you for some tips :-)

    Thanks anyway for a very interesting project. It was just what I was looking for!!

  4. Hi again!
    I have now compiled everything and all the software seems to run fine. I have one problem though: my camera is not initialized, so /dev/video0 is not found.
    I have looked around and I’m trying to modprobe bcm2835-v4l2, but that gives me an error.
    I have the first version of the Pi and wonder if you have any hints for making the camera work.

  5. So I’ve tried this with Arch Linux, Raspbian, and Ubuntu MATE. I got all the necessary components updated/installed on each distro (gcc, pkg-config, git, make, cmake, jansson, boost, wiringpi, opencv and libwebcam), but for some reason compiling AmbientLightServer fails at some point or another. I am not a strong Linux guy, but I have done my homework to get everything installed without issue.

  6. Hey Waldo.

    Is there any new progress on the project? Do you still use your camera for ambilight?
    I’m very interested in this project and would like to adapt it to my own ambilight setup.

    Hello. I had an ambilight system using HDMI pass-through that worked perfectly,
    but now I’ve decided to make the system use a webcam like you did.
    My webcam has a 3.6mm focal length, and my ambilight system takes a CVBS video input, calculates the colors and sends them out to the LED strip.
    I just want to know about one part of your work: with your camera, the video definitely comes out with fisheye distortion. How did you straighten the video (or, if you put the camera at the side of the screen, how did you get a straight image)?
    I ask because I’m going to feed the video from my webcam directly into the ambilight system.
    Thanks

    1. I wrote an app that processed the video from the webcam and sampled the colors in specific areas of the picture. The article links to the code.
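      The sampling itself boils down to averaging a small rectangle of pixels per LED. A minimal sketch of the idea:

      #include <opencv2/opencv.hpp>

      // Average color of one sample region; each LED gets one such
      // region along the edge of the picture.
      cv::Scalar sampleRegion(const cv::Mat& frame, const cv::Rect& area)
      {
          return cv::mean(frame(area)); // per-channel (BGR) averages
      }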

  8. I got this compiled and running on Raspbian Lite. I couldn’t get it working on Arch Linux, as there was an OpenCV dependency that I couldn’t overcome. I will post the complete steps directly after this post. (I happen to have the exact same camera you used.)

    The server gives me the following when I start it.

    pi@raspberrypi:~/AmbientLightServer$ sh run.sh
    [2017-09-13 22:11:21.086] [INFO] Loading Preferences from file:
    [2017-09-13 22:11:21.087] [DEBUG] – boundsTopLeft = (0.2,0.2)
    [2017-09-13 22:11:21.087] [DEBUG] – boundsTopRight = (0.8,0.2)
    [2017-09-13 22:11:21.087] [DEBUG] – boundsBottomRight = (0.8,0.8)
    [2017-09-13 22:11:21.087] [DEBUG] – boundsBottomLeft = (0.2,0.8)
    [2017-09-13 22:11:21.087] [DEBUG] – totalFadeTimeMS = 200
    [2017-09-13 22:11:21.087] [DEBUG] – fixedColorEnabled = false
    [2017-09-13 22:11:21.088] [DEBUG] – fixedColor = (1,0,0)
    [2017-09-13 22:11:21.088] [DEBUG] – camBrightness = 30
    [2017-09-13 22:11:21.088] [DEBUG] – camContrast = 74
    [2017-09-13 22:11:21.088] [DEBUG] – camSaturation = 117
    [2017-09-13 22:11:21.088] [DEBUG] – camGain = 20
    [2017-09-13 22:11:21.088] [INFO] Saving Preferences
    [2017-09-13 22:11:21.089] [INFO] LEDController: setting up with ClockPin1=0 DataPin1=1 ClockPin2=3 DataPin2=4
    HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP
    HIGHGUI ERROR: V4L: setting property #5 is not supported
    HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP
    [2017-09-13 22:11:21.309] [INFO] TrapezoidSampler.SetSize: Rows = 16 Cols = 9
    [2017-09-13 22:11:21.310] [INFO] TrapezoidSampler.UpdatePoints
    [2017-09-13 22:11:21.311] [INFO] TrapezoidSampler.UpdateSamplerAreas
    [2017-09-13 22:11:21.312] [INFO] Server: staring on port 13555
    [2017-09-13 22:11:21.313] [DEBUG] Server: accepting new connections…
    [2017-09-13 22:11:22.366] [DEBUG] Average deltaTime: 56.6691ms (18 FPS)
    [2017-09-13 22:11:23.366] [DEBUG] Average deltaTime: 33.3339ms (30 FPS)
    [2017-09-13 22:11:24.398] [DEBUG] Average deltaTime: 33.2877ms (31 FPS)
    [2017-09-13 22:11:25.398] [DEBUG] Average deltaTime: 33.3554ms (30 FPS)
    [2017-09-13 22:11:26.430] [DEBUG] Average deltaTime: 33.2875ms (31 FPS)
    etc…

    My question is: can you tell me more about the client you spoke of? I also need the LED pin connections, but I am going to dive into the code to see if I can determine them myself.

  9. As promised, here are my install steps using Raspbian Lite.

    ***begin***
    install raspbian lite

    sudo apt-get update
    sudo apt-get upgrade

    sudo apt-get install build-essential cmake pkg-config

    #Install OpenCV
    # next two are for OpenCV (first required, second optional but I installed them just in case)
    sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
    sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
    cd ~
    wget https://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.13/opencv-2.4.13.zip
    unzip opencv-2.4.13.zip
    rm opencv-2.4.13.zip
    cd opencv-2.4.13/
    mkdir build
    cd build

    #below I had to add “-D ENABLE_PRECOMPILED_HEADERS=OFF” to overcome a dependency issue
    cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -D ENABLE_PRECOMPILED_HEADERS=OFF ..
    make
    sudo make install

    #Install Jansson
    cd ~
    wget http://www.digip.org/jansson/releases/jansson-2.10.tar.gz
    tar -zxvf jansson-2.10.tar.gz
    rm jansson-2.10.tar.gz
    cd jansson-2.10/
    ./configure
    make
    sudo make check
    sudo make install

    #Install log4cplus
    wget https://sourceforge.net/projects/log4cplus/files/log4cplus-stable/1.1.3/log4cplus-1.1.3-rc8.tar.gz
    tar -zxvf log4cplus-1.1.3-rc8.tar.gz
    rm log4cplus-1.1.3-rc8.tar.gz
    cd log4cplus-1.1.3-rc8/
    ./configure
    make
    sudo make install

    #install boost
    wget https://dl.bintray.com/boostorg/release/1.65.1/source/boost_1_65_1.tar.gz
    tar -zxvf boost_1_65_1.tar.gz
    rm boost_1_65_1.tar.gz
    cd boost_1_65_1/
    ./bootstrap.sh
    sudo ./b2 install

    #install wiringpi
    sudo apt-get install wiringpi

    #install libwebcam
    #libxml2-dev was needed as a dependency of libwebcam so it was installed first
    sudo apt-get install libxml2-dev
    cd ~
    wget https://sourceforge.net/projects/libwebcam/files/source/libwebcam-src-0.2.5.tar.gz
    tar -zxvf libwebcam-src-0.2.5.tar.gz
    rm libwebcam-src-0.2.5.tar.gz
    cd libwebcam-0.2.5/
    mkdir build
    cd build
    cmake ..
    make
    sudo make install

    #install ambilightserver
    cd ~
    git clone https://github.com/waldobronchart/AmbientLightServer
    cd AmbientLightServer/
    mkdir build
    make

    #to start the server run the following from the AmbientLightServer directory
    sh run.sh

    ***end***

    I haven’t gotten any further than this. I hope to figure out the client and get the LEDs connected, but I am still researching that part.
