
Hands-On With Amazon AWS DeepLens

I recently received my AWS DeepLens device. I’m by no means a machine learning expert. However, Andy Jassy’s announcement of the device at AWS re:Invent 2017 implied that the DeepLens would put machine learning and computer vision in the hands of non-experts and make it easy. So, let’s try out one of AWS’s pre-trained samples and see just how easy this device is to use.

[Image: the AWS DeepLens device]

Device Setup

Setup was a little bit tricky. First, you activate the device through your AWS Console, which gives you a certificate package that you must save. The device broadcasts its own Wi-Fi SSID and password; you connect to that network and visit https://deeplens.amazon.net/ to get started (this address routes to a local address registered on the device). Then you configure the device to connect to your regular Wi-Fi and upload the certificate package that you downloaded while registering.

So far, setup is a little more challenging than for your standard consumer-grade device, but not too much. When the device reboots, it connects to AWS and should now be visible in the console. This means you’re ready to load a model and start using the DeepLens!

[Image: the registered DeepLens device in the AWS Console]

[Source: AWS Console]

Loading a Project

In a future article I may go into more depth on how to train a model using AWS SageMaker, but for today I wanted something easy. Fortunately, AWS provides some pre-made projects with trained models that are ready to deploy to your device. Getting started and seeing what the device can do is as easy as picking a project from a menu!

[Image: DeepLens project templates in the AWS Console]

[Source: AWS Console]

There are a few different templates to choose from, and they show some of the potential of the DeepLens device. Object and face recognition are the expected starter projects for such a device, but “Artistic Style Transfer” shows how DeepLens can dynamically transform the video it takes in, and the “Action Recognition” and “Head Pose Detection” projects show how computer vision on video can detect complex sets of motion rather than just objects. When you create one of these template projects, other resources, such as the Lambda function that does the recognition work, are provisioned for you (a sketch of what such a function looks like follows below).
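Under the hood, each project pairs a trained model with a Lambda function that runs on the device via AWS Greengrass. Here’s a minimal sketch of what such a function can look like, modeled on AWS’s published DeepLens samples. The model path, IoT topic name, input size, and 0.5 confidence threshold are placeholder assumptions, and the awscam and greengrasssdk modules exist only on the device itself:

```python
import json
from threading import Thread

import awscam          # DeepLens device SDK (available on the device only)
import cv2             # OpenCV, preinstalled on the device
import greengrasssdk   # AWS IoT Greengrass SDK

MODEL_PATH = '/opt/awscam/artifacts/my_model.xml'  # placeholder: the project sets the real path
IOT_TOPIC = 'deeplens/inference/output'            # placeholder topic name

def infinite_infer_run():
    client = greengrasssdk.client('iot-data')
    model = awscam.Model(MODEL_PATH, {'GPU': 1})  # load the model onto the onboard GPU
    input_size = (300, 300)  # the SSD-style sample models expect 300x300 input

    while True:
        ret, frame = awscam.getLastFrame()  # grab the latest camera frame
        if not ret:
            continue
        resized = cv2.resize(frame, input_size)
        # Run inference and parse the raw output into label/probability/box dicts
        results = model.parseResult('ssd', model.doInference(resized))['ssd']
        detections = [r for r in results if r['prob'] > 0.5]  # keep confident hits
        client.publish(topic=IOT_TOPIC, payload=json.dumps(detections))

# The sample functions start the inference loop in a background thread on load
Thread(target=infinite_infer_run).start()
```

The published detections show up under the device’s topic in the AWS IoT console, which is one handy way to confirm the model is running even before you can see any video output.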

[Image: creating a project in the AWS Console]

[Source: AWS Console]

Once you’ve created your project, you simply deploy the model to the DeepLens with a few clicks. The console announces when the deployment is complete and your DeepLens is working.

Seeing the Results

Next, you’ll probably want to see some output. Here’s where I ran into a tiny bit of trouble. To view the stream in your browser, you need to install the streaming certificates, which I couldn’t do successfully. Instead, I connected a keyboard and mouse to the device’s USB ports and used a micro HDMI cable to connect it to my TV. The computer inside the DeepLens runs a customized Ubuntu. By issuing some mplayer commands found in the DeepLens Developer Guide, I was able to see the output of the running model.
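For reference, the Developer Guide’s commands look roughly like the following; the project stream’s output path in particular can vary by project, so treat these as a starting point:

```
# View the raw camera stream
mplayer -demuxer lavf /opt/awscam/out/ch1_out.h264

# View the processed project stream (output path may vary by project)
mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg
```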

[Image: the project’s output stream]

Either my cat isn’t very good at being a cat, or the “Cat and Dog Detection” model could use some improvement. Either way, I think the DeepLens has some real potential, and I’ll continue to experiment with it. It certainly seems like a device such as this could be used in some powerful innovation projects, especially with a well-trained model.
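If I wanted to dig into why the cat went undetected, one obvious first step would be to log every detection regardless of confidence, to see whether the cat is missed entirely or merely scored below the threshold. A quick sketch, reusing the hypothetical names from the Lambda sketch above:

```python
# Temporarily log all detections, not just the confident ones, so we can
# tell a low-confidence cat apart from no cat at all.
for obj in model.parseResult('ssd', model.doInference(resized))['ssd']:
    print('label={} prob={:.2f}'.format(obj['label'], obj['prob']))
```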


Have you had a chance to try out DeepLens? What experiments would you do with one? Comment below and join the conversation!

Samuel Thurston, Software Engineer

Samuel Thurston is a Software Engineer and Cloud Practice Lead for ISE, architecting and implementing cloud solutions for enterprise clients. He enjoys running, yoga, and cooking, and is frequently found on the disc golf course.