Back from the hackathon


Matthieu Boussard
Oct 26, 2015 | Event, IoT


We made it, the hackathon is over! After 3 days of work, we finished the HackTheEcho Hackathon, and we won 2nd place in the Best Project category! Our team was composed of four members of craft ai, acting as IoT and AI experts, and a member from Radio France Innovation as our audio fiction and sound expert.

Tell me more about "The Teller"

The Teller helps you tell a story while your connected objects, like connected light bulbs and speakers, create the atmosphere. In a previous post [Hackathon "Hack The Echo"]({% posturl 2015-10-16-hacktheecho %}), we introduced the concept of _The Teller_. Now, after this hackathon, we have a full presentation of this immersive storytelling app.

The Pitch

The goal of this hackathon was not only to create a cool demo, but also to convince a jury that this was a good and commercially viable idea. We had 5 minutes to present the product and the business model, and also to give a demo. You can check the slides of the presentation (in French).

Technical aspects

The workflow is to design moods, which define atmospheres such as a dark forest, a campfire, or thunder and lightning. Then craft ai selects the right mood according to the story.

The mood designer

We built a backend to design moods, where a mood is a collection of lights, sounds, music, and pictures. For instance, for a storm, you can select a set of sounds, lights, colors, and music.

The mood designer
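As a rough sketch, a mood can be thought of as a simple record of assets that the backend stores and the director picks from. The field names and values below are illustrative, not the actual backend schema:

```python
import random

# Illustrative mood record: a named collection of assets.
# Field names are assumptions, not the real craft ai backend schema.
storm_mood = {
    "name": "storm",
    "sounds": ["rain_loop.mp3", "thunder_crack.mp3"],
    "musics": ["ominous_strings.mp3"],
    "lights": [
        {"color": "blue", "brightness": 0.2},
        {"color": "white", "brightness": 1.0},  # lightning flash
    ],
    "pictures": ["dark_clouds.jpg", "lightning.jpg"],
}

def pick(mood, category):
    """Pick a random asset from one category of a mood.

    Random selection is what keeps the atmosphere coherent
    but non-repetitive across playbacks.
    """
    return random.choice(mood[category])
```
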

The behavior

A scene is the scenarization of sounds, lights, etc. In The Teller, craft ai plays the role of the director. The presets are given through the "moods", and craft ai selects and organizes them. For instance, when the story deals with thunder, craft ai selects the thunder mood as the current mood, and the thunder behavior tree (below) is called. This BT plays in parallel:

  • A sound from the given mood,
  • A light strobe,
  • A random picture from this mood.

This gave us a very flexible way of designing a wide variety of atmospheres. Thanks to the moods, we can also provide a non-repetitive but coherent atmosphere by selecting random items from the mood.

A thunder scene


We defined an interaction model in the Alexa Skills Kit to trigger and select the proper story.

- Me: "Alexa, tell me an AWESOME story!"
- Alexa: "What kind of story would you like?"
- Me: "about a forest"
- Alexa: "Here's your story: the dark forest."
... The Teller creates the show ...
- Me: "Alexa, I begin to fall asleep."
- Alexa: "Sleep well."

This dialog triggers a Lambda function that sends a start request to The Teller.
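For illustration, a minimal handler behind this dialog could look like the sketch below. The intent name, the `Topic` slot, and the `start_story` helper are assumptions for the example, not the actual skill definition; the request/response shapes follow the Alexa Skills Kit JSON interface:

```python
# Hypothetical stand-in for the real request to The Teller's backend.
def start_story(topic):
    return "the dark " + topic

def lambda_handler(event, context):
    """Sketch of an Alexa Skills Kit Lambda handler.

    Reads the requested topic from the intent's slot, starts the
    matching story, and answers with plain-text speech.
    """
    intent = event["request"]["intent"]
    topic = intent["slots"]["Topic"]["value"]
    title = start_story(topic)
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Here's your story: {}.".format(title),
            }
        },
    }
```
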

Connection to the LIFX

The LIFX bulb provides a REST API, so we simply had to create four craft ai actions: LiFXChangeColor, LiFXStrobe, LiFXBreath, and LiFXOff.
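Each action boils down to one authenticated HTTP call. The sketch below shows roughly what those calls could look like using only the standard library; the endpoint paths follow the public LIFX HTTP API, the token is a placeholder, and the function names mirror the craft ai actions without being the actual implementation:

```python
import json
import urllib.request

LIFX_API = "https://api.lifx.com/v1/lights/all"
TOKEN = "YOUR_LIFX_TOKEN"  # placeholder: personal access token from LIFX Cloud

def lifx_request(method, path, payload):
    """Build an authenticated request against the LIFX HTTP API."""
    return urllib.request.Request(
        LIFX_API + path,
        data=json.dumps(payload).encode(),
        method=method,
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
    )
    # urllib.request.urlopen(...) on the returned request actually sends it.

# The four craft ai actions then each map to a single API call:
def change_color(color):
    return lifx_request("PUT", "/state", {"color": color})

def strobe(color, period=0.1, cycles=50):
    return lifx_request("POST", "/effects/pulse",
                        {"color": color, "period": period, "cycles": cycles})

def breath(color, period=2.0, cycles=10):
    return lifx_request("POST", "/effects/breathe",
                        {"color": color, "period": period, "cycles": cycles})

def off():
    return lifx_request("PUT", "/state", {"power": "off"})
```
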

Future of "The Teller"

We would like to continue this project. First, we would like The Teller to be able to follow a live voice, not a recorded one as in this demo. The architecture is ready for this; we just have to find the proper speech-to-text API. We also used a limited number of connected objects, and we would like to extend to at least specialized sound.

Our team