Apps that learn with agent-based modeling
craft ai enables developers to create applications that learn from their users.
At least, that’s what we claim! In this article, I’ll explain why that’s the case and, more importantly, why it is a feature specific to our platform.
Basically, craft ai is about automating how an application uses contextual data, such as the output of a sensor or the result of a webservice query, to provide intelligent services to its user. At its core, craft ai allows a developer to implement rules defining which action to take when a specific set of conditions is met. But there’s a twist: in craft ai you don’t define the rules themselves, you create the behavior of an agent that applies them.
Let me explain how that’s different.
Suppose we have a motorized blind and a presence detector. We want the blind to open automatically when our user enters the room.
With a simple rule system such as IFTTT or Zapier, this is fairly straightforward. When a “someone’s there” event is triggered, we ask the blind to open. Of course, you probably also want the opposite rule, closing the blind when an “everyone’s left” event is triggered.
In craft ai, you create an agent that waits until someone enters the room, then opens the blind. It then waits until the last person leaves the room and closes the blind. Instead of two distinct rules, a single agent behavior is created.
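To make the contrast concrete, here is a minimal sketch in Python. This is illustrative only, not the actual craft ai API: the two stateless rules are plain callbacks, while the agent is a single object that receives both events and carries state between them.

```python
# Illustrative sketch only -- not the craft ai API.

class Blind:
    """A toy motorized blind."""
    def __init__(self):
        self.position = "closed"
    def open(self):
        self.position = "open"
    def close(self):
        self.position = "closed"

# Rule-based approach: two independent, stateless callbacks.
def on_someones_there(blind):
    blind.open()

def on_everyones_left(blind):
    blind.close()

# Agent-based approach: one object handles both events and
# keeps track of the room's state between executions.
class BlindAgent:
    def __init__(self, blind):
        self.blind = blind
        self.occupied = False

    def handle(self, event):
        if event == "someone's there" and not self.occupied:
            self.occupied = True
            self.blind.open()
        elif event == "everyone's left" and self.occupied:
            self.occupied = False
            self.blind.close()
```

Because the agent owns the `occupied` flag, it naturally knows that a closing follows an opening, which is exactly the information the stateless rules cannot share.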
Not that different? Let’s see about that…
Learning from the user
Let’s go back to our IFTTT-like system. Over the course of one person entering then leaving the room, the two rules are instantiated by their events and destroyed once the command is sent to the blind. It’s not straightforward to carry information from one execution to the next; it’s even difficult to know that the closing of the blind is related to its opening.
In craft ai, the same agent is responsible for both actions and any future ones. Because the opening and closing are scheduled by the same “object”, it can keep track of what happened over time, collect data, and use what it has learned.
In our simple use case, if the user manually lowers or raises the blind to her taste, the craft ai agent can memorize her preference and apply it the next time she enters the room. With no additional complexity, our blind-opener application is now personalized to the room user’s taste!
Of course, this can easily be extended. Maybe the user’s tastes depend on the weather or the time of day; with a simple connection to the relevant webservices, we can store separate personalized settings for a sunny afternoon or a foggy morning. Using the blind’s existing manual controls as a simple implicit setting, we reach Nest-level magic. This would be much harder to achieve with rules.
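The preference-learning idea above can be sketched by extending the agent with a small memory keyed by context. Again, this is illustrative Python, not the real craft ai API, and the context tuple (weather, time of day) is an assumption for the example: a manual adjustment records the chosen position under the current context, and the next entry in that context replays it.

```python
# Illustrative sketch only -- not the craft ai API.

class LearningBlindAgent:
    """Remembers the user's preferred blind position per
    context and reapplies it when she enters the room."""
    DEFAULT_POSITION = 100  # percent open

    def __init__(self):
        # Maps a context (e.g. ("sunny", "afternoon"))
        # to the position the user last chose in it.
        self.preferences = {}

    def on_enter(self, context):
        # Apply the learned preference for this context, if any.
        return self.preferences.get(context, self.DEFAULT_POSITION)

    def on_manual_adjust(self, context, position):
        # The existing manual control acts as implicit feedback.
        self.preferences[context] = position

agent = LearningBlindAgent()
agent.on_manual_adjust(("sunny", "afternoon"), 40)   # user dims the room
agent.on_enter(("sunny", "afternoon"))               # replays 40
agent.on_enter(("foggy", "morning"))                 # falls back to default
```

The point is that this memory lives in the agent itself; a stateless rule engine would need an external store and extra plumbing to achieve the same effect.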
We believe that thinking in terms of agents makes it easy to develop applications that appear to “understand” their user. You can try it for yourself by registering for our beta today!