Startups that succeed are those that succeed in creating value, and the value we seek is the betterment of human processes. We join accelerators to speed up the journey, but along the way entrepreneurs discover obstacles, and it is overcoming those obstacles that reveals the value of our endeavors. Here is Chooch's own example. (Blog post written & contributed by Chooch.ai)

When Chooch AI joined Founders Space, we thought we were doing Visual Search: a way to search for products in videos and images. We thought we were building an advertising platform with some simple machine learning that could serve ads, among other things, because we could identify objects and tag images. What we noticed, though, was that training an AI to learn had bottlenecks, many bottlenecks. In fact, solving one bottleneck after another was the key to unlocking the Computer Vision + AI solution we call Visual AI As A Service, or simply Visual AI. Chooch AI can now handle any visual task.

We focused on AI training and on layering different types of neural networks. Some people may be smart enough to build an AI by slapping together TensorFlow or similar frameworks, but the real problem is this: how do you layer the different neural networks, train them, make sure they work in synchrony, and then scale it all? An AI is just an empty shell, a model with no content, a brain with no learning, a robot as dumb as an empty tin can. Even now, four years into building Chooch AI, we are still teaching our AI content. We are also teaching it to learn, making it more and more trainable.
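To make the layering idea concrete, here is a minimal sketch, not our production pipeline and using untrained placeholder models, of how one network's output can feed the next: a small localizer stage proposes a region, and a classifier stage labels the crop.

```python
# Minimal sketch of chaining two neural networks (placeholder models, not Chooch's pipeline).
import numpy as np
import tensorflow as tf

# Stage 1: a lightweight "localizer" that predicts a bounding box for a frame.
localizer = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="sigmoid"),  # (x, y, w, h) as fractions of the frame
])

# Stage 2: a classifier that labels the cropped region (10 placeholder classes).
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def run_pipeline(frame: np.ndarray) -> np.ndarray:
    """Chain the two stages: localize, crop, resize, then classify."""
    box = localizer.predict(frame[None, ...], verbose=0)[0]
    h, w = frame.shape[:2]
    x0, y0 = int(box[0] * w), int(box[1] * h)
    x1 = min(w, x0 + max(1, int(box[2] * w)))
    y1 = min(h, y0 + max(1, int(box[3] * h)))
    crop = tf.image.resize(frame[y0:y1, x0:x1], (128, 128))
    return classifier.predict(crop[None, ...], verbose=0)[0]

# Example: a random frame stands in for a real video frame.
print(run_pipeline(np.random.rand(224, 224, 3).astype("float32")))
```

Each stage has to be trained, versioned, and kept in sync with the others, which is exactly where the bottlenecks pile up as the number of models grows.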

The models we keep building relate visual data to perceptions: multiple neural networks generate predictions and validate each other, together representing the visual world in some form. This means we are not just teaching Chooch everything visual in the world, content of any kind; we are also teaching Chooch how to see the world in context, creating models that represent perceptions such as spotting an anomaly, interpreting radar data, or counting objects, among the many things the human brain does naturally.
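As one simplified illustration of the "validate each other" idea (an assumed mechanism, not our actual implementation), two models can cross-check a prediction and only accept it when they agree with sufficient confidence:

```python
# Illustrative consensus check between two models (assumed, simplified mechanism).
import numpy as np

def consensus_prediction(probs_a: np.ndarray, probs_b: np.ndarray,
                         min_confidence: float = 0.6):
    """Return a label only if both models agree and are confident enough."""
    label_a, label_b = int(np.argmax(probs_a)), int(np.argmax(probs_b))
    confidence = float(min(probs_a[label_a], probs_b[label_b]))
    if label_a == label_b and confidence >= min_confidence:
        return label_a, confidence
    return None, confidence  # disagreement or low confidence: defer or flag for review

# Example with made-up softmax outputs over three classes.
print(consensus_prediction(np.array([0.1, 0.8, 0.1]), np.array([0.2, 0.7, 0.1])))  # agree
print(consensus_prediction(np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.7, 0.1])))  # disagree
```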

We just completed a face realness model, known as liveness detection, to make sure facial authentication is performed on a live face. This solution was then benchmarked by a large international software development company for a top 100 global bank, both of which shall remain unnamed for now. Chooch AI was the only Visual AI system that not only flawlessly identified the people it was trained to recognize, but also knew when the camera was not being shown a live face, even when masks or videos were so realistic they fooled humans.
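For readers curious what a liveness cue can look like in code, here is a toy example based on one simple signal (an assumed illustration, not our model): a replayed photo shows almost no frame-to-frame variation, while a live face exhibits micro-motion such as blinks and small head movements.

```python
# Toy liveness cue (illustrative only): measure frame-to-frame change in face crops.
import numpy as np

def simple_liveness_score(frames: list) -> float:
    """Mean absolute pixel change between consecutive grayscale face crops."""
    diffs = [np.mean(np.abs(frames[i + 1].astype(float) - frames[i].astype(float)))
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs))

def is_live(frames: list, threshold: float = 2.0) -> bool:
    # The threshold here is a placeholder; a real system would learn it from data.
    return simple_liveness_score(frames) > threshold

# Example: a "replayed photo" (identical frames) vs. a "live" sequence with micro-motion noise.
photo = [np.full((112, 112), 128, dtype=np.uint8)] * 10
live = [np.clip(128 + np.random.randn(112, 112) * 5, 0, 255).astype(np.uint8)
        for _ in range(10)]
print(is_live(photo), is_live(live))  # expected: False True
```

A production liveness model has to go far beyond a single cue like this, which is why it was such a stubborn bottleneck.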

Liveness detection is a bottleneck in the widespread adoption of facial authentication, and we've got it working. If you are a software developer, you can check out our facial authentication documentation posted on our site. If you work in startups but are not a developer, you can check out our AI iPhone app demo. Even with our app, you can train Chooch to recognize your face. Here's a demo video in which we train Chooch to recognize a sculpture.

A sculpture's face would never pass a liveness test, and likewise, startups that don't find breakthroughs in bottlenecks won't pass the value creation test. Why build a startup at all? To unlock some kind of bottleneck: a better process, a better product, a better service that meets a need in the market. This is called product-market fit, and we found a need for an end-to-end, highly trainable computer vision AI, both in visual content and in models.

Now go unlock your own bottlenecks, the way we did with liveness detection.