Bringing state of the art AI to the edge.

Full stack edge vision

01

LABEL

Our platform enables rapid, distributed dataset collection and annotation. No matter the vertical, we can help you scale your training data in the most accurate and affordable way possible.

Data quality and diversity are frequently the limiting factors in final model accuracy. Our team is made up of seasoned industry leaders from companies such as Google AI, who will help you identify pitfalls before you even start.

02

DEPLOY

All of our models are optimized to run efficiently on the edge. Whether you are deploying to a power-constrained robot or in a connectivity-constrained environment, our mobile models run in real time.


If you bring your own dataset, all architecture selection, hyperparameter tuning, model quantization, and hardware compatibility will be handled seamlessly under the hood.
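To make the quantization step concrete, here is a minimal, illustrative sketch of post-training affine quantization, the kind of transformation applied under the hood to shrink models for edge hardware. All names are hypothetical and this is not the platform's actual implementation:

```python
def quantize(weights, num_bits=8):
    """Map float weights to signed ints with a scale and zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a flat range
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized ints."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, 0.0, 0.5, 3.1]
q, s, z = quantize(weights)
recovered = dequantize(q, s, z)
```

Storing weights as 8-bit integers plus one scale and zero point per tensor cuts memory roughly fourfold versus float32, at the cost of a small, bounded rounding error per weight.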

03

ITERATE

Our platform automatically includes human-in-the-loop feedback to refine your models over time. As models are used in new environments, they will adapt, learn, and improve.

Human-in-the-loop feedback enables live tracking of production accuracy in the field. Use our iteration tools to track how well your models are really performing on user data.


PRODUCTS

DATA

Our mobile-optimized cloud platform enables you to manage and scale a distributed workforce collecting and annotating computer vision training data. All of our tools work in the browser on any device with a camera, so you can send a single link to temporary workers with a smartphone and they are ready to start collecting data. Monitor aggregate statistics from the summary page and arrive at a diverse, valuable dataset in no time at all. Once you're done, our platform enables single-click model training, right from the web dashboard.

EDGE VISION

Our edge vision SDK is specifically designed for low-cost, widely accessible edge hardware (e.g. a Raspberry Pi with an optional Edge TPU). Our software will automatically optimize your models to run efficiently on-device. No need for an internet connection: you'll get real-time vision signals without pinging a data center.
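The "real time" claim boils down to a simple constraint: with no data-center round trip, per-frame inference latency must fit the camera's frame budget. A minimal sketch of that check, where `run_model` is a stand-in stub for an on-device model call (not the SDK's actual API):

```python
import time

def run_model(frame):
    """Placeholder for on-device inference; returns a dummy label."""
    return "person"

def meets_realtime(frames, fps_target=30):
    """Check whether average per-frame latency fits the frame budget."""
    budget = 1.0 / fps_target  # e.g. ~33 ms per frame at 30 fps
    start = time.perf_counter()
    for frame in frames:
        run_model(frame)
    per_frame = (time.perf_counter() - start) / len(frames)
    return per_frame <= budget

ok = meets_realtime(frames=[None] * 100)
```

On real hardware you would substitute the actual model call and measure on representative input, since latency varies with resolution and accelerator availability.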

ACTIVE LEARNING

Our active learning product enables you to add humans in the loop to any on-device vision signal. We intelligently choose which images to send to workers for evaluation, so that you spend less and get more. A live production image stream is the best training data you can buy: watch as your models get better over time, without any wasted developer time.
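One common way to "intelligently choose which images to send to workers" is uncertainty sampling: rank frames by the entropy of the model's predicted class probabilities and send only the most uncertain ones for labeling. A hedged sketch with illustrative names (not necessarily the product's exact selection rule):

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions, budget):
    """Return indices of the `budget` most uncertain predictions."""
    ranked = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]),
                    reverse=True)
    return ranked[:budget]

preds = [
    [0.98, 0.01, 0.01],  # confident: not worth a worker's time
    [0.34, 0.33, 0.33],  # near-uniform: highest entropy, send for review
    [0.70, 0.20, 0.10],
]
picked = select_for_labeling(preds, budget=1)  # -> [1]
```

Spending the labeling budget where the model is least certain is what lets the same worker-hours yield more model improvement than labeling a random sample.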

Our dashboard will track model accuracy on the latest stream of production data, enabling you to identify issues in new environments that a normal test set cannot see coming.

 

STAY IN TOUCH

Santa Monica, CA

REQUEST DEMO

To schedule a product demo with one of our product consultants, please fill in your contact details.

© 2019 by PerceptionLabs, Inc.