10 Rules for Collaborative Artificial Intelligence
Levatas
While the exact number that analysts and AI experts cite may vary, the rule-of-thumb statistic is that only 1 out of every 10 artificial intelligence initiatives ever makes it into production. Having worked in this field for years alongside countless clients trying to make AI a reality, I don't find that statistic difficult to accept.
The specific reasons for this difficulty are vast and nuanced but, in my opinion, fall into three big buckets: translating the academic nature of data science into business terms, having the right data, and collaborating (or failing to collaborate) on building a solution that provides value.
Expanding on all of the above would be another piece altogether. Instead, I'd like to share some insight into what's possible with the third bucket: collaborative AI. Built as a genuinely collaborative framework, it is the key to mitigating the risks of bias, distrust, and concept drift in AI deployments. The problem is that collaborative AI is often misunderstood or misapplied, causing organizations to miss out on AI's full potential.
So, with that in mind, here are my 10 rules for selecting, designing, and building collaborative AI solutions.
1. For many applications, humans and AI have complementary strengths and weaknesses. Building collaborative AI is like diversifying your investment portfolio: every asset has its place, but too much of a good thing can be bad, so it's important to strike a balance. The best solutions combine complementary components to create a whole that is better than any individual part, and sometimes better than the sum of the parts.
2. To build collaborative AI, it's critical to select the right applications. Consider the data you have available and what insights you can glean from it. Consider management and organizational buy-in, and ask questions like: who needs to sign off on this kind of project? Choose a problem where a partial solution is valuable; it will make buy-in easier for the team. If possible, choose a problem where offline review is feasible and valuable, so you can prove the ROI benefit.
3. When it comes to AI, people's reactions land at opposite ends of a spectrum, from hype to fear, and it's important to avoid both. While AI has incredible capabilities, it isn't perfect, so don't expect perfection. Don't expect full automation either; there will usually be corner cases that require human oversight, even for the best models.
4. Before you begin your project, think carefully about how to measure success. Don't default to technical performance metrics such as precision, recall, or F1 score. Instead, set measurable success milestones that clearly show the effectiveness (or ineffectiveness) of your model in the terms that matter most to the business. Often, this translates into efficiency or cost-savings metrics.
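To make the contrast concrete, here is a minimal Python sketch that computes the usual technical metrics alongside a business-facing cost-savings milestone. The function names and all figures are illustrative assumptions, not from the article:

```python
def tech_metrics(tp, fp, fn):
    """Standard model metrics (precision, recall, F1). Useful for
    engineers, but on their own they don't tell the business
    whether the model saves money."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def monthly_savings(items_per_month, auto_rate, minutes_per_item, hourly_cost):
    """A business-facing milestone instead: dollars of review time
    the model saves each month by auto-handling a share of items.
    All inputs are illustrative estimates the team would supply."""
    hours_saved = items_per_month * auto_rate * minutes_per_item / 60
    return hours_saved * hourly_cost
```

For example, `tech_metrics(8, 2, 2)` yields 0.8 across the board, which means little to a stakeholder, while `monthly_savings(1200, 0.5, 10, 60)` reports 6000.0 dollars per month, a milestone the business can judge directly.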
5. Don't wait until you think your model is finished. If you've defined incremental success well, ship at the first successful milestone. This delivers value earlier, builds trust, and generates critical feedback on performance, which is invaluable for helping models learn and improve in future iterations.
6. Mistakes will happen when building AI models, and that's okay if you plan for them. Define the cost of mistakes and the value of successful predictions so there are no surprises for the team when there's a setback. Your team also needs to keep the model running, so define the cost of human review and think about how it scales.
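As a sketch of that planning exercise, the expected net value of a deployment can be priced with mistakes and human review included up front. The function name and all figures are illustrative assumptions:

```python
def deployment_value(n_predictions, accuracy, value_per_correct,
                     cost_per_mistake, review_rate, cost_per_review):
    """Expected net value of running the model, with the cost of
    mistakes and of human review priced in from the start, so a
    setback is a planned-for line item rather than a surprise."""
    correct = n_predictions * accuracy
    mistakes = n_predictions * (1 - accuracy)
    review_cost = n_predictions * review_rate * cost_per_review
    return (correct * value_per_correct
            - mistakes * cost_per_mistake
            - review_cost)
```

For example, `deployment_value(1000, 0.9, 5, 20, 0.2, 2)` prices 1,000 predictions at 90% accuracy and comes out to 2100.0: the model is worth running, and the mistake and review costs are visible rather than surprising.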
7. The most successful AI models for businesses don't operate completely on their own, so design your model with the end-to-end system in mind: it's one piece of the solution. Design your model to complement the strengths and weaknesses of your human subject matter experts. Often, the optimal model is skewed toward high recall and lower precision.
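One way to realize that skew is to choose the model's operating threshold by a recall target rather than by raw accuracy. A minimal sketch, assuming a score-based binary classifier; the 0.95 target and the function name are illustrative assumptions:

```python
def recall_first_threshold(scores, labels, min_recall=0.95):
    """Choose the highest score threshold that still meets a recall
    target. This skews the operating point toward high recall and
    lower precision: the model rarely misses a true positive, and
    human reviewers filter out the extra false positives."""
    best = None
    for t in sorted(set(scores)):  # candidate thresholds, ascending
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        if recall >= min_recall:
            best = t  # keep raising the bar while recall holds
    return best
```

Everything the model flags above this threshold goes to a person, which plays to the complementary strengths of each: the model never sleeps, and the expert never rubber-stamps a miss.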
8. Don't expect inference-time performance to match test performance, especially over time. Plan to use human review to verify and benchmark model performance.
9. Every human review is another data point to improve future performance. This is true of all data, but especially of instances where a subject matter expert and the AI disagree. Build a feedback loop, and automate the feedback and retraining process as much as possible, so your model can rapidly incorporate new data points and improve future predictions.
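A minimal sketch of such a loop, assuming a retraining callback; the class, parameter names, and disagreement weighting are illustrative assumptions, not a description of any particular product:

```python
class FeedbackLoop:
    """Collects human review outcomes and automatically triggers
    retraining once enough new examples have accumulated."""

    def __init__(self, retrain_fn, batch_size=100):
        self.retrain_fn = retrain_fn  # callback that retrains on a batch
        self.batch_size = batch_size
        self.buffer = []

    def record_review(self, features, model_pred, human_label):
        # Every review becomes a training example; disagreements are
        # weighted higher because they expose the model's blind spots.
        weight = 2.0 if model_pred != human_label else 1.0
        self.buffer.append((features, human_label, weight))
        if len(self.buffer) >= self.batch_size:
            self.retrain_fn(self.buffer)  # kick off automated retraining
            self.buffer = []
```

In practice the callback would enqueue a training job; the point of the sketch is that no one has to remember to retrain, because the reviews themselves drive the schedule.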
10. Use people to analyze the model's efficacy on the cases where it is most likely to fail, for example, images taken in low light or at night. The best collaborative systems focus subject matter expert time on these tough cases. Create rules to route them, or better yet, use a tool that does this automatically. This was the primary driver for Vinsa, a proprietary model we built that uses multiple data inputs and a human-in-the-loop approach to prioritize next steps and continuously optimize the AI model.
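Such routing rules can be as simple as a confidence threshold plus flags for known hard cases. A minimal sketch; the threshold, flag names, and function name are illustrative assumptions (this is not Vinsa's implementation):

```python
def route(confidence, hard_case_flags, threshold=0.8):
    """Decide whether a prediction is auto-accepted or sent to a
    human subject matter expert. `hard_case_flags` marks conditions
    the model is known to struggle with (e.g. low light, nighttime
    capture), so expert time is spent on the tough cases."""
    if confidence < threshold or any(hard_case_flags):
        return "human_review"
    return "auto_accept"
```

For example, a high-confidence daytime prediction like `route(0.95, [False, False])` is auto-accepted, while anything low-confidence (`route(0.6, [False, False])`) or flagged as a hard case (`route(0.95, [True, False])`) is queued for an expert.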