So, what next? Remember the business objective: how do we translate what our model has just learnt into user experience? Machine Learning is a tool that adds value to a business use case, and not every problem needs to be solved with Machine Learning. If a simple rule-based or analytical model can achieve equivalent performance, why complicate things?
Just like other engineering systems, a Machine Learning based solution needs to reach its users before it can add value to a business.
Secondly, it is important to understand that a Machine Learning system is a derived system, unlike instruction-based software driven by deterministic methods. While instruction-based software will perform the way its instructions are written forever, an ML system performs, at best, as well as the data it is trained upon.
In a perfectly static environment, a once-trained ML system would behave like an instruction-based one, but we live in an ever-changing world, and the volatility is only intensifying in the healthcare ecosystem.
The way users interact with the system changes, and with it the user events and strategic decisions the system produces.
Thirdly, as an engineering proponent, I often find ML workflows lacking in modularity and reproducibility. Machine Learning's iterative nature makes these principles even more important to incorporate.
While an evaluation metric helps us decide which model, configuration, or feature set performs better, and should always be chosen carefully, the chosen metric often does not translate directly into the business KPI. This calls for an evaluation approach that measures the KPI directly. Wait, should the performant model not be decided offline, rather than on production data? That still holds true, but there can be multiple candidate performant models, and deploying one of them directly might not be the best foot forward. Hence, a better approach is to pass them through a split-test experimentation funnel, where the control group should always be the one served without the aid of any ML model. The control group is important here: it helps answer whether Machine Learning is useful for the use case at all.
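The split-test funnel above can be sketched minimally: deterministically bucket each user into the control arm or one of the candidate-model arms, then compare the business KPI per arm. The function and arm names here are illustrative, not from the original text; real systems would add statistical significance testing on top.

```python
import hashlib


def assign_bucket(user_id, treatments, control="no_model"):
    """Deterministically assign a user to the control group or one of the
    candidate-model treatment arms, using a stable hash of the user id so
    the same user always lands in the same arm."""
    arms = [control] + list(treatments)
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return arms[digest % len(arms)]


def kpi_per_arm(events):
    """Average the observed business KPI per experiment arm.

    events: iterable of (arm_name, kpi_value) pairs logged from production.
    """
    totals, counts = {}, {}
    for arm, value in events:
        totals[arm] = totals.get(arm, 0.0) + value
        counts[arm] = counts.get(arm, 0) + 1
    return {arm: totals[arm] / counts[arm] for arm in totals}
```

Comparing each candidate arm against the `no_model` control directly answers the question the paragraph raises: whether the ML system beats the status quo on the KPI itself, not just on an offline metric.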
Lastly, it goes without saying that the data an ML model is trained upon originates from how users interact with the environment and the product, and there are likely other factors at play that impact user behaviour. This results in data drift, as the training data starts to differ in distribution from what current user activity produces. Therefore, one should update the model either on a periodic interval or on an event-driven signal, when performance drops below a certain threshold. Even better if your system supports training on-the-fly and your use case needs it.
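As a minimal sketch of the event-driven signal described above, one simple (assumed, not from the original text) drift check compares the mean of a feature in recent production traffic against its training-time mean; production systems typically use richer tests such as KS statistics or population stability index.

```python
def should_retrain(train_mean, train_std, recent_values, z_threshold=3.0):
    """Flag retraining when the recent feature mean drifts more than
    z_threshold standard errors away from the training-time mean.

    A crude mean-shift check: it will miss drifts that preserve the mean
    (e.g. variance changes), which is why real pipelines layer several
    such signals.
    """
    n = len(recent_values)
    recent_mean = sum(recent_values) / n
    stderr = train_std / (n ** 0.5)
    return abs(recent_mean - train_mean) / stderr > z_threshold
```

A scheduler can run this check alongside the periodic retraining interval, so whichever fires first triggers the update.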
Integration is the foremost pillar, the one which makes way for the others. It reiterates that for Machine Learning to be useful as a tool, it needs to be incorporated into an application that users interact with, directly or indirectly, through the outcomes the ML system produces.
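One hedged sketch of this integration point, tying back to the earlier remark that a rule-based model may suffice: the application calls a single prediction entry point, which falls back to the rule-based answer whenever the ML model is absent or fails. The names here are illustrative assumptions, not from the original text.

```python
def predict_with_fallback(model, features, rule_based_default):
    """Serve a prediction to the application, falling back to the
    rule-based answer if no model is deployed or the model call fails.

    model: a callable taking features, or None when no model is live.
    rule_based_default: callable implementing the non-ML baseline.
    """
    if model is None:
        return rule_based_default(features)
    try:
        return model(features)
    except Exception:
        # Never let a model failure break the user-facing application.
        return rule_based_default(features)
```

Keeping the rule-based path alive also gives the split-test funnel its control arm for free.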
Automation follows from the iterative nature of the ML workflow and promotes rapid iteration across configurations, features, models and, of course, the underlying historical data.
Reproducibility brings modularity into the workflow and results in easier debugging, process management and provenance across the data lineage. It is this modularity that supports the rapidness of the second pillar and provides much-needed transparency across teams; moreover, it facilitates agile development of ML solutions. Different teams often work on similar features, and this pillar removes that redundancy along the pipeline.
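A small sketch of what reproducibility and provenance can look like in practice, under my own assumptions about the setup: derive a stable run identifier from the experiment configuration, and seed all randomness from it, so the same configuration always yields the same run and the same artifacts.

```python
import hashlib
import json
import random


def run_id(config):
    """Derive a stable identifier from the experiment configuration, so
    identical configs always map to the same run (and its artifacts)."""
    canonical = json.dumps(config, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]


def seeded_rng(config):
    """Seed a private RNG from the config's run id, so every component
    consuming randomness is reproducible given the same config."""
    return random.Random(run_id(config))
```

Teams can then share and compare runs by id rather than re-running pipelines, which is one concrete way this pillar removes redundancy.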
Experimentation assures that the ML system developed indeed fulfils the business use case it is derived from. It serves as a checkmark against the previously decided baseline and performance threshold.
Audit is an important step along the pipeline. The performance of an ML system keeps deteriorating over time as the training data grows older and older. For the system to keep performing, it is important not just to take in fresh data over time, but also to research new features that capture new kinds of user interaction.
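The audit step can be made continuous with something like the rolling monitor below, a minimal sketch of my own (the class and parameter names are assumptions): score each prediction once its true outcome is known, and raise a signal when the windowed average drops below the agreed threshold.

```python
from collections import deque


class PerformanceMonitor:
    """Track a rolling window of per-prediction scores (e.g. 1.0 if the
    prediction matched the eventual label, else 0.0) and signal an audit
    when the windowed average falls below a threshold."""

    def __init__(self, window=100, threshold=0.8):
        self.scores = deque(maxlen=window)  # old scores age out automatically
        self.threshold = threshold

    def record(self, score):
        """Record one outcome; return True when performance has degraded
        enough to warrant an audit (or an event-driven retrain)."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.threshold
```

The same signal can feed the event-driven retraining mentioned earlier, closing the loop between audit and model updates.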
On an ending note, these pillars make up a major chunk of what from the outside looks like a single piece of ML code responsible for all the magic. I will keep saying it: it is never magic, and enabling the infrastructure is the greatest good you can do towards building many more Machine Learning applications.
-By Saransh Kumar, Data & Applied Scientist