Training Models for TinyML
When training models for large systems, we almost always focus on improving the accuracy of the model. Two approaches that generally increase a model's accuracy are using more high-quality data and using larger model architectures with more complex layers, operations, and data flows. On the data side, we have already discussed how data can be collected and preprocessed, and how feature engineering techniques can improve the performance of TinyML models.
However, unlike in the large-scale setting, when we train models for TinyML we cannot rely on large, complex custom layers. In fact, most of the commonly available model architectures, layers, and operations used in large models are unsupported on many TinyML devices. This creates a dilemma: we need to train models that maintain appreciable accuracy while remaining computationally simple and occupying little memory.
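As a concrete illustration, one common way to catch unsupported operations early is to convert a model to TensorFlow Lite while restricting it to builtin ops, since TensorFlow Lite for Microcontrollers implements a subset of those builtins. The sketch below assumes TensorFlow is installed; the input shape and layer sizes are illustrative placeholders, not a recommended architecture.

```python
import tensorflow as tf

# Illustrative tiny model built only from layers (Conv2D, pooling, Dense)
# that are commonly supported by the TFLite Micro op resolver.
# The shapes here are assumptions for the sake of the example.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Restrict conversion to builtin TFLite ops; conversion fails loudly
# if the graph uses an operation outside this set.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()

# The serialized flatbuffer size is a first proxy for on-device memory cost.
print(f"Converted model size: {len(tflite_model)} bytes")
```

Failing conversion at this stage is far cheaper than discovering an unsupported operation after flashing the model to a device.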
In this chapter, we will learn how to design and train models that suffer the least accuracy degradation while remaining computationally cheap when deployed to a TinyML device.