User Guide

This is the user guide for Scaledown. It walks through the basic TinyML concepts that the package builds on.

Quantization:

Quantization is the process of converting high-precision floating-point values to lower-precision integer values. This makes complex computations feasible on resource-constrained devices. There are two types of quantization:

  • Weight Quantization
  • Activation Quantization
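To make the idea concrete, here is a minimal sketch of symmetric 8-bit weight quantization using NumPy. This is an illustration of the general technique, not the Scaledown API; the function names are assumptions made for this example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 using a single symmetric scale factor."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w, but stored as 8-bit integers
```

The int8 tensor takes a quarter of the memory of float32 weights, and the worst-case rounding error is bounded by the scale factor.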

Pruning:

Neural networks are by nature overparametrized. In any given network, you can remove, or prune, some parts of it and still maintain the same metrics. But where do you begin pruning? Weights or neurons? Pruning can be quantified and approached heuristically. To learn more, check out the guide on Pruning.
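One common heuristic is magnitude-based weight pruning: zero out the weights with the smallest absolute values, on the assumption that they contribute least to the output. A minimal NumPy sketch (illustrative only, not the Scaledown API):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    sparsity=0.5 means half of the weights are set to zero.
    """
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude, then keep only larger weights.
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([0.1, -0.9, 0.05, 0.7])
pruned = magnitude_prune(w, sparsity=0.5)  # the two smallest weights become 0
```

In practice, pruning is usually followed by fine-tuning so the remaining weights can compensate for the removed ones.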

Knowledge Distillation:

Knowledge Distillation, as the name suggests, is the process of trickling down information from a larger (teacher) model to a smaller (student) model. These student models can be deployed to microcontrollers. But how do you train such models? The idea is to first train a large model, which learns the parts of the information important to the data, and then derive a smaller model that learns from the outputs generated by the large model. This way, you don't overwhelm the student model, and it is able to learn the representations necessary for the dataset in a clear and concise manner. Check the guide to know more.
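The training signal in the classic setup is the KL divergence between the teacher's and student's temperature-softened output distributions. A minimal NumPy sketch of that loss, assuming raw logits from both models (illustrative only, not the Scaledown API):

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Softmax with temperature T; higher T gives softer probabilities."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 4.0) -> float:
    """KL divergence from the teacher's soft targets to the student's output."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([3.0, 1.0, 0.5])
student = np.array([2.5, 1.2, 0.4])
loss = distillation_loss(student, teacher)  # small positive value to minimize
```

The softened targets carry more information than hard labels (how similar the classes look to the teacher), which is what lets the small student learn efficiently.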