Federated Learning is a machine learning technique that moves model training from a central server to the end user's device. Unlike Process on Device, Train Centrally (PDTC), where the model is trained centrally, federated learning performs training directly on the device and pushes only the training results to the central server.
Federated learning is a five-step process:
1. Similar to PDTC, you start by uploading a Data Schema to Begin's service. The schema outlines the data points to reference and is paired with an outline of your data pool and additional instructions for your learning model, such as how often it trains and how deep the data pool is.
2. The schema is then packaged as an SDK to be inserted into your app's codebase.
3. The training model is distributed among your end devices and learning begins. Early rounds are basic, generally gathering high-level user segments.
4. The training algorithm continues to learn and adapt to each user's profile directly on the device. This enables machine learning to continue without a network connection.
5. After a set period of time, the training results are collected and merged to form one global trained model. Only the results are pooled; the data itself remains on the device.

The new global model is then sent back to the devices for further training, and these steps repeat to continuously learn and adapt as your user base grows and shifts.
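The merge in step 5 can be sketched as a weighted average of per-device results, in the spirit of federated averaging. The function name and payload shape below are illustrative assumptions, not Begin's actual API; it only shows that the server ever sees trained weights, never raw data.

```python
# Minimal sketch of step 5: merging per-device results into one global model.
# federated_average and the (weights, n_samples) payload are hypothetical.

def federated_average(client_results):
    """Merge device training results into one global model.

    client_results: list of (weights, n_samples) tuples, where weights
    is a list of floats learned on a device and n_samples is how many
    local examples produced it. Only these results are pooled; the
    underlying data never leaves the device.
    """
    total = sum(n for _, n in client_results)
    merged = [0.0] * len(client_results[0][0])
    for weights, n in client_results:
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)   # weight each device by its data share
    return merged

# Three devices report results; the device with more data counts more.
global_model = federated_average([
    ([1.0, 2.0], 100),
    ([3.0, 4.0], 100),
    ([5.0, 6.0], 200),
])
```

Weighting by sample count keeps a device with little data from skewing the global model, which is one way to manage the information-bias concern noted below.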
Pro Tip: Federated learning works best on devices with ample storage and processing power, or across a very large population of mobile devices whose users interact heavily with the system. In most cases it is more effective to use a smaller population with larger data pools (100 computers instead of a million mobile devices), since larger populations can be harder to manage from an information-bias perspective. In most cases our PDTC is more effective to implement, or a hybrid mix of FL and PDTC may deliver optimal outcomes.
Federated learning is useful for many different projects. Its key advantages include:
Because federated learning is performed exclusively on the device, your software can continue learning and improving without frequent check-ins to the central server. Since only the learning results are pushed to the central server, learning remains private.
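To make the privacy claim concrete, here is a minimal sketch of what a device might push upstream after a round of local training. The model (a single-weight linear fit), the function name, and the payload fields are all hypothetical; the point is that the payload contains only the learned result and a sample count, never the examples themselves.

```python
# Hypothetical on-device update for a linear model y ~ w * x.
# Only the trained weight and an example count leave the device.

def local_update(w, local_examples, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on-device.

    local_examples: list of (x, y) pairs that stay on the device.
    Returns the payload sent to the central server; note that the
    raw examples are not part of it.
    """
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in local_examples) / len(local_examples)
        w -= lr * grad
    return {"weight": w, "n_examples": len(local_examples)}

# Data generated by y = 2x; the learned weight moves toward 2.0,
# and only that weight (plus a count) is transmitted.
payload = local_update(0.0, [(1.0, 2.0), (2.0, 4.0)])
```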
Federated learning works directly on the host device, meaning it can train on massive amounts of data. This includes video, music, images, self-driving car sensor streams, and anything else that involves large files or massive input sources.
Federated learning continues on the device without a need for frequent check-ins with the central server. Check-ins are useful to improve the algorithm and push updated learning models to the device, but your device can continue to tailor itself to your users’ preferences even without a connection.
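One way to picture offline-tolerant training is a device that keeps running local rounds and queues its results until a connection is available. The class and method names below are illustrative, not part of any real SDK; it is a sketch of the buffering pattern, assuming the same single-weight model as above.

```python
# Hypothetical on-device trainer that keeps learning offline and
# syncs queued results only when a connection exists.

class OnDeviceTrainer:
    def __init__(self, weight):
        self.weight = weight
        self.pending = []   # results waiting for a network connection

    def train_round(self, examples, lr=0.1):
        """Run one local round; works with or without a network."""
        grad = sum(2 * (self.weight * x - y) * x
                   for x, y in examples) / len(examples)
        self.weight -= lr * grad
        self.pending.append(self.weight)   # queue the result, not the data

    def sync(self, connected):
        """Push queued results only when connected; otherwise keep learning."""
        if not connected:
            return []                      # nothing leaves the device
        sent, self.pending = self.pending, []
        return sent

# Three rounds happen offline; results are delivered on the next check-in.
trainer = OnDeviceTrainer(0.0)
for _ in range(3):
    trainer.train_round([(1.0, 2.0)])
offline = trainer.sync(connected=False)   # empty: no connection yet
delivered = trainer.sync(connected=True)  # all three queued results at once
```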