Google explains the name change for the runtime, which now supports models beyond those authored with TensorFlow:
“Since its debut in 2017, TFLite has enabled developers to bring ML-powered experiences to over 100K apps running on 2.7B devices. More recently, TFLite has grown beyond its TensorFlow roots to support models authored in PyTorch, JAX, and Keras with the same leading performance.”
“The name LiteRT captures this multi-framework vision for the future: enabling developers to start with any popular framework and run their model on-device with exceptional performance.”
LiteRT will be part of the Google AI Edge suite of conversion and optimisation tools, and the company says active development on the runtime will continue.
Note, however, that the main TensorFlow brand is not affected, and neither are apps already using TensorFlow Lite.
What’s happening to the .tflite file extension and file format? No changes are being made, says Google. Conversion tools will continue to output .tflite flatbuffer files, and .tflite files will be readable by LiteRT.
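That compatibility can be seen end to end in the Python API: the converter still emits a `.tflite` flatbuffer, and the interpreter reads it back unchanged. The sketch below assumes TensorFlow is installed; the trivial model and the `model.tflite` filename are illustrative, not from the article.

```python
# Sketch: convert a Keras model to a .tflite flatbuffer, then load it
# with the interpreter API. The model and filename are illustrative.
import tensorflow as tf

# A trivial Keras model, just to have something to convert.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Conversion tools continue to output .tflite flatbuffer files.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# The same .tflite file remains readable by the runtime's interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_shape = interpreter.get_input_details()[0]["shape"]  # batch of 1, 4 features
```

The newer `ai-edge-litert` package exposes an equivalent `Interpreter` class, so existing `.tflite` files load the same way under the new name.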
Google says the name change will roll out progressively, with the LiteRT name increasingly reflected in developer documentation, for example. The documentation at tensorflow.org/lite now redirects to corresponding pages at ai.google.dev/edge/litert.
TensorFlow has faced increasing competition from the likes of PyTorch, an open-source machine learning (ML) framework originally developed by Facebook’s AI research group.