- CRITICAL FIX: Properly implement Android 16KB page size support
- Migrated from TensorFlow Lite 2.12.0 to Google AI Edge LiteRT 1.4.0
- LiteRT provides native libraries with the 16KB page alignment required by Google Play
- Added verification script (scripts/verify_16kb.sh) to check page size alignment
- Note: Version 0.12.0 claimed 16KB support but used TensorFlow Lite 2.12.0 (incompatible)
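The alignment requirement can be checked mechanically. The sketch below is an illustrative approximation of what such a check does, not the actual contents of scripts/verify_16kb.sh: a native library is 16KB-page compatible when every ELF LOAD segment reports an alignment (p_align) of at least 0x4000 (16384 bytes) in its program headers.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a 16KB page-alignment check; the function name
# is illustrative, not taken from the repository's script.

check_16kb_alignment() {
  local so="$1"
  local align
  # The last column of each LOAD program header is its alignment.
  for align in $(readelf -lW "$so" | awk '$1 == "LOAD" { print $NF }'); do
    if (( align < 0x4000 )); then
      echo "FAIL: $so has a LOAD segment aligned to $align (< 0x4000)"
      return 1
    fi
  done
  echo "OK: $so (all LOAD segments aligned to >= 0x4000)"
}
```

Run against each `lib/<abi>/*.so` extracted from the built APK/AAB; libraries linked with `-Wl,-z,max-page-size=16384` should pass.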
- Android 16KB page size support attempted (incomplete; fixed in 0.12.1)
- Updated Android Gradle Plugin to 8.6.1
- Updated compileSdk to 36 (Android 16)
- Modernized Gradle build system with plugins DSL
- Updated Kotlin to 1.8.10
- Fixed deprecated withOpacity warnings across examples
- All 14 example projects verified working
- FFI update, Dart/Flutter version updates
- macOS desktop support added!
- Additional samples added
- Various bug fixes
- Use ffi for binding to the iOS dependencies. The TensorFlow Lite library version is specified in ios/tflite_flutter.podspec; dependencies are downloaded automatically, with no user intervention (no need for a releases/download folder)
- Use ffi for binding to the Android dependencies. The TensorFlow Lite library version is specified in android/build.gradle; dependencies are downloaded automatically, with no user intervention (no need for a releases/download folder)
- Use ffigen to generate the Dart binding code
- Enable delegates in text_classification example
- Add conversion for tensors of type uint8
- Add image classification example using mobilenet
- Add super resolution example using esrgan
- Add style transfer example
- IsolateInterpreter to run inference in a background isolate (see issue #52, "Use isolates to run inference")
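A minimal usage sketch of IsolateInterpreter, assuming a bundled model.tflite asset and input/output buffers already shaped for the model (asset name and buffer types here are illustrative):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runOffMainIsolate(
    List<List<double>> input, List<List<double>> output) async {
  // Load the model on the main isolate, then hand its address to an
  // IsolateInterpreter so inference runs off the UI thread.
  final interpreter = await Interpreter.fromAsset('model.tflite');
  final isolateInterpreter =
      await IsolateInterpreter.create(address: interpreter.address);

  // Inference happens in a background isolate; the UI isolate only awaits.
  await isolateInterpreter.run(input, output);

  isolateInterpreter.close();
  interpreter.close();
}
```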
- Support for melos added
- Text classification example fixed
- Support for the Windows, macOS, and Linux platforms.
- Improved GPU delegate support and bug fixes.
- Support for the Core ML and XNNPACK delegates.
- Major null-safety bug fix in tensor.dart
- Expose byte-object interconversion APIs
- Stable null-safety support
- Update to Dart 2.12 and package:ffi 1.0.0.
- Expose the interpreter's address
- Create an interpreter from an address
- Optimize getTensors and getTensor by index
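The address round-trip these entries describe can be sketched as follows; this assumes the constructor is named Interpreter.fromAddress, and the file path is illustrative:

```dart
import 'dart:io';

import 'package:tflite_flutter/tflite_flutter.dart';

void main() {
  final interpreter = Interpreter.fromFile(File('model.tflite'));

  // The address of the underlying native interpreter is a plain int,
  // so it can be sent across isolates without copying the model.
  final int address = interpreter.address;

  // Rehydrate a Dart handle to the same native interpreter elsewhere.
  final shared = Interpreter.fromAddress(address);

  // Both handles refer to one native interpreter; close it only once.
  shared.close();
}
```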
- Update README
- Bug fix: output values copied to ByteBuffer
- run supports Uint8List and ByteBuffer objects
- Bug fix: resizing input tensors
- Improved efficiency
- New features:
  - Multi-dimensional reshape with type
- Bug fixes:
  - The flatten extension on List fixed
  - Error when passing a non-dynamic-typed list as interpreter output fixed
- Direct conversion support for more TfLiteTypes:
  - int16, float16, int8, int64
- Pre-built TensorFlow 2.2.0 stable binaries
- Update usage instructions
- Fixed analysis issues to improve the pub.dev score
- Fixed warnings
- Longer package description
- TfLite Dart API