- cross-posted to:
- opensource@lemmy.ml
cross-posted from: https://lemm.ee/post/43866352
Tidy: offline semantic text-to-image and image-to-image search on Android, powered by a quantized, state-of-the-art pretrained CLIP vision-language model and the ONNX Runtime inference engine
Features
- Text-to-Image search: Find photos using natural language descriptions.
- Image-to-Image search: Discover visually similar images.
- Automatic indexing: New photos are automatically added to the index.
- Fast and efficient: Get search results quickly.
- Privacy-focused: Your photos never leave your device.
- No internet required: Works perfectly offline.
- Powered by OpenAI’s CLIP model: Uses advanced AI for accurate results.
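For anyone curious how this kind of search works under the hood: CLIP encodes both the text query and each photo into the same embedding space, so "search" is just a nearest-neighbour lookup by cosine similarity over the indexed photo embeddings. A rough sketch of that retrieval step with toy vectors (not the app's actual code; `cosine_similarity` and `search` are illustrative names, and real embeddings come from the CLIP encoders):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, index, top_k=3):
    # index: list of (photo_id, embedding) pairs produced offline
    # by the image encoder; the query embedding comes from the
    # text encoder (text-to-image) or image encoder (image-to-image).
    scored = sorted(
        index,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [photo_id for photo_id, _ in scored[:top_k]]

# Toy 2-D example: a query close to the "cat" embedding ranks it first.
index = [("cat_photo", [1.0, 0.0]), ("dog_photo", [0.0, 1.0])]
print(search([0.9, 0.1], index, top_k=1))
```

In practice the embeddings are a few hundred dimensions and the index is precomputed as photos are added, which is why queries stay fast even fully offline.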
Oh interesting
The last update was 12 months ago; are there any newer models out now that it could use?
edit:
Wow, this is decent. I didn't have too many photos on my phone, but it was able to identify some basic animals and plants.