Apple released AIMv2 🍏, a family of state-of-the-art open-set vision encoders apple/aimv2-6720fe1558d94c7805f7688c
> like CLIP, but with a decoder added and trained on autoregression 🤯
> 19 open models in 300M, 600M, 1.2B, and 2.7B sizes, at resolutions of 224, 336, and 448
> Load and use with 🤗 transformers
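A minimal loading sketch with 🤗 transformers. The checkpoint name is an assumption (one plausible variant from the AIMv2 collection), and `trust_remote_code=True` may only be needed on transformers versions that predate native AIMv2 support:

```python
def load_aimv2(checkpoint: str = "apple/aimv2-large-patch14-224"):
    """Load an AIMv2 vision encoder from the Hugging Face Hub.

    The checkpoint name is one assumed example; swap in any of the
    300M-2.7B / 224-448px variants from the AIMv2 collection.
    """
    # imports kept inside the function so the sketch stays lightweight
    from transformers import AutoImageProcessor, AutoModel

    # trust_remote_code may be required on older transformers versions
    processor = AutoImageProcessor.from_pretrained(checkpoint, trust_remote_code=True)
    model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)
    return processor, model


if __name__ == "__main__":
    # downloads weights on first run; pass images through the processor,
    # then model(**inputs).last_hidden_state gives the patch embeddings
    processor, model = load_aimv2()
```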
For anyone who struggles with NER or information extraction with LLMs:
We showed an efficient workflow for token classification, including zero-shot suggestions and model fine-tuning, with Argilla, GLiNER, the NuMind NuExtract LLM, and SpanMarker. @argilla
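The zero-shot suggestion step above can be sketched with GLiNER. The checkpoint name and example text are assumptions for illustration; the resulting spans are the kind of pre-annotations one would load into Argilla for review before fine-tuning:

```python
def zero_shot_ner(text: str, labels: list, checkpoint: str = "urchade/gliner_base"):
    """Suggest entity spans zero-shot with GLiNER for arbitrary label sets.

    checkpoint is an assumed example; any GLiNER model from the Hub works.
    """
    # import inside the function so the sketch stays dependency-light
    from gliner import GLiNER

    model = GLiNER.from_pretrained(checkpoint)
    # returns dicts with "text", "label", "start", "end", and "score" keys
    return model.predict_entities(text, labels)


if __name__ == "__main__":
    # downloads the model on first run
    spans = zero_shot_ner(
        "Tim Cook announced AIMv2 at Apple Park.",
        labels=["person", "organization", "location", "product"],
    )
```

Labels are plain strings supplied at inference time, which is what makes this usable as a suggestion engine before any task-specific fine-tuning.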