---
library_name: transformers
tags: []
---

# Model Card for AKALI

AKALI (Aggressive Knowledge Augmenter and Language Interface) is a library for language model augmentation and interfaces, designed to enhance AI model capabilities through strategic data augmentation and efficient task management.

## Model Details

### Model Description

- **Developed by:** Ali Eren Ak
- **Funded by:** [More Information Needed]
- **Shared by:** Ali Eren Ak
- **Model type:** Language model trained with augmented data
- **Language(s) (NLP):** Multiple (supports various language models)
- **License:** Proprietary and confidential
- **Finetuned from model:** `google/gemma-2-2b-it` (fine-tuned using the AKALI framework)

### Model Sources

- **Repository:** https://github.com/alierenak/akali

## Uses

### Direct Use

AKALI can be used directly to:

1. Load and interact with various language models.
2. Perform knowledge augmentation to improve model performance.
3. Manage different NLP tasks.
4. Make predictions using loaded models.

### Downstream Use

AKALI can be integrated into larger AI systems or applications for:

1. Enhancing existing language models through data augmentation.
2. Creating custom NLP tasks and processors.
3. Building more robust and accurate AI systems.

### Out-of-Scope Use

AKALI should not be used for:

1. Generating or promoting harmful, biased, or misleading content.
2. Unauthorized access to proprietary language models.
3. Violating data privacy or intellectual property rights.

## Bias, Risks, and Limitations

1. AKALI's performance depends on the quality and biases of the underlying language models used.
2. The effectiveness of augmentation strategies may vary depending on the specific task and dataset.
3. Users should be aware of potential biases in the generated or augmented data.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
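For the "custom NLP tasks and processors" use case mentioned above, AKALI's actual extension hooks are not documented in this card. As a purely illustrative, stand-alone sketch of the task-registry pattern such a framework might use (all names here are hypothetical, not AKALI's API):

```python
# Hypothetical task registry: these names are illustrative only,
# NOT AKALI's actual API.
from typing import Callable, Dict

# Maps a task name to a function that formats a raw user message into a prompt.
TASK_REGISTRY: Dict[str, Callable[[str], str]] = {}

def register_task(name: str):
    """Decorator that registers a prompt-formatting function under a task name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TASK_REGISTRY[name] = fn
        return fn
    return wrap

@register_task("EntitySentimentReasoner")
def entity_sentiment_prompt(message: str) -> str:
    # Instructs the model to extract entities and their sentiment.
    return f"List each entity in the text and its sentiment.\nText: {message}"

def build_prompt(task: str, message: str) -> str:
    """Look up the task's processor and apply it to the user message."""
    if task not in TASK_REGISTRY:
        raise KeyError(f"Unknown task: {task}")
    return TASK_REGISTRY[task](message)

print(build_prompt("EntitySentimentReasoner", "Turkcell kötü, Vodafone ucuz"))
```

A registry like this keeps task-specific prompt logic separate from model loading, which is one plausible way a `set_task`-style interface could be organized.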
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from akali import LanguageInterface

# Load a model
li = LanguageInterface.load_model("alierenak/gemma-7b-akali")

# Set the task
li.set_task("EntitySentimentReasoner")

# Make a prediction (Turkish input: "Turkcell is not a network with good
# coverage, so I prefer Vodafone; it is also cheaper")
result = li.predict(
    system_text=None,
    user_message="Turkcell hiç güzel çeken bir hat değil o yüzden Vodofone'u tercih ediyorum hem de daha ucuz",
)
print(result)
```

## Training Details

This model is a fine-tuned version of `google/gemma-2-2b-it`, trained on data augmented by `Meta-Llama-3.1-70B-Instruct`. Note that AKALI itself is not a trained model but a framework for augmenting and interfacing with language models; training data depends on the specific models and tasks used with it.

### Training Data

The training data can be accessed from the [GitHub repo](https://github.com/alierenak/akali).

## Evaluation

Evaluation of AKALI depends on the specific use case, models, and tasks it is applied to. Users are encouraged to perform task-specific evaluations.

## Environmental Impact

The environmental impact of using AKALI varies with the specific models and compute resources used. Users are encouraged to use the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) to estimate the carbon emissions for their specific use case.

## Model Card Authors

Ali Eren Ak

## Model Card Contact

akali@sabanciuniv.edu