---
base_model: AicoresSecurity/Cybernet-Sec-3B-R1-V0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: llama3.2
language:
- en
---

## Model Description

An AI Cores LLM fine-tuned on a cybersecurity dataset for cybersecurity applications. It provides safety advice for Python code, especially for generative AI (Gen AI) developers.

## Intended Use

- **Intended users**: Application engineers, software engineers, data scientists, and developers working on cybersecurity applications.
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems.

## How to use

Use with Transformers. Starting with `transformers` >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction (see the sketch at the end of this card) or by leveraging the Auto classes with the `generate()` function.

Make sure your dependencies are up to date via `pip install --upgrade transformers bitsandbytes accelerate torch`.

Inference:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "AicoresSecurity/Cybernet-Sec-3B-R1-V0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

if torch.cuda.is_available():
    # Create a BitsAndBytesConfig for 8-bit quantization on GPU
    from transformers import BitsAndBytesConfig
    bnb_config = BitsAndBytesConfig(load_in_8bit=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",
    )
else:
    # Fallback for CPU-only systems (no quantization)
    model = AutoModelForCausalLM.from_pretrained(model_id)

# Define your system prompt and user prompt
system_prompt = """You will be provided with a piece of Python code, and your task is to provide ideas for efficiency improvements and safety. Only code ...
"""

user_prompt = """
import os

def process_filename(filename):
    # Sanitize the filename to prevent path traversal attacks
    allowed_chars = "abcdefghijklmnopqrstuvwxyz0123456789_-"
    sanitized_filename = ''.join(c for c in filename if c in allowed_chars)
    if sanitized_filename != filename:
        raise ValueError("Invalid filename: only alphanumeric characters, underscores, and hyphens are allowed.")

    filepath = os.path.join("/user_data", sanitized_filename)
    try:
        with open(filepath, 'r') as file:
            content = file.read()
        return content
    except FileNotFoundError:
        return "File not found."

# Example usage
filename = "user_report.txt"
print(process_filename(filename))

filename_attack = "../../../etc/passwd"
try:
    print(process_filename(filename_attack))
except ValueError as e:
    print(e)
"""

full_prompt = system_prompt + user_prompt

# Tokenize the full prompt and move it to the model's device
input_ids = tokenizer.encode(full_prompt, return_tensors="pt").to(model.device)

# Generate output from the model
output_ids = model.generate(
    input_ids,
    max_new_tokens=256,  # Adjust as needed; the prompt alone exceeds 100 tokens
    do_sample=True,      # Use sampling for more varied output
    temperature=0.7,     # Adjust for creativity
)

# Decode the generated tokens back into a string
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```

## Uploaded model

- **Developed by:** AicoresSecurity
- **License:** apache-2.0
- **Finetuned from model:** AicoresSecurity/Cybernet-Sec-3B-R1-V0

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
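
## Alternative: pipeline inference

The usage section mentions the Transformers pipeline abstraction as an alternative to the Auto classes, but the card only shows the Auto-classes path. Below is a minimal sketch of that alternative; the example prompt, `torch_dtype`, and the generation settings are illustrative assumptions, not values recommended by the model authors.

```python
import torch
from transformers import pipeline

# Minimal sketch: load the model through the text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="AicoresSecurity/Cybernet-Sec-3B-R1-V0.1",
    torch_dtype=torch.bfloat16,  # assumption: adjust (or drop) for your hardware
    device_map="auto",
)

prompt = (
    "Review the following Python code for safety issues:\n\n"
    "def read_config(path):\n"
    "    return open(path).read()\n"
)

# Generation settings mirror the Auto-classes example above.
outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```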
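
The examples above build the prompt by plain string concatenation, following the card's original code. Since the base model derives from the Llama 3.2 family, the tokenizer may ship a chat template; whether this fine-tune expects chat-formatted input is an assumption not stated in the card. If a template is present, `apply_chat_template` produces the formatted input IDs:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AicoresSecurity/Cybernet-Sec-3B-R1-V0.1")

# Assumption: the tokenizer ships a Llama-3.2-style chat template.
messages = [
    {"role": "system", "content": "You review Python code for safety and efficiency issues."},
    {"role": "user", "content": "def read_config(path):\n    return open(path).read()"},
]

# add_generation_prompt=True appends the assistant header so generation starts a reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

# input_ids can be passed to model.generate() exactly like the encoded prompt above.
print(input_ids.shape)
```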