ysn-rfd committed on
Commit c00c444 · verified · 1 Parent(s): 488e3ff

Password list generator, auto

password-list-generator-complete-main/LICENSE ADDED
@@ -0,0 +1,20 @@
+ Copyright 2025 ysnrfd
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to use,
+ copy, modify, and distribute the Software, subject to the following conditions:
+
+ 1. The copyright notice, this permission notice, and all attribution information
+ regarding the original author (ysnrfd) must be preserved in their entirety
+ and must not be removed, altered, or obscured in any copies or derivative works.
+
+ 2. Any modifications or derivative works must be clearly documented in a "CHANGELOG" or
+ "NOTICE" file included with the Software. This documentation must include a detailed
+ description of the changes made, the date of the modification, and the identity of
+ the modifier.
+
+ 3. The Software is provided "as is", without warranty of any kind, express or implied.
+ The author shall not be liable for any damages arising from use of the Software.
+
+ 4. Any attempt to remove or alter the original attribution or copyright information
+ constitutes a violation of this license and may result in legal action.
password-list-generator-complete-main/PLG_ysnrfd.py ADDED
@@ -0,0 +1,1769 @@
1
+ import argparse
2
+ import string
3
+ import re
4
+ import math
5
+ import json
6
+ import os
7
+ import time
8
+ import random
9
+ import hashlib
10
+ from datetime import datetime, timedelta
11
+ from collections import Counter, defaultdict
12
+ import nltk
13
+ from nltk.corpus import wordnet as wn
14
+ from nltk.probability import FreqDist
15
+ import requests
16
+ from tqdm import tqdm
17
+ import pandas as pd
18
+ from sklearn.feature_extraction.text import TfidfVectorizer
19
+ from sklearn.metrics.pairwise import cosine_similarity
20
+ import Levenshtein
21
+
22
+
23
+ """
24
+
25
+ Developer: YSNRFD
26
+ Telegram: @ysnrfd
27
+
28
+ """
29
+
30
+
31
+ nltk.download('wordnet', quiet=True)
32
+ nltk.download('punkt', quiet=True)
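+ # nltk.download() fetches the WordNet/punkt corpora on first run and is a no-op once they are cached locally.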
33
+ LANGUAGE_DATA = {
34
+ 'en': {
35
+ 'name': 'English',
36
+ 'common_words': ['love', 'password', 'welcome', 'admin', 'sunshine', 'dragon', 'monkey', 'football', 'baseball', 'letmein',
37
+ 'qwerty', 'trustno1', '123456', '12345678', 'baseball', 'football', '123456789', 'abc123', '1234567', 'monkey',
38
+ 'iloveyou', 'princess', 'admin123', 'welcome1', 'password1', 'qwerty123', '12345', '123123', '111111', 'abc123'],
39
+ 'special_chars': ['@', '#', '$', '%', '&', '*', '!', '_', '.', '-'],
40
+ 'number_patterns': ['1234', '12345', '123456', '1111', '2023', '2024', '0000', '123123', '7777', '9999', '123', '321', '01', '13', '23', '24', '99', '00'],
41
+ 'cultural_events': ['Christmas', 'Halloween', 'Thanksgiving', 'Easter', 'New Year', 'Independence Day', 'Valentine', 'Super Bowl', 'Memorial Day', 'Labor Day', 'Cinco de Mayo', 'St. Patrick\'s Day', 'Mardi Gras', 'Fourth of July'],
42
+ 'zodiac_signs': ['Aries', 'Taurus', 'Gemini', 'Cancer', 'Leo', 'Virgo', 'Libra', 'Scorpio', 'Sagittarius', 'Capricorn', 'Aquarius', 'Pisces', 'Ophiuchus'],
43
+ 'celebrity_names': ['Beyonce', 'Taylor', 'Adele', 'Brad', 'Angelina', 'Elon', 'Oprah', 'Trump', 'Bieber', 'Ariana', 'Kardashian', 'Perry', 'Rihanna', 'Drake', 'Kanye', 'Kim', 'Zendaya', 'Tom', 'Jennifer', 'Leonardo'],
44
+ 'sports_teams': ['Yankees', 'Cowboys', 'Lakers', 'Giants', 'Patriots', 'Warriors', 'Cubs', 'Eagles', 'Knicks', 'Rangers', 'Red Sox', 'Jets', 'Dolphins', 'Steelers', 'Packers', 'Broncos', 'Bears', 'Seahawks'],
45
+ 'universities': ['Harvard', 'Yale', 'Stanford', 'MIT', 'Princeton', 'Columbia', 'Berkeley', 'UCLA', 'Oxford', 'Cambridge', 'Cornell', 'Duke', 'NYU', 'USC', 'Chicago', 'Penn', 'Brown', 'Dartmouth'],
46
+ 'common_dates': ['0101', '1231', '0704', '1031', '1225', '0214', '0911', '1111', '0505', '0317'],
47
+ 'leet_mappings': {
48
+ 'a': ['@', '4', 'A', 'À', 'Á'],
49
+ 'e': ['3', 'E', '&', '€', 'È', 'É'],
50
+ 'i': ['1', '!', 'I', '|', 'Ì', 'Í'],
51
+ 'o': ['0', 'O', '*', 'Ò', 'Ó'],
52
+ 's': ['$', '5', 'S', 'Š', '§'],
53
+ 't': ['+', '7', 'T', 'Ţ', 'Ť'],
54
+ 'l': ['1', '|', 'L', '£', '₤'],
55
+ 'g': ['9', '6', 'G', 'Ģ', 'Ĝ'],
56
+ 'b': ['8', 'B', 'b', 'ß'],
57
+ 'z': ['2', '7', 'Z', 'Ž']
58
+ },
59
+ 'keyboard_patterns': [
60
+ 'qwerty', 'asdfgh', 'zxcvbn', '123456', 'qazwsx', '1q2w3e', '123qwe',
61
+ 'zaq12wsx', '1qaz2wsx', 'qwerasdf', '1234qwer', '!@#$%^&*()', '1q2w3e4r',
62
+ 'qwe123', '123asd', 'qaz123', '1qazxsw', '1q2w3e4', 'qazxsw'
63
+ ],
64
+ 'common_suffixes': ['123', '1234', '12345', '007', '2023', '2024', '!', '@', '#', '$', '%', '&', '*', '_', '.'],
65
+ 'common_prefixes': ['my', 'i', 'the', 'new', 'old', 'super', 'mega', 'ultra', 'best', 'cool']
66
+ },
67
+ 'de': {
68
+ 'name': 'German',
69
+ 'common_words': ['hallo', 'passwort', 'willkommen', 'admin', 'sonne', 'drache', 'affe', 'fussball', 'baseball', 'einfach',
70
+ 'qwertz', 'geheim1', '123456', '12345678', 'fussball', '123456789', 'abc123', '1234567', 'affe',
71
+ 'liebe', 'prinzessin', 'admin123', 'willkommen1', 'passwort1', 'qwertz123', '12345', '123123', '111111', 'abc123'],
72
+ 'special_chars': ['@', '#', '$', '%', '&', '*', '!', '_', '.', '-'],
73
+ 'number_patterns': ['1234', '12345', '123456', '1111', '2023', '2024', '0000', '123123', '7777', '9999', '123', '321', '01', '13', '18', '42', '77', '88'],
74
+ 'cultural_events': ['Weihnachten', 'Halloween', 'Erntedank', 'Ostern', 'Neujahr', 'Tag der Deutschen Einheit', 'Valentinstag', 'Oktoberfest', 'Karneval', 'Silvester', 'Muttertag', 'Vatertag', 'Schützenfest'],
75
+ 'zodiac_signs': ['Widder', 'Stier', 'Zwillinge', 'Krebs', 'Löwe', 'Jungfrau', 'Waage', 'Skorpion', 'Schütze', 'Steinbock', 'Wassermann', 'Fische', 'Schlangenträger'],
76
+ 'celebrity_names': ['Angela', 'Merkel', 'Bundesliga', 'Bayern', 'Dortmund', 'Schumi', 'Schumacher', 'Lindemann', 'Rammstein', 'Klum', 'Helene', 'Fischer', 'Thomas', 'Gottschalk', 'Heidi', 'Klum', 'Böhmermann'],
77
+ 'sports_teams': ['Bayern', 'Dortmund', 'Schalke', 'BVB', 'FCB', 'Werder', 'Hoffenheim', 'RB Leipzig', 'Bayer', 'Leverkusen', 'Hamburg', 'Frankfurt', 'Wolfsburg', 'Stuttgart', 'Union', 'Köln'],
78
+ 'universities': ['LMU', 'TUM', 'Heidelberg', 'Humboldt', 'FU Berlin', 'KIT', 'RWTH', 'Goethe', 'Tübingen', 'Freiburg', 'Jena', 'Konstanz', 'Bonn', 'Halle', 'Marburg', 'Göttingen'],
79
+ 'common_dates': ['0101', '1231', '1003', '3110', '2512', '1402', '0911', '1111', '1508', '0310'],
80
+ 'leet_mappings': {
81
+ 'a': ['@', '4', 'Ä', 'ä', 'Â'],
82
+ 'e': ['3', 'E', '&', '€', 'È', 'É'],
83
+ 'i': ['1', '!', 'I', '|', 'Ì', 'Í'],
84
+ 'o': ['0', 'O', 'Ö', 'ö', 'Ò', 'Ó'],
85
+ 's': ['$', '5', 'S', 'ß', 'Š', '§'],
86
+ 't': ['+', '7', 'T', 'Ţ', 'Ť'],
87
+ 'l': ['1', '|', 'L', '£', '₤'],
88
+ 'g': ['9', '6', 'G', 'Ģ', 'Ĝ'],
89
+ 'b': ['8', 'B', 'b', 'ß'],
90
+ 'u': ['µ', 'ü', 'Ü']
91
+ },
92
+ 'keyboard_patterns': [
93
+ 'qwertz', 'asdfgh', 'yxcvbn', '123456', 'qaywsx', '1q2w3e', '123qwe',
94
+ 'zaq12wsx', '1qaz2wsx', 'qwerasdf', '1234qwer', '!@#$%^&*()', '1q2w3e4r',
95
+ 'qwe123', '123asd', 'qaz123', '1qazxsw', '1q2w3e4', 'qazxsw', 'qay123'
96
+ ],
97
+ 'common_suffixes': ['123', '1234', '12345', '007', '2023', '2024', '!', '@', '#', '$', '%', '&', '*', '_', '.'],
98
+ 'common_prefixes': ['mein', 'meine', 'der', 'die', 'das', 'super', 'mega', 'ultra', 'gut', 'cool']
99
+ },
100
+ 'fa': {
101
+ 'name': 'Persian',
102
+ 'common_words': ['سلام', 'پسورد', 'خوش آمدید', 'مدیر', 'خورشید', 'اژدها', 'میمون', 'فوتبال', 'بیسبال', 'بگذار',
103
+ '123456', '12345678', 'فوتبال', '123456789', 'abc123', '1234567', 'میمون',
104
+ 'عشق', 'شیرینی', 'admin123', 'خوش آمدید1', 'پسورد1', '12345', '123123', '111111', 'abc123'],
105
+ 'special_chars': ['@', '#', '$', '%', '&', '*', '!', '_', '.', '-'],
106
+ 'number_patterns': ['1234', '12345', '123456', '1111', '2023', '2024', '0000', '123123', '7777', '9999', '123', '321', '01', '13', '88', '99', '110', '313', '5', '7', '14', '22'],
107
+ 'cultural_events': ['نوروز', 'تاسوعا', 'عاشورا', 'یلدا', 'عید نوروز', 'عید فطر', 'عید قربان', 'شب یلدا', 'سیزده به در', 'رحلت پیامبر', 'ولادت علی', 'عید سعید', 'عید ده', 'عید دو'],
108
+ 'zodiac_signs': ['حمل', 'ثور', 'جوزا', 'سرطان', 'اسد', 'سنبله', 'میزان', 'عقرب', 'قوس', 'جدی', 'دلو', 'حوت', 'مارپیچ'],
109
+ 'celebrity_names': ['محمد', 'رضا', 'احمد', 'علی', 'حسین', 'فاطمه', 'زهرا', 'شادی', 'پوریا', 'سحر', 'سارا', 'مریم', 'رضا', 'صدر', 'حسین', 'پور', 'خان', 'علوی', 'محمدی', 'حسینی'],
110
+ 'sports_teams': ['استقلال', 'پرسپولیس', 'تراکتور', 'سپاهان', 'فولاد', 'ذوب آهن', 'سایپا', 'ملی', 'ملی ایران', 'سپاهان', 'ذوب', 'آهن', 'تراکتور', 'ذوب', 'سپاهان'],
111
+ 'universities': ['تهران', 'شریف', 'امیرکبیر', 'صنعتی', 'شهید', 'بهشتی', 'فردوسی', 'مشهد', 'اصفهان', 'شیراز', 'عمران', 'مکانیک', 'علوم', 'پزشکی', 'تربیت'],
112
+ 'common_dates': ['0101', '1231', '2103', '1302', '2206', '1402', '0911', '1111', '0104', '0102', '1102', '1301'],
113
+ 'leet_mappings': {
114
+ 'a': ['@', '4', 'آ', 'ا', 'أ'],
115
+ 'i': ['1', '!', 'ی', 'ي', 'ئ'],
116
+ 'o': ['0', '*', 'او', 'ؤ'],
117
+ 's': ['$', '5', 'ث', 'س'],
118
+ 'l': ['1', '|', 'ل', 'لـ'],
119
+ 'g': ['9', '6', 'گ', 'گـ'],
120
+ 'b': ['8', 'ب', 'بـ'],
121
+ 'p': ['9', 'پ', 'پـ'],
122
+ 't': ['7', 'ط', 'ت'],
123
+ 'j': ['7', 'ج', 'چ']
124
+ },
125
+ 'keyboard_patterns': [
126
+ 'ضثص', 'شیس', 'آکل', '123456', 'ضشی', '1ض2ث3ص',
127
+ 'ضشیثص', '1ض2ش3ی', 'ضثشی', '1234ضث', '!@#$%^&*()',
128
+ 'ضصثق', 'یسش', 'آکل', '12345', 'ضشی', '1ض2ث3ص4',
129
+ 'ضشیثص', '1ض2ش3ی', 'ضثشی', '1234ضث', 'ضثص123'
130
+ ],
131
+ 'common_suffixes': ['123', '1234', '12345', '007', '2023', '2024', '!', '@', '#', '$', '%', '&', '*', '_', '.'],
132
+ 'common_prefixes': ['من', 'منو', 'عشق', 'دوست', 'دوست دارم', 'مثل', 'خیلی', 'خیلیی', 'عزیزم', 'عزیزمم']
133
+ },
134
+ 'fr': {
135
+ 'name': 'French',
136
+ 'common_words': ['bonjour', 'motdepasse', 'bienvenue', 'admin', 'soleil', 'dragon', 'singe', 'football', 'baseball', 'simple',
137
+ 'azerty', 'secret1', '123456', '12345678', 'football', '123456789', 'abc123', '1234567', 'singe',
138
+ 'amour', 'princesse', 'admin123', 'bienvenue1', 'motdepasse1', 'azerty123', '12345', '123123', '111111', 'abc123'],
139
+ 'special_chars': ['@', '#', '$', '%', '&', '*', '!', '_', '.', '-'],
140
+ 'number_patterns': ['1234', '12345', '123456', '1111', '2023', '2024', '0000', '123123', '7777', '9999', '123', '321', '01', '13', '14', '75', '89', '42', '18'],
141
+ 'cultural_events': ['Noël', 'Halloween', 'Action de Grâce', 'Pâques', 'Nouvel An', 'Fête Nationale', 'Saint-Valentin', 'Tour de France', 'Bastille Day', 'La Fête de la Musique', 'Carnaval', 'Fête du Travail'],
142
+ 'zodiac_signs': ['Bélier', 'Taureau', 'Gémeaux', 'Cancer', 'Lion', 'Vierge', 'Balance', 'Scorpion', 'Sagittaire', 'Capricorne', 'Verseau', 'Poissons', 'Ophiuchus'],
143
+ 'celebrity_names': ['Zidane', 'Del Piero', 'Depardieu', 'Johnny', 'Hallyday', 'Sarkozy', 'Macron', 'Amelie', 'Poulain', 'Audrey', 'Tautou', 'Gad', 'Elkabbach', 'Deneuve'],
144
+ 'sports_teams': ['PSG', 'OM', 'OL', 'Bayern', 'Real', 'Barça', 'Marseille', 'Lyon', 'Paris', 'Monaco', 'Saint-Étienne', 'Nantes', 'Lille', 'Lens'],
145
+ 'universities': ['Sorbonne', 'Polytechnique', 'Sciences Po', 'HEC', 'ENS', 'Dauphine', 'Panthéon', 'Aix-Marseille', 'Lyon 2', 'Toulouse 1', 'Grenoble', 'Strasbourg'],
146
+ 'common_dates': ['0101', '1231', '1407', '1111', '2512', '1402', '0911', '1111', '0104', '0102', '1107', '0803'],
147
+ 'leet_mappings': {
148
+ 'a': ['@', '4', 'À', 'Á', 'Â'],
149
+ 'e': ['3', 'E', '€', 'È', 'É', 'Ê', 'Ë'],
150
+ 'i': ['1', '!', 'I', '|', 'Ì', 'Í'],
151
+ 'o': ['0', 'O', 'Ö', 'Ò', 'Ó'],
152
+ 's': ['$', '5', 'S', 'Š'],
153
+ 't': ['+', '7', 'T', 'Ţ'],
154
+ 'l': ['1', '|', 'L', '£'],
155
+ 'g': ['9', '6', 'G', 'Ĝ'],
156
+ 'b': ['8', 'B', 'b']
157
+ },
158
+ 'keyboard_patterns': [
159
+ 'azerty', 'qsdfgh', 'wxcvbn', '123456', 'aqwzsx', '1&2é3"', '123&é"',
160
+ '1234az', 'qsdfaz', '1234qwer', '!@#$%^&*()', '1&2é34"',
161
+ 'qwertz', '12345', '1q2w3e', '123qwe', 'qaz123', '1qazxsw'
162
+ ],
163
+ 'common_suffixes': ['123', '1234', '12345', '007', '2023', '2024', '!', '@', '#', '$', '%', '&', '*', '_', '.'],
164
+ 'common_prefixes': ['mon', 'ma', 'mes', 'le', 'la', 'les', 'super', 'mega', 'ultra', 'bon', 'bien']
165
+ },
166
+ 'es': {
167
+ 'name': 'Spanish',
168
+ 'common_words': ['hola', 'contraseña', 'bienvenido', 'admin', 'sol', 'dragón', 'mono', 'fútbol', 'béisbol', 'fácil',
169
+ 'qwerty', 'secreto1', '123456', '12345678', 'fútbol', '123456789', 'abc123', '1234567', 'mono',
170
+ 'amor', 'princesa', 'admin123', 'bienvenido1', 'contraseña1', 'qwerty123', '12345', '123123', '111111', 'abc123'],
171
+ 'special_chars': ['@', '#', '$', '%', '&', '*', '!', '_', '.', '-'],
172
+ 'number_patterns': ['1234', '12345', '123456', '1111', '2023', '2024', '0000', '123123', '7777', '9999', '123', '321', '01', '13', '15', '80', '21', '99', '00'],
173
+ 'cultural_events': ['Navidad', 'Halloween', 'Día de Acción de Gracias', 'Pascua', 'Año Nuevo', 'Día de la Independencia', 'San Valentín', 'Feria de Abril', 'San Fermín', 'Día de los Muertos', 'La Tomatina', 'Semana Santa'],
174
+ 'zodiac_signs': ['Aries', 'Tauro', 'Géminis', 'Cáncer', 'Leo', 'Virgo', 'Libra', 'Escorpio', 'Sagitario', 'Capricornio', 'Acuario', 'Piscis', 'Ofiuco'],
175
+ 'celebrity_names': ['Messi', 'Ronaldo', 'Beyoncé', 'Shakira', 'Piqué', 'García', 'Martínez', 'Rodríguez', 'Fernández', 'López', 'González', 'Pérez', 'Sánchez', 'Díaz'],
176
+ 'sports_teams': ['Barça', 'Madrid', 'Atletico', 'Barcelona', 'Real', 'Madrid', 'Sevilla', 'Valencia', 'Atlético', 'Betis', 'Villarreal', 'Athletic', 'Espanyol', 'Málaga'],
177
+ 'universities': ['Complutense', 'Autónoma', 'Politécnica', 'Barcelona', 'Sorbona', 'Salamanca', 'Granada', 'Sevilla', 'Valencia', 'Málaga', 'Santiago', 'Navarra'],
178
+ 'common_dates': ['0101', '1231', '1207', '1508', '2512', '1402', '0911', '1111', '0104', '0102', '2802', '0208'],
179
+ 'leet_mappings': {
180
+ 'a': ['@', '4', 'Á', 'À'],
181
+ 'e': ['3', 'E', '€', 'É', 'È'],
182
+ 'i': ['1', '!', 'I', '|', 'Í', 'Ì'],
183
+ 'o': ['0', 'O', 'Ö', 'Ó', 'Ò'],
184
+ 's': ['$', '5', 'S', 'Š'],
185
+ 't': ['+', '7', 'T'],
186
+ 'l': ['1', '|', 'L', '£'],
187
+ 'g': ['9', '6', 'G', 'Ĝ'],
188
+ 'b': ['8', 'B', 'b']
189
+ },
190
+ 'keyboard_patterns': [
191
+ 'qwerty', 'asdfgh', 'zxcvbn', '123456', 'qwaszx', '1q2w3e', '123qwe',
192
+ 'zaq12wsx', '1qaz2wsx', 'qwerasdf', '1234qwer', '!@#$%^&*()', '1q2w3e4r',
193
+ 'qwe123', '123asd', 'qaz123', '1qazxsw', '1q2w3e4', 'qazxsw'
194
+ ],
195
+ 'common_suffixes': ['123', '1234', '12345', '007', '2023', '2024', '!', '@', '#', '$', '%', '&', '*', '_', '.'],
196
+ 'common_prefixes': ['mi', 'mis', 'el', 'la', 'los', 'las', 'super', 'mega', 'ultra', 'buen', 'bien']
197
+ }
198
+ }
199
+ CURRENT_LANGUAGE = 'en'
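+ # EthicalSafeguard: interactive gate that checks geolocation, records an ethics agreement and stated purpose, and issues an authorization key before generation.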
200
+ class EthicalSafeguard:
201
+ def __init__(self):
202
+ self.usage_log = []
203
+ self.authorization_key = None
204
+ self.ethical_agreement = False
205
+ self.geolocation_verified = False
206
+ self.purpose_verified = False
207
+ def verify_ethical_usage(self):
208
+ print("\n🛡️ ETHICAL USAGE VERIFICATION REQUIRED")
209
+ print("This tool is strictly for educational and authorized security testing purposes only.")
210
+ try:
211
+ response = requests.get('https://ipapi.co/json/', timeout=5)
212
+ if response.status_code == 200:
213
+ geo_data = response.json()
214
+ country = geo_data.get('country', '').lower()
215
+ print(f"📍 Detected country: {geo_data.get('country_name', 'Unknown')}")
216
+ restricted_countries = ['cn', 'ru', 'kp', 'iq', 'ir', 'sy']
217
+ if country in restricted_countries:
218
+ print(f"❌ Usage restricted in {geo_data['country_name']} due to local regulations")
219
+ return False
220
+ self.geolocation_verified = True
221
+ except Exception:
222
+ print("⚠️ Could not verify geolocation. Proceed with caution.")
223
+ print("\n📜 ETHICAL AGREEMENT:")
224
+ print("1. I confirm I have explicit written authorization to test the target system")
225
+ print("2. I understand that unauthorized access is illegal and unethical")
226
+ print("3. I will not use this tool for any malicious or unauthorized purpose")
227
+ print("4. I accept full responsibility for any consequences of my actions")
228
+ agree = input("\nDo you agree to these terms? (YES/NO): ").strip().upper()
229
+ if agree != "YES":
230
+ print("❌ Ethical agreement not accepted. Exiting...")
231
+ return False
232
+ self.ethical_agreement = True
233
+ print("\n🔍 PURPOSE VERIFICATION:")
234
+ print("Please describe the authorized purpose of this security test:")
235
+ purpose = input("> ").strip()
236
+ valid_purposes = [
237
+ 'penetration testing', 'security assessment', 'vulnerability research',
238
+ 'educational purpose', 'authorized security test', 'red team exercise'
239
+ ]
240
+ if not any(p in purpose.lower() for p in valid_purposes):
241
+ print("❌ Purpose does not match authorized security testing. Exiting...")
242
+ return False
243
+ self.purpose_verified = True
244
+ timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
245
+ random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=8))
246
+ self.authorization_key = f"AUTH-{timestamp}-{random_str}"
247
+ self.usage_log.append({
248
+ 'timestamp': datetime.now().isoformat(),
249
+ 'agreement_accepted': True,
250
+ 'purpose': purpose,
251
+ 'authorization_key': self.authorization_key,
252
+ 'geolocation_verified': self.geolocation_verified
253
+ })
254
+ print(f"\n✅ Ethical verification successful!")
255
+ print(f"🔑 Authorization Key: {self.authorization_key}")
256
+ print("⚠️ This key must be documented in your security testing report")
257
+ return True
258
+ def log_usage(self, passwords_generated, target_info):
259
+ usage_record = {
260
+ 'timestamp': datetime.now().isoformat(),
261
+ 'authorization_key': self.authorization_key,
262
+ 'passwords_generated': passwords_generated,
263
+ 'target_info_summary': {
264
+ 'has_name': bool(target_info.get('first_name')),
265
+ 'has_birthdate': bool(target_info.get('birthdate')),
266
+ 'has_email': bool(target_info.get('email')),
267
+ 'language': target_info.get('language', 'en')
268
+ },
269
+ 'ethical_verification': {
270
+ 'agreement': self.ethical_agreement,
271
+ 'geolocation': self.geolocation_verified,
272
+ 'purpose': self.purpose_verified
273
+ }
274
+ }
275
+ log_dir = os.path.join(os.path.expanduser("~"), ".security_tool_logs")
276
+ os.makedirs(log_dir, exist_ok=True)
277
+ log_file = os.path.join(log_dir, f"usage_log_{datetime.now().strftime('%Y%m%d')}.enc")
278
+ encrypted_log = hashlib.sha256(json.dumps(usage_record).encode()).hexdigest()
279
+ with open(log_file, 'a') as f:
280
+ f.write(encrypted_log + "\n")
281
+ return usage_record
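+ # UserBehaviorPredictor: derives a rough profile (security awareness, tech savviness, cultural influence, emotional attachment) from the target info and maps it to likely password structures.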
282
+ class UserBehaviorPredictor:
283
+ def __init__(self, info):
284
+ self.user_info = info
285
+ self.behavior_profile = self._build_behavior_profile()
286
+ def _build_behavior_profile(self):
287
+ profile = {
288
+ 'security_awareness': 0.5,
289
+ 'password_complexity_preference': 0.5,
290
+ 'cultural_influence': 'moderate',
291
+ 'emotional_attachment_level': 0.5,
292
+ 'tech_savviness': 0.5
293
+ }
294
+ if self.user_info.get('password_change_frequency'):
295
+ try:
296
+ freq = int(self.user_info['password_change_frequency'])
297
+ profile['security_awareness'] = min(1.0, freq / 3)
298
+ except Exception:
299
+ pass
300
+ tech_indicators = 0
301
+ if self.user_info.get('tech_savviness'):
302
+ try:
303
+ profile['tech_savviness'] = min(1.0, int(self.user_info['tech_savviness']) / 10)
304
+ tech_indicators += 1
305
+ except Exception:
306
+ pass
307
+ if self.user_info.get('occupation') in ['developer', 'engineer', 'security', 'it']:
308
+ profile['tech_savviness'] = 0.8
309
+ tech_indicators += 1
310
+ if self.user_info.get('device_models'):
311
+ if any('iphone' in d.lower() or 'android' in d.lower() for d in self.user_info['device_models']):
312
+ profile['tech_savviness'] = max(profile['tech_savviness'], 0.3)
313
+ tech_indicators += 0.5
314
+ if tech_indicators > 1 or profile['tech_savviness'] > 0.6:
315
+ profile['password_complexity_preference'] = 0.7
316
+ else:
317
+ profile['password_complexity_preference'] = 0.3
318
+ nationality = self.user_info.get('nationality', '').lower()
319
+ if any(c in nationality for c in ['iran', 'persia', 'farsi']):
320
+ profile['cultural_influence'] = 'high'
321
+ elif any(c in nationality for c in ['usa', 'uk', 'canada', 'australia']):
322
+ profile['cultural_influence'] = 'low'
323
+ emotional_indicators = 0
324
+ if self.user_info.get('pets'):
325
+ emotional_indicators += len(self.user_info['pets']) * 0.2
326
+ if self.user_info.get('children'):
327
+ emotional_indicators += len(self.user_info['children']) * 0.3
328
+ if self.user_info.get('spouse'):
329
+ emotional_indicators += 0.3
330
+ profile['emotional_attachment_level'] = min(1.0, emotional_indicators)
331
+ return profile
332
+ def predict_password_patterns(self):
333
+ patterns = {
334
+ 'structure': [],
335
+ 'transformations': [],
336
+ 'common_elements': [],
337
+ 'avoided_patterns': []
338
+ }
339
+ security_awareness = self.behavior_profile['security_awareness']
340
+ if security_awareness < 0.3:
341
+ patterns['structure'] = [
342
+ '{name}{birth_year}',
343
+ '{pet}{number}',
344
+ '{favorite}{special}{number}',
345
+ '{name}{birth_day}{birth_month}',
346
+ '{pet}{birth_year}',
347
+ '{child}{number}'
348
+ ]
349
+ patterns['avoided_patterns'] = ['{complex_mixture}', '{random_caps}']
350
+ elif security_awareness < 0.6:
351
+ patterns['structure'] = [
352
+ '{name}{special}{birth_year}',
353
+ '{pet}{number}{special}',
354
+ '{word}{number}{special}',
355
+ '{name}{birth_year}{special}',
356
+ '{pet}{birth_year}',
357
+ '{spouse}{number}',
358
+ '{child}{special}{number}'
359
+ ]
360
+ patterns['transformations'] = ['add_number', 'add_special', 'capitalize', 'simple_leet']
361
+ elif security_awareness < 0.8:
362
+ patterns['structure'] = [
363
+ '{word1}{special}{word2}{number}',
364
+ '{word}{special}{number}{special}',
365
+ '{name}{pet}{number}',
366
+ '{zodiac}{number}',
367
+ '{cultural_event}{number}'
368
+ ]
369
+ patterns['transformations'] = ['leet_speak', 'camel_case', 'random_caps', 'add_special']
370
+ else:
371
+ patterns['structure'] = [
372
+ '{random_mixture}',
373
+ '{complex_pattern}',
374
+ '{word1}{word2}{number}{special}',
375
+ '{cultural_event}{zodiac}{number}'
376
+ ]
377
+ patterns['transformations'] = ['complex_leet', 'random_caps', 'spinal_case', 'hex_encoding']
378
+
379
+ tech_savviness = self.behavior_profile['tech_savviness']
380
+ if tech_savviness > 0.8:
381
+ patterns['transformations'].extend(['hex_encoding', 'base64_patterns', 'unicode_mixing'])
382
+ patterns['common_elements'].extend(['tech_terms', 'crypto_terms'])
383
+
384
+ emotional_level = self.behavior_profile['emotional_attachment_level']
385
+ if emotional_level > 0.8:
386
+ patterns['structure'].insert(0, '{pet}{child}{special}{number}')
387
+ patterns['structure'].insert(0, '{spouse}{pet}{number}')
388
+ patterns['structure'].insert(0, '{child}{birth_year}')
389
+
390
+ cultural_influence = self.behavior_profile['cultural_influence']
391
+ if cultural_influence == 'high':
392
+ patterns['common_elements'].extend(['cultural_events', 'zodiac', 'national_holidays'])
393
+ if self.user_info.get('nationality', '').lower() in ['iran', 'persia', 'farsi']:
394
+ patterns['structure'].extend(['{cultural_event}{number}', '{zodiac}{number}'])
395
+
396
+ current_year = datetime.now().year
397
+ if self.user_info.get('birth_year'):
398
+ birth_year = int(self.user_info['birth_year'])
399
+ age = current_year - birth_year
400
+ if 13 <= age <= 25:
401
+ patterns['common_elements'].append('pop_culture')
402
+ elif 26 <= age <= 40:
403
+ patterns['common_elements'].append('family_elements')
404
+ elif age > 40:
405
+ patterns['common_elements'].append('nostalgic_elements')
406
+
407
+ return patterns
408
+ def get_password_generation_weights(self):
409
+ weights = {
410
+ 'personal_info': 0.7,
411
+ 'dates': 0.8,
412
+ 'pets': 0.9,
413
+ 'children': 0.95,
414
+ 'spouse': 0.92,
415
+ 'interests': 0.6,
416
+ 'cultural': 0.5,
417
+ 'keyboard': 0.3,
418
+ 'common': 0.2,
419
+ 'tech_terms': 0.4,
420
+ 'crypto_terms': 0.3
421
+ }
422
+ if self.behavior_profile['emotional_attachment_level'] > 0.7:
423
+ weights['pets'] = 0.95
424
+ weights['children'] = 0.95
425
+ weights['spouse'] = 0.92
426
+ weights['anniversary'] = 0.93
427
+ if self.behavior_profile['security_awareness'] < 0.4:
428
+ weights['dates'] = 0.9
429
+ weights['personal_info'] = 0.85
430
+ elif self.behavior_profile['security_awareness'] > 0.7:
431
+ weights['keyboard'] = 0.6
432
+ weights['common'] = 0.4
433
+ weights['tech_terms'] = 0.7
434
+ if self.behavior_profile['cultural_influence'] == 'high':
435
+ weights['cultural'] = 0.75
436
+ return weights
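+ # PasswordEntropyAnalyzer: Shannon-entropy estimate scaled by length, penalized for dictionary words, keyboard walks, and repeated characters, boosted for mixed character classes.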
437
+ class PasswordEntropyAnalyzer:
438
+ def __init__(self, language='en'):
439
+ self.language = language
440
+ self.lang_data = LANGUAGE_DATA.get(language, LANGUAGE_DATA['en'])
441
+ self.common_patterns = [
442
+ r'(?:password|pass|1234|qwerty|admin|login|welcome|123456|111111|iloveyou)',
443
+ r'(\d{4})\1', r'(.)\1{2,}',
444
+ r'(abc|bcd|cde|def|efg|fgh|ghi|hij|ijk|jkl|klm|lmn|mno|nop|opq|pqr|qrs|rst|stu|uvw|vwx|wxy|xyz)'
445
+ ]
446
+ self.dictionary_words = set()
447
+ self.dictionary_words.update(LANGUAGE_DATA[language]['common_words'])
448
+ for w in wn.all_lemma_names():
449
+ if len(w) > 3:
450
+ self.dictionary_words.add(w.lower())
451
+ self.dictionary_words.update([
452
+ 'christmas', 'halloween', 'thanksgiving', 'easter', 'newyear', 'valentine',
453
+ 'yankees', 'cowboys', 'lakers', 'giants', 'patriots', 'warriors',
454
+ 'harvard', 'yale', 'stanford', 'mit', 'princeton', 'columbia',
455
+ 'superbowl', 'eagles', 'knicks', 'rangers', 'redsox'
456
+ ])
457
+ def calculate_entropy(self, password):
458
+ if not password:
459
+ return 0
460
+ char_space = 0
461
+ if any(c.islower() for c in password): char_space += 26
462
+ if any(c.isupper() for c in password): char_space += 26
463
+ if any(c.isdigit() for c in password): char_space += 10
464
+ if any(c in string.punctuation for c in password): char_space += len(string.punctuation)
465
+ freq = Counter(password)
466
+ entropy = -sum((count / len(password)) * math.log2(count / len(password)) for count in freq.values())
467
+ for pattern in self.common_patterns:
468
+ if re.search(pattern, password.lower()):
469
+ entropy *= 0.2
470
+ keyboard_walks = self.lang_data['keyboard_patterns']
471
+ for walk in keyboard_walks:
472
+ if walk in password.lower():
473
+ entropy *= 0.3
474
+ for word in self.dictionary_words:
475
+ if word in password.lower() and len(word) > 3:
476
+ entropy *= 0.4
477
+ complexity_bonus = 1.0
478
+ if (any(c.islower() for c in password) and any(c.isupper() for c in password)):
479
+ complexity_bonus += 0.2
480
+ if any(c.isdigit() for c in password):
481
+ complexity_bonus += 0.15
482
+ if any(c in string.punctuation for c in password):
483
+ complexity_bonus += 0.25
484
+ if re.search(r'\d{4}', password) and any(c.isalpha() for c in password):
485
+ complexity_bonus += 0.1
486
+ entropy *= complexity_bonus
487
+ length_bonus = min(1.0, len(password) / 16) * 0.4
488
+ entropy *= (1 + length_bonus)
489
+ return round(entropy * len(password), 2)
490
+ def analyze_password_patterns(self, password):
491
+ analysis = {
492
+ 'length': len(password),
493
+ 'has_upper': any(c.isupper() for c in password),
494
+ 'has_lower': any(c.islower() for c in password),
495
+ 'has_digit': any(c.isdigit() for c in password),
496
+ 'has_special': any(c in string.punctuation for c in password),
497
+ 'digit_count': sum(1 for c in password if c.isdigit()),
498
+ 'special_count': sum(1 for c in password if c in string.punctuation),
499
+ 'repeated_chars': self._detect_repeated_chars(password),
500
+ 'keyboard_patterns': self._detect_keyboard_patterns(password),
501
+ 'common_words': self._detect_common_words(password),
502
+ 'cultural_patterns': self._detect_cultural_patterns(password),
503
+ 'entropy': self.calculate_entropy(password)
504
+ }
505
+ return analysis
506
+ def _detect_repeated_chars(self, password):
507
+ repeats = []
508
+ for match in re.finditer(r'(.)\1{2,}', password):
509
+ repeats.append({
510
+ 'char': match.group(1),
511
+ 'count': len(match.group(0)),
512
+ 'position': match.start()
513
+ })
514
+ return repeats
515
+ def _detect_keyboard_patterns(self, password):
516
+ patterns = []
517
+ password_lower = password.lower()
518
+ keyboard_layouts = {
519
+ 'qwerty': [
520
+ 'qwertyuiop', 'asdfghjkl', 'zxcvbnm',
521
+ '1234567890', '!@#$%^&*()'
522
+ ],
523
+ 'qwertz': [
524
+ 'qwertzuiop', 'asdfghjkl', 'yxcvbnm',
525
+ '1234567890', '!@#$%^&*()'
526
+ ],
527
+ 'azerty': [
528
+ 'azertyuiop', 'qsdfghjklm', 'wxcvbn',
529
+ '1234567890', '&é"\'(-è_çà)'
530
+ ],
531
+ 'dvorak': [
532
+ 'pyfgcrl', 'aoeuidhtns', 'qjkxbmwvz',
533
+ '1234567890', '!@#$%^&*()'
534
+ ]
535
+ }
536
+ likely_layout = 'qwerty'
537
+ if 'z' in password_lower and 'w' in password_lower and 'x' in password_lower:
538
+ likely_layout = 'qwertz'
539
+ elif 'a' in password_lower and 'z' in password_lower and 'q' in password_lower:
540
+ likely_layout = 'azerty'
541
+ elif 'p' in password_lower and 'y' in password_lower and 'f' in password_lower:
542
+ likely_layout = 'dvorak'
543
+ layout = keyboard_layouts.get(likely_layout, keyboard_layouts['qwerty'])
544
+ for row in layout:
545
+ for i in range(len(row)):
546
+ for j in range(i+3, min(i+10, len(row)+1)):
547
+ segment = row[i:j]
548
+ if segment in password_lower:
549
+ patterns.append({
550
+ 'pattern': segment,
551
+ 'type': f'{likely_layout}_horizontal',
552
+ 'length': len(segment),
553
+ 'layout': likely_layout
554
+ })
555
+ vertical_sequences = []
556
+ for col_idx in range(min(len(row) for row in layout if len(row) > 0)):
557
+ col_chars = []
558
+ for row in layout:
559
+ if col_idx < len(row):
560
+ col_chars.append(row[col_idx])
561
+ col_str = ''.join(col_chars)
562
+ if len(col_str) >= 3:
563
+ vertical_sequences.append(col_str)
564
+ for seq in vertical_sequences:
565
+ for i in range(len(seq)):
566
+ for j in range(i+3, min(i+8, len(seq)+1)):
567
+ segment = seq[i:j]
568
+ if segment in password_lower:
569
+ patterns.append({
570
+ 'pattern': segment,
571
+ 'type': f'{likely_layout}_vertical',
572
+ 'length': len(segment),
573
+ 'layout': likely_layout
574
+ })
575
+ diagonal_sequences = []
576
+ for start_row in range(len(layout)):
577
+ for start_col in range(len(layout[start_row])):
578
+ diag_chars = []
579
+ row, col = start_row, start_col
580
+ while row < len(layout) and col < len(layout[row]):
581
+ diag_chars.append(layout[row][col])
582
+ row += 1
583
+ col += 1
584
+ if len(diag_chars) >= 3:
585
+ diagonal_sequences.append(''.join(diag_chars))
586
+ for start_row in range(len(layout)):
587
+ for start_col in range(len(layout[start_row])):
588
+ diag_chars = []
589
+ row, col = start_row, start_col
590
+ while row < len(layout) and col >= 0:
591
+ if col < len(layout[row]):
592
+ diag_chars.append(layout[row][col])
593
+ row += 1
594
+ col -= 1
595
+ if len(diag_chars) >= 3:
596
+ diagonal_sequences.append(''.join(diag_chars))
597
+ for seq in diagonal_sequences:
598
+ for i in range(len(seq)):
599
+ for j in range(i+3, min(i+8, len(seq)+1)):
600
+ segment = seq[i:j]
601
+ if segment in password_lower:
602
+ patterns.append({
603
+ 'pattern': segment,
604
+ 'type': f'{likely_layout}_diagonal',
605
+ 'length': len(segment),
606
+ 'layout': likely_layout
607
+ })
608
+ spiral_patterns = [
609
+ 'q2we43', 'qaz2sx3ed', '1qaz2wsx', 'zaq12wsx',
610
+ 'qscde32', 'qwe321', '123edc', 'asdf432',
611
+ 'q1a2z3', 'q12w3e', '1q2w3e4r', '123qwe',
612
+ '1234qwer', 'qwer4321', '123456', '654321'
613
+ ]
614
+ for pattern in spiral_patterns:
615
+ if pattern in password_lower:
616
+ patterns.append({
617
+ 'pattern': pattern,
618
+ 'type': 'spiral',
619
+ 'length': len(pattern),
620
+ 'layout': likely_layout
621
+ })
622
+ return patterns
623
+ def _detect_common_words(self, password):
624
+ matches = []
625
+ password_lower = password.lower()
626
+ for word in self.dictionary_words:
627
+ if word in password_lower and len(word) > 3:
628
+ matches.append({
629
+ 'word': word,
630
+ 'position': password_lower.find(word),
631
+ 'length': len(word)
632
+ })
633
+ return matches
634
+ def _detect_cultural_patterns(self, password):
635
+ patterns = []
636
+ password_lower = password.lower()
637
+ for event in self.lang_data['cultural_events']:
638
+ if event.lower() in password_lower:
639
+ patterns.append({
640
+ 'pattern': event,
641
+ 'type': 'cultural_event',
642
+ 'relevance': 0.8
643
+ })
644
+ for sign in self.lang_data['zodiac_signs']:
645
+ if sign.lower() in password_lower:
646
+ patterns.append({
647
+ 'pattern': sign,
648
+ 'type': 'zodiac',
649
+ 'relevance': 0.7
650
+ })
651
+ for team in self.lang_data['sports_teams']:
652
+ if team.lower() in password_lower:
653
+ patterns.append({
654
+ 'pattern': team,
655
+ 'type': 'sports_team',
656
+ 'relevance': 0.6
657
+ })
658
+ for university in self.lang_data['universities']:
659
+ if university.lower() in password_lower:
660
+ patterns.append({
661
+ 'pattern': university,
662
+ 'type': 'university',
663
+ 'relevance': 0.5
664
+ })
665
+ for pattern in self.lang_data['number_patterns']:
666
+ if pattern in password:
667
+ patterns.append({
668
+ 'pattern': pattern,
669
+ 'type': 'number_pattern',
670
+ 'relevance': 0.4
671
+ })
672
+ if self.language == 'fa':
673
+ persian_patterns = {
674
+ 'religious': ['روز', 'عید', 'نماز', 'قرآن', 'حاج', 'سجاد', 'حسین', 'فاطمه', 'زهره', 'عاشورا', 'نوروز', 'یلدا'],
675
+ 'national': ['ایران', 'تهران', 'شهید', 'نظام', 'انقلاب', 'آزادی', 'سپاه', 'بدر', 'قادسیه', 'پارس', 'شیراز']
676
+ }
677
+ for category, words in persian_patterns.items():
678
+ for word in words:
679
+ if word.lower() in password_lower:
680
+ patterns.append({
681
+ 'pattern': word,
682
+ 'type': f'persian_{category}',
683
+ 'relevance': 0.7 if category == 'religious' else 0.6
684
+ })
685
+ elif self.language == 'de':
686
+ german_patterns = {
687
+ 'cultural': ['oktoberfest', 'bier', 'wurst', 'bayern', 'berlin', 'deutschland', 'karneval'],
688
+ 'historical': ['mauer', 'berliner', 'euro', 'dm', 'pfennig', 'reich', 'wiedervereinigung']
689
+ }
690
+ for category, words in german_patterns.items():
691
+ for word in words:
692
+ if word.lower() in password_lower:
693
+ patterns.append({
694
+ 'pattern': word,
695
+ 'type': f'german_{category}',
696
+ 'relevance': 0.6
697
+ })
698
+ elif self.language == 'fr':
699
+ french_patterns = {
700
+ 'cultural': ['tour', 'eiffel', 'paris', 'baguette', 'fromage', 'vin', 'bastille'],
701
+ 'historical': ['revolution', 'napoleon', 'berlin', 'liberté', 'fraternité', 'egalité']
702
+ }
703
+ for category, words in french_patterns.items():
704
+ for word in words:
705
+ if word.lower() in password_lower:
706
+ patterns.append({
707
+ 'pattern': word,
708
+ 'type': f'french_{category}',
709
+ 'relevance': 0.6
710
+ })
711
+ elif self.language == 'es':
712
+ spanish_patterns = {
713
+ 'cultural': ['flamenco', 'paella', 'toro', 'fiesta', 'barça', 'madrid', 'tomatina'],
714
+ 'historical': ['inquisicion', 'colombus', 'espana', 'reconquista', 'cervantes']
715
+ }
716
+ for category, words in spanish_patterns.items():
717
+ for word in words:
718
+ if word.lower() in password_lower:
719
+ patterns.append({
720
+ 'pattern': word,
721
+ 'type': f'spanish_{category}',
722
+ 'relevance': 0.6
723
+ })
724
+ return patterns
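+ # ContextualPasswordGenerator: combines weighted personal, temporal, and cultural elements, applies leet/affix transformations, and ranks candidates by estimated likelihood.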
725
+ class ContextualPasswordGenerator:
726
+ def __init__(self, language='en'):
727
+ self.language = language
728
+ self.lang_data = LANGUAGE_DATA.get(language, LANGUAGE_DATA['en'])
729
+ self.entropy_analyzer = PasswordEntropyAnalyzer(language)
730
+ self.context_weights = self._initialize_context_weights()
731
+ self.context_info = {}
732
+ def _initialize_context_weights(self):
733
+ return {
734
+ 'personal_info': 0.8,
735
+ 'dates': 0.9,
736
+ 'pets': 0.7,
737
+ 'children': 0.8,
738
+ 'spouse': 0.75,
739
+ 'interests': 0.6,
740
+ 'cultural': 0.5,
741
+ 'keyboard': 0.4,
742
+ 'common': 0.3,
743
+ 'tech_terms': 0.4,
744
+ 'crypto_terms': 0.3
745
+ }
746
+ def _calculate_relevance_score(self, info, element, category):
747
+ score = self.context_weights.get(category, 0.5)
748
+ psychological_factors = {
749
+ 'emotional_value': 0.0,
750
+ 'temporal_relevance': 0.0,
751
+ 'cognitive_load': 0.0,
752
+ 'length_factor': 0.0
753
+ }
754
+ emotional_keywords = {
755
+ 'pet': 0.85, 'child': 0.92, 'spouse': 0.88, 'anniversary': 0.75,
756
+ 'favorite': 0.78, 'love': 0.95, 'heart': 0.82, 'baby': 0.90, 'soulmate': 0.93
757
+ }
758
+ if isinstance(element, str):
759
+ element_lower = element.lower()
760
+ for keyword, value in emotional_keywords.items():
761
+ if keyword in element_lower or category == keyword:
762
+ psychological_factors['emotional_value'] = max(
763
+ psychological_factors['emotional_value'], value
764
+ )
765
+ current_year = datetime.now().year
766
+ if re.search(r'\d{4}', element):
767
+ year_match = re.search(r'(\d{4})', element)
768
+ if year_match:
769
+ year = int(year_match.group(1))
770
+ if abs(current_year - year) <= 2:
771
+ psychological_factors['temporal_relevance'] = 0.7
772
+ elif year == int(info.get('birth_year', 0)):
773
+ psychological_factors['temporal_relevance'] = 0.9
774
+ if 3 <= len(element) <= 8:
775
+ psychological_factors['cognitive_load'] = 0.6
776
+ elif 9 <= len(element) <= 12:
777
+ psychological_factors['cognitive_load'] = 0.3
778
+ else:
779
+ psychological_factors['cognitive_load'] = 0.1
780
+ if 8 <= len(element) <= 12:
781
+ psychological_factors['length_factor'] = 0.8
782
+ elif 6 <= len(element) <= 14:
783
+ psychological_factors['length_factor'] = 0.6
784
+ else:
785
+ psychological_factors['length_factor'] = 0.2
786
+ emotional_weight = 0.35
787
+ temporal_weight = 0.20
788
+ cognitive_weight = 0.20
789
+ length_weight = 0.25
790
+ psychological_score = (
791
+ psychological_factors['emotional_value'] * emotional_weight +
792
+ psychological_factors['temporal_relevance'] * temporal_weight +
793
+ (1 - psychological_factors['cognitive_load']) * cognitive_weight +
794
+ psychological_factors['length_factor'] * length_weight
795
+ )
796
+ final_score = (score * 0.5) + (psychological_score * 0.5)
797
+ if category in ['pets', 'children', 'spouse', 'favorite_numbers', 'anniversary']:
798
+ final_score *= 1.3
799
+ if category in ['common', 'keyboard'] and info.get('password_patterns') and 'complex' in info['password_patterns']:
800
+ final_score *= 0.6
801
+ return min(1.0, max(0.2, final_score))
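+ # Produces leet-speak and affix variants of a base token, biased toward the leet map for the target's language or nationality.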
802
+ def _apply_leet_transformations(self, text):
803
+ if not text or len(text) < 3:
804
+ return [text]
805
+ results = set([text])
806
+ text_lower = text.lower()
807
+ target_language = self.language
808
+ if 'nationality' in self.context_info:
809
+ nationality_to_lang = {
810
+ 'usa': 'en', 'united states': 'en', 'america': 'en',
811
+ 'uk': 'en', 'united kingdom': 'en', 'britain': 'en',
812
+ 'germany': 'de', 'deutschland': 'de', 'deutsch': 'de',
813
+ 'france': 'fr', 'français': 'fr',
814
+ 'spain': 'es', 'españa': 'es', 'spanish': 'es',
815
+ 'iran': 'fa', 'persian': 'fa', 'farsi': 'fa'
816
+ }
817
+ nationality = self.context_info['nationality'].lower()
818
+ for key, lang in nationality_to_lang.items():
819
+ if key in nationality:
820
+ target_language = lang
821
+ break
822
+ language_specific_leet = {
823
+ 'en': {'a': ['@', '4'], 'e': ['3', '&'], 'i': ['1', '!'], 'o': ['0', '*'], 's': ['$', '5']},
824
+ 'de': {'a': ['@', '4', 'ä'], 'e': ['3', '&'], 'i': ['1', '!'], 'o': ['0', '*'], 's': ['$', '5', 'ß']},
825
+ 'fr': {'a': ['@', '4', 'à', 'â'], 'e': ['3', '&', 'é', 'è', 'ê'], 'c': ['(', '©']},
826
+ 'es': {'a': ['@', '4', 'á'], 'e': ['3', '&', 'é'], 'o': ['0', '*', 'ó']},
827
+ 'fa': {'a': ['@', '4', 'آ', 'ا'], 'i': ['1', '!', 'ی'], 'o': ['0', '*', 'او']}
828
+ }
829
+ leet_mappings = language_specific_leet.get(target_language,
830
+ self.lang_data['leet_mappings'])
831
+ birth_year = self.context_info.get('birth_year', '')
832
+ if birth_year and len(birth_year) == 4:
833
+ year_suffix = birth_year[2:]
834
+ results.add(text + year_suffix)
835
+ results.add(year_suffix + text)
836
+ if len(year_suffix) == 2:
837
+ results.add(text + year_suffix + '!')
838
+ results.add(text + year_suffix + '@')
839
+ email = self.context_info.get('email', '')
840
+ if '@' in email:
841
+ domain = email.split('@')[1].split('.')[0]
842
+ if domain and len(domain) > 2:
843
+ results.add(text + '@' + domain)
844
+ results.add(domain + '@' + text)
845
+ base_transformations = []
846
+ for i, char in enumerate(text_lower):
847
+ if char in leet_mappings:
848
+ position_factor = 0.7 if 1 < i < len(text_lower) - 2 else 0.9
849
+ if random.random() < position_factor:
850
+ for replacement in leet_mappings[char]:
851
+ new_text = text_lower[:i] + replacement + text_lower[i+1:]
852
+ base_transformations.append(new_text)
853
+ if len(text) > 5:
854
+ for _ in range(min(5, len(base_transformations))):
855
+ if len(base_transformations) > 1:
856
+ base = random.choice(base_transformations)
857
+ for i, char in enumerate(base):
858
+ if char.isalpha() and char in leet_mappings and random.random() < 0.4:
859
+ for replacement in leet_mappings[char]:
860
+ new_text = base[:i] + replacement + base[i+1:]
861
+ base_transformations.append(new_text)
862
+ break
863
+ special_chars = self.lang_data['special_chars']
864
+ if target_language == 'fa':
865
+ special_chars += ['_', 'ـ', '•']
866
+ for char in special_chars[:3]:
867
+ results.add(text + char)
868
+ results.add(char + text)
869
+ if len(text) > 4:
870
+ results.add(text[:len(text)//2] + char + text[len(text)//2:])
871
+ cultural_numbers = {
872
+ 'en': ['1', '7', '13', '21', '23', '69', '123', '2023', '2024'],
873
+ 'de': ['7', '13', '18', '42', '77', '88', '123', '2023', '2024'],
874
+ 'fr': ['7', '13', '17', '21', '42', '89', '123', '2023', '2024'],
875
+ 'es': ['7', '10', '13', '21', '99', '123', '2023', '2024'],
876
+ 'fa': ['5', '7', '14', '22', '88', '99', '110', '123', '2023', '2024']
877
+ }
878
+ numbers = cultural_numbers.get(target_language, ['1', '7', '13', '21', '99', '123'])
879
+ for num in numbers:
880
+ results.add(text + num)
881
+ results.add(num + text)
882
+ if len(text) > 4:
883
+ results.add(text[:3] + num + text[3:])
884
+ if len(text) > 3:
885
+ results.add(text.capitalize())
886
+ results.add(text.upper())
887
+ results.add(text.lower())
888
+ if len(text) > 5:
889
+ camel_case = text[0].lower() + text[1].upper() + text[2:]
890
+ results.add(camel_case)
891
+ return list(set(results))[:15]
892
+ def _generate_weighted_combinations(self, info, count, min_length, max_length):
893
+ self.context_info = info
894
+ weighted_elements = []
895
+ behavioral_categories = {
896
+ 'high_emotional': ['pets', 'children', 'spouse', 'anniversary', 'favorite_things'],
897
+ 'medium_emotional': ['hobbies', 'sports', 'music', 'cars', 'food', 'books'],
898
+ 'low_emotional': ['job_title', 'employer', 'school', 'uni', 'location'],
899
+ 'temporal': ['birth_year', 'grad_year', 'grad_year_uni', 'favorite_numbers', 'current_year'],
900
+ 'cultural': ['cultural_events', 'zodiac', 'national_holidays']
901
+ }
902
+ for category, keys in behavioral_categories.items():
903
+ for key in keys:
904
+ if key in info:
905
+ items = info[key]
906
+ if not isinstance(items, list):
907
+ items = [items]
908
+ for item in items:
909
+ if item and isinstance(item, str) and len(item) >= 2:
910
+ if category == 'high_emotional':
911
+ weight = 0.95
912
+ elif category == 'medium_emotional':
913
+ weight = 0.75
914
+ elif category == 'temporal':
915
+ weight = 0.8
916
+ if re.search(r'\d{4}', item):
917
+ year = int(re.search(r'\d{4}', item).group())
918
+ current_year = datetime.now().year
919
+ weight = 0.9 - (current_year - year) * 0.05
920
+ weight = max(0.4, weight)
921
+ elif category == 'cultural':
922
+ weight = 0.65
923
+ if 'nationality' in info:
924
+ nat = info['nationality'].lower()
925
+ if (self.language == 'fa' and ('iran' in nat or 'persia' in nat)) or \
926
+ (self.language == 'de' and ('german' in nat or 'germany' in nat)) or \
927
+ (self.language == 'fr' and ('french' in nat or 'france' in nat)) or \
928
+ (self.language == 'es' and ('spanish' in nat or 'spain' in nat)):
929
+ weight = 0.85
930
+ else:
931
+ weight = 0.5
932
+ length_factor = 0.5
933
+ if min_length <= len(item) <= max_length:
934
+ length_factor = 1.0
935
+ elif len(item) < min_length:
936
+ length_factor = 0.7
937
+ weight *= length_factor
938
+ weighted_elements.append((item, category, weight))
939
+ weighted_elements.sort(key=lambda x: (
940
+ x[2] * (1.0 if min_length <= len(x[0]) <= max_length else 0.7),
941
+ -abs(len(x[0]) - (min_length + max_length) / 2)
942
+ ), reverse=True)
943
+ passwords = set()
944
+ for item, category, weight in weighted_elements:
945
+ if weight > 0.6:
946
+ cultural_numbers = self._get_cultural_numbers(info)
947
+ for num in cultural_numbers[:3]:
948
+ pwd = f"{item}{num}"
949
+ if min_length <= len(pwd) <= max_length:
950
+ passwords.add(pwd)
951
+ pwd = f"{num}{item}"
952
+ if min_length <= len(pwd) <= max_length:
953
+ passwords.add(pwd)
954
+ for char in self.lang_data['special_chars'][:2]:
955
+ pwd = f"{item}{char}"
956
+ if min_length <= len(pwd) <= max_length:
957
+ passwords.add(pwd)
958
+ pwd = f"{char}{item}"
959
+ if min_length <= len(pwd) <= max_length:
960
+ passwords.add(pwd)
961
+ if len(item) > 4:
962
+ pwd = f"{item[:len(item)//2]}{char}{item[len(item)//2:]}"
963
+ if min_length <= len(pwd) <= max_length:
964
+ passwords.add(pwd)
965
+ if len(weighted_elements) >= 2:
966
+ high_weight_elements = [e for e in weighted_elements if e[2] > 0.75]
967
+ emotional_items = [e for e in high_weight_elements if e[1] == 'high_emotional']
968
+ temporal_items = [e for e in weighted_elements if e[1] == 'temporal' and e[2] > 0.5]
969
+ for emo in emotional_items[:3]:
970
+ for temp in temporal_items[:2]:
971
+ for sep in ['', '_', '@', '#', '!']:
972
+ pwd = f"{emo[0]}{sep}{temp[0]}"
973
+ if min_length <= len(pwd) <= max_length:
974
+ passwords.add(pwd)
975
+ pwd = f"{temp[0]}{sep}{emo[0]}"
976
+ if min_length <= len(pwd) <= max_length:
977
+ passwords.add(pwd)
978
+ for i in range(min(3, len(emotional_items))):
979
+ for j in range(min(3, len(emotional_items))):
980
+ if i != j:
981
+ for sep in ['', '_', '@']:
982
+ pwd = f"{emotional_items[i][0]}{sep}{emotional_items[j][0]}"
983
+ if min_length <= len(pwd) <= max_length:
984
+ passwords.add(pwd)
985
+ if info.get('tech_savviness'):
986
+ try:
987
+ tech_level = int(info['tech_savviness'])
988
+ if tech_level >= 7:
989
+ tech_terms = ['admin', 'root', 'sys', 'dev', 'prod', 'test', 'api', 'db', 'sql', 'http', 'www']
990
+ for term in tech_terms:
991
+ for num in ['123', '456', '789', '007', '2023', '2024']:
992
+ pwd = f"{term}{num}"
993
+ if min_length <= len(pwd) <= max_length:
994
+ passwords.add(pwd)
995
+ for special in self.lang_data['special_chars'][:2]:
996
+ pwd = f"{term}{special}{datetime.now().year % 100}"
997
+ if min_length <= len(pwd) <= max_length:
998
+ passwords.add(pwd)
999
+ elif tech_level <= 3:
1000
+ for item, category, weight in weighted_elements:
1001
+ if weight > 0.5 and min_length <= len(item) <= max_length:
1002
+ passwords.add(item)
1003
+ common_structures = [
1004
+ "{word}{num}", "{num}{word}", "{word}{special}{num}",
1005
+ "{word1}{word2}", "{word}{num}{special}", "{word}{special}{word}"
1006
+ ]
1007
+ for structure in common_structures:
1008
+ if "{word}" in structure:
1009
+ for item, category, weight in weighted_elements[:5]:
1010
+ if weight > 0.6:
1011
+ num = random.choice(self.lang_data['number_patterns'][:3])
1012
+ special = random.choice(self.lang_data['special_chars'])
1013
+ pwd = structure.format(word=item, num=num, special=special)
1014
+ if min_length <= len(pwd) <= max_length:
1015
+ passwords.add(pwd)
1016
+ elif "{word1}" in structure and len(weighted_elements) >= 2:
1017
+ for i in range(min(3, len(weighted_elements))):
1018
+ for j in range(min(3, len(weighted_elements))):
1019
+ if i != j:
1020
+ item1 = weighted_elements[i][0]
1021
+ item2 = weighted_elements[j][0]
1022
+ num = random.choice(self.lang_data['number_patterns'][:2])
1023
+ special = random.choice(self.lang_data['special_chars'])
1024
+ pwd = structure.format(word1=item1, word2=item2, num=num, special=special)
1025
+ if min_length <= len(pwd) <= max_length:
1026
+ passwords.add(pwd)
1027
+ if info.get('leaked_passwords'):
1028
+ for pwd in info['leaked_passwords']:
1029
+ analysis = self.entropy_analyzer.analyze_password_patterns(pwd)
1030
+ if analysis['has_digit'] and analysis['has_special']:
1031
+ new_num = random.choice(self.lang_data['number_patterns'][:3])
1032
+ new_special = random.choice(self.lang_data['special_chars'])
1033
+ new_pwd = re.sub(r'\d+', new_num, pwd)
1034
+ new_pwd = re.sub(r'[^\w\s]', new_special, new_pwd)
1035
+ if min_length <= len(new_pwd) <= max_length:
1036
+ passwords.add(new_pwd)
1037
+ final_passwords = set()
1038
+ for pwd in passwords:
1039
+ leet_versions = self._apply_leet_transformations(pwd)
1040
+ for leet_pwd in leet_versions:
1041
+ if min_length <= len(leet_pwd) <= max_length:
1042
+ final_passwords.add(leet_pwd)
1043
+ sorted_passwords = self._rank_passwords_by_probability(list(final_passwords), info, min_length, max_length)
1044
+ return sorted_passwords[:count]
1045
+ def _get_cultural_numbers(self, info):
1046
+ target_language = info.get('language', 'en')
1047
+ nationality = info.get('nationality', '').lower()
1048
+ cultural_numbers = {
1049
+ 'en': {
1050
+ 'usa': ['1776', '13', '7', '42', '21', '69', '123', '2023', '2024'],
1051
+ 'uk': ['1066', '13', '7', '42', '18', '77', '123', '2023', '2024']
1052
+ },
1053
+ 'de': {
1054
+ 'germany': ['1989', '13', '7', '42', '18', '88', '123', '2023', '2024'],
1055
+ 'austria': ['1918', '13', '7', '42', '19', '77', '123', '2023', '2024']
1056
+ },
1057
+ 'fa': {
1058
+ 'iran': ['1399', '5', '7', '14', '22', '88', '99', '110', '313', '123', '2023', '2024'],
1059
+ 'afghanistan': ['1399', '7', '14', '22', '88', '99', '123', '2023', '2024']
1060
+ },
1061
+ 'fr': {
1062
+ 'france': ['1789', '13', '7', '42', '14', '75', '89', '123', '2023', '2024']
1063
+ },
1064
+ 'es': {
1065
+ 'spain': ['1492', '7', '10', '13', '21', '99', '123', '2023', '2024']
1066
+ }
1067
+ }
1068
+ if nationality in cultural_numbers.get(target_language, {}):
1069
+ return cultural_numbers[target_language][nationality]
1070
+ default_numbers = {
1071
+ 'en': ['1', '7', '13', '21', '23', '69', '123', '2023', '2024'],
1072
+ 'de': ['7', '13', '18', '42', '77', '88', '123', '2023', '2024'],
1073
+ 'fa': ['5', '7', '14', '22', '88', '99', '110', '123', '2023', '2024'],
1074
+ 'fr': ['7', '13', '14', '17', '42', '75', '89', '123', '2023', '2024'],
1075
+ 'es': ['7', '10', '13', '21', '99', '123', '2023', '2024']
1076
+ }
1077
+ return default_numbers.get(target_language, ['1', '7', '12', '13', '21', '99', '123', '2023', '2024'])
1078
+ def _rank_passwords_by_probability(self, passwords, info, min_length, max_length):
1079
+ ranked = []
1080
+ for pwd in passwords:
1081
+ score = 0.0
1082
+ length = len(pwd)
1083
+ if min_length <= length <= max_length:
1084
+ score += 0.4
1085
+ elif min_length - 2 <= length <= max_length + 2:
1086
+ score += 0.2
1087
+ else:
1088
+ score -= 0.2
1089
+ analysis = self.entropy_analyzer.analyze_password_patterns(pwd)
1090
+ if analysis['has_upper'] and analysis['has_lower'] and analysis['has_digit']:
1091
+ score += 0.2
1092
+ if analysis['has_special']:
1093
+ score += 0.1
1094
+ high_importance = ['pets', 'children', 'spouse', 'birth_year', 'anniversary']
1095
+ for key in high_importance:
1096
+ if key in info and info[key]:
1097
+ values = info[key] if isinstance(info[key], list) else [info[key]]
1098
+ for val in values:
1099
+ if val and val.lower() in pwd.lower():
1100
+ score += 0.5
1101
+ break
1102
+ for event in self.lang_data['cultural_events']:
1103
+ if event.lower() in pwd.lower():
1104
+ score += 0.2
1105
+ if analysis['repeated_chars']:
1106
+ score *= 0.6
1107
+ if len(analysis['keyboard_patterns']) > 1:
1108
+ score *= 0.5
1109
+ behavior_predictor = UserBehaviorPredictor(info)
1110
+ behavior_profile = behavior_predictor.predict_password_patterns()
1111
+ for structure in behavior_profile['structure']:
1112
+ pattern = structure.replace('{name}', '[a-zA-Z]+')
1113
+ pattern = pattern.replace('{pet}', '[a-zA-Z]+')
1114
+ pattern = pattern.replace('{child}', '[a-zA-Z]+')
1115
+ pattern = pattern.replace('{spouse}', '[a-zA-Z]+')
1116
+ pattern = pattern.replace('{birth_year}', r'\d{4}')
1117
+ pattern = pattern.replace('{number}', r'\d+')
1118
+ pattern = pattern.replace('{special}', r'[!@#$%^&*()_+\-=\[\]{};\'\\:"|,.<>\/?]')
1119
+ if re.match(f"^{pattern}$", pwd):
1120
+ score += 0.3
1121
+ ranked.append((pwd, score))
1122
+ ranked.sort(key=lambda x: x[1], reverse=True)
1123
+ return [item[0] for item in ranked]
1124
+ def _generate_cultural(self, info, count, min_length, max_length):
1125
+ passwords = set()
1126
+ lang_data = self.lang_data
1127
+ relevant_events = []
1128
+ relevant_events.extend(lang_data['cultural_events'])
1129
+ nationality = info.get('nationality', '').lower()
1130
+ if 'iran' in nationality or 'persian' in nationality:
1131
+ relevant_events.extend(['Nowruz', 'Tasua', 'Ashura', 'Yalda', 'Sizdah Bedar'])
1132
+ elif 'german' in nationality or 'germany' in nationality:
1133
+ relevant_events.extend(['Oktoberfest', 'Weihnachten', 'Karneval', 'Silvester'])
1134
+ elif 'french' in nationality or 'france' in nationality:
1135
+ relevant_events.extend(['Bastille Day', 'Tour de France', 'La Fête de la Musique'])
1136
+ elif 'spanish' in nationality or 'spain' in nationality:
1137
+ relevant_events.extend(['La Tomatina', 'San Fermin', 'Dia de los Muertos'])
1138
+ relevant_events = list(set(relevant_events))
1139
+ for event in relevant_events[:5]:
1140
+ for year_type in ['current_year', 'birth_year', 'common_years']:
1141
+ years = []
1142
+ if year_type == 'current_year':
1143
+ years = [str(datetime.now().year), str(datetime.now().year)[-2:]]
1144
+ elif year_type == 'birth_year' and info.get('birth_year'):
1145
+ years = [info['birth_year'], info['birth_year'][-2:]]
1146
+ else:
1147
+ years = ['2023', '2024', '23', '24', '00', '01', '99']
1148
+ for year in years:
1149
+ structures = [
1150
+ '{event}{year}',
1151
+ '{year}{event}',
1152
+ '{event}_{year}',
1153
+ '{event}#{year}',
1154
+ '{event}@{year}',
1155
+ '{event}{special}{year}'
1156
+ ]
1157
+ for structure in structures:
1158
+ pwd = structure.format(event=event, year=year, special=random.choice(lang_data['special_chars']))
1159
+ if min_length <= len(pwd) <= max_length:
1160
+ passwords.add(pwd)
1161
+ for num_pattern in lang_data['number_patterns'][:3]:
1162
+ pwd = f"{event}{num_pattern}"
1163
+ if min_length <= len(pwd) <= max_length:
1164
+ passwords.add(pwd)
1165
+ pwd = f"{num_pattern}{event}"
1166
+ if min_length <= len(pwd) <= max_length:
1167
+ passwords.add(pwd)
1168
+ if info.get('zodiac'):
1169
+ zodiac = info['zodiac']
1170
+ for num_pattern in lang_data['number_patterns'][:2]:
1171
+ pwd = f"{zodiac}{num_pattern}"
1172
+ if min_length <= len(pwd) <= max_length:
1173
+ passwords.add(pwd)
1174
+ pwd = f"{num_pattern}{zodiac}"
1175
+ if min_length <= len(pwd) <= max_length:
1176
+ passwords.add(pwd)
1177
+ cultural_numbers = self._get_cultural_numbers(info)
1178
+ for event in relevant_events[:3]:
1179
+ for num in cultural_numbers[:3]:
1180
+ pwd = f"{event}{num}"
1181
+ if min_length <= len(pwd) <= max_length:
1182
+ passwords.add(pwd)
1183
+ return list(passwords)[:count]
1184
+ def _predict_password_patterns(self, info):
1185
+ behavior_predictor = UserBehaviorPredictor(info)
1186
+ return behavior_predictor.predict_password_patterns()
1187
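+ # Instantiate the predicted password structures ({name}, {pet}, {child}, {number}, ...) with the
+ # target's data, then apply the transformations the behavior profile suggests (leet, camelCase,
+ # random caps) and add pet/child/cultural-event combinations.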
+ def _generate_behavioral(self, info, count, min_length, max_length):
1188
+ passwords = set()
1189
+ behavior_profile = self._predict_password_patterns(info)
1190
+ for structure in behavior_profile['structure'][:3]:
1191
+ required_elements = []
1192
+ if '{name}' in structure:
1193
+ if info.get('first_name'):
1194
+ required_elements.append(info['first_name'])
1195
+ elif info.get('nickname'):
1196
+ required_elements.append(info['nickname'])
1197
+ if '{pet}' in structure and info.get('pets'):
1198
+ required_elements.extend(info['pets'][:3])
1199
+ if '{child}' in structure and info.get('children'):
1200
+ required_elements.extend(info['children'][:2])
1201
+ if '{spouse}' in structure and info.get('spouse'):
1202
+ required_elements.append(info['spouse'])
1203
+ if '{birth_year}' in structure and info.get('birth_year'):
1204
+ required_elements.append(info['birth_year'])
1205
+ if '{birth_date}' in structure and info.get('birth_day') and info.get('birth_month'):
1206
+ required_elements.append(f"{info['birth_day']}{info['birth_month']}")
1207
+ if '{cultural_event}' in structure:
1208
+ required_elements.extend(self.lang_data['cultural_events'][:2])
1209
+ if '{zodiac}' in structure and info.get('zodiac'):
1210
+ required_elements.append(info['zodiac'])
1211
+ if required_elements:
1212
+ for elem in required_elements[:3]:
1213
+ for num_type in ['favorite_numbers', 'common_numbers', 'birth_year']:
1214
+ numbers = []
1215
+ if num_type == 'favorite_numbers' and info.get('favorite_numbers'):
1216
+ numbers = info['favorite_numbers'][:2]
1217
+ elif num_type == 'birth_year' and info.get('birth_year'):
1218
+ numbers = [info['birth_year']]
1219
+ else:
1220
+ numbers = self.lang_data['number_patterns'][:3]
1221
+ for num in numbers:
1222
+ pwd = structure
1223
+ if '{name}' in pwd:
1224
+ pwd = pwd.replace('{name}', elem, 1)
1225
+ if '{pet}' in pwd:
1226
+ pwd = pwd.replace('{pet}', elem, 1)
1227
+ if '{child}' in pwd:
1228
+ pwd = pwd.replace('{child}', elem, 1)
1229
+ if '{spouse}' in pwd:
1230
+ pwd = pwd.replace('{spouse}', elem, 1)
1231
+ if '{cultural_event}' in pwd:
1232
+ pwd = pwd.replace('{cultural_event}', elem, 1)
1233
+ if '{zodiac}' in pwd:
1234
+ pwd = pwd.replace('{zodiac}', elem, 1)
1235
+ if '{number}' in pwd:
1236
+ pwd = pwd.replace('{number}', num, 1)
1237
+ if '{special}' in pwd:
1238
+ pwd = pwd.replace('{special}', random.choice(self.lang_data['special_chars']), 1)
1239
+ if min_length <= len(pwd) <= max_length:
1240
+ passwords.add(pwd)
1241
+ for pwd in list(passwords):
1242
+ if 'leet_speak' in behavior_profile['transformations']:
1243
+ leet_versions = self._apply_leet_transformations(pwd)
1244
+ for leet_pwd in leet_versions:
1245
+ if min_length <= len(leet_pwd) <= max_length:
1246
+ passwords.add(leet_pwd)
1247
+ if 'camel_case' in behavior_profile['transformations']:
1248
+ camel_case = pwd[0].lower() + pwd[1:].capitalize()
1249
+ if min_length <= len(camel_case) <= max_length:
1250
+ passwords.add(camel_case)
1251
+ if 'random_caps' in behavior_profile['transformations']:
1252
+ random_caps = ''.join(c.upper() if random.random() > 0.7 else c for c in pwd)
1253
+ if min_length <= len(random_caps) <= max_length:
1254
+ passwords.add(random_caps)
1255
+ if 'cultural_events' in behavior_profile['common_elements']:
1256
+ for event in self.lang_data['cultural_events'][:3]:
1257
+ for num in self.lang_data['number_patterns'][:2]:
1258
+ pwd = f"{event}{num}"
1259
+ if min_length <= len(pwd) <= max_length:
1260
+ passwords.add(pwd)
1261
+ pwd = f"{num}{event}"
1262
+ if min_length <= len(pwd) <= max_length:
1263
+ passwords.add(pwd)
1264
+ if 'pets' in behavior_profile['common_elements'] and info.get('pets'):
1265
+ for pet in info['pets'][:3]:
1266
+ for num in ['01', '02', '03', '123', '2023', '2024']:
1267
+ pwd = f"{pet}{num}"
1268
+ if min_length <= len(pwd) <= max_length:
1269
+ passwords.add(pwd)
1270
+ if 'children' in behavior_profile['common_elements'] and info.get('children'):
1271
+ for child in info['children'][:2]:
1272
+ for year in [info.get('birth_year', '')[-2:], '01', '2023']:
1273
+ if year:
1274
+ pwd = f"{child}{year}"
1275
+ if min_length <= len(pwd) <= max_length:
1276
+ passwords.add(pwd)
1277
+ return list(passwords)[:count]
1278
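+ # Validate every candidate, score the survivors with _calculate_password_probability,
+ # and return the top 'count' passwords in descending score order.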
+ def _filter_and_rank_passwords(self, passwords, info, count, min_length, max_length):
1279
+ valid_passwords = []
1280
+ for pwd in passwords:
1281
+ is_valid, _ = self._validate_password(pwd, info, min_length=min_length, max_length=max_length)
1282
+ if is_valid:
1283
+ valid_passwords.append(pwd)
1284
+ ranked_passwords = []
1285
+ for pwd in valid_passwords:
1286
+ probability_score = self._calculate_password_probability(pwd, info, min_length, max_length)
1287
+ ranked_passwords.append((pwd, probability_score))
1288
+ ranked_passwords.sort(key=lambda x: x[1], reverse=True)
1289
+ return [pwd for pwd, score in ranked_passwords[:count]]
1290
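+ # Probability score in [0, 1]: rewards length fit, personal elements, common and predicted
+ # structures, and character-class mix; repeated characters halve the score.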
+ def _calculate_password_probability(self, password, info, min_length, max_length):
1291
+ score = 0.0
1292
+ length = len(password)
1293
+ if min_length <= length <= max_length:
1294
+ score += 0.3
1295
+ elif min_length - 2 <= length <= max_length + 2:
1296
+ score += 0.1
1297
+ else:
1298
+ score -= 0.2
1299
+ high_value_elements = ['pets', 'children', 'spouse', 'birth_year', 'anniversary']
1300
+ for element in high_value_elements:
1301
+ if element in info and info[element]:
1302
+ values = info[element] if isinstance(info[element], list) else [info[element]]
1303
+ for val in values:
1304
+ if val and val.lower() in password.lower():
1305
+ score += 0.9
1306
+ break
1307
+ common_structures = [
1308
+ r'^[a-zA-Z]+\d+$',
1309
+ r'^\d+[a-zA-Z]+$',
1310
+ r'^[a-zA-Z]+[!@#$%^&*()_+\-=\[\]{};\'\\:"|,.<>\/?]\d+$',
1311
+ r'^[a-zA-Z]+\d+[!@#$%^&*()_+\-=\[\]{};\'\\:"|,.<>\/?]$'
1312
+ ]
1313
+ for pattern in common_structures:
1314
+ if re.match(pattern, password):
1315
+ score += 0.3
1316
+ break
1317
+ if (any(c.islower() for c in password) and any(c.isupper() for c in password)):
1318
+ score += 0.1
1319
+ if any(c.isdigit() for c in password):
1320
+ score += 0.1
1321
+ if any(c in string.punctuation for c in password):
1322
+ score += 0.1
1323
+ if re.search(r'(.)\1{2,}', password):
1324
+ score *= 0.5
1325
+ behavior_predictor = UserBehaviorPredictor(info)
1326
+ behavior_profile = behavior_predictor.predict_password_patterns()
1327
+ for structure in behavior_profile['structure']:
1328
+ pattern = structure.replace('{name}', '[a-zA-Z]+')
1329
+ pattern = pattern.replace('{pet}', '[a-zA-Z]+')
1330
+ pattern = pattern.replace('{birth_year}', r'\d{4}')
1331
+ pattern = pattern.replace('{number}', r'\d+')
1332
+ pattern = pattern.replace('{special}', r'[!@#$%^&*()_+\-=\[\]{};\'\\:"|,.<>\/?]')
1333
+ if re.match(f"^{pattern}$", password):
1334
+ score += 0.5
1335
+ return min(1.0, score)
1336
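+ # Enforce length, character-class and entropy requirements (thresholds adapt to the predicted
+ # security awareness of the target) and reject common or dictionary patterns unless they are tied
+ # to the target's personal or cultural data. Returns (is_valid, entropy).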
+ def _validate_password(self, password, info, min_length=8, max_length=64, entropy_threshold=45):
1337
+ if not password or len(password) < min_length or len(password) > max_length:
1338
+ return False, 0
1339
+ behavior_predictor = UserBehaviorPredictor(info)
1340
+ behavior_profile = behavior_predictor.get_password_generation_weights()
1341
+ if behavior_profile['security_awareness'] < 0.4:
1342
+ entropy_threshold = 35
1343
+ elif behavior_profile['security_awareness'] > 0.7:
1344
+ entropy_threshold = 55
1345
+ has_upper = any(c in string.ascii_uppercase for c in password)
1346
+ has_lower = any(c in string.ascii_lowercase for c in password)
1347
+ has_digit = any(c in string.digits for c in password)
1348
+ has_special = any(c in string.punctuation for c in password)
1349
+ required_types = 3
1350
+ if behavior_profile['security_awareness'] < 0.4:
1351
+ required_types = 2
1352
+ elif behavior_profile['security_awareness'] > 0.7:
1353
+ required_types = 4
1354
+ if sum([has_upper, has_lower, has_digit, has_special]) < required_types:
1355
+ return False, 0
1356
+ common_patterns = [
1357
+ r'(?:password|pass|1234|qwerty|admin|login|welcome|123456|111111|iloveyou)',
1358
+ r'(\d{4})\1',
1359
+ r'(.)\1{2,}',
1360
+ r'(abc|bcd|cde|def|efg|fgh|ghi|hij|ijk|jkl|klm|lmn|mno|nop|opq|pqr|qrs|rst|stu|uvw|vwx|wxy|xyz)'
1361
+ ]
1362
+ for pattern in common_patterns:
1363
+ if re.search(pattern, password.lower()):
1364
+ is_personal = False
1365
+ personal_elements = []
1366
+ for key in ['pets', 'children', 'spouse', 'birth_year', 'favorite_numbers', 'anniversary']:
1367
+ if info.get(key):
1368
+ if isinstance(info[key], str):
1369
+ personal_elements.append(info[key])
1370
+ elif isinstance(info[key], list):
1371
+ personal_elements.extend(info[key])
1372
+ for elem in personal_elements:
1373
+ if elem and elem.lower() in password.lower():
1374
+ is_personal = True
1375
+ break
1376
+ if not is_personal:
1377
+ return False, 0
1378
+ for word in self.entropy_analyzer.dictionary_words:
1379
+ if len(word) > 4 and word in password.lower():
1380
+ is_personal = False
1381
+ for key, value in info.items():
1382
+ if isinstance(value, str) and value.lower() == word:
1383
+ is_personal = True
1384
+ break
1385
+ elif isinstance(value, list):
1386
+ if any(v.lower() == word for v in value if isinstance(v, str)):
1387
+ is_personal = True
1388
+ break
1389
+ cultural_data = [
1390
+ *self.lang_data['cultural_events'],
1391
+ *self.lang_data['zodiac_signs'],
1392
+ *self.lang_data['sports_teams'],
1393
+ *self.lang_data['universities']
1394
+ ]
1395
+ if any(word in item.lower() for item in cultural_data):
1396
+ is_personal = True
1397
+ if not is_personal:
1398
+ return False, 0
1399
+ entropy = self.entropy_analyzer.calculate_entropy(password)
1400
+ if entropy < entropy_threshold:
1401
+ return False, 0
1402
+ if behavior_profile['tech_savviness'] > 0.7:
1403
+ tech_terms = ['admin', 'root', 'sys', 'dev', 'prod', 'test', 'api', 'db', 'sql', 'http']
1404
+ has_tech_term = any(term in password.lower() for term in tech_terms)
1405
+ if not has_tech_term:
1406
+ if entropy < entropy_threshold + 10:
1407
+ return False, 0
1408
+ return True, entropy
1409
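+ # Public entry point: derive a behavior profile, pick or override the generation strategy,
+ # combine behavioral, cultural and weighted candidates, then filter and rank the result.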
+ def generate_with_context(self, info, count=50, min_length=8, max_length=12, strategy='comprehensive'):
1410
+ behavior_predictor = UserBehaviorPredictor(info)
1411
+ behavior_profile = behavior_predictor.predict_password_patterns()
1412
+ behavior_weights = behavior_predictor.get_password_generation_weights()
1413
+ self.context_weights = behavior_weights
1414
+ if behavior_profile['structure'] and strategy == 'comprehensive':
1415
+ if any('random' in s for s in behavior_profile['structure']):
1416
+ strategy = 'behavioral'
1417
+ elif any('emotional' in s for s in behavior_profile['structure']):
1418
+ strategy = 'smart'
1419
+ if strategy == 'comprehensive' or strategy == 'smart':
1420
+ behavioral_passwords = self._generate_behavioral(info, count//2, min_length, max_length)
1421
+ cultural_passwords = self._generate_cultural(info, count//3, min_length, max_length)
1422
+ basic_passwords = self._generate_weighted_combinations(info, count//6, min_length, max_length)
1423
+ all_passwords = behavioral_passwords + cultural_passwords + basic_passwords
1424
+ return self._filter_and_rank_passwords(all_passwords, info, count, min_length, max_length)
1425
+ elif strategy == 'behavioral':
1426
+ return self._generate_behavioral(info, count, min_length, max_length)
1427
+ elif strategy == 'cultural':
1428
+ return self._generate_cultural(info, count, min_length, max_length)
1429
+ else:
1430
+ return self._generate_weighted_combinations(info, count, min_length, max_length)
1431
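+ # Interactive questionnaire that builds the target profile dictionary used by the generator;
+ # any field can be skipped by pressing Enter.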
+ def get_extended_user_info():
1432
+ print("\n" + "="*60)
1433
+ print("🔐 ENTER DETAILED TARGET INFORMATION (PRESS ENTER TO SKIP FIELDS)")
1434
+ print("="*60)
1435
+ info = {'language': 'en'}
1436
+ print("\n👤 PERSONAL INFORMATION")
1437
+ info.update({
1438
+ 'first_name': input("First Name: "),
1439
+ 'middle_name': input("Middle Name: "),
1440
+ 'last_name': input("Last Name: "),
1441
+ 'nickname': input("Nickname: "),
1442
+ 'maiden_name': input("Mother's Maiden Name (type 'non' if unknown): "),
1443
+ 'gender': input("Gender: "),
1444
+ 'language': 'en',
1445
+ 'nationality': input("Nationality: "),
1446
+ 'passport': input("Passport Number: "),
1447
+ 'national_id': input("National ID Number: "),
1448
+ })
1449
+ print("\n📅 DATES & AGE")
1450
+ info.update({
1451
+ 'birthdate': input("Birthdate (YYYY-MM-DD): "),
1452
+ 'birth_year': input("Birth Year: "),
1453
+ 'birth_month': input("Birth Month: "),
1454
+ 'birth_day': input("Birth Day: "),
1455
+ 'zodiac': input("Zodiac Sign: "),
1456
+ 'age': input("Age: "),
1457
+ 'deceased': input("Is the person deceased? (Y/N): ").lower() in ['y', 'yes'],
1458
+ 'death_date': input("Date of Death (if applicable): "),
1459
+ 'birth_time': input("Birth Time (HH:MM): "),
1460
+ })
1461
+ print("\n📍 LOCATION DETAILS")
1462
+ info.update({
1463
+ 'city': input("City: "),
1464
+ 'state': input("State/Province: "),
1465
+ 'country': input("Country: "),
1466
+ 'zipcode': input("ZIP Code: "),
1467
+ 'street': input("Street Address: "),
1468
+ 'house_num': input("House Number: "),
1469
+ 'neighborhood': input("Neighborhood: "),
1470
+ 'travel_destinations': input("Frequent Travel Destinations (space separated): ").split(),
1471
+ })
1472
+ print("\n📞 CONTACT INFORMATION")
1473
+ info.update({
1474
+ 'phone': input("Primary Phone: "),
1475
+ 'mobile': input("Mobile Number: "),
1476
+ 'home_phone': input("Home Phone: "),
1477
+ 'work_phone': input("Work Phone: "),
1478
+ 'email': input("Primary Email: "),
1479
+ 'alt_email': input("Alternative Email: "),
1480
+ 'social_media': input("Social Media Handles (space separated): ").split(),
1481
+ })
1482
+ print("\n💻 DIGITAL FOOTPRINT")
1483
+ info.update({
1484
+ 'username': input("Primary Username: "),
1485
+ 'prev_usernames': input("Previous Usernames (space separated): ").split(),
1486
+ 'gaming_ids': input("Gaming IDs (space separated): ").split(),
1487
+ 'crypto_wallets': input("Crypto Wallet Addresses (space separated): ").split(),
1488
+ 'websites': input("Owned Websites (space separated): ").split(),
1489
+ 'device_models': input("Device Models (space separated): ").split(),
1490
+ 'wifi_ssids': input("Common WiFi SSIDs (space separated): ").split(),
1491
+ 'social_media_platforms': input("Social Media Platforms Used (space separated): ").split(),
1492
+ 'online_gaming_platforms': input("Gaming Platforms Used (space separated): ").split(),
1493
+ })
1494
+ print("\n🎓 EDUCATION & EMPLOYMENT")
1495
+ info.update({
1496
+ 'school': input("High School: "),
1497
+ 'uni': input("University: "),
1498
+ 'grad_year': input("High School Graduation Year: "),
1499
+ 'major': input("University Major: "),
1500
+ 'grad_year_uni': input("University Graduation Year: "),
1501
+ 'job_title': input("Job Title: "),
1502
+ 'employer': input("Employer: "),
1503
+ 'employee_id': input("Employee ID: "),
1504
+ })
1505
+ print("\n👨‍👩‍👧‍👦 FAMILY MEMBERS")
1506
+ info.update({
1507
+ 'spouse': input("Spouse's Name: "),
1508
+ 'spouse_birth': input("Spouse's Birth Year: "),
1509
+ 'children': input("Children's Names (space separated): ").split(),
1510
+ 'children_births': input("Children's Birth Years (space separated): ").split(),
1511
+ 'parents': input("Parents' Names (space separated): ").split(),
1512
+ 'in_laws': input("In-Laws' Names (space separated): ").split(),
1513
+ 'cousins': input("Close Cousins' Names (space separated): ").split(),
1514
+ 'relatives': input("Close Relatives' Names (space separated): ").split(),
1515
+ })
1516
+ print("\n❤️ INTERESTS & PREFERENCES")
1517
+ info.update({
1518
+ 'hobbies': input("Hobbies (space separated): ").split(),
1519
+ 'sports': input("Sports Teams (space separated): ").split(),
1520
+ 'music': input("Favorite Bands/Artists (space separated): ").split(),
1521
+ 'movies': input("Favorite Movies (space separated): ").split(),
1522
+ 'tv_shows': input("Favorite TV Shows (space separated): ").split(),
1523
+ 'books': input("Favorite Books (space separated): ").split(),
1524
+ 'colors': input("Favorite Colors (space separated): ").split(),
1525
+ 'pets': input("Pet Names (space separated): ").split(),
1526
+ 'pet_types': input("Pet Species/Breeds (space separated): ").split(),
1527
+ 'cars': input("Car Models (space separated): ").split(),
1528
+ 'car_plates': input("Car License Plates (space separated): ").split(),
1529
+ 'brands': input("Favorite Brands (space separated): ").split(),
1530
+ 'foods': input("Favorite Foods (space separated): ").split(),
1531
+ 'restaurants': input("Frequent Restaurants (space separated): ").split(),
1532
+ 'authors': input("Favorite Authors (space separated): ").split(),
1533
+ 'actors': input("Favorite Actors/Actresses (space separated): ").split(),
1534
+ 'genres': input("Favorite Music/Movie Genres (space separated): ").split(),
1535
+ })
1536
+ print("\n🔒 SECURITY INFORMATION")
1537
+ info.update({
1538
+ 'common_passwords': input("Known Common Passwords (space separated): ").split(),
1539
+ 'leaked_passwords': input("Previously Leaked Passwords (space separated): ").split(),
1540
+ 'data_breaches': input("Involved Data Breaches (space separated): ").split(),
1541
+ 'security_questions': input("Security Questions Used (space separated): ").split(),
1542
+ 'security_answers': input("Security Question Answers (space separated): ").split(),
1543
+ 'password_patterns': input("Common Password Patterns (space separated): ").split(),
1544
+ 'special_chars': input("Common Special Characters (space separated): ").split(),
1545
+ 'number_patterns': input("Common Number Patterns (space separated): ").split(),
1546
+ 'favorite_numbers': input("Favorite/Lucky Numbers (space separated): ").split(),
1547
+ })
1548
+ print("\n🧠 BEHAVIORAL PATTERNS")
1549
+ info.update({
1550
+ 'leet_transforms': input("Common Leet Transformations (e.g., '4=a', '3=e') (space separated): ").split(),
1551
+ 'keyboard_walks': input("Common Keyboard Walks (space separated): ").split(),
1552
+ 'catchphrases': input("Frequently Used Catchphrases (space separated): ").split(),
1553
+ 'quotes': input("Favorite Quotes (space separated): ").split(),
1554
+ 'online_behaviors': input("Notable Online Behaviors (space separated): ").split(),
1555
+ })
1556
+ print("\n🔍 ADVANCED PROFILING")
1557
+ info.update({
1558
+ 'political_views': input("Political Views (space separated): ").split(),
1559
+ 'religious_views': input("Religious Views (space separated): ").split(),
1560
+ 'ethnicity': input("Ethnicity: "),
1561
+ 'education_level': input("Education Level: "),
1562
+ 'income_range': input("Income Range: "),
1563
+ 'relationship_status': input("Relationship Status: "),
1564
+ 'occupation': input("Occupation: "),
1565
+ 'industry': input("Industry: "),
1566
+ 'tech_savviness': input("Tech Savviness (1-10): "),
1567
+ 'password_change_frequency': input("Password Change Frequency (monthly): "),
1568
+ })
1569
+ return info
1570
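+ # Write the ranked passwords with per-password entropy and pattern annotations to a text file,
+ # then dump detailed statistics to a companion _stats.json file.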
+ def save_passwords(passwords, info, filename="advanced_passwords.txt", min_length=8, max_length=12):
1571
+ if not passwords:
1572
+ print("❌ No valid passwords generated")
1573
+ return
1574
+ entropy_analyzer = PasswordEntropyAnalyzer(info.get('language', 'en'))
1575
+ entropy_scores = [entropy_analyzer.calculate_entropy(p) for p in passwords]
1576
+ avg_entropy = sum(entropy_scores) / len(entropy_scores)
1577
+ password_analysis = []
1578
+ for pwd in passwords:
1579
+ analysis = entropy_analyzer.analyze_password_patterns(pwd)
1580
+ password_analysis.append({
1581
+ 'password': pwd,
1582
+ 'length': analysis['length'],
1583
+ 'entropy': analysis['entropy'],
1584
+ 'complexity_score': min(10, round(analysis['entropy']/10)),
1585
+ 'pattern_types': []
1586
+ })
1587
+ if analysis['repeated_chars']:
1588
+ password_analysis[-1]['pattern_types'].append('repeated_chars')
1589
+ if analysis['keyboard_patterns']:
1590
+ password_analysis[-1]['pattern_types'].append('keyboard_walk')
1591
+ if analysis['common_words']:
1592
+ password_analysis[-1]['pattern_types'].append('dictionary_word')
1593
+ if analysis['cultural_patterns']:
1594
+ password_analysis[-1]['pattern_types'].append('cultural_pattern')
1595
+ password_analysis.sort(key=lambda x: x['entropy'], reverse=True)
1596
+ output_dir = os.path.dirname(os.path.abspath(filename))
1597
+ if output_dir and not os.path.exists(output_dir):
1598
+ os.makedirs(output_dir)
1599
+ with open(filename, "w", encoding="utf-8") as f:
1600
+ f.write("# ================================================\n")
1601
+ f.write("# Ethically Generated Password List\n")
1602
+ f.write("# ================================================\n")
1603
+ f.write("# Generated with advanced algorithmic pattern analysis\n")
1604
+ f.write("# For authorized security testing and educational purposes only\n")
1605
+ f.write("# ================================================\n")
1606
+ f.write(f"# Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
1607
+ f.write(f"# Language: {LANGUAGE_DATA.get(info.get('language', 'en'), LANGUAGE_DATA['en'])['name']}\n")
1608
+ f.write(f"# Length Constraints: {min_length}-{max_length} characters\n")
1609
+ f.write(f"# Total Passwords: {len(passwords)}\n")
1610
+ f.write(f"# Average Entropy: {avg_entropy:.2f}\n")
1611
+ f.write(f"# Security Level: {'High' if avg_entropy > 60 else 'Medium' if avg_entropy > 45 else 'Low'}\n")
1612
+ f.write("# ================================================\n")
1613
+ f.write("# Target Profile Summary:\n")
1614
+ f.write(f"# - Name: {info.get('first_name', '')} {info.get('last_name', '')}\n")
1615
+ f.write(f"# - Location: {info.get('city', '')}, {info.get('country', '')}\n")
1616
+ f.write(f"# - Age: {info.get('age', 'Unknown')}\n")
1617
+ f.write("# ================================================\n")
1618
+ f.write("# Password Analysis:\n")
1619
+ f.write("# High Entropy (60+): Strong password patterns\n")
1620
+ f.write("# Medium Entropy (45-60): Moderate security\n")
1621
+ f.write("# Low Entropy (<45): Weak patterns, use with caution\n")
1622
+ f.write("# ================================================\n")
1623
+ f.write("# Passwords:\n")
1624
+ for i, analysis in enumerate(password_analysis, 1):
1625
+ pwd = analysis['password']
1626
+ entropy = analysis['entropy']
1627
+ complexity = analysis['complexity_score']
1628
+ patterns = ", ".join(analysis['pattern_types']) if analysis['pattern_types'] else "complex_pattern"
1629
+ f.write(f"{i:3}. {pwd}\n")
1630
+ f.write(f" 🔍 Entropy: {entropy:.2f} | Length: {len(pwd)} | Complexity: {complexity}/10\n")
1631
+ f.write(f" 📌 Pattern Type: {patterns}\n")
1632
+ f.write(f" 💡 Security Rating: {'⭐' * complexity}{'☆' * (10-complexity)}\n")
1633
+ print(f"\n✅ Saved {len(passwords)} advanced passwords to {filename}")
1634
+ print(f"📊 Average entropy: {avg_entropy:.2f}")
1635
+ print(f"📏 Length range: {min_length}-{max_length} characters")
1636
+ print(f"📈 Security level: {'High' if avg_entropy > 60 else 'Medium' if avg_entropy > 45 else 'Low'}")
1637
+ stats_file = filename.replace('.txt', '_stats.json')
1638
+ stats = {
1639
+ 'metadata': {
1640
+ 'timestamp': datetime.now().isoformat(),
1641
+ 'language': info.get('language', 'en'),
1642
+ 'length_constraints': {
1643
+ 'min': min_length,
1644
+ 'max': max_length
1645
+ },
1646
+ 'target_summary': {
1647
+ 'has_name': bool(info.get('first_name') and info.get('last_name')),
1648
+ 'has_birthdate': bool(info.get('birthdate')),
1649
+ 'has_email': bool(info.get('email')),
1650
+ 'country': info.get('country', 'Unknown')
1651
+ },
1652
+ 'password_count': len(passwords),
1653
+ 'average_entropy': avg_entropy,
1654
+ 'security_level': 'High' if avg_entropy > 60 else 'Medium' if avg_entropy > 45 else 'Low'
1655
+ },
1656
+ 'password_analysis': password_analysis
1657
+ }
1658
+ with open(stats_file, 'w', encoding='utf-8') as f:
1659
+ json.dump(stats, f, ensure_ascii=False, indent=2)
1660
+ print(f"📊 Detailed statistics saved to {stats_file}")
1661
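+ # Command-line interface: password count, length bounds, output file, RNG seed,
+ # generation strategy and ethical-verification toggles.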
+ def parse_arguments():
1662
+ parser = argparse.ArgumentParser(
1663
+ description="""
1664
+ 🔐 Advanced Password List Generator - Smart Algorithmic Edition
1665
+ Generates highly personalized password guesses based on detailed user profiles.
1666
+ Designed for ethical security testing and educational use only.
1667
+ """,
1668
+ formatter_class=argparse.RawTextHelpFormatter,
1669
+ epilog="""
1670
+ Example Usage:
1671
+ python PLG_ysnrfd.py -c 100 --min_p 8 --max_p 12 -o passwords.txt --strategy smart
1672
+ python PLG_ysnrfd.py -h
1673
+ """
1674
+ )
1675
+ parser.add_argument(
1676
+ '-c', '--count',
1677
+ type=int,
1678
+ default=50,
1679
+ help="Number of passwords to generate (default: 50)"
1680
+ )
1681
+ parser.add_argument(
1682
+ '--min_p',
1683
+ type=int,
1684
+ default=8,
1685
+ help="Minimum length of generated passwords (default: 8)"
1686
+ )
1687
+ parser.add_argument(
1688
+ '--max_p',
1689
+ type=int,
1690
+ default=12,
1691
+ help="Maximum length of generated passwords (default: 12)"
1692
+ )
1693
+ parser.add_argument(
1694
+ '-o', '--output',
1695
+ type=str,
1696
+ default='passwords.txt',
1697
+ help="Output file name for generated passwords (default: passwords.txt)"
1698
+ )
1699
+ parser.add_argument(
1700
+ '--seed',
1701
+ type=int,
1702
+ default=42,
1703
+ help="Seed value for reproducibility (default: 42)"
1704
+ )
1705
+ parser.add_argument(
1706
+ '--strategy',
1707
+ type=str,
1708
+ default='smart',
1709
+ choices=['basic', 'cultural', 'behavioral', 'comprehensive', 'smart'],
1710
+ help="Password generation strategy:\n"
1711
+ " basic : Simple combinations of user info\n"
1712
+ " cultural : Focus on cultural and language patterns\n"
1713
+ " behavioral : Focus on typing behavior and habits\n"
1714
+ " comprehensive : Balanced combination of approaches\n"
1715
+ " smart : Intelligent weighted combinations (default)"
1716
+ )
1717
+ parser.add_argument(
1718
+ '--ethical-verify',
1719
+ action='store_true',
1720
+ default=True,
1721
+ help="Enable ethical usage verification (default: True)"
1722
+ )
1723
+ parser.add_argument(
1724
+ '--no-ethical-verify',
1725
+ dest='ethical_verify',
1726
+ action='store_false',
1727
+ help="Disable ethical usage verification (not recommended)"
1728
+ )
1729
+ return parser.parse_args()
1730
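+ # Orchestrates a run: ethical verification, profile collection, password generation,
+ # usage logging and saving the results.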
+ def main():
1731
+ print(r"""
1732
+ 🔐 Advanced Password List Generator v5.1 (Smart Algorithmic Edition)
1733
+ 🌐 Intelligent Algorithms • Ethical Safeguards • Comprehensive Profiling
1734
+ ⚠️ For authorized security testing ONLY with explicit permission
1735
+ """)
1736
+ ethical_guard = EthicalSafeguard()
1737
+ args = parse_arguments()
1738
+ random.seed(args.seed)
1739
+ if args.ethical_verify and not ethical_guard.verify_ethical_usage():
1740
+ print("❌ Ethical verification failed. Exiting...")
1741
+ return
1742
+ info = get_extended_user_info()
1743
+ print(f"\n🔄 Generating passwords using '{args.strategy}' strategy...")
1744
+ generator = ContextualPasswordGenerator(
1745
+ language=info.get('language', 'en')
1746
+ )
1747
+ passwords = generator.generate_with_context(
1748
+ info,
1749
+ count=args.count,
1750
+ min_length=args.min_p,
1751
+ max_length=args.max_p,
1752
+ strategy=args.strategy
1753
+ )
1754
+ if args.ethical_verify:
1755
+ ethical_guard.log_usage(len(passwords), info)
1756
+ save_passwords(passwords, info, args.output, args.min_p, args.max_p)
1757
+ print("\n" + "="*60)
1758
+ print(" PASSWORD GENERATION COMPLETE - SECURITY RECOMMENDATIONS")
1759
+ print("="*60)
1760
+ print("1. This password list is for authorized security testing ONLY")
1761
+ print("2. Always obtain explicit written permission before testing")
1762
+ print("3. Securely delete these passwords after your authorized test")
1763
+ print("4. Document all activities for your security report")
1764
+ print("5. Never use these passwords for unauthorized access")
1765
+ print("="*60)
1766
+ print("🔒 Remember: With great power comes great responsibility")
1767
+ print("="*60)
1768
+ if __name__ == "__main__":
1769
+ main()
password-list-generator-complete-main/README.md ADDED
@@ -0,0 +1,175 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # password-list-generator-complete
2
+ A password list generator tool for **ethical hacking**.
3
+ **Under development.**
4
+
5
+
6
+ # 🔐 PLG_ysnrfd – Advanced Context-Aware Password Intelligence & Security Analyzer
7
+
8
+ [![Python](https://img.shields.io/badge/python-3.8%2B-blue.svg)](https://www.python.org/)
9
+ [![Security Tool](https://img.shields.io/badge/Security-Analyzer-critical.svg)]()
10
+
11
+ ---
12
+
13
+ ## 🚀 Overview
14
+
15
+ **PLG_ysnrfd** is a powerful tool for **password security analysis** and **context-aware password generation**.
16
+ It leverages advanced algorithms for entropy calculation, pattern recognition, user behavior modeling, and cultural context to produce **strong, personalized passwords** and evaluate the strength of existing ones.
17
+
18
+ This project is designed to improve cybersecurity awareness and is intended **only for authorized security testing and educational purposes**.
19
+
20
+ ---
21
+
22
+ ## ✨ Features
23
+
24
+ - **Comprehensive Password Analysis**
25
+ Detects dictionary words, keyboard patterns, repeated characters, cultural references, and common password structures.
26
+
27
+ - **Entropy Calculation**
28
+ Calculates the password entropy based on Shannon’s formula and penalizes weak or predictable patterns.
29
+
30
+ - **Smart Password Generation**
31
+ Creates personalized passwords using user information (e.g., names, pets, dates, cultural events) with advanced transformations (leet-speak, camelCase, hex encoding, etc.); a minimal leet-speak sketch follows this list.
32
+
33
+ - **Multi-Language Support**
34
+ Supports **English, German, Persian, French, and Spanish** with language-specific wordlists and keyboard layouts.
35
+
36
+ - **Ethical Safeguard System**
37
+ Built-in interactive verification to ensure usage is legal and ethical (authorized penetration testing).
38
+
39
+ - **Behavioral Prediction**
40
+ Uses a `UserBehaviorPredictor` class to understand the security awareness, emotional bias, and cultural context of the user.
41
+
42
+ - **Encrypted Logging**
43
+ Secure logging of usage data with SHA-256 hashing.
44
+
45
+ - **Leaked Password Transformation**
46
+ Learns from compromised passwords and generates improved, stronger variants.
47
+
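+ As a rough illustration of the leet-speak transformation mentioned under **Smart Password Generation**, here is a minimal, self-contained sketch. The `LEET_MAP` table and `leet_variants` helper below are hypothetical and do not reproduce the tool's internal implementation:
+ 
+ ```python
+ # Minimal leet-speak sketch (hypothetical helper, not the tool's internal code).
+ LEET_MAP = {'a': '4', 'e': '3', 'i': '1', 'o': '0', 's': '5'}
+ 
+ def leet_variants(word):
+     # Fully substituted variant plus every single-substitution variant.
+     lowered = word.lower()
+     full = ''.join(LEET_MAP.get(c, c) for c in lowered)
+     singles = {lowered.replace(c, LEET_MAP[c], 1) for c in LEET_MAP if c in lowered}
+     return {full, *singles} - {lowered}
+ 
+ print(leet_variants("password"))  # e.g. {'p455w0rd', 'p4ssword', 'pa5sword', 'passw0rd'}
+ ```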
48
+ ---
49
+
50
+ ## 📦 Installation
51
+
52
+ **1. Clone the repository**
53
+
54
+ ```bash
55
+ git clone https://github.com/ysnrfd/password-list-generator-complete.git
56
+ cd password-list-generator-complete
57
+ ```
58
+
59
+ **2. Install dependencies**
60
+
61
+ ```bash
62
+ pip install -r requirements.txt
63
+ ```
64
+
65
+ **3. Required Python Libraries**
66
+
67
+ - nltk (WordNet, Punkt tokenizer)
68
+
69
+ - pandas
70
+
71
+ - scikit-learn
72
+
73
+ - requests
74
+
75
+ - tqdm
76
+
77
+ - python-Levenshtein
78
+
79
+ **Note: For NLTK, you may need to download WordNet and Punkt data:**
80
+
81
+ ```python
82
+ import nltk
83
+ nltk.download('wordnet')
84
+ nltk.download('punkt')
85
+ ```
86
+
87
+ ## 🔧 Usage
88
+
89
+ **1. Run the tool directly**
90
+
91
+ **Windows:**
92
+ ```bash
93
+ python PLG_ysnrfd.py
94
+ ```
95
+ **Linux:**
96
+ ```bash
97
+ python3 PLG_ysnrfd.py
98
+ ```
99
+
100
+ **2. Analyze the strength of a password**
101
+
102
+ ```python
103
+ from PLG_ysnrfd import PasswordEntropyAnalyzer
104
+
105
+ analyzer = PasswordEntropyAnalyzer(language='en')
106
+ result = analyzer.analyze_password_patterns("MyP@ssw0rd2024")
107
+ print(result)
108
+ ```
109
+
110
+ **3. Generate context-aware passwords**
111
+
112
+ ```python
113
+ from PLG_ysnrfd import ContextualPasswordGenerator
114
+
115
+ user_info = {
116
+ 'first_name': 'Alice',
117
+ 'birth_year': '1995',
118
+ 'pets': ['Luna'],
119
+ 'nationality': 'USA',
120
+ 'language': 'en'
121
+ }
122
+
123
+ generator = ContextualPasswordGenerator(language='en')
124
+ passwords = generator.generate_with_context(user_info, count=10, min_length=8, max_length=16)
125
+ print(passwords)
126
+ ```
127
+
128
+ ## ⚠️ Ethical Disclaimer
129
+
130
+ This tool is strictly for educational and authorized penetration testing.
131
+ Before usage, the program requires you to accept the Ethical Usage Agreement.
132
+ Unauthorized use of this tool is illegal and unethical.
133
+ Always ensure you have explicit written consent before testing any system.
134
+
135
+ ## 📂 Project Structure
136
+
137
+ ```text
138
+ password-list-generator-complete/
139
+ │── PLG_ysnrfd.py
140
+ │── requirements.txt
141
+ │── README.md
+ │── LICENSE
142
+ ```
143
+
144
+ ## 🧠 Algorithms
145
+
146
+ - **Entropy Calculation**
147
+ Uses Shannon entropy, keyboard pattern detection, and character frequency analysis to assess password complexity; a simplified entropy sketch follows this list.
148
+
149
+ - **User Behavior Modeling**
150
+ Predicts password habits based on cultural background, age, pets, children, and emotional factors.
151
+
152
+ - **Context-Aware Password Generation**
153
+ Combines personal data, cultural events, and randomized transformations to produce strong, unique passwords.
154
+
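+ For orientation, here is a simplified sketch of a Shannon-style entropy estimate over a password's character frequencies. The `shannon_entropy_bits` function is an illustration only and is not the analyzer's exact formula, which additionally penalizes keyboard walks and dictionary words:
+ 
+ ```python
+ import math
+ from collections import Counter
+ 
+ def shannon_entropy_bits(password):
+     # H = -sum(p_i * log2(p_i)) over the observed character frequencies,
+     # multiplied by the length to express total bits rather than bits per character.
+     counts = Counter(password)
+     n = len(password)
+     per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
+     return per_char * n
+ 
+ print(round(shannon_entropy_bits("MyP@ssw0rd2024"), 2))
+ ```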
155
+ ## 📊 Roadmap
156
+
157
+ - Integration with leaked password databases (HaveIBeenPwned API)
158
+
159
+ - More language packs (Italian, Russian, Arabic)
160
+
161
+ - Plugin-based architecture for custom rules
162
+
163
+ ## 📝 License
164
+
165
+ This project is released under the **ysnrfd LICENSE**.
166
+ See the LICENSE file for details.
167
+
168
+ ## 👤 Author
169
+
170
+ **Developer: YSNRFD**
171
+ **Telegram: @ysnrfd**
172
+
173
+ ## 🌟 Support the Project
174
+
175
+ **If you find this project useful, please give it a ⭐ on GitHub!**
password-list-generator-complete-main/requirements.txt ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ argparse
2
+ nltk>=3.8
3
+ requests>=2.31
4
+ tqdm>=4.66
5
+ pandas>=2.0
6
+ scikit-learn>=1.3
7
+ python-Levenshtein>=0.21