repo_name
stringclasses
1 value
pr_number
int64
4.12k
11.2k
pr_title
stringlengths
9
107
pr_description
stringlengths
107
5.48k
author
stringlengths
4
18
date_created
unknown
date_merged
unknown
previous_commit
stringlengths
40
40
pr_commit
stringlengths
40
40
query
stringlengths
118
5.52k
before_content
stringlengths
0
7.93M
after_content
stringlengths
0
7.93M
label
int64
-1
1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import base64 def base85_encode(string: str) -> bytes: """ >>> base85_encode("") b'' >>> base85_encode("12345") b'0etOA2#' >>> base85_encode("base 85") b'@UX=h+?24' """ # encoded the input to a bytes-like object and then a85encode that return base64.a85encode(string.encode("utf-8")) def base85_decode(a85encoded: bytes) -> str: """ >>> base85_decode(b"") '' >>> base85_decode(b"0etOA2#") '12345' >>> base85_decode(b"@UX=h+?24") 'base 85' """ # a85decode the input into bytes and decode that into a human readable string return base64.a85decode(a85encoded).decode("utf-8") if __name__ == "__main__": import doctest doctest.testmod()
import base64 def base85_encode(string: str) -> bytes: """ >>> base85_encode("") b'' >>> base85_encode("12345") b'0etOA2#' >>> base85_encode("base 85") b'@UX=h+?24' """ # encoded the input to a bytes-like object and then a85encode that return base64.a85encode(string.encode("utf-8")) def base85_decode(a85encoded: bytes) -> str: """ >>> base85_decode(b"") '' >>> base85_decode(b"0etOA2#") '12345' >>> base85_decode(b"@UX=h+?24") 'base 85' """ # a85decode the input into bytes and decode that into a human readable string return base64.a85decode(a85encoded).decode("utf-8") if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 89: https://projecteuler.net/problem=89 For a number written in Roman numerals to be considered valid there are basic rules which must be followed. Even though the rules allow some numbers to be expressed in more than one way there is always a "best" way of writing a particular number. For example, it would appear that there are at least six ways of writing the number sixteen: IIIIIIIIIIIIIIII VIIIIIIIIIII VVIIIIII XIIIIII VVVI XVI However, according to the rules only XIIIIII and XVI are valid, and the last example is considered to be the most efficient, as it uses the least number of numerals. The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one thousand numbers written in valid, but not necessarily minimal, Roman numerals; see About... Roman Numerals for the definitive rules for this problem. Find the number of characters saved by writing each of these in their minimal form. Note: You can assume that all the Roman numerals in the file contain no more than four consecutive identical units. """ import os SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000} def parse_roman_numerals(numerals: str) -> int: """ Converts a string of roman numerals to an integer. e.g. >>> parse_roman_numerals("LXXXIX") 89 >>> parse_roman_numerals("IIII") 4 """ total_value = 0 index = 0 while index < len(numerals) - 1: current_value = SYMBOLS[numerals[index]] next_value = SYMBOLS[numerals[index + 1]] if current_value < next_value: total_value -= current_value else: total_value += current_value index += 1 total_value += SYMBOLS[numerals[index]] return total_value def generate_roman_numerals(num: int) -> str: """ Generates a string of roman numerals for a given integer. e.g. >>> generate_roman_numerals(89) 'LXXXIX' >>> generate_roman_numerals(4) 'IV' """ numerals = "" m_count = num // 1000 numerals += m_count * "M" num %= 1000 c_count = num // 100 if c_count == 9: numerals += "CM" c_count -= 9 elif c_count == 4: numerals += "CD" c_count -= 4 if c_count >= 5: numerals += "D" c_count -= 5 numerals += c_count * "C" num %= 100 x_count = num // 10 if x_count == 9: numerals += "XC" x_count -= 9 elif x_count == 4: numerals += "XL" x_count -= 4 if x_count >= 5: numerals += "L" x_count -= 5 numerals += x_count * "X" num %= 10 if num == 9: numerals += "IX" num -= 9 elif num == 4: numerals += "IV" num -= 4 if num >= 5: numerals += "V" num -= 5 numerals += num * "I" return numerals def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int: """ Calculates and returns the answer to project euler problem 89. >>> solution("/numeralcleanup_test.txt") 16 """ savings = 0 with open(os.path.dirname(__file__) + roman_numerals_filename) as file1: lines = file1.readlines() for line in lines: original = line.strip() num = parse_roman_numerals(original) shortened = generate_roman_numerals(num) savings += len(original) - len(shortened) return savings if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 89: https://projecteuler.net/problem=89 For a number written in Roman numerals to be considered valid there are basic rules which must be followed. Even though the rules allow some numbers to be expressed in more than one way there is always a "best" way of writing a particular number. For example, it would appear that there are at least six ways of writing the number sixteen: IIIIIIIIIIIIIIII VIIIIIIIIIII VVIIIIII XIIIIII VVVI XVI However, according to the rules only XIIIIII and XVI are valid, and the last example is considered to be the most efficient, as it uses the least number of numerals. The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one thousand numbers written in valid, but not necessarily minimal, Roman numerals; see About... Roman Numerals for the definitive rules for this problem. Find the number of characters saved by writing each of these in their minimal form. Note: You can assume that all the Roman numerals in the file contain no more than four consecutive identical units. """ import os SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000} def parse_roman_numerals(numerals: str) -> int: """ Converts a string of roman numerals to an integer. e.g. >>> parse_roman_numerals("LXXXIX") 89 >>> parse_roman_numerals("IIII") 4 """ total_value = 0 index = 0 while index < len(numerals) - 1: current_value = SYMBOLS[numerals[index]] next_value = SYMBOLS[numerals[index + 1]] if current_value < next_value: total_value -= current_value else: total_value += current_value index += 1 total_value += SYMBOLS[numerals[index]] return total_value def generate_roman_numerals(num: int) -> str: """ Generates a string of roman numerals for a given integer. e.g. >>> generate_roman_numerals(89) 'LXXXIX' >>> generate_roman_numerals(4) 'IV' """ numerals = "" m_count = num // 1000 numerals += m_count * "M" num %= 1000 c_count = num // 100 if c_count == 9: numerals += "CM" c_count -= 9 elif c_count == 4: numerals += "CD" c_count -= 4 if c_count >= 5: numerals += "D" c_count -= 5 numerals += c_count * "C" num %= 100 x_count = num // 10 if x_count == 9: numerals += "XC" x_count -= 9 elif x_count == 4: numerals += "XL" x_count -= 4 if x_count >= 5: numerals += "L" x_count -= 5 numerals += x_count * "X" num %= 10 if num == 9: numerals += "IX" num -= 9 elif num == 4: numerals += "IV" num -= 4 if num >= 5: numerals += "V" num -= 5 numerals += num * "I" return numerals def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int: """ Calculates and returns the answer to project euler problem 89. >>> solution("/numeralcleanup_test.txt") 16 """ savings = 0 with open(os.path.dirname(__file__) + roman_numerals_filename) as file1: lines = file1.readlines() for line in lines: original = line.strip() num = parse_roman_numerals(original) shortened = generate_roman_numerals(num) savings += len(original) - len(shortened) return savings if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Information on binary shifts: # https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types # https://www.interviewcake.com/concept/java/bit-shift def logical_left_shift(number: int, shift_amount: int) -> str: """ Take in 2 positive integers. 'number' is the integer to be logically left shifted 'shift_amount' times. i.e. (number << shift_amount) Return the shifted binary representation. >>> logical_left_shift(0, 1) '0b00' >>> logical_left_shift(1, 1) '0b10' >>> logical_left_shift(1, 5) '0b100000' >>> logical_left_shift(17, 2) '0b1000100' >>> logical_left_shift(1983, 4) '0b111101111110000' >>> logical_left_shift(1, -1) Traceback (most recent call last): ... ValueError: both inputs must be positive integers """ if number < 0 or shift_amount < 0: raise ValueError("both inputs must be positive integers") binary_number = str(bin(number)) binary_number += "0" * shift_amount return binary_number def logical_right_shift(number: int, shift_amount: int) -> str: """ Take in positive 2 integers. 'number' is the integer to be logically right shifted 'shift_amount' times. i.e. (number >>> shift_amount) Return the shifted binary representation. >>> logical_right_shift(0, 1) '0b0' >>> logical_right_shift(1, 1) '0b0' >>> logical_right_shift(1, 5) '0b0' >>> logical_right_shift(17, 2) '0b100' >>> logical_right_shift(1983, 4) '0b1111011' >>> logical_right_shift(1, -1) Traceback (most recent call last): ... ValueError: both inputs must be positive integers """ if number < 0 or shift_amount < 0: raise ValueError("both inputs must be positive integers") binary_number = str(bin(number))[2:] if shift_amount >= len(binary_number): return "0b0" shifted_binary_number = binary_number[: len(binary_number) - shift_amount] return "0b" + shifted_binary_number def arithmetic_right_shift(number: int, shift_amount: int) -> str: """ Take in 2 integers. 'number' is the integer to be arithmetically right shifted 'shift_amount' times. i.e. (number >> shift_amount) Return the shifted binary representation. >>> arithmetic_right_shift(0, 1) '0b00' >>> arithmetic_right_shift(1, 1) '0b00' >>> arithmetic_right_shift(-1, 1) '0b11' >>> arithmetic_right_shift(17, 2) '0b000100' >>> arithmetic_right_shift(-17, 2) '0b111011' >>> arithmetic_right_shift(-1983, 4) '0b111110000100' """ if number >= 0: # Get binary representation of positive number binary_number = "0" + str(bin(number)).strip("-")[2:] else: # Get binary (2's complement) representation of negative number binary_number_length = len(bin(number)[3:]) # Find 2's complement of number binary_number = bin(abs(number) - (1 << binary_number_length))[3:] binary_number = ( "1" + "0" * (binary_number_length - len(binary_number)) + binary_number ) if shift_amount >= len(binary_number): return "0b" + binary_number[0] * len(binary_number) return ( "0b" + binary_number[0] * shift_amount + binary_number[: len(binary_number) - shift_amount] ) if __name__ == "__main__": import doctest doctest.testmod()
# Information on binary shifts: # https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types # https://www.interviewcake.com/concept/java/bit-shift def logical_left_shift(number: int, shift_amount: int) -> str: """ Take in 2 positive integers. 'number' is the integer to be logically left shifted 'shift_amount' times. i.e. (number << shift_amount) Return the shifted binary representation. >>> logical_left_shift(0, 1) '0b00' >>> logical_left_shift(1, 1) '0b10' >>> logical_left_shift(1, 5) '0b100000' >>> logical_left_shift(17, 2) '0b1000100' >>> logical_left_shift(1983, 4) '0b111101111110000' >>> logical_left_shift(1, -1) Traceback (most recent call last): ... ValueError: both inputs must be positive integers """ if number < 0 or shift_amount < 0: raise ValueError("both inputs must be positive integers") binary_number = str(bin(number)) binary_number += "0" * shift_amount return binary_number def logical_right_shift(number: int, shift_amount: int) -> str: """ Take in positive 2 integers. 'number' is the integer to be logically right shifted 'shift_amount' times. i.e. (number >>> shift_amount) Return the shifted binary representation. >>> logical_right_shift(0, 1) '0b0' >>> logical_right_shift(1, 1) '0b0' >>> logical_right_shift(1, 5) '0b0' >>> logical_right_shift(17, 2) '0b100' >>> logical_right_shift(1983, 4) '0b1111011' >>> logical_right_shift(1, -1) Traceback (most recent call last): ... ValueError: both inputs must be positive integers """ if number < 0 or shift_amount < 0: raise ValueError("both inputs must be positive integers") binary_number = str(bin(number))[2:] if shift_amount >= len(binary_number): return "0b0" shifted_binary_number = binary_number[: len(binary_number) - shift_amount] return "0b" + shifted_binary_number def arithmetic_right_shift(number: int, shift_amount: int) -> str: """ Take in 2 integers. 'number' is the integer to be arithmetically right shifted 'shift_amount' times. i.e. (number >> shift_amount) Return the shifted binary representation. >>> arithmetic_right_shift(0, 1) '0b00' >>> arithmetic_right_shift(1, 1) '0b00' >>> arithmetic_right_shift(-1, 1) '0b11' >>> arithmetic_right_shift(17, 2) '0b000100' >>> arithmetic_right_shift(-17, 2) '0b111011' >>> arithmetic_right_shift(-1983, 4) '0b111110000100' """ if number >= 0: # Get binary representation of positive number binary_number = "0" + str(bin(number)).strip("-")[2:] else: # Get binary (2's complement) representation of negative number binary_number_length = len(bin(number)[3:]) # Find 2's complement of number binary_number = bin(abs(number) - (1 << binary_number_length))[3:] binary_number = ( "1" + "0" * (binary_number_length - len(binary_number)) + binary_number ) if shift_amount >= len(binary_number): return "0b" + binary_number[0] * len(binary_number) return ( "0b" + binary_number[0] * shift_amount + binary_number[: len(binary_number) - shift_amount] ) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
NUMBERS_PLUS_LETTER = "Input must be a string of 8 numbers plus letter" LOOKUP_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE" def is_spain_national_id(spanish_id: str) -> bool: """ Spain National Id is a string composed by 8 numbers plus a letter The letter in fact is not part of the ID, it acts as a validator, checking you didn't do a mistake when entering it on a system or are giving a fake one. https://en.wikipedia.org/wiki/Documento_Nacional_de_Identidad_(Spain)#Number >>> is_spain_national_id("12345678Z") True >>> is_spain_national_id("12345678z") # It is case-insensitive True >>> is_spain_national_id("12345678x") False >>> is_spain_national_id("12345678I") False >>> is_spain_national_id("12345678-Z") # Some systems add a dash True >>> is_spain_national_id("12345678") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("123456709") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234567--Z") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234Z") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234ZzZZ") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id(12345678) Traceback (most recent call last): ... TypeError: Expected string as input, found int """ if not isinstance(spanish_id, str): msg = f"Expected string as input, found {type(spanish_id).__name__}" raise TypeError(msg) spanish_id_clean = spanish_id.replace("-", "").upper() if len(spanish_id_clean) != 9: raise ValueError(NUMBERS_PLUS_LETTER) try: number = int(spanish_id_clean[0:8]) letter = spanish_id_clean[8] except ValueError as ex: raise ValueError(NUMBERS_PLUS_LETTER) from ex if letter.isdigit(): raise ValueError(NUMBERS_PLUS_LETTER) return letter == LOOKUP_LETTERS[number % 23] if __name__ == "__main__": import doctest doctest.testmod()
NUMBERS_PLUS_LETTER = "Input must be a string of 8 numbers plus letter" LOOKUP_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE" def is_spain_national_id(spanish_id: str) -> bool: """ Spain National Id is a string composed by 8 numbers plus a letter The letter in fact is not part of the ID, it acts as a validator, checking you didn't do a mistake when entering it on a system or are giving a fake one. https://en.wikipedia.org/wiki/Documento_Nacional_de_Identidad_(Spain)#Number >>> is_spain_national_id("12345678Z") True >>> is_spain_national_id("12345678z") # It is case-insensitive True >>> is_spain_national_id("12345678x") False >>> is_spain_national_id("12345678I") False >>> is_spain_national_id("12345678-Z") # Some systems add a dash True >>> is_spain_national_id("12345678") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("123456709") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234567--Z") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234Z") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id("1234ZzZZ") Traceback (most recent call last): ... ValueError: Input must be a string of 8 numbers plus letter >>> is_spain_national_id(12345678) Traceback (most recent call last): ... TypeError: Expected string as input, found int """ if not isinstance(spanish_id, str): msg = f"Expected string as input, found {type(spanish_id).__name__}" raise TypeError(msg) spanish_id_clean = spanish_id.replace("-", "").upper() if len(spanish_id_clean) != 9: raise ValueError(NUMBERS_PLUS_LETTER) try: number = int(spanish_id_clean[0:8]) letter = spanish_id_clean[8] except ValueError as ex: raise ValueError(NUMBERS_PLUS_LETTER) from ex if letter.isdigit(): raise ValueError(NUMBERS_PLUS_LETTER) return letter == LOOKUP_LETTERS[number % 23] if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
B64_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/" def base64_encode(data: bytes) -> bytes: """Encodes data according to RFC4648. The data is first transformed to binary and appended with binary digits so that its length becomes a multiple of 6, then each 6 binary digits will match a character in the B64_CHARSET string. The number of appended binary digits would later determine how many "=" signs should be added, the padding. For every 2 binary digits added, a "=" sign is added in the output. We can add any binary digits to make it a multiple of 6, for instance, consider the following example: "AA" -> 0010100100101001 -> 001010 010010 1001 As can be seen above, 2 more binary digits should be added, so there's 4 possibilities here: 00, 01, 10 or 11. That being said, Base64 encoding can be used in Steganography to hide data in these appended digits. >>> from base64 import b64encode >>> a = b"This pull request is part of Hacktoberfest20!" >>> b = b"https://tools.ietf.org/html/rfc4648" >>> c = b"A" >>> base64_encode(a) == b64encode(a) True >>> base64_encode(b) == b64encode(b) True >>> base64_encode(c) == b64encode(c) True >>> base64_encode("abc") Traceback (most recent call last): ... TypeError: a bytes-like object is required, not 'str' """ # Make sure the supplied data is a bytes-like object if not isinstance(data, bytes): msg = f"a bytes-like object is required, not '{data.__class__.__name__}'" raise TypeError(msg) binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data) padding_needed = len(binary_stream) % 6 != 0 if padding_needed: # The padding that will be added later padding = b"=" * ((6 - len(binary_stream) % 6) // 2) # Append binary_stream with arbitrary binary digits (0's by default) to make its # length a multiple of 6. binary_stream += "0" * (6 - len(binary_stream) % 6) else: padding = b"" # Encode every 6 binary digits to their corresponding Base64 character return ( "".join( B64_CHARSET[int(binary_stream[index : index + 6], 2)] for index in range(0, len(binary_stream), 6) ).encode() + padding ) def base64_decode(encoded_data: str) -> bytes: """Decodes data according to RFC4648. This does the reverse operation of base64_encode. We first transform the encoded data back to a binary stream, take off the previously appended binary digits according to the padding, at this point we would have a binary stream whose length is multiple of 8, the last step is to convert every 8 bits to a byte. >>> from base64 import b64decode >>> a = "VGhpcyBwdWxsIHJlcXVlc3QgaXMgcGFydCBvZiBIYWNrdG9iZXJmZXN0MjAh" >>> b = "aHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3JmYzQ2NDg=" >>> c = "QQ==" >>> base64_decode(a) == b64decode(a) True >>> base64_decode(b) == b64decode(b) True >>> base64_decode(c) == b64decode(c) True >>> base64_decode("abc") Traceback (most recent call last): ... AssertionError: Incorrect padding """ # Make sure encoded_data is either a string or a bytes-like object if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str): msg = ( "argument should be a bytes-like object or ASCII string, " f"not '{encoded_data.__class__.__name__}'" ) raise TypeError(msg) # In case encoded_data is a bytes-like object, make sure it contains only # ASCII characters so we convert it to a string object if isinstance(encoded_data, bytes): try: encoded_data = encoded_data.decode("utf-8") except UnicodeDecodeError: raise ValueError("base64 encoded data should only contain ASCII characters") padding = encoded_data.count("=") # Check if the encoded string contains non base64 characters if padding: assert all( char in B64_CHARSET for char in encoded_data[:-padding] ), "Invalid base64 character(s) found." else: assert all( char in B64_CHARSET for char in encoded_data ), "Invalid base64 character(s) found." # Check the padding assert len(encoded_data) % 4 == 0 and padding < 3, "Incorrect padding" if padding: # Remove padding if there is one encoded_data = encoded_data[:-padding] binary_stream = "".join( bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data )[: -padding * 2] else: binary_stream = "".join( bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data ) data = [ int(binary_stream[index : index + 8], 2) for index in range(0, len(binary_stream), 8) ] return bytes(data) if __name__ == "__main__": import doctest doctest.testmod()
B64_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/" def base64_encode(data: bytes) -> bytes: """Encodes data according to RFC4648. The data is first transformed to binary and appended with binary digits so that its length becomes a multiple of 6, then each 6 binary digits will match a character in the B64_CHARSET string. The number of appended binary digits would later determine how many "=" signs should be added, the padding. For every 2 binary digits added, a "=" sign is added in the output. We can add any binary digits to make it a multiple of 6, for instance, consider the following example: "AA" -> 0010100100101001 -> 001010 010010 1001 As can be seen above, 2 more binary digits should be added, so there's 4 possibilities here: 00, 01, 10 or 11. That being said, Base64 encoding can be used in Steganography to hide data in these appended digits. >>> from base64 import b64encode >>> a = b"This pull request is part of Hacktoberfest20!" >>> b = b"https://tools.ietf.org/html/rfc4648" >>> c = b"A" >>> base64_encode(a) == b64encode(a) True >>> base64_encode(b) == b64encode(b) True >>> base64_encode(c) == b64encode(c) True >>> base64_encode("abc") Traceback (most recent call last): ... TypeError: a bytes-like object is required, not 'str' """ # Make sure the supplied data is a bytes-like object if not isinstance(data, bytes): msg = f"a bytes-like object is required, not '{data.__class__.__name__}'" raise TypeError(msg) binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data) padding_needed = len(binary_stream) % 6 != 0 if padding_needed: # The padding that will be added later padding = b"=" * ((6 - len(binary_stream) % 6) // 2) # Append binary_stream with arbitrary binary digits (0's by default) to make its # length a multiple of 6. binary_stream += "0" * (6 - len(binary_stream) % 6) else: padding = b"" # Encode every 6 binary digits to their corresponding Base64 character return ( "".join( B64_CHARSET[int(binary_stream[index : index + 6], 2)] for index in range(0, len(binary_stream), 6) ).encode() + padding ) def base64_decode(encoded_data: str) -> bytes: """Decodes data according to RFC4648. This does the reverse operation of base64_encode. We first transform the encoded data back to a binary stream, take off the previously appended binary digits according to the padding, at this point we would have a binary stream whose length is multiple of 8, the last step is to convert every 8 bits to a byte. >>> from base64 import b64decode >>> a = "VGhpcyBwdWxsIHJlcXVlc3QgaXMgcGFydCBvZiBIYWNrdG9iZXJmZXN0MjAh" >>> b = "aHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3JmYzQ2NDg=" >>> c = "QQ==" >>> base64_decode(a) == b64decode(a) True >>> base64_decode(b) == b64decode(b) True >>> base64_decode(c) == b64decode(c) True >>> base64_decode("abc") Traceback (most recent call last): ... AssertionError: Incorrect padding """ # Make sure encoded_data is either a string or a bytes-like object if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str): msg = ( "argument should be a bytes-like object or ASCII string, " f"not '{encoded_data.__class__.__name__}'" ) raise TypeError(msg) # In case encoded_data is a bytes-like object, make sure it contains only # ASCII characters so we convert it to a string object if isinstance(encoded_data, bytes): try: encoded_data = encoded_data.decode("utf-8") except UnicodeDecodeError: raise ValueError("base64 encoded data should only contain ASCII characters") padding = encoded_data.count("=") # Check if the encoded string contains non base64 characters if padding: assert all( char in B64_CHARSET for char in encoded_data[:-padding] ), "Invalid base64 character(s) found." else: assert all( char in B64_CHARSET for char in encoded_data ), "Invalid base64 character(s) found." # Check the padding assert len(encoded_data) % 4 == 0 and padding < 3, "Incorrect padding" if padding: # Remove padding if there is one encoded_data = encoded_data[:-padding] binary_stream = "".join( bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data )[: -padding * 2] else: binary_stream = "".join( bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data ) data = [ int(binary_stream[index : index + 8], 2) for index in range(0, len(binary_stream), 8) ] return bytes(data) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import math """ In cryptography, the TRANSPOSITION cipher is a method of encryption where the positions of plaintext are shifted a certain number(determined by the key) that follows a regular system that results in the permuted text, known as the encrypted text. The type of transposition cipher demonstrated under is the ROUTE cipher. """ def main() -> None: message = input("Enter message: ") key = int(input(f"Enter key [2-{len(message) - 1}]: ")) mode = input("Encryption/Decryption [e/d]: ") if mode.lower().startswith("e"): text = encrypt_message(key, message) elif mode.lower().startswith("d"): text = decrypt_message(key, message) # Append pipe symbol (vertical bar) to identify spaces at the end. print(f"Output:\n{text + '|'}") def encrypt_message(key: int, message: str) -> str: """ >>> encrypt_message(6, 'Harshil Darji') 'Hlia rDsahrij' """ cipher_text = [""] * key for col in range(key): pointer = col while pointer < len(message): cipher_text[col] += message[pointer] pointer += key return "".join(cipher_text) def decrypt_message(key: int, message: str) -> str: """ >>> decrypt_message(6, 'Hlia rDsahrij') 'Harshil Darji' """ num_cols = math.ceil(len(message) / key) num_rows = key num_shaded_boxes = (num_cols * num_rows) - len(message) plain_text = [""] * num_cols col = 0 row = 0 for symbol in message: plain_text[col] += symbol col += 1 if ( (col == num_cols) or (col == num_cols - 1) and (row >= num_rows - num_shaded_boxes) ): col = 0 row += 1 return "".join(plain_text) if __name__ == "__main__": import doctest doctest.testmod() main()
import math """ In cryptography, the TRANSPOSITION cipher is a method of encryption where the positions of plaintext are shifted a certain number(determined by the key) that follows a regular system that results in the permuted text, known as the encrypted text. The type of transposition cipher demonstrated under is the ROUTE cipher. """ def main() -> None: message = input("Enter message: ") key = int(input(f"Enter key [2-{len(message) - 1}]: ")) mode = input("Encryption/Decryption [e/d]: ") if mode.lower().startswith("e"): text = encrypt_message(key, message) elif mode.lower().startswith("d"): text = decrypt_message(key, message) # Append pipe symbol (vertical bar) to identify spaces at the end. print(f"Output:\n{text + '|'}") def encrypt_message(key: int, message: str) -> str: """ >>> encrypt_message(6, 'Harshil Darji') 'Hlia rDsahrij' """ cipher_text = [""] * key for col in range(key): pointer = col while pointer < len(message): cipher_text[col] += message[pointer] pointer += key return "".join(cipher_text) def decrypt_message(key: int, message: str) -> str: """ >>> decrypt_message(6, 'Hlia rDsahrij') 'Harshil Darji' """ num_cols = math.ceil(len(message) / key) num_rows = key num_shaded_boxes = (num_cols * num_rows) - len(message) plain_text = [""] * num_cols col = 0 row = 0 for symbol in message: plain_text[col] += symbol col += 1 if ( (col == num_cols) or (col == num_cols - 1) and (row >= num_rows - num_shaded_boxes) ): col = 0 row += 1 return "".join(plain_text) if __name__ == "__main__": import doctest doctest.testmod() main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" This is used to convert the currency using the Amdoren Currency API https://www.amdoren.com """ import os import requests URL_BASE = "https://www.amdoren.com/api/currency.php" # Currency and their description list_of_currencies = """ AED United Arab Emirates Dirham AFN Afghan Afghani ALL Albanian Lek AMD Armenian Dram ANG Netherlands Antillean Guilder AOA Angolan Kwanza ARS Argentine Peso AUD Australian Dollar AWG Aruban Florin AZN Azerbaijani Manat BAM Bosnia & Herzegovina Convertible Mark BBD Barbadian Dollar BDT Bangladeshi Taka BGN Bulgarian Lev BHD Bahraini Dinar BIF Burundian Franc BMD Bermudian Dollar BND Brunei Dollar BOB Bolivian Boliviano BRL Brazilian Real BSD Bahamian Dollar BTN Bhutanese Ngultrum BWP Botswana Pula BYN Belarus Ruble BZD Belize Dollar CAD Canadian Dollar CDF Congolese Franc CHF Swiss Franc CLP Chilean Peso CNY Chinese Yuan COP Colombian Peso CRC Costa Rican Colon CUC Cuban Convertible Peso CVE Cape Verdean Escudo CZK Czech Republic Koruna DJF Djiboutian Franc DKK Danish Krone DOP Dominican Peso DZD Algerian Dinar EGP Egyptian Pound ERN Eritrean Nakfa ETB Ethiopian Birr EUR Euro FJD Fiji Dollar GBP British Pound Sterling GEL Georgian Lari GHS Ghanaian Cedi GIP Gibraltar Pound GMD Gambian Dalasi GNF Guinea Franc GTQ Guatemalan Quetzal GYD Guyanaese Dollar HKD Hong Kong Dollar HNL Honduran Lempira HRK Croatian Kuna HTG Haiti Gourde HUF Hungarian Forint IDR Indonesian Rupiah ILS Israeli Shekel INR Indian Rupee IQD Iraqi Dinar IRR Iranian Rial ISK Icelandic Krona JMD Jamaican Dollar JOD Jordanian Dinar JPY Japanese Yen KES Kenyan Shilling KGS Kyrgystani Som KHR Cambodian Riel KMF Comorian Franc KPW North Korean Won KRW South Korean Won KWD Kuwaiti Dinar KYD Cayman Islands Dollar KZT Kazakhstan Tenge LAK Laotian Kip LBP Lebanese Pound LKR Sri Lankan Rupee LRD Liberian Dollar LSL Lesotho Loti LYD Libyan Dinar MAD Moroccan Dirham MDL Moldovan Leu MGA Malagasy Ariary MKD Macedonian Denar MMK Myanma Kyat MNT Mongolian Tugrik MOP Macau Pataca MRO Mauritanian Ouguiya MUR Mauritian Rupee MVR Maldivian Rufiyaa MWK Malawi Kwacha MXN Mexican Peso MYR Malaysian Ringgit MZN Mozambican Metical NAD Namibian Dollar NGN Nigerian Naira NIO Nicaragua Cordoba NOK Norwegian Krone NPR Nepalese Rupee NZD New Zealand Dollar OMR Omani Rial PAB Panamanian Balboa PEN Peruvian Nuevo Sol PGK Papua New Guinean Kina PHP Philippine Peso PKR Pakistani Rupee PLN Polish Zloty PYG Paraguayan Guarani QAR Qatari Riyal RON Romanian Leu RSD Serbian Dinar RUB Russian Ruble RWF Rwanda Franc SAR Saudi Riyal SBD Solomon Islands Dollar SCR Seychellois Rupee SDG Sudanese Pound SEK Swedish Krona SGD Singapore Dollar SHP Saint Helena Pound SLL Sierra Leonean Leone SOS Somali Shilling SRD Surinamese Dollar SSP South Sudanese Pound STD Sao Tome and Principe Dobra SYP Syrian Pound SZL Swazi Lilangeni THB Thai Baht TJS Tajikistan Somoni TMT Turkmenistani Manat TND Tunisian Dinar TOP Tonga Paanga TRY Turkish Lira TTD Trinidad and Tobago Dollar TWD New Taiwan Dollar TZS Tanzanian Shilling UAH Ukrainian Hryvnia UGX Ugandan Shilling USD United States Dollar UYU Uruguayan Peso UZS Uzbekistan Som VEF Venezuelan Bolivar VND Vietnamese Dong VUV Vanuatu Vatu WST Samoan Tala XAF Central African CFA franc XCD East Caribbean Dollar XOF West African CFA franc XPF CFP Franc YER Yemeni Rial ZAR South African Rand ZMW Zambian Kwacha """ def convert_currency( from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = "" ) -> str: """https://www.amdoren.com/currency-api/""" # Instead of manually generating parameters params = locals() # from is a reserved keyword params["from"] = params.pop("from_") res = requests.get(URL_BASE, params=params).json() return str(res["amount"]) if res["error"] == 0 else res["error_message"] if __name__ == "__main__": TESTING = os.getenv("CI", "") API_KEY = os.getenv("AMDOREN_API_KEY", "") if not API_KEY and not TESTING: raise KeyError( "API key must be provided in the 'AMDOREN_API_KEY' environment variable." ) print( convert_currency( input("Enter from currency: ").strip(), input("Enter to currency: ").strip(), float(input("Enter the amount: ").strip()), API_KEY, ) )
""" This is used to convert the currency using the Amdoren Currency API https://www.amdoren.com """ import os import requests URL_BASE = "https://www.amdoren.com/api/currency.php" # Currency and their description list_of_currencies = """ AED United Arab Emirates Dirham AFN Afghan Afghani ALL Albanian Lek AMD Armenian Dram ANG Netherlands Antillean Guilder AOA Angolan Kwanza ARS Argentine Peso AUD Australian Dollar AWG Aruban Florin AZN Azerbaijani Manat BAM Bosnia & Herzegovina Convertible Mark BBD Barbadian Dollar BDT Bangladeshi Taka BGN Bulgarian Lev BHD Bahraini Dinar BIF Burundian Franc BMD Bermudian Dollar BND Brunei Dollar BOB Bolivian Boliviano BRL Brazilian Real BSD Bahamian Dollar BTN Bhutanese Ngultrum BWP Botswana Pula BYN Belarus Ruble BZD Belize Dollar CAD Canadian Dollar CDF Congolese Franc CHF Swiss Franc CLP Chilean Peso CNY Chinese Yuan COP Colombian Peso CRC Costa Rican Colon CUC Cuban Convertible Peso CVE Cape Verdean Escudo CZK Czech Republic Koruna DJF Djiboutian Franc DKK Danish Krone DOP Dominican Peso DZD Algerian Dinar EGP Egyptian Pound ERN Eritrean Nakfa ETB Ethiopian Birr EUR Euro FJD Fiji Dollar GBP British Pound Sterling GEL Georgian Lari GHS Ghanaian Cedi GIP Gibraltar Pound GMD Gambian Dalasi GNF Guinea Franc GTQ Guatemalan Quetzal GYD Guyanaese Dollar HKD Hong Kong Dollar HNL Honduran Lempira HRK Croatian Kuna HTG Haiti Gourde HUF Hungarian Forint IDR Indonesian Rupiah ILS Israeli Shekel INR Indian Rupee IQD Iraqi Dinar IRR Iranian Rial ISK Icelandic Krona JMD Jamaican Dollar JOD Jordanian Dinar JPY Japanese Yen KES Kenyan Shilling KGS Kyrgystani Som KHR Cambodian Riel KMF Comorian Franc KPW North Korean Won KRW South Korean Won KWD Kuwaiti Dinar KYD Cayman Islands Dollar KZT Kazakhstan Tenge LAK Laotian Kip LBP Lebanese Pound LKR Sri Lankan Rupee LRD Liberian Dollar LSL Lesotho Loti LYD Libyan Dinar MAD Moroccan Dirham MDL Moldovan Leu MGA Malagasy Ariary MKD Macedonian Denar MMK Myanma Kyat MNT Mongolian Tugrik MOP Macau Pataca MRO Mauritanian Ouguiya MUR Mauritian Rupee MVR Maldivian Rufiyaa MWK Malawi Kwacha MXN Mexican Peso MYR Malaysian Ringgit MZN Mozambican Metical NAD Namibian Dollar NGN Nigerian Naira NIO Nicaragua Cordoba NOK Norwegian Krone NPR Nepalese Rupee NZD New Zealand Dollar OMR Omani Rial PAB Panamanian Balboa PEN Peruvian Nuevo Sol PGK Papua New Guinean Kina PHP Philippine Peso PKR Pakistani Rupee PLN Polish Zloty PYG Paraguayan Guarani QAR Qatari Riyal RON Romanian Leu RSD Serbian Dinar RUB Russian Ruble RWF Rwanda Franc SAR Saudi Riyal SBD Solomon Islands Dollar SCR Seychellois Rupee SDG Sudanese Pound SEK Swedish Krona SGD Singapore Dollar SHP Saint Helena Pound SLL Sierra Leonean Leone SOS Somali Shilling SRD Surinamese Dollar SSP South Sudanese Pound STD Sao Tome and Principe Dobra SYP Syrian Pound SZL Swazi Lilangeni THB Thai Baht TJS Tajikistan Somoni TMT Turkmenistani Manat TND Tunisian Dinar TOP Tonga Paanga TRY Turkish Lira TTD Trinidad and Tobago Dollar TWD New Taiwan Dollar TZS Tanzanian Shilling UAH Ukrainian Hryvnia UGX Ugandan Shilling USD United States Dollar UYU Uruguayan Peso UZS Uzbekistan Som VEF Venezuelan Bolivar VND Vietnamese Dong VUV Vanuatu Vatu WST Samoan Tala XAF Central African CFA franc XCD East Caribbean Dollar XOF West African CFA franc XPF CFP Franc YER Yemeni Rial ZAR South African Rand ZMW Zambian Kwacha """ def convert_currency( from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = "" ) -> str: """https://www.amdoren.com/currency-api/""" # Instead of manually generating parameters params = locals() # from is a reserved keyword params["from"] = params.pop("from_") res = requests.get(URL_BASE, params=params).json() return str(res["amount"]) if res["error"] == 0 else res["error_message"] if __name__ == "__main__": TESTING = os.getenv("CI", "") API_KEY = os.getenv("AMDOREN_API_KEY", "") if not API_KEY and not TESTING: raise KeyError( "API key must be provided in the 'AMDOREN_API_KEY' environment variable." ) print( convert_currency( input("Enter from currency: ").strip(), input("Enter to currency: ").strip(), float(input("Enter the amount: ").strip()), API_KEY, ) )
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" See https://en.wikipedia.org/wiki/Bloom_filter The use of this data structure is to test membership in a set. Compared to Python's built-in set() it is more space-efficient. In the following example, only 8 bits of memory will be used: >>> bloom = Bloom(size=8) Initially, the filter contains all zeros: >>> bloom.bitstring '00000000' When an element is added, two bits are set to 1 since there are 2 hash functions in this implementation: >>> "Titanic" in bloom False >>> bloom.add("Titanic") >>> bloom.bitstring '01100000' >>> "Titanic" in bloom True However, sometimes only one bit is added because both hash functions return the same value >>> bloom.add("Avatar") >>> "Avatar" in bloom True >>> bloom.format_hash("Avatar") '00000100' >>> bloom.bitstring '01100100' Not added elements should return False ... >>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction") >>> { ... film: bloom.format_hash(film) for film in not_present_films ... } # doctest: +NORMALIZE_WHITESPACE {'The Godfather': '00000101', 'Interstellar': '00000011', 'Parasite': '00010010', 'Pulp Fiction': '10000100'} >>> any(film in bloom for film in not_present_films) False but sometimes there are false positives: >>> "Ratatouille" in bloom True >>> bloom.format_hash("Ratatouille") '01100000' The probability increases with the number of elements added. The probability decreases with the number of bits in the bitarray. >>> bloom.estimated_error_rate 0.140625 >>> bloom.add("The Godfather") >>> bloom.estimated_error_rate 0.25 >>> bloom.bitstring '01100101' """ from hashlib import md5, sha256 HASH_FUNCTIONS = (sha256, md5) class Bloom: def __init__(self, size: int = 8) -> None: self.bitarray = 0b0 self.size = size def add(self, value: str) -> None: h = self.hash_(value) self.bitarray |= h def exists(self, value: str) -> bool: h = self.hash_(value) return (h & self.bitarray) == h def __contains__(self, other: str) -> bool: return self.exists(other) def format_bin(self, bitarray: int) -> str: res = bin(bitarray)[2:] return res.zfill(self.size) @property def bitstring(self) -> str: return self.format_bin(self.bitarray) def hash_(self, value: str) -> int: res = 0b0 for func in HASH_FUNCTIONS: position = ( int.from_bytes(func(value.encode()).digest(), "little") % self.size ) res |= 2**position return res def format_hash(self, value: str) -> str: return self.format_bin(self.hash_(value)) @property def estimated_error_rate(self) -> float: n_ones = bin(self.bitarray).count("1") return (n_ones / self.size) ** len(HASH_FUNCTIONS)
""" See https://en.wikipedia.org/wiki/Bloom_filter The use of this data structure is to test membership in a set. Compared to Python's built-in set() it is more space-efficient. In the following example, only 8 bits of memory will be used: >>> bloom = Bloom(size=8) Initially, the filter contains all zeros: >>> bloom.bitstring '00000000' When an element is added, two bits are set to 1 since there are 2 hash functions in this implementation: >>> "Titanic" in bloom False >>> bloom.add("Titanic") >>> bloom.bitstring '01100000' >>> "Titanic" in bloom True However, sometimes only one bit is added because both hash functions return the same value >>> bloom.add("Avatar") >>> "Avatar" in bloom True >>> bloom.format_hash("Avatar") '00000100' >>> bloom.bitstring '01100100' Not added elements should return False ... >>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction") >>> { ... film: bloom.format_hash(film) for film in not_present_films ... } # doctest: +NORMALIZE_WHITESPACE {'The Godfather': '00000101', 'Interstellar': '00000011', 'Parasite': '00010010', 'Pulp Fiction': '10000100'} >>> any(film in bloom for film in not_present_films) False but sometimes there are false positives: >>> "Ratatouille" in bloom True >>> bloom.format_hash("Ratatouille") '01100000' The probability increases with the number of elements added. The probability decreases with the number of bits in the bitarray. >>> bloom.estimated_error_rate 0.140625 >>> bloom.add("The Godfather") >>> bloom.estimated_error_rate 0.25 >>> bloom.bitstring '01100101' """ from hashlib import md5, sha256 HASH_FUNCTIONS = (sha256, md5) class Bloom: def __init__(self, size: int = 8) -> None: self.bitarray = 0b0 self.size = size def add(self, value: str) -> None: h = self.hash_(value) self.bitarray |= h def exists(self, value: str) -> bool: h = self.hash_(value) return (h & self.bitarray) == h def __contains__(self, other: str) -> bool: return self.exists(other) def format_bin(self, bitarray: int) -> str: res = bin(bitarray)[2:] return res.zfill(self.size) @property def bitstring(self) -> str: return self.format_bin(self.bitarray) def hash_(self, value: str) -> int: res = 0b0 for func in HASH_FUNCTIONS: position = ( int.from_bytes(func(value.encode()).digest(), "little") % self.size ) res |= 2**position return res def format_hash(self, value: str) -> str: return self.format_bin(self.hash_(value)) @property def estimated_error_rate(self) -> float: n_ones = bin(self.bitarray).count("1") return (n_ones / self.size) ** len(HASH_FUNCTIONS)
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Created on Fri Oct 16 09:31:07 2020 @author: Dr. Tobias Schröder @license: MIT-license This file contains the test-suite for the knapsack problem. """ import unittest from knapsack import knapsack as k class Test(unittest.TestCase): def test_base_case(self): """ test for the base case """ cap = 0 val = [0] w = [0] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 0) val = [60] w = [10] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 0) def test_easy_case(self): """ test for the base case """ cap = 3 val = [1, 2, 3] w = [3, 2, 1] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 5) def test_knapsack(self): """ test for the knapsack """ cap = 50 val = [60, 100, 120] w = [10, 20, 30] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 220) if __name__ == "__main__": unittest.main()
""" Created on Fri Oct 16 09:31:07 2020 @author: Dr. Tobias Schröder @license: MIT-license This file contains the test-suite for the knapsack problem. """ import unittest from knapsack import knapsack as k class Test(unittest.TestCase): def test_base_case(self): """ test for the base case """ cap = 0 val = [0] w = [0] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 0) val = [60] w = [10] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 0) def test_easy_case(self): """ test for the base case """ cap = 3 val = [1, 2, 3] w = [3, 2, 1] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 5) def test_knapsack(self): """ test for the knapsack """ cap = 50 val = [60, 100, 120] w = [10, 20, 30] c = len(val) self.assertEqual(k.knapsack(cap, w, val, c), 220) if __name__ == "__main__": unittest.main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# A Python implementation of the Banker's Algorithm in Operating Systems using # Processes and Resources # { # "Author: "Biney Kingsley ([email protected]), [email protected]", # "Date": 28-10-2018 # } """ The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes a "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue. [Source] Wikipedia [Credit] Rosetta Code C implementation helped very much. (https://rosettacode.org/wiki/Banker%27s_algorithm) """ from __future__ import annotations import time import numpy as np test_claim_vector = [8, 5, 9, 7] test_allocated_res_table = [ [2, 0, 1, 1], [0, 1, 2, 1], [4, 0, 0, 3], [0, 2, 1, 0], [1, 0, 3, 0], ] test_maximum_claim_table = [ [3, 2, 1, 4], [0, 2, 5, 2], [5, 1, 0, 5], [1, 5, 3, 0], [3, 0, 3, 3], ] class BankersAlgorithm: def __init__( self, claim_vector: list[int], allocated_resources_table: list[list[int]], maximum_claim_table: list[list[int]], ) -> None: """ :param claim_vector: A nxn/nxm list depicting the amount of each resources (eg. memory, interface, semaphores, etc.) available. :param allocated_resources_table: A nxn/nxm list depicting the amount of each resource each process is currently holding :param maximum_claim_table: A nxn/nxm list depicting how much of each resource the system currently has available """ self.__claim_vector = claim_vector self.__allocated_resources_table = allocated_resources_table self.__maximum_claim_table = maximum_claim_table def __processes_resource_summation(self) -> list[int]: """ Check for allocated resources in line with each resource in the claim vector """ return [ sum(p_item[i] for p_item in self.__allocated_resources_table) for i in range(len(self.__allocated_resources_table[0])) ] def __available_resources(self) -> list[int]: """ Check for available resources in line with each resource in the claim vector """ return np.array(self.__claim_vector) - np.array( self.__processes_resource_summation() ) def __need(self) -> list[list[int]]: """ Implement safety checker that calculates the needs by ensuring that max_claim[i][j] - alloc_table[i][j] <= avail[j] """ return [ list(np.array(self.__maximum_claim_table[i]) - np.array(allocated_resource)) for i, allocated_resource in enumerate(self.__allocated_resources_table) ] def __need_index_manager(self) -> dict[int, list[int]]: """ This function builds an index control dictionary to track original ids/indices of processes when altered during execution of method "main" Return: {0: [a: int, b: int], 1: [c: int, d: int]} >>> (BankersAlgorithm(test_claim_vector, test_allocated_res_table, ... test_maximum_claim_table)._BankersAlgorithm__need_index_manager() ... ) # doctest: +NORMALIZE_WHITESPACE {0: [1, 2, 0, 3], 1: [0, 1, 3, 1], 2: [1, 1, 0, 2], 3: [1, 3, 2, 0], 4: [2, 0, 0, 3]} """ return {self.__need().index(i): i for i in self.__need()} def main(self, **kwargs) -> None: """ Utilize various methods in this class to simulate the Banker's algorithm Return: None >>> BankersAlgorithm(test_claim_vector, test_allocated_res_table, ... test_maximum_claim_table).main(describe=True) Allocated Resource Table P1 2 0 1 1 <BLANKLINE> P2 0 1 2 1 <BLANKLINE> P3 4 0 0 3 <BLANKLINE> P4 0 2 1 0 <BLANKLINE> P5 1 0 3 0 <BLANKLINE> System Resource Table P1 3 2 1 4 <BLANKLINE> P2 0 2 5 2 <BLANKLINE> P3 5 1 0 5 <BLANKLINE> P4 1 5 3 0 <BLANKLINE> P5 3 0 3 3 <BLANKLINE> Current Usage by Active Processes: 8 5 9 7 Initial Available Resources: 1 2 2 2 __________________________________________________ <BLANKLINE> Process 3 is executing. Updated available resource stack for processes: 5 2 2 5 The process is in a safe state. <BLANKLINE> Process 1 is executing. Updated available resource stack for processes: 7 2 3 6 The process is in a safe state. <BLANKLINE> Process 2 is executing. Updated available resource stack for processes: 7 3 5 7 The process is in a safe state. <BLANKLINE> Process 4 is executing. Updated available resource stack for processes: 7 5 6 7 The process is in a safe state. <BLANKLINE> Process 5 is executing. Updated available resource stack for processes: 8 5 9 7 The process is in a safe state. <BLANKLINE> """ need_list = self.__need() alloc_resources_table = self.__allocated_resources_table available_resources = self.__available_resources() need_index_manager = self.__need_index_manager() for kw, val in kwargs.items(): if kw and val is True: self.__pretty_data() print("_" * 50 + "\n") while need_list: safe = False for each_need in need_list: execution = True for index, need in enumerate(each_need): if need > available_resources[index]: execution = False break if execution: safe = True # get the original index of the process from ind_ctrl db for original_need_index, need_clone in need_index_manager.items(): if each_need == need_clone: process_number = original_need_index print(f"Process {process_number + 1} is executing.") # remove the process run from stack need_list.remove(each_need) # update available/freed resources stack available_resources = np.array(available_resources) + np.array( alloc_resources_table[process_number] ) print( "Updated available resource stack for processes: " + " ".join([str(x) for x in available_resources]) ) break if safe: print("The process is in a safe state.\n") else: print("System in unsafe state. Aborting...\n") break def __pretty_data(self): """ Properly align display of the algorithm's solution """ print(" " * 9 + "Allocated Resource Table") for item in self.__allocated_resources_table: print( f"P{self.__allocated_resources_table.index(item) + 1}" + " ".join(f"{it:>8}" for it in item) + "\n" ) print(" " * 9 + "System Resource Table") for item in self.__maximum_claim_table: print( f"P{self.__maximum_claim_table.index(item) + 1}" + " ".join(f"{it:>8}" for it in item) + "\n" ) print( "Current Usage by Active Processes: " + " ".join(str(x) for x in self.__claim_vector) ) print( "Initial Available Resources: " + " ".join(str(x) for x in self.__available_resources()) ) time.sleep(1) if __name__ == "__main__": import doctest doctest.testmod()
# A Python implementation of the Banker's Algorithm in Operating Systems using # Processes and Resources # { # "Author: "Biney Kingsley ([email protected]), [email protected]", # "Date": 28-10-2018 # } """ The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes a "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue. [Source] Wikipedia [Credit] Rosetta Code C implementation helped very much. (https://rosettacode.org/wiki/Banker%27s_algorithm) """ from __future__ import annotations import time import numpy as np test_claim_vector = [8, 5, 9, 7] test_allocated_res_table = [ [2, 0, 1, 1], [0, 1, 2, 1], [4, 0, 0, 3], [0, 2, 1, 0], [1, 0, 3, 0], ] test_maximum_claim_table = [ [3, 2, 1, 4], [0, 2, 5, 2], [5, 1, 0, 5], [1, 5, 3, 0], [3, 0, 3, 3], ] class BankersAlgorithm: def __init__( self, claim_vector: list[int], allocated_resources_table: list[list[int]], maximum_claim_table: list[list[int]], ) -> None: """ :param claim_vector: A nxn/nxm list depicting the amount of each resources (eg. memory, interface, semaphores, etc.) available. :param allocated_resources_table: A nxn/nxm list depicting the amount of each resource each process is currently holding :param maximum_claim_table: A nxn/nxm list depicting how much of each resource the system currently has available """ self.__claim_vector = claim_vector self.__allocated_resources_table = allocated_resources_table self.__maximum_claim_table = maximum_claim_table def __processes_resource_summation(self) -> list[int]: """ Check for allocated resources in line with each resource in the claim vector """ return [ sum(p_item[i] for p_item in self.__allocated_resources_table) for i in range(len(self.__allocated_resources_table[0])) ] def __available_resources(self) -> list[int]: """ Check for available resources in line with each resource in the claim vector """ return np.array(self.__claim_vector) - np.array( self.__processes_resource_summation() ) def __need(self) -> list[list[int]]: """ Implement safety checker that calculates the needs by ensuring that max_claim[i][j] - alloc_table[i][j] <= avail[j] """ return [ list(np.array(self.__maximum_claim_table[i]) - np.array(allocated_resource)) for i, allocated_resource in enumerate(self.__allocated_resources_table) ] def __need_index_manager(self) -> dict[int, list[int]]: """ This function builds an index control dictionary to track original ids/indices of processes when altered during execution of method "main" Return: {0: [a: int, b: int], 1: [c: int, d: int]} >>> (BankersAlgorithm(test_claim_vector, test_allocated_res_table, ... test_maximum_claim_table)._BankersAlgorithm__need_index_manager() ... ) # doctest: +NORMALIZE_WHITESPACE {0: [1, 2, 0, 3], 1: [0, 1, 3, 1], 2: [1, 1, 0, 2], 3: [1, 3, 2, 0], 4: [2, 0, 0, 3]} """ return {self.__need().index(i): i for i in self.__need()} def main(self, **kwargs) -> None: """ Utilize various methods in this class to simulate the Banker's algorithm Return: None >>> BankersAlgorithm(test_claim_vector, test_allocated_res_table, ... test_maximum_claim_table).main(describe=True) Allocated Resource Table P1 2 0 1 1 <BLANKLINE> P2 0 1 2 1 <BLANKLINE> P3 4 0 0 3 <BLANKLINE> P4 0 2 1 0 <BLANKLINE> P5 1 0 3 0 <BLANKLINE> System Resource Table P1 3 2 1 4 <BLANKLINE> P2 0 2 5 2 <BLANKLINE> P3 5 1 0 5 <BLANKLINE> P4 1 5 3 0 <BLANKLINE> P5 3 0 3 3 <BLANKLINE> Current Usage by Active Processes: 8 5 9 7 Initial Available Resources: 1 2 2 2 __________________________________________________ <BLANKLINE> Process 3 is executing. Updated available resource stack for processes: 5 2 2 5 The process is in a safe state. <BLANKLINE> Process 1 is executing. Updated available resource stack for processes: 7 2 3 6 The process is in a safe state. <BLANKLINE> Process 2 is executing. Updated available resource stack for processes: 7 3 5 7 The process is in a safe state. <BLANKLINE> Process 4 is executing. Updated available resource stack for processes: 7 5 6 7 The process is in a safe state. <BLANKLINE> Process 5 is executing. Updated available resource stack for processes: 8 5 9 7 The process is in a safe state. <BLANKLINE> """ need_list = self.__need() alloc_resources_table = self.__allocated_resources_table available_resources = self.__available_resources() need_index_manager = self.__need_index_manager() for kw, val in kwargs.items(): if kw and val is True: self.__pretty_data() print("_" * 50 + "\n") while need_list: safe = False for each_need in need_list: execution = True for index, need in enumerate(each_need): if need > available_resources[index]: execution = False break if execution: safe = True # get the original index of the process from ind_ctrl db for original_need_index, need_clone in need_index_manager.items(): if each_need == need_clone: process_number = original_need_index print(f"Process {process_number + 1} is executing.") # remove the process run from stack need_list.remove(each_need) # update available/freed resources stack available_resources = np.array(available_resources) + np.array( alloc_resources_table[process_number] ) print( "Updated available resource stack for processes: " + " ".join([str(x) for x in available_resources]) ) break if safe: print("The process is in a safe state.\n") else: print("System in unsafe state. Aborting...\n") break def __pretty_data(self): """ Properly align display of the algorithm's solution """ print(" " * 9 + "Allocated Resource Table") for item in self.__allocated_resources_table: print( f"P{self.__allocated_resources_table.index(item) + 1}" + " ".join(f"{it:>8}" for it in item) + "\n" ) print(" " * 9 + "System Resource Table") for item in self.__maximum_claim_table: print( f"P{self.__maximum_claim_table.index(item) + 1}" + " ".join(f"{it:>8}" for it in item) + "\n" ) print( "Current Usage by Active Processes: " + " ".join(str(x) for x in self.__claim_vector) ) print( "Initial Available Resources: " + " ".join(str(x) for x in self.__available_resources()) ) time.sleep(1) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from __future__ import annotations import math def default_matrix_multiplication(a: list, b: list) -> list: """ Multiplication only for 2x2 matrices """ if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2: raise Exception("Matrices are not 2x2") new_matrix = [ [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]], [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]], ] return new_matrix def matrix_addition(matrix_a: list, matrix_b: list): return [ [matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))] for row in range(len(matrix_a)) ] def matrix_subtraction(matrix_a: list, matrix_b: list): return [ [matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))] for row in range(len(matrix_a)) ] def split_matrix(a: list) -> tuple[list, list, list, list]: """ Given an even length matrix, returns the top_left, top_right, bot_left, bot_right quadrant. >>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]]) ([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]]) >>> split_matrix([ ... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6], ... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6] ... ]) # doctest: +NORMALIZE_WHITESPACE ([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]]) """ if len(a) % 2 != 0 or len(a[0]) % 2 != 0: raise Exception("Odd matrices are not supported!") matrix_length = len(a) mid = matrix_length // 2 top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)] bot_right = [ [a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length) ] top_left = [[a[i][j] for j in range(mid)] for i in range(mid)] bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)] return top_left, top_right, bot_left, bot_right def matrix_dimensions(matrix: list) -> tuple[int, int]: return len(matrix), len(matrix[0]) def print_matrix(matrix: list) -> None: print("\n".join(str(line) for line in matrix)) def actual_strassen(matrix_a: list, matrix_b: list) -> list: """ Recursive function to calculate the product of two matrices, using the Strassen Algorithm. It only supports even length matrices. """ if matrix_dimensions(matrix_a) == (2, 2): return default_matrix_multiplication(matrix_a, matrix_b) a, b, c, d = split_matrix(matrix_a) e, f, g, h = split_matrix(matrix_b) t1 = actual_strassen(a, matrix_subtraction(f, h)) t2 = actual_strassen(matrix_addition(a, b), h) t3 = actual_strassen(matrix_addition(c, d), e) t4 = actual_strassen(d, matrix_subtraction(g, e)) t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h)) t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h)) t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f)) top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6) top_right = matrix_addition(t1, t2) bot_left = matrix_addition(t3, t4) bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7) # construct the new matrix from our 4 quadrants new_matrix = [] for i in range(len(top_right)): new_matrix.append(top_left[i] + top_right[i]) for i in range(len(bot_right)): new_matrix.append(bot_left[i] + bot_right[i]) return new_matrix def strassen(matrix1: list, matrix2: list) -> list: """ >>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]]) [[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]] >>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]]) [[139, 163], [121, 134], [100, 121]] """ if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]: msg = ( "Unable to multiply these matrices, please check the dimensions.\n" f"Matrix A: {matrix1}\n" f"Matrix B: {matrix2}" ) raise Exception(msg) dimension1 = matrix_dimensions(matrix1) dimension2 = matrix_dimensions(matrix2) if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]: return [matrix1, matrix2] maximum = max(*dimension1, *dimension2) maxim = int(math.pow(2, math.ceil(math.log2(maximum)))) new_matrix1 = matrix1 new_matrix2 = matrix2 # Adding zeros to the matrices so that the arrays dimensions are the same and also # power of 2 for i in range(0, maxim): if i < dimension1[0]: for _ in range(dimension1[1], maxim): new_matrix1[i].append(0) else: new_matrix1.append([0] * maxim) if i < dimension2[0]: for _ in range(dimension2[1], maxim): new_matrix2[i].append(0) else: new_matrix2.append([0] * maxim) final_matrix = actual_strassen(new_matrix1, new_matrix2) # Removing the additional zeros for i in range(0, maxim): if i < dimension1[0]: for _ in range(dimension2[1], maxim): final_matrix[i].pop() else: final_matrix.pop() return final_matrix if __name__ == "__main__": matrix1 = [ [2, 3, 4, 5], [6, 4, 3, 1], [2, 3, 6, 7], [3, 1, 2, 4], [2, 3, 4, 5], [6, 4, 3, 1], [2, 3, 6, 7], [3, 1, 2, 4], [2, 3, 4, 5], [6, 2, 3, 1], ] matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]] print(strassen(matrix1, matrix2))
from __future__ import annotations import math def default_matrix_multiplication(a: list, b: list) -> list: """ Multiplication only for 2x2 matrices """ if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2: raise Exception("Matrices are not 2x2") new_matrix = [ [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]], [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]], ] return new_matrix def matrix_addition(matrix_a: list, matrix_b: list): return [ [matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))] for row in range(len(matrix_a)) ] def matrix_subtraction(matrix_a: list, matrix_b: list): return [ [matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))] for row in range(len(matrix_a)) ] def split_matrix(a: list) -> tuple[list, list, list, list]: """ Given an even length matrix, returns the top_left, top_right, bot_left, bot_right quadrant. >>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]]) ([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]]) >>> split_matrix([ ... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6], ... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6] ... ]) # doctest: +NORMALIZE_WHITESPACE ([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]]) """ if len(a) % 2 != 0 or len(a[0]) % 2 != 0: raise Exception("Odd matrices are not supported!") matrix_length = len(a) mid = matrix_length // 2 top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)] bot_right = [ [a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length) ] top_left = [[a[i][j] for j in range(mid)] for i in range(mid)] bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)] return top_left, top_right, bot_left, bot_right def matrix_dimensions(matrix: list) -> tuple[int, int]: return len(matrix), len(matrix[0]) def print_matrix(matrix: list) -> None: print("\n".join(str(line) for line in matrix)) def actual_strassen(matrix_a: list, matrix_b: list) -> list: """ Recursive function to calculate the product of two matrices, using the Strassen Algorithm. It only supports even length matrices. """ if matrix_dimensions(matrix_a) == (2, 2): return default_matrix_multiplication(matrix_a, matrix_b) a, b, c, d = split_matrix(matrix_a) e, f, g, h = split_matrix(matrix_b) t1 = actual_strassen(a, matrix_subtraction(f, h)) t2 = actual_strassen(matrix_addition(a, b), h) t3 = actual_strassen(matrix_addition(c, d), e) t4 = actual_strassen(d, matrix_subtraction(g, e)) t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h)) t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h)) t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f)) top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6) top_right = matrix_addition(t1, t2) bot_left = matrix_addition(t3, t4) bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7) # construct the new matrix from our 4 quadrants new_matrix = [] for i in range(len(top_right)): new_matrix.append(top_left[i] + top_right[i]) for i in range(len(bot_right)): new_matrix.append(bot_left[i] + bot_right[i]) return new_matrix def strassen(matrix1: list, matrix2: list) -> list: """ >>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]]) [[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]] >>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]]) [[139, 163], [121, 134], [100, 121]] """ if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]: msg = ( "Unable to multiply these matrices, please check the dimensions.\n" f"Matrix A: {matrix1}\n" f"Matrix B: {matrix2}" ) raise Exception(msg) dimension1 = matrix_dimensions(matrix1) dimension2 = matrix_dimensions(matrix2) if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]: return [matrix1, matrix2] maximum = max(*dimension1, *dimension2) maxim = int(math.pow(2, math.ceil(math.log2(maximum)))) new_matrix1 = matrix1 new_matrix2 = matrix2 # Adding zeros to the matrices so that the arrays dimensions are the same and also # power of 2 for i in range(0, maxim): if i < dimension1[0]: for _ in range(dimension1[1], maxim): new_matrix1[i].append(0) else: new_matrix1.append([0] * maxim) if i < dimension2[0]: for _ in range(dimension2[1], maxim): new_matrix2[i].append(0) else: new_matrix2.append([0] * maxim) final_matrix = actual_strassen(new_matrix1, new_matrix2) # Removing the additional zeros for i in range(0, maxim): if i < dimension1[0]: for _ in range(dimension2[1], maxim): final_matrix[i].pop() else: final_matrix.pop() return final_matrix if __name__ == "__main__": matrix1 = [ [2, 3, 4, 5], [6, 4, 3, 1], [2, 3, 6, 7], [3, 1, 2, 4], [2, 3, 4, 5], [6, 4, 3, 1], [2, 3, 6, 7], [3, 1, 2, 4], [2, 3, 4, 5], [6, 2, 3, 1], ] matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]] print(strassen(matrix1, matrix2))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Python program to show the usage of Fermat's little theorem in a division # According to Fermat's little theorem, (a / b) mod p always equals # a * (b ^ (p - 2)) mod p # Here we assume that p is a prime number, b divides a, and p doesn't divide b # Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem def binary_exponentiation(a, n, mod): if n == 0: return 1 elif n % 2 == 1: return (binary_exponentiation(a, n - 1, mod) * a) % mod else: b = binary_exponentiation(a, n / 2, mod) return (b * b) % mod # a prime number p = 701 a = 1000000000 b = 10 # using binary exponentiation function, O(log(p)): print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p) # using Python operators: print((a / b) % p == (a * b ** (p - 2)) % p)
# Python program to show the usage of Fermat's little theorem in a division # According to Fermat's little theorem, (a / b) mod p always equals # a * (b ^ (p - 2)) mod p # Here we assume that p is a prime number, b divides a, and p doesn't divide b # Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem def binary_exponentiation(a, n, mod): if n == 0: return 1 elif n % 2 == 1: return (binary_exponentiation(a, n - 1, mod) * a) % mod else: b = binary_exponentiation(a, n / 2, mod) return (b * b) % mod # a prime number p = 701 a = 1000000000 b = 10 # using binary exponentiation function, O(log(p)): print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p) # using Python operators: print((a / b) % p == (a * b ** (p - 2)) % p)
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Welcome to Quantum Algorithms Started at https://github.com/TheAlgorithms/Python/issues/1831 * D-Wave: https://www.dwavesys.com and https://github.com/dwavesystems * Google: https://research.google/teams/applied-science/quantum * IBM: https://qiskit.org and https://github.com/Qiskit * Rigetti: https://rigetti.com and https://github.com/rigetti * Zapata: https://www.zapatacomputing.com and https://github.com/zapatacomputing ## IBM Qiskit - Start using by installing `pip install qiskit`, refer the [docs](https://qiskit.org/documentation/install.html) for more info. - Tutorials & References - https://github.com/Qiskit/qiskit-tutorials - https://quantum-computing.ibm.com/docs/iql/first-circuit - https://medium.com/qiskit/how-to-program-a-quantum-computer-982a9329ed02 ## Google Cirq - Start using by installing `python -m pip install cirq`, refer the [docs](https://quantumai.google/cirq/start/install) for more info. - Tutorials & references - https://github.com/quantumlib/cirq - https://quantumai.google/cirq/experiments - https://tanishabassan.medium.com/quantum-programming-with-google-cirq-3209805279bc
# Welcome to Quantum Algorithms Started at https://github.com/TheAlgorithms/Python/issues/1831 * D-Wave: https://www.dwavesys.com and https://github.com/dwavesystems * Google: https://research.google/teams/applied-science/quantum * IBM: https://qiskit.org and https://github.com/Qiskit * Rigetti: https://rigetti.com and https://github.com/rigetti * Zapata: https://www.zapatacomputing.com and https://github.com/zapatacomputing ## IBM Qiskit - Start using by installing `pip install qiskit`, refer the [docs](https://qiskit.org/documentation/install.html) for more info. - Tutorials & References - https://github.com/Qiskit/qiskit-tutorials - https://quantum-computing.ibm.com/docs/iql/first-circuit - https://medium.com/qiskit/how-to-program-a-quantum-computer-982a9329ed02 ## Google Cirq - Start using by installing `python -m pip install cirq`, refer the [docs](https://quantumai.google/cirq/start/install) for more info. - Tutorials & references - https://github.com/quantumlib/cirq - https://quantumai.google/cirq/experiments - https://tanishabassan.medium.com/quantum-programming-with-google-cirq-3209805279bc
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def reverse_long_words(sentence: str) -> str: """ Reverse all words that are longer than 4 characters in a sentence. >>> reverse_long_words("Hey wollef sroirraw") 'Hey fellow warriors' >>> reverse_long_words("nohtyP is nohtyP") 'Python is Python' >>> reverse_long_words("1 12 123 1234 54321 654321") '1 12 123 1234 12345 123456' """ return " ".join( "".join(word[::-1]) if len(word) > 4 else word for word in sentence.split() ) if __name__ == "__main__": import doctest doctest.testmod() print(reverse_long_words("Hey wollef sroirraw"))
def reverse_long_words(sentence: str) -> str: """ Reverse all words that are longer than 4 characters in a sentence. >>> reverse_long_words("Hey wollef sroirraw") 'Hey fellow warriors' >>> reverse_long_words("nohtyP is nohtyP") 'Python is Python' >>> reverse_long_words("1 12 123 1234 54321 654321") '1 12 123 1234 12345 123456' """ return " ".join( "".join(word[::-1]) if len(word) > 4 else word for word in sentence.split() ) if __name__ == "__main__": import doctest doctest.testmod() print(reverse_long_words("Hey wollef sroirraw"))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 8: https://projecteuler.net/problem=8 Largest product in a series The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product? """ import sys N = ( "73167176531330624919225119674426574742355349194934" "96983520312774506326239578318016984801869478851843" "85861560789112949495459501737958331952853208805511" "12540698747158523863050715693290963295227443043557" "66896648950445244523161731856403098711121722383113" "62229893423380308135336276614282806444486645238749" "30358907296290491560440772390713810515859307960866" "70172427121883998797908792274921901699720888093776" "65727333001053367881220235421809751254540594752243" "52584907711670556013604839586446706324415722155397" "53697817977846174064955149290862569321978468622482" "83972241375657056057490261407972968652414535100474" "82166370484403199890008895243450658541227588666881" "16427171479924442928230863465674813919123162824586" "17866458359124566529476545682848912883142607690042" "24219022671055626321111109370544217506941658960408" "07198403850962455444362981230987879927244284909188" "84580156166097919133875499200524063689912560717606" "05886116467109405077541002256983155200055935729725" "71636269561882670428252483600823257530420752963450" ) def str_eval(s: str) -> int: """ Returns product of digits in given string n >>> str_eval("987654321") 362880 >>> str_eval("22222222") 256 """ product = 1 for digit in s: product *= int(digit) return product def solution(n: str = N) -> int: """ Find the thirteen adjacent digits in the 1000-digit number n that have the greatest product and returns it. """ largest_product = -sys.maxsize - 1 substr = n[:13] cur_index = 13 while cur_index < len(n) - 13: if int(n[cur_index]) >= int(substr[0]): substr = substr[1:] + n[cur_index] cur_index += 1 else: largest_product = max(largest_product, str_eval(substr)) substr = n[cur_index : cur_index + 13] cur_index += 13 return largest_product if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 8: https://projecteuler.net/problem=8 Largest product in a series The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product? """ import sys N = ( "73167176531330624919225119674426574742355349194934" "96983520312774506326239578318016984801869478851843" "85861560789112949495459501737958331952853208805511" "12540698747158523863050715693290963295227443043557" "66896648950445244523161731856403098711121722383113" "62229893423380308135336276614282806444486645238749" "30358907296290491560440772390713810515859307960866" "70172427121883998797908792274921901699720888093776" "65727333001053367881220235421809751254540594752243" "52584907711670556013604839586446706324415722155397" "53697817977846174064955149290862569321978468622482" "83972241375657056057490261407972968652414535100474" "82166370484403199890008895243450658541227588666881" "16427171479924442928230863465674813919123162824586" "17866458359124566529476545682848912883142607690042" "24219022671055626321111109370544217506941658960408" "07198403850962455444362981230987879927244284909188" "84580156166097919133875499200524063689912560717606" "05886116467109405077541002256983155200055935729725" "71636269561882670428252483600823257530420752963450" ) def str_eval(s: str) -> int: """ Returns product of digits in given string n >>> str_eval("987654321") 362880 >>> str_eval("22222222") 256 """ product = 1 for digit in s: product *= int(digit) return product def solution(n: str = N) -> int: """ Find the thirteen adjacent digits in the 1000-digit number n that have the greatest product and returns it. """ largest_product = -sys.maxsize - 1 substr = n[:13] cur_index = 13 while cur_index < len(n) - 13: if int(n[cur_index]) >= int(substr[0]): substr = substr[1:] + n[cur_index] cur_index += 1 else: largest_product = max(largest_product, str_eval(substr)) substr = n[cur_index : cur_index + 13] cur_index += 13 return largest_product if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" == Hexagonal Number == The nth hexagonal number hn is the number of distinct dots in a pattern of dots consisting of the outlines of regular hexagons with sides up to n dots, when the hexagons are overlaid so that they share one vertex. https://en.wikipedia.org/wiki/Hexagonal_number """ # Author : Akshay Dubey (https://github.com/itsAkshayDubey) def hexagonal(number: int) -> int: """ :param number: nth hexagonal number to calculate :return: the nth hexagonal number Note: A hexagonal number is only defined for positive integers >>> hexagonal(4) 28 >>> hexagonal(11) 231 >>> hexagonal(22) 946 >>> hexagonal(0) Traceback (most recent call last): ... ValueError: Input must be a positive integer >>> hexagonal(-1) Traceback (most recent call last): ... ValueError: Input must be a positive integer >>> hexagonal(11.0) Traceback (most recent call last): ... TypeError: Input value of [number=11.0] must be an integer """ if not isinstance(number, int): msg = f"Input value of [number={number}] must be an integer" raise TypeError(msg) if number < 1: raise ValueError("Input must be a positive integer") return number * (2 * number - 1) if __name__ == "__main__": import doctest doctest.testmod()
""" == Hexagonal Number == The nth hexagonal number hn is the number of distinct dots in a pattern of dots consisting of the outlines of regular hexagons with sides up to n dots, when the hexagons are overlaid so that they share one vertex. https://en.wikipedia.org/wiki/Hexagonal_number """ # Author : Akshay Dubey (https://github.com/itsAkshayDubey) def hexagonal(number: int) -> int: """ :param number: nth hexagonal number to calculate :return: the nth hexagonal number Note: A hexagonal number is only defined for positive integers >>> hexagonal(4) 28 >>> hexagonal(11) 231 >>> hexagonal(22) 946 >>> hexagonal(0) Traceback (most recent call last): ... ValueError: Input must be a positive integer >>> hexagonal(-1) Traceback (most recent call last): ... ValueError: Input must be a positive integer >>> hexagonal(11.0) Traceback (most recent call last): ... TypeError: Input value of [number=11.0] must be an integer """ if not isinstance(number, int): msg = f"Input value of [number={number}] must be an integer" raise TypeError(msg) if number < 1: raise ValueError("Input must be a positive integer") return number * (2 * number - 1) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import random import sys LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" def main() -> None: message = input("Enter message: ") key = "LFWOAYUISVKMNXPBDCRJTQEGHZ" resp = input("Encrypt/Decrypt [e/d]: ") check_valid_key(key) if resp.lower().startswith("e"): mode = "encrypt" translated = encrypt_message(key, message) elif resp.lower().startswith("d"): mode = "decrypt" translated = decrypt_message(key, message) print(f"\n{mode.title()}ion: \n{translated}") def check_valid_key(key: str) -> None: key_list = list(key) letters_list = list(LETTERS) key_list.sort() letters_list.sort() if key_list != letters_list: sys.exit("Error in the key or symbol set.") def encrypt_message(key: str, message: str) -> str: """ >>> encrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji') 'Ilcrism Olcvs' """ return translate_message(key, message, "encrypt") def decrypt_message(key: str, message: str) -> str: """ >>> decrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs') 'Harshil Darji' """ return translate_message(key, message, "decrypt") def translate_message(key: str, message: str, mode: str) -> str: translated = "" chars_a = LETTERS chars_b = key if mode == "decrypt": chars_a, chars_b = chars_b, chars_a for symbol in message: if symbol.upper() in chars_a: sym_index = chars_a.find(symbol.upper()) if symbol.isupper(): translated += chars_b[sym_index].upper() else: translated += chars_b[sym_index].lower() else: translated += symbol return translated def get_random_key() -> str: key = list(LETTERS) random.shuffle(key) return "".join(key) if __name__ == "__main__": main()
import random import sys LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" def main() -> None: message = input("Enter message: ") key = "LFWOAYUISVKMNXPBDCRJTQEGHZ" resp = input("Encrypt/Decrypt [e/d]: ") check_valid_key(key) if resp.lower().startswith("e"): mode = "encrypt" translated = encrypt_message(key, message) elif resp.lower().startswith("d"): mode = "decrypt" translated = decrypt_message(key, message) print(f"\n{mode.title()}ion: \n{translated}") def check_valid_key(key: str) -> None: key_list = list(key) letters_list = list(LETTERS) key_list.sort() letters_list.sort() if key_list != letters_list: sys.exit("Error in the key or symbol set.") def encrypt_message(key: str, message: str) -> str: """ >>> encrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji') 'Ilcrism Olcvs' """ return translate_message(key, message, "encrypt") def decrypt_message(key: str, message: str) -> str: """ >>> decrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs') 'Harshil Darji' """ return translate_message(key, message, "decrypt") def translate_message(key: str, message: str, mode: str) -> str: translated = "" chars_a = LETTERS chars_b = key if mode == "decrypt": chars_a, chars_b = chars_b, chars_a for symbol in message: if symbol.upper() in chars_a: sym_index = chars_a.find(symbol.upper()) if symbol.isupper(): translated += chars_b[sym_index].upper() else: translated += chars_b[sym_index].lower() else: translated += symbol return translated def get_random_key() -> str: key = list(LETTERS) random.shuffle(key) return "".join(key) if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import secrets from random import shuffle from string import ascii_letters, ascii_lowercase, ascii_uppercase, digits, punctuation def password_generator(length: int = 8) -> str: """ Password Generator allows you to generate a random password of length N. >>> len(password_generator()) 8 >>> len(password_generator(length=16)) 16 >>> len(password_generator(257)) 257 >>> len(password_generator(length=0)) 0 >>> len(password_generator(-1)) 0 """ chars = ascii_letters + digits + punctuation return "".join(secrets.choice(chars) for _ in range(length)) # ALTERNATIVE METHODS # chars_incl= characters that must be in password # i= how many letters or characters the password length will be def alternative_password_generator(chars_incl: str, i: int) -> str: # Password Generator = full boot with random_number, random_letters, and # random_character FUNCTIONS # Put your code here... i -= len(chars_incl) quotient = i // 3 remainder = i % 3 # chars = chars_incl + random_letters(ascii_letters, i / 3 + remainder) + # random_number(digits, i / 3) + random_characters(punctuation, i / 3) chars = ( chars_incl + random(ascii_letters, quotient + remainder) + random(digits, quotient) + random(punctuation, quotient) ) list_of_chars = list(chars) shuffle(list_of_chars) return "".join(list_of_chars) # random is a generalised function for letters, characters and numbers def random(chars_incl: str, i: int) -> str: return "".join(secrets.choice(chars_incl) for _ in range(i)) def random_number(chars_incl, i): pass # Put your code here... def random_letters(chars_incl, i): pass # Put your code here... def random_characters(chars_incl, i): pass # Put your code here... # This Will Check Whether A Given Password Is Strong Or Not # It Follows The Rule that Length Of Password Should Be At Least 8 Characters # And At Least 1 Lower, 1 Upper, 1 Number And 1 Special Character def is_strong_password(password: str, min_length: int = 8) -> bool: """ >>> is_strong_password('Hwea7$2!') True >>> is_strong_password('Sh0r1') False >>> is_strong_password('Hello123') False >>> is_strong_password('Hello1238udfhiaf038fajdvjjf!jaiuFhkqi1') True >>> is_strong_password('0') False """ if len(password) < min_length: # Your Password must be at least 8 characters long return False upper = any(char in ascii_uppercase for char in password) lower = any(char in ascii_lowercase for char in password) num = any(char in digits for char in password) spec_char = any(char in punctuation for char in password) return upper and lower and num and spec_char # Passwords should contain UPPERCASE, lowerase # numbers, and special characters def main(): length = int(input("Please indicate the max length of your password: ").strip()) chars_incl = input( "Please indicate the characters that must be in your password: " ).strip() print("Password generated:", password_generator(length)) print( "Alternative Password generated:", alternative_password_generator(chars_incl, length), ) print("[If you are thinking of using this passsword, You better save it.]") if __name__ == "__main__": main()
import secrets from random import shuffle from string import ascii_letters, ascii_lowercase, ascii_uppercase, digits, punctuation def password_generator(length: int = 8) -> str: """ Password Generator allows you to generate a random password of length N. >>> len(password_generator()) 8 >>> len(password_generator(length=16)) 16 >>> len(password_generator(257)) 257 >>> len(password_generator(length=0)) 0 >>> len(password_generator(-1)) 0 """ chars = ascii_letters + digits + punctuation return "".join(secrets.choice(chars) for _ in range(length)) # ALTERNATIVE METHODS # chars_incl= characters that must be in password # i= how many letters or characters the password length will be def alternative_password_generator(chars_incl: str, i: int) -> str: # Password Generator = full boot with random_number, random_letters, and # random_character FUNCTIONS # Put your code here... i -= len(chars_incl) quotient = i // 3 remainder = i % 3 # chars = chars_incl + random_letters(ascii_letters, i / 3 + remainder) + # random_number(digits, i / 3) + random_characters(punctuation, i / 3) chars = ( chars_incl + random(ascii_letters, quotient + remainder) + random(digits, quotient) + random(punctuation, quotient) ) list_of_chars = list(chars) shuffle(list_of_chars) return "".join(list_of_chars) # random is a generalised function for letters, characters and numbers def random(chars_incl: str, i: int) -> str: return "".join(secrets.choice(chars_incl) for _ in range(i)) def random_number(chars_incl, i): pass # Put your code here... def random_letters(chars_incl, i): pass # Put your code here... def random_characters(chars_incl, i): pass # Put your code here... # This Will Check Whether A Given Password Is Strong Or Not # It Follows The Rule that Length Of Password Should Be At Least 8 Characters # And At Least 1 Lower, 1 Upper, 1 Number And 1 Special Character def is_strong_password(password: str, min_length: int = 8) -> bool: """ >>> is_strong_password('Hwea7$2!') True >>> is_strong_password('Sh0r1') False >>> is_strong_password('Hello123') False >>> is_strong_password('Hello1238udfhiaf038fajdvjjf!jaiuFhkqi1') True >>> is_strong_password('0') False """ if len(password) < min_length: # Your Password must be at least 8 characters long return False upper = any(char in ascii_uppercase for char in password) lower = any(char in ascii_lowercase for char in password) num = any(char in digits for char in password) spec_char = any(char in punctuation for char in password) return upper and lower and num and spec_char # Passwords should contain UPPERCASE, lowerase # numbers, and special characters def main(): length = int(input("Please indicate the max length of your password: ").strip()) chars_incl = input( "Please indicate the characters that must be in your password: " ).strip() print("Password generated:", password_generator(length)) print( "Alternative Password generated:", alternative_password_generator(chars_incl, length), ) print("[If you are thinking of using this passsword, You better save it.]") if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
name: Bug report description: Create a bug report to help us address errors in the repository labels: [bug] body: - type: markdown attributes: value: > Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/bug). Usage questions such as "How do I...?" belong on the [Discord](https://discord.gg/c7MnfGFGa6) and will be closed. - type: input attributes: label: "Repository commit" description: > The commit hash for `TheAlgorithms/Python` repository. You can get this by running the command `git rev-parse HEAD` locally. placeholder: "a0b0f414ae134aa1772d33bb930e5a960f9979e8" validations: required: true - type: input attributes: label: "Python version (python --version)" placeholder: "Python 3.10.7" validations: required: true - type: textarea attributes: label: "Dependencies version (pip freeze)" description: > This is the output of the command `pip freeze --all`. Note that the actual output might be different as compared to the placeholder text. placeholder: | appnope==0.1.3 asttokens==2.0.8 backcall==0.2.0 ... validations: required: true - type: textarea attributes: label: "Expected behavior" description: "Describe the behavior you expect. May include images or videos." validations: required: true - type: textarea attributes: label: "Actual behavior" validations: required: true
name: Bug report description: Create a bug report to help us address errors in the repository labels: [bug] body: - type: markdown attributes: value: > Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/bug). Usage questions such as "How do I...?" belong on the [Discord](https://discord.gg/c7MnfGFGa6) and will be closed. - type: input attributes: label: "Repository commit" description: > The commit hash for `TheAlgorithms/Python` repository. You can get this by running the command `git rev-parse HEAD` locally. placeholder: "a0b0f414ae134aa1772d33bb930e5a960f9979e8" validations: required: true - type: input attributes: label: "Python version (python --version)" placeholder: "Python 3.10.7" validations: required: true - type: textarea attributes: label: "Dependencies version (pip freeze)" description: > This is the output of the command `pip freeze --all`. Note that the actual output might be different as compared to the placeholder text. placeholder: | appnope==0.1.3 asttokens==2.0.8 backcall==0.2.0 ... validations: required: true - type: textarea attributes: label: "Expected behavior" description: "Describe the behavior you expect. May include images or videos." validations: required: true - type: textarea attributes: label: "Actual behavior" validations: required: true
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 205: https://projecteuler.net/problem=205 Peter has nine four-sided (pyramidal) dice, each with faces numbered 1, 2, 3, 4. Colin has six six-sided (cubic) dice, each with faces numbered 1, 2, 3, 4, 5, 6. Peter and Colin roll their dice and compare totals: the highest total wins. The result is a draw if the totals are equal. What is the probability that Pyramidal Peter beats Cubic Colin? Give your answer rounded to seven decimal places in the form 0.abcdefg """ from itertools import product def total_frequency_distribution(sides_number: int, dice_number: int) -> list[int]: """ Returns frequency distribution of total >>> total_frequency_distribution(sides_number=6, dice_number=1) [0, 1, 1, 1, 1, 1, 1] >>> total_frequency_distribution(sides_number=4, dice_number=2) [0, 0, 1, 2, 3, 4, 3, 2, 1] """ max_face_number = sides_number max_total = max_face_number * dice_number totals_frequencies = [0] * (max_total + 1) min_face_number = 1 faces_numbers = range(min_face_number, max_face_number + 1) for dice_numbers in product(faces_numbers, repeat=dice_number): total = sum(dice_numbers) totals_frequencies[total] += 1 return totals_frequencies def solution() -> float: """ Returns probability that Pyramidal Peter beats Cubic Colin rounded to seven decimal places in the form 0.abcdefg >>> solution() 0.5731441 """ peter_totals_frequencies = total_frequency_distribution( sides_number=4, dice_number=9 ) colin_totals_frequencies = total_frequency_distribution( sides_number=6, dice_number=6 ) peter_wins_count = 0 min_peter_total = 9 max_peter_total = 4 * 9 min_colin_total = 6 for peter_total in range(min_peter_total, max_peter_total + 1): peter_wins_count += peter_totals_frequencies[peter_total] * sum( colin_totals_frequencies[min_colin_total:peter_total] ) total_games_number = (4**9) * (6**6) peter_win_probability = peter_wins_count / total_games_number rounded_peter_win_probability = round(peter_win_probability, ndigits=7) return rounded_peter_win_probability if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 205: https://projecteuler.net/problem=205 Peter has nine four-sided (pyramidal) dice, each with faces numbered 1, 2, 3, 4. Colin has six six-sided (cubic) dice, each with faces numbered 1, 2, 3, 4, 5, 6. Peter and Colin roll their dice and compare totals: the highest total wins. The result is a draw if the totals are equal. What is the probability that Pyramidal Peter beats Cubic Colin? Give your answer rounded to seven decimal places in the form 0.abcdefg """ from itertools import product def total_frequency_distribution(sides_number: int, dice_number: int) -> list[int]: """ Returns frequency distribution of total >>> total_frequency_distribution(sides_number=6, dice_number=1) [0, 1, 1, 1, 1, 1, 1] >>> total_frequency_distribution(sides_number=4, dice_number=2) [0, 0, 1, 2, 3, 4, 3, 2, 1] """ max_face_number = sides_number max_total = max_face_number * dice_number totals_frequencies = [0] * (max_total + 1) min_face_number = 1 faces_numbers = range(min_face_number, max_face_number + 1) for dice_numbers in product(faces_numbers, repeat=dice_number): total = sum(dice_numbers) totals_frequencies[total] += 1 return totals_frequencies def solution() -> float: """ Returns probability that Pyramidal Peter beats Cubic Colin rounded to seven decimal places in the form 0.abcdefg >>> solution() 0.5731441 """ peter_totals_frequencies = total_frequency_distribution( sides_number=4, dice_number=9 ) colin_totals_frequencies = total_frequency_distribution( sides_number=6, dice_number=6 ) peter_wins_count = 0 min_peter_total = 9 max_peter_total = 4 * 9 min_colin_total = 6 for peter_total in range(min_peter_total, max_peter_total + 1): peter_wins_count += peter_totals_frequencies[peter_total] * sum( colin_totals_frequencies[min_colin_total:peter_total] ) total_games_number = (4**9) * (6**6) peter_win_probability = peter_wins_count / total_games_number rounded_peter_win_probability = round(peter_win_probability, ndigits=7) return rounded_peter_win_probability if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Linear algebra library for Python This module contains classes and functions for doing linear algebra. --- ## Overview ### class Vector - - This class represents a vector of arbitrary size and related operations. **Overview of the methods:** - constructor(components) : init the vector - set(components) : changes the vector components. - \_\_str\_\_() : toString method - component(i): gets the i-th component (0-indexed) - \_\_len\_\_() : gets the size / length of the vector (number of components) - euclidean_length() : returns the eulidean length of the vector - operator + : vector addition - operator - : vector subtraction - operator * : scalar multiplication and dot product - copy() : copies this vector and returns it - change_component(pos,value) : changes the specified component - function zero_vector(dimension) - returns a zero vector of 'dimension' - function unit_basis_vector(dimension, pos) - returns a unit basis vector with a one at index 'pos' (0-indexed) - function axpy(scalar, vector1, vector2) - computes the axpy operation - function random_vector(N, a, b) - returns a random vector of size N, with random integer components between 'a' and 'b' inclusive ### class Matrix - - This class represents a matrix of arbitrary size and operations on it. **Overview of the methods:** - \_\_str\_\_() : returns a string representation - operator * : implements the matrix vector multiplication implements the matrix-scalar multiplication. - change_component(x, y, value) : changes the specified component. - component(x, y) : returns the specified component. - width() : returns the width of the matrix - height() : returns the height of the matrix - determinant() : returns the determinant of the matrix if it is square - operator + : implements the matrix-addition. - operator - : implements the matrix-subtraction - function square_zero_matrix(N) - returns a square zero-matrix of dimension NxN - function random_matrix(W, H, a, b) - returns a random matrix WxH with integer components between 'a' and 'b' inclusive --- ## Documentation This module uses docstrings to enable the use of Python's in-built `help(...)` function. For instance, try `help(Vector)`, `help(unit_basis_vector)`, and `help(CLASSNAME.METHODNAME)`. --- ## Usage Import the module `lib.py` from the **src** directory into your project. Alternatively, you can directly use the Python bytecode file `lib.pyc`. --- ## Tests `src/tests.py` contains Python unit tests which can be run with `python3 -m unittest -v`.
# Linear algebra library for Python This module contains classes and functions for doing linear algebra. --- ## Overview ### class Vector - - This class represents a vector of arbitrary size and related operations. **Overview of the methods:** - constructor(components) : init the vector - set(components) : changes the vector components. - \_\_str\_\_() : toString method - component(i): gets the i-th component (0-indexed) - \_\_len\_\_() : gets the size / length of the vector (number of components) - euclidean_length() : returns the eulidean length of the vector - operator + : vector addition - operator - : vector subtraction - operator * : scalar multiplication and dot product - copy() : copies this vector and returns it - change_component(pos,value) : changes the specified component - function zero_vector(dimension) - returns a zero vector of 'dimension' - function unit_basis_vector(dimension, pos) - returns a unit basis vector with a one at index 'pos' (0-indexed) - function axpy(scalar, vector1, vector2) - computes the axpy operation - function random_vector(N, a, b) - returns a random vector of size N, with random integer components between 'a' and 'b' inclusive ### class Matrix - - This class represents a matrix of arbitrary size and operations on it. **Overview of the methods:** - \_\_str\_\_() : returns a string representation - operator * : implements the matrix vector multiplication implements the matrix-scalar multiplication. - change_component(x, y, value) : changes the specified component. - component(x, y) : returns the specified component. - width() : returns the width of the matrix - height() : returns the height of the matrix - determinant() : returns the determinant of the matrix if it is square - operator + : implements the matrix-addition. - operator - : implements the matrix-subtraction - function square_zero_matrix(N) - returns a square zero-matrix of dimension NxN - function random_matrix(W, H, a, b) - returns a random matrix WxH with integer components between 'a' and 'b' inclusive --- ## Documentation This module uses docstrings to enable the use of Python's in-built `help(...)` function. For instance, try `help(Vector)`, `help(unit_basis_vector)`, and `help(CLASSNAME.METHODNAME)`. --- ## Usage Import the module `lib.py` from the **src** directory into your project. Alternatively, you can directly use the Python bytecode file `lib.pyc`. --- ## Tests `src/tests.py` contains Python unit tests which can be run with `python3 -m unittest -v`.
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" This is a pure Python implementation of the greedy-merge-sort algorithm reference: https://www.geeksforgeeks.org/optimal-file-merge-patterns/ For doctests run following command: python3 -m doctest -v greedy_merge_sort.py Objective Merge a set of sorted files of different length into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time. Approach If the number of sorted files are given, there are many ways to merge them into a single sorted file. This merge can be performed pair wise. To merge a m-record file and a n-record file requires possibly m+n record moves the optimal choice being, merge the two smallest files together at each step (greedy approach). """ def optimal_merge_pattern(files: list) -> float: """Function to merge all the files with optimum cost Args: files [list]: A list of sizes of different files to be merged Returns: optimal_merge_cost [int]: Optimal cost to merge all those files Examples: >>> optimal_merge_pattern([2, 3, 4]) 14 >>> optimal_merge_pattern([5, 10, 20, 30, 30]) 205 >>> optimal_merge_pattern([8, 8, 8, 8, 8]) 96 """ optimal_merge_cost = 0 while len(files) > 1: temp = 0 # Consider two files with minimum cost to be merged for _ in range(2): min_index = files.index(min(files)) temp += files[min_index] files.pop(min_index) files.append(temp) optimal_merge_cost += temp return optimal_merge_cost if __name__ == "__main__": import doctest doctest.testmod()
""" This is a pure Python implementation of the greedy-merge-sort algorithm reference: https://www.geeksforgeeks.org/optimal-file-merge-patterns/ For doctests run following command: python3 -m doctest -v greedy_merge_sort.py Objective Merge a set of sorted files of different length into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time. Approach If the number of sorted files are given, there are many ways to merge them into a single sorted file. This merge can be performed pair wise. To merge a m-record file and a n-record file requires possibly m+n record moves the optimal choice being, merge the two smallest files together at each step (greedy approach). """ def optimal_merge_pattern(files: list) -> float: """Function to merge all the files with optimum cost Args: files [list]: A list of sizes of different files to be merged Returns: optimal_merge_cost [int]: Optimal cost to merge all those files Examples: >>> optimal_merge_pattern([2, 3, 4]) 14 >>> optimal_merge_pattern([5, 10, 20, 30, 30]) 205 >>> optimal_merge_pattern([8, 8, 8, 8, 8]) 96 """ optimal_merge_cost = 0 while len(files) > 1: temp = 0 # Consider two files with minimum cost to be merged for _ in range(2): min_index = files.index(min(files)) temp += files[min_index] files.pop(min_index) files.append(temp) optimal_merge_cost += temp return optimal_merge_cost if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" * Author: Manuel Di Lullo (https://github.com/manueldilullo) * Description: Approximization algorithm for minimum vertex cover problem. Matching Approach. Uses graphs represented with an adjacency list URL: https://mathworld.wolfram.com/MinimumVertexCover.html URL: https://www.princeton.edu/~aaa/Public/Teaching/ORF523/ORF523_Lec6.pdf """ def matching_min_vertex_cover(graph: dict) -> set: """ APX Algorithm for min Vertex Cover using Matching Approach @input: graph (graph stored in an adjacency list where each vertex is represented as an integer) @example: >>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]} >>> matching_min_vertex_cover(graph) {0, 1, 2, 4} """ # chosen_vertices = set of chosen vertices chosen_vertices = set() # edges = list of graph's edges edges = get_edges(graph) # While there are still elements in edges list, take an arbitrary edge # (from_node, to_node) and add his extremity to chosen_vertices and then # remove all arcs adjacent to the from_node and to_node while edges: from_node, to_node = edges.pop() chosen_vertices.add(from_node) chosen_vertices.add(to_node) for edge in edges.copy(): if from_node in edge or to_node in edge: edges.discard(edge) return chosen_vertices def get_edges(graph: dict) -> set: """ Return a set of couples that represents all of the edges. @input: graph (graph stored in an adjacency list where each vertex is represented as an integer) @example: >>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]} >>> get_edges(graph) {(0, 1), (3, 1), (0, 3), (2, 0), (3, 0), (2, 3), (1, 0), (3, 2), (1, 3)} """ edges = set() for from_node, to_nodes in graph.items(): for to_node in to_nodes: edges.add((from_node, to_node)) return edges if __name__ == "__main__": import doctest doctest.testmod() # graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]} # print(f"Matching vertex cover:\n{matching_min_vertex_cover(graph)}")
""" * Author: Manuel Di Lullo (https://github.com/manueldilullo) * Description: Approximization algorithm for minimum vertex cover problem. Matching Approach. Uses graphs represented with an adjacency list URL: https://mathworld.wolfram.com/MinimumVertexCover.html URL: https://www.princeton.edu/~aaa/Public/Teaching/ORF523/ORF523_Lec6.pdf """ def matching_min_vertex_cover(graph: dict) -> set: """ APX Algorithm for min Vertex Cover using Matching Approach @input: graph (graph stored in an adjacency list where each vertex is represented as an integer) @example: >>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]} >>> matching_min_vertex_cover(graph) {0, 1, 2, 4} """ # chosen_vertices = set of chosen vertices chosen_vertices = set() # edges = list of graph's edges edges = get_edges(graph) # While there are still elements in edges list, take an arbitrary edge # (from_node, to_node) and add his extremity to chosen_vertices and then # remove all arcs adjacent to the from_node and to_node while edges: from_node, to_node = edges.pop() chosen_vertices.add(from_node) chosen_vertices.add(to_node) for edge in edges.copy(): if from_node in edge or to_node in edge: edges.discard(edge) return chosen_vertices def get_edges(graph: dict) -> set: """ Return a set of couples that represents all of the edges. @input: graph (graph stored in an adjacency list where each vertex is represented as an integer) @example: >>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]} >>> get_edges(graph) {(0, 1), (3, 1), (0, 3), (2, 0), (3, 0), (2, 3), (1, 0), (3, 2), (1, 3)} """ edges = set() for from_node, to_nodes in graph.items(): for to_node in to_nodes: edges.add((from_node, to_node)) return edges if __name__ == "__main__": import doctest doctest.testmod() # graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]} # print(f"Matching vertex cover:\n{matching_min_vertex_cover(graph)}")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU) or swish function. * https://en.wikipedia.org/wiki/Rectifier_(neural_networks) * https://en.wikipedia.org/wiki/Swish_function The function takes a vector x of K real numbers as input and returns x * sigmoid(x). Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x). Extensive experiments shows that Swish consistently matches or outperforms ReLU on deep networks applied to a variety of challenging domains such as image classification and machine translation. This script is inspired by a corresponding research paper. * https://arxiv.org/abs/1710.05941 """ import numpy as np def sigmoid(vector: np.ndarray) -> np.ndarray: """ Mathematical function sigmoid takes a vector x of K real numbers as input and returns 1/ (1 + e^-x). https://en.wikipedia.org/wiki/Sigmoid_function >>> sigmoid(np.array([-1.0, 1.0, 2.0])) array([0.26894142, 0.73105858, 0.88079708]) """ return 1 / (1 + np.exp(-vector)) def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray: """ Implements the Sigmoid Linear Unit (SiLU) or swish function Parameters: vector (np.ndarray): A numpy array consisting of real values Returns: swish_vec (np.ndarray): The input numpy array, after applying swish Examples: >>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0])) array([-0.26894142, 0.73105858, 1.76159416]) >>> sigmoid_linear_unit(np.array([-2])) array([-0.23840584]) """ return vector * sigmoid(vector) if __name__ == "__main__": import doctest doctest.testmod()
""" This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU) or swish function. * https://en.wikipedia.org/wiki/Rectifier_(neural_networks) * https://en.wikipedia.org/wiki/Swish_function The function takes a vector x of K real numbers as input and returns x * sigmoid(x). Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x). Extensive experiments shows that Swish consistently matches or outperforms ReLU on deep networks applied to a variety of challenging domains such as image classification and machine translation. This script is inspired by a corresponding research paper. * https://arxiv.org/abs/1710.05941 """ import numpy as np def sigmoid(vector: np.ndarray) -> np.ndarray: """ Mathematical function sigmoid takes a vector x of K real numbers as input and returns 1/ (1 + e^-x). https://en.wikipedia.org/wiki/Sigmoid_function >>> sigmoid(np.array([-1.0, 1.0, 2.0])) array([0.26894142, 0.73105858, 0.88079708]) """ return 1 / (1 + np.exp(-vector)) def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray: """ Implements the Sigmoid Linear Unit (SiLU) or swish function Parameters: vector (np.ndarray): A numpy array consisting of real values Returns: swish_vec (np.ndarray): The input numpy array, after applying swish Examples: >>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0])) array([-0.26894142, 0.73105858, 1.76159416]) >>> sigmoid_linear_unit(np.array([-2])) array([-0.23840584]) """ return vector * sigmoid(vector) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from math import pi def radians(degree: float) -> float: """ Coverts the given angle from degrees to radians https://en.wikipedia.org/wiki/Radian >>> radians(180) 3.141592653589793 >>> radians(92) 1.6057029118347832 >>> radians(274) 4.782202150464463 >>> radians(109.82) 1.9167205845401725 >>> from math import radians as math_radians >>> all(abs(radians(i)-math_radians(i)) <= 0.00000001 for i in range(-2, 361)) True """ return degree / (180 / pi) if __name__ == "__main__": from doctest import testmod testmod()
from math import pi def radians(degree: float) -> float: """ Coverts the given angle from degrees to radians https://en.wikipedia.org/wiki/Radian >>> radians(180) 3.141592653589793 >>> radians(92) 1.6057029118347832 >>> radians(274) 4.782202150464463 >>> radians(109.82) 1.9167205845401725 >>> from math import radians as math_radians >>> all(abs(radians(i)-math_radians(i)) <= 0.00000001 for i in range(-2, 361)) True """ return degree / (180 / pi) if __name__ == "__main__": from doctest import testmod testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
DIRC eq$%eq$%\L4|fvJU2?[.devcontainer/Dockerfileeq$$eq$$\OCŨUU $c&EzI.devcontainer/devcontainer.jsoneZ3deZ3d\G jERrgk k#.gitattributeseq$$eq$$\I'nImZ`UU.github/CODEOWNERSene;åene;å\\5L͵,$4O7zj%.github/ISSUE_TEMPLATE/bug_report.ymlene;åene;å\cb8| [;{] !.github/ISSUE_TEMPLATE/config.ymlep<ep<\e Y>3UH 7*.github/ISSUE_TEMPLATE/feature_request.ymlene;åene;å\sDT(W%[6 .github/ISSUE_TEMPLATE/other.ymleo`hA.eo`hA.\KV8Ҹ=uԔ ~ U .github/pull_request_template.mdene;åene;å\P 9# rJGW.github/stale.ymleq$$eq$$\E6szIt \`CX.github/workflows/build.ymleqK9OeqK9O\S3bͣOZ&.github/workflows/directory_writer.ymleqK9OeqK9O\TF 8!5Bj?yI\ms #.github/workflows/project_euler.ymleq$$eq$$\R-[s'sb(.github/workflows/ruff.ymlene;åene;å\V׺ꄸn{,) }.ׯI .gitignoreeZ3deZ3d\W4Yu郍nO8 E .gitpod.ymleqRr+eqRr+\5gb;+B\v^.pre-commit-config.yamlene;åene;å\W,q k.vscode/settings.jsoneq$$eq$$\D,(JRs4<fMP}5I7CONTRIBUTING.mdeqZ3eqZ3\F5aClyP;DƢ) DIRECTORY.mdene;åene;å\]D(.*:+F,T. LICENSE.mdene;åene;å\N x5b9n README.mdeo Peo P\E2Se&ηbarithmetic_analysis/README.mdeo Peo P\⛲CK)wZSarithmetic_analysis/__init__.pyeo Peo P\?YrO+ੇL2n arithmetic_analysis/bisection.pyeqK9OeqK9O\  70u+-(:ߵ9+arithmetic_analysis/gaussian_elimination.pyeo Peo P\倈ddûr^7 .arithmetic_analysis/image_data/2D_problems.jpgeo Peo P\ jE6  =PC20arithmetic_analysis/image_data/2D_problems_1.jpgeo Peo P\ ⛲CK)wZS*arithmetic_analysis/image_data/__init__.pyeo Peo P\  zJ^dU#kXL,arithmetic_analysis/in_static_equilibrium.pyeo Peo P\ -l K?k, #arithmetic_analysis/intersection.pyeqK9OeqK9O\zK9&,޺O8.arithmetic_analysis/jacobi_iteration_method.pyeo`hA.eo`hA.\TI;+I.v8'arithmetic_analysis/lu_decomposition.pyeo Peo P\ FolYw$Qg(ȷp^3arithmetic_analysis/newton_forward_interpolation.pyeo Peo P\ fQ'&I3N#$arithmetic_analysis/newton_method.pyeo Peo P\ i,Aw3gL cn[/%arithmetic_analysis/newton_raphson.pyeo Peo P\ r AopaUsYwݮ)arithmetic_analysis/newton_raphson_new.pyeqK9OeqK9O\AҊF m@й`$arithmetic_analysis/secant_method.pyene;åene;å\D?=2#bXaudio_filters/README.mdene;åene;å\⛲CK)wZSaudio_filters/__init__.pyene;åene;å\zh6ΞW./#audio_filters/butterworth_filter.pyene;åene;å\˨S<,vQ*"T1audio_filters/equal_loudness_filter.py.broken.txtene;åene;å\ sC[tM ~Eaudio_filters/iir_filter.pyene;åene;å\,jz=p]CCJY|!audio_filters/loudness_curve.jsonene;åene;å\ {Rva2B+caudio_filters/show_response.pyenf/enf/\vԗ]Z1]Wd Ibacktracking/README.mdeZ3deZ3d\i⛲CK)wZSbacktracking/__init__.pyeo`hA.eo`hA.\jý(sȽPSqk backtracking/all_combinations.pyenf/enf/\kS0lуEDUq backtracking/all_permutations.pyenf/enf/\leT$N*cY [Y}:Q backtracking/all_subsequences.pyenf/enf/\m Sğsl /)backtracking/coloring.pyeq$$eq$$\wUQЇbx_@U{backtracking/combination_sum.pyeqK9OeqK9O\qJAV 2fl(2 d^!backtracking/hamiltonian_cycle.pyenlaenla\r ̈0{uzGh` Ebacktracking/knight_tour.pyeo Peo P\tUn11i]j KLbacktracking/minimax.pyeq$$eq$$\,<79O.&+~qbacktracking/minmax.pyeo Peo P\u ;D6>.^#qUbacktracking/n_queens.pyenf/enf/\vH󰊰_3 ^_Dbacktracking/n_queens_math.pyepEepE\ tBp R-A՜backtracking/power_sum.pyeq$$eq$$\x {ވmXfӞHSm*Nbacktracking/rat_in_maze.pyeqK9OeqK9O\y}i!%BanNy Dbacktracking/sudoku.pyenf/enf/\z 3! ;%(V)backtracking/sum_of_subsets.pyepEepE\B +:2uPfͺXS+backtracking/word_search.pyenf/enf/\`?^뎿\%1uMnbit_manipulation/README.mdeZ<ceZ<c\~⛲CK)wZSbit_manipulation/__init__.pyenf/enf/\b6hٳO!%ԉbw'bit_manipulation/binary_and_operator.pyeZ<ceZ<c\V<iE3M f~-1t(bit_manipulation/binary_count_setbits.pyeZ<ceZ<c\īfD9BHƊ/bit_manipulation/binary_count_trailing_zeros.pyenf/enf/\Nu ?0a&bit_manipulation/binary_or_operator.pyenf/enf/\  KP'/&|m+1}!bit_manipulation/binary_shifts.pyenf/enf/\&a,NB׌ޓXGd*bit_manipulation/binary_twos_complement.pyenf/enf/\b lA6Za[ 'bit_manipulation/binary_xor_operator.pyenf/enf/\>. u}v2Fh%.o3bit_manipulation/count_1s_brian_kernighan_method.pyeo`hA.eo`hA.\ hu'\A69'{,bit_manipulation/count_number_of_one_bits.pyenf/enf/\O cexؗT[yeR#FX.&bit_manipulation/gray_code_sequence.pyenf/enf/\\R!-˔/p.dc=œ#bit_manipulation/highest_set_bit.pyenf/enf/\bf $&i.bit_manipulation/index_of_rightmost_set_bit.pyenlaenla\mBo5.M sbit_manipulation/is_even.pyenf/enf/\n>Q}w6.=Ki#bit_manipulation/is_power_of_two.pyenf/enf/\Xϋm*}\DYj3nd+bit_manipulation/numbers_different_signs.pyeqK9OeqK9O\ +|ݷ#Zw> bit_manipulation/reverse_bits.pyenf/enf/\ ?{wo_ҵMz6bit_manipulation/single_bit_manipulation_operations.pyenf/enf/\\nu/g-z-blockchain/README.mdeZ<ceZ<c\⛲CK)wZSblockchain/__init__.pyeq$$eq$$\)^w*؜†v ]'blockchain/chinese_remainder_theorem.pyeq$$eq$$\ "\cߔ# ["blockchain/diophantine_equation.pyeq$$eq$$\H s\['OGNRO.wblockchain/modular_division.pyenf/enf/\E__tE[pgVboolean_algebra/README.mdeZ<ceZ<c\⛲CK)wZSboolean_algebra/__init__.pyepEepE\{Aw.秎>4wfOboolean_algebra/and_gate.pyepEepE\3k%*׼WuNboolean_algebra/nand_gate.pyeqK9OeqK9O\ %,' 'A“mboolean_algebra/norgate.pyepEepE\TLa$x.vɧhboolean_algebra/not_gate.pyepEepE\~fE?zGgboolean_algebra/or_gate.pyeqK9OeqK9O\{Ag߲(IT^X7#boolean_algebra/quine_mc_cluskey.pyepEepE\E'54lQZuboolean_algebra/xnor_gate.pyepEepE\LO[E QXph忒}boolean_algebra/xor_gate.pyenf/enf/\h3l/E}=zcellular_automata/README.mdeZ<ceZ<c\⛲CK)wZScellular_automata/__init__.pyenf/enf/\ zվ@!Qvд{.V)cellular_automata/conways_game_of_life.pyenf/enf/\ 1 ֑:!XvT.E!cellular_automata/game_of_life.pyenf/enf/\ I6?֯S_C$y'be'cellular_automata/nagel_schrekenberg.pyenf/enf/\ @wDP/p!>+( a$cellular_automata/one_dimensional.pyenf/enf/\ U/ O8E\O8."fciphers/README.mdeZ<ceZ<c\⛲CK)wZSciphers/__init__.pyenf/enf/\Maؓ]3]'* ciphers/a1z26.pyeq$$eq$$\ A3%pT!`"$R)ciphers/affine_cipher.pyenf/enf/\ ј8WrSciphers/atbash.pyenf/enf/\ V{pqA.ciphers/autokey.pyenf/enf/\ WXF5zfٷB7ciphers/baconian_cipher.pyenf/enf/\_ l(Fa6&]ciphers/base16.pyeq$$eq$$\<ĕ.Z)4~ciphers/base32.pyenf/enf/\ [+ })>)< jciphers/base64.pyeq$$eq$$\دѯ]rMTKH{$ciphers/base85.pyenf/enf/\x+\"qh\ciphers/beaufort_cipher.pyenf/enf/\ ^[8@RTQO)GMciphers/bifid.pyenf/enf/\E&(~AR*sZz$ciphers/brute_force_caesar_cipher.pyenf/enf/\ћ3r!NHNZAciphers/caesar_cipher.pyeq$$eq$$\o3~i%6Qciphers/cryptomath_module.pyenf/enf/\a$l6P[0FU4**ciphers/decrypt_caesar_with_chi_squared.pyenf/enf/\V!01WHCH%ciphers/deterministic_miller_rabin.pyepEepE\O 9aF"ciphers/diffie.pyenf/enf/\U/s>X@fFR ciphers/diffie_hellman.pyenf/enf/\xUiXV ciphers/elgamal_key_generator.pyenf/enf/\"z DƇC-tvciphers/enigma_machine2.pyeq$$eq$$\BN)H]F8靅<E|ciphers/hill_cipher.pyenf/enf/\ I*4 qPgciphers/mixed_keyword_cypher.pyenf/enf/\F?I6it=Vt~ KwX"ciphers/mono_alphabetic_ciphers.pyenf/enf/\ apoN!2>/Wqciphers/morse_code.pyeo`hA.eo`hA.\[K5 TLudciphers/onepad_cipher.pyeo Peo P\ ry#YkX@ciphers/playfair_cipher.pyenf/enf/\ d ; NԚ ~4ciphers/polybius.pyenf/enf/\ PI #_')ciphers/porta_cipher.pyeq$$eq$$\n$RS:5}aciphers/prehistoric_men.txtenfl enfl \ A UCS[TGW#ciphers/rabin_miller.pyenfl enfl \ \G}1i,@aBciphers/rail_fence_cipher.pyenfl enfl \Ig!Q']B҉Qhcjm1ciphers/rot13.pyenfl enfl \ZAr-)([.뮰 ciphers/rsa_cipher.pyenfl enfl \'w兙*2Sqciphers/rsa_factorization.pyeqK9OeqK9O\%%s8{]YXM@5:=ciphers/rsa_key_generator.pyenfl enfl \Rʹ|ibۻX ciphers/shuffled_shift_cipher.pyenfl enfl \ 5Gчdtn@ ciphers/simple_keyword_cypher.pyenfl enfl \M)q]eoVwB%ciphers/simple_substitution_cipher.pyeqK9OeqK9O\kRFl|-bciphers/trafid_cipher.pyenfl enfl \/}?5wI"vOIciphers/transposition_cipher.pyeq$$eq$$\pb <„@Wc^4ciphers/transposition_cipher_encrypt_decrypt_file.pyenfl enfl \aa5:ˤz.?ciphers/vigenere_cipher.pyepEepE\68_Q?)PѭS.]Fciphers/xor_cipher.pyeqeq\|Tau=> Kma>compression/README.mdeZ5EbeZ5Eb\⛲CK)wZScompression/__init__.pyenfl enfl \X-R]j-]z`compression/burrows_wheeler.pyenfl enfl \d eS3lB]d?compression/huffman.pyeZ5EbeZ5Eb\DſE |#f|,compression/image_data/PSNR-example-base.pngeZrNbeZrNb\7mwMf?G/compression/image_data/PSNR-example-comp-10.jpgeZrNbeZrNb\⛲CK)wZS"compression/image_data/__init__.pyeZrNbeZrNb\h<u!ǵy쮹V+compression/image_data/compressed_image.pngeZrNbeZrNb\u":X:3 Dי)C#(compression/image_data/example_image.jpgeZrNbeZrNb\nE>VQjHTN 2compression/image_data/example_wikipedia_image.jpgeZrNbeZrNb\G oޛjqt+4Wj)compression/image_data/original_image.pngenfl enfl \fo3JņvF ʀ#compression/lempel_ziv.pyenfl enfl \ ;*9SM8V=$compression/lempel_ziv_decompress.pyenfl enfl \ xr Y \4a'Bcompression/lz77.pyenfl enfl \>(O).2uM)compression/peak_signal_to_noise_ratio.pyenfl enfl \ z-i ]PPC9"compression/run_length_encoding.pyenfl enfl \c/J LE/X>,秞computer_vision/README.mdeZrNbeZrNb\⛲CK)wZScomputer_vision/__init__.pyeq$$eq$$\  x_2W&俊m%computer_vision/cnn_classification.pyeqRr+eqRr+\ 擴yzZ?{EA$computer_vision/flip_augmentation.pyenfl enfl \  R+ï ,@i` computer_vision/harris_corner.pyenfl enfl \ >h)L/Ă10ΰ7computer_vision/horn_schunck.pyenfl enfl \ vey3֩'w |s!computer_vision/mean_threshold.pyeq$$eq$$\ Pmk/O+l]B(&computer_vision/mosaic_augmentation.pyenfl enfl \  ˨-pؽ"""e$computer_vision/pooling_functions.pyenfl enfl \ '=(!YjEf20B6conversions/README.mdeZWaeZWa\⛲CK)wZSconversions/__init__.pyenfl enfl \  A6Dmp[L73conversions/astronomical_length_scale_conversion.pyenfl enfl \J%z]\x9\_ conversions/binary_to_decimal.pyenfl enfl \ [ REr\K $conversions/binary_to_hexadecimal.pyenfl enfl \#Jޕ+Gfuconversions/binary_to_octal.pyenfl enfl \ pqzO*sɦ̦<#Mconversions/decimal_to_any.pyeqK9OeqK9O\<Gȯg֠K^Tq$ conversions/decimal_to_binary.pyeqK9OeqK9O\  <pÀdf֕2+?G*conversions/decimal_to_binary_recursion.pyepEepE\^hc#>׈>mr.%conversions/decimal_to_hexadecimal.pyenfl enfl \L1;Li;Wn?uconversions/decimal_to_octal.pyenfl enfl \ Qk19(ixZT'~\!conversions/energy_conversions.pyenfl enfl \ p1&^H{m=;¨'$conversions/excel_title_to_column.pyenfl enfl \ ܸr\}j̓ժ'conversions/hex_to_bin.pyenfl enfl \ Jh dxE%conversions/hexadecimal_to_decimal.pyeqZ3eqZ3\ Q%^=.i`Z conversions/length_conversion.pyenfl enfl \ }Q4 _[iW"conversions/molecular_chemistry.pyenfl enfl \ ho )sn8M/Tconversions/octal_to_decimal.pyenfl enfl \ YUA81K}:(!!conversions/prefix_conversions.pyenfl enfl \  Dg*/&JQ(conversions/prefix_conversions_string.pyeqZ3eqZ3\  r4RlUy6dy8#conversions/pressure_conversions.pyenfl enfl \ t>I T{.;c!conversions/rgb_hsv_conversion.pyenfl enfl \u*(5YTsMB[*conversions/roman_numerals.pyenfl enfl \ IqMμ4~} conversions/speed_conversions.pyenfl enfl \-al+* $c,1ȣ&conversions/temperature_conversions.pyeqZ3eqZ3\  DҐ  G?|k?3sa!conversions/volume_conversions.pyenfl enfl \(2n hX X'z conversions/weight_conversion.pyeZWaeZWa\%⛲CK)wZSdata_structures/__init__.pyeqK9OeqK9O\ 3EXFڕꊻ&data_structures/arrays/permutations.pyenfl enfl \ >"C077EUë=($data_structures/arrays/prefix_sum.pyenfl enfl \ A OiO7!m-׆B%data_structures/arrays/product_sum.pyeZWaeZWa\5⛲CK)wZS'data_structures/binary_tree/__init__.pyenfl enfl \%LzDR/=,O92U'data_structures/binary_tree/avl_tree.pyeo`hA.eo`hA.\e${Qkke#}0data_structures/binary_tree/basic_binary_tree.pyeqK9OeqK9O\!BL|fH@ba6]1data_structures/binary_tree/binary_search_tree.pyepEepE\@ٵL +7|pgƿ ;data_structures/binary_tree/binary_search_tree_recursive.pyenfl enfl \TN`Z<DӡHF N1data_structures/binary_tree/binary_tree_mirror.pyenfl enfl \ TZN<yUp\r3data_structures/binary_tree/binary_tree_node_sum.pyenfl enfl \ V>_|(s3data_structures/binary_tree/binary_tree_path_sum.pyeo`hA.eo`hA.\ p~'XֱiT5data_structures/binary_tree/binary_tree_traversals.mdeq$$eq$$\*v>+@ő ު&E5data_structures/binary_tree/binary_tree_traversals.pyenfl enfl \ Y1YHHuM1,o8data_structures/binary_tree/diff_views_of_binary_tree.pyeqZ3eqZ3\  Φ:Ho a/data_structures/binary_tree/distribute_coins.pyenfl enfl \":-lFyŒ+data_structures/binary_tree/fenwick_tree.pyenfl enfl \ _WR})STZ ~x = :data_structures/binary_tree/inorder_tree_traversal_2022.pyeo`hA.eo`hA.\ q. .(@X]?,%data_structures/binary_tree/is_bst.pyeqK9OeqK9O\ o/y9nj\k0data_structures/binary_tree/lazy_segment_tree.pyenfl enfl \h e7p;~mKGQ7sb5data_structures/binary_tree/lowest_common_ancestor.pyenfl enfl \ e zpsAI$bYA3data_structures/binary_tree/maximum_fenwick_tree.pyenfenf\31Ņ{p$5data_structures/binary_tree/merge_two_binary_trees.pyenfenf\\N\N~;D՛*,9data_structures/binary_tree/non_recursive_segment_tree.pyenfenf\ XhLQz{<$R3=7A>data_structures/binary_tree/number_of_possible_binary_trees.pyenfenf\ZeN|>e?o+۷-data_structures/binary_tree/red_black_tree.pyeo`hA.eo`hA.\  *_$˴m^^+data_structures/binary_tree/segment_tree.pyenfenf\/wwĕG̪$v1data_structures/binary_tree/segment_tree_other.pyenfenf\ :fT޷8ar$data_structures/binary_tree/treap.pyenfenf\ m[nFsVF+data_structures/binary_tree/wavelet_tree.pyeZWaeZWa\V⛲CK)wZS(data_structures/disjoint_set/__init__.pyeZWaeZWa\WQ3[ ްI kdO6data_structures/disjoint_set/alternate_disjoint_set.pyenfenf\-^ݽN ,data_structures/disjoint_set/disjoint_set.pyeZWaeZWa\Z⛲CK)wZS#data_structures/hashing/__init__.pyenfenf\ n И[3JHkS'M'data_structures/hashing/bloom_filter.pyeo Peo P\[i!LIb_3l_&data_structures/hashing/double_hash.pyeo WYeo WY\ o"Ȼ_ê |M)T#data_structures/hashing/hash_map.pyeo WYeo WY\  |?oI kט*%data_structures/hashing/hash_table.pyenfenf\_N%F<m k0;6data_structures/hashing/hash_table_with_linked_list.pyeZWaeZWa\a⛲CK)wZS1data_structures/hashing/number_theory/__init__.pyenfenf\ %o$fX)YCc6data_structures/hashing/number_theory/prime_numbers.pyeo WYeo WY\c 04 4F ozc {R,data_structures/hashing/quadratic_probing.pyepEepE\ /g1Qk&'*.data_structures/hashing/tests/test_hash_map.pyeZWaeZWa\g⛲CK)wZS data_structures/heap/__init__.pyenfenf\ 1Y ҇#`/3 <%data_structures/heap/binomial_heap.pyeo WYeo WY\O4y/wWdata_structures/heap/heap.pyenfenf\%FϜOJg$data_structures/heap/heap_generic.pyenfenf\ }В&ې'? data_structures/heap/max_heap.pyenfenf\챇d<DNTwJ data_structures/heap/min_heap.pyenlaenla\AC@ګ/c-qX 'data_structures/heap/randomized_heap.pyenfenf\J;'j7=;A `y[!data_structures/heap/skew_heap.pyeqK9OeqK9O\VDdR+\l'data_structures/linked_list/__init__.pyeqK9OeqK9O\2]a74O-a,-;33data_structures/linked_list/circular_linked_list.pyenfenf\+p#ĀsYZH+data_structures/linked_list/deque_doubly.pyeqK9OeqK9O\HlHN$CI Q;=1data_structures/linked_list/doubly_linked_list.pyenfenf\Z ,gvUu,M_5data_structures/linked_list/doubly_linked_list_two.pyeZWaeZWa\vДOd4%uȷ,data_structures/linked_list/from_sequence.pyenfenf\Pcz~;'data_structures/linked_list/has_loop.pyeqK9OeqK9O\*xd$̘,data_structures/linked_list/is_palindrome.pyenfenf\ ;@w,)e.data_structures/linked_list/merge_two_lists.pyenfenf\ִs/%3B<data_structures/linked_list/middle_element_of_linked_list.pyeq$$eq$$\ =V[%@,data_structures/linked_list/print_reverse.pyeqK9OeqK9O\;Ή!ɴ7~rnWR1data_structures/linked_list/singly_linked_list.pyenfenf\18D>RP<Z3&-U(data_structures/linked_list/skip_list.pyeq$$eq$$\&?WVGP!l `d)data_structures/linked_list/swap_nodes.pyeZWaeZWa\⛲CK)wZS!data_structures/queue/__init__.pyeZWaeZWa\ \|)bj~",0'data_structures/queue/circular_queue.pyenfenf\ vb,KΖ {{ ǛP"63data_structures/queue/circular_queue_linked_list.pyeq$$eq$$\2D܆;N+ 5~Cf+data_structures/queue/double_ended_queue.pyenfenf\}:}(5O@aDB%data_structures/queue/linked_queue.pyenfenf\E^fMyf;2data_structures/queue/priority_queue_using_list.pyenfenf\ w $KЎ|MU!e@ &data_structures/queue/queue_by_list.pyenfenf\ x }bU;*9U!u,data_structures/queue/queue_by_two_stacks.pyenfenf\! لQٿP]~'U.data_structures/queue/queue_on_pseudo_stack.pyeZWaeZWa\⛲CK)wZS"data_structures/stacks/__init__.pyenfenf\/<l"\7K.data_structures/stacks/balanced_parentheses.pyenfenf\ RlS1$pqIT:7data_structures/stacks/dijkstras_two_stack_algorithm.pyeqK9OeqK9O\ Q5;r2H*,+4data_structures/stacks/evaluate_postfix_notations.pyenfenf\ 7O_p~;Ck5data_structures/stacks/infix_to_postfix_conversion.pyepEepE\ .om]W \hj0-ѳ4data_structures/stacks/infix_to_prefix_conversion.pyenfenf\} }v}ũ...data_structures/stacks/next_greater_element.pyeqK9OeqK9O\-(Pҋ<`A"G,data_structures/stacks/postfix_evaluation.pyenfenf\d#׵D]}l톍r%m+data_structures/stacks/prefix_evaluation.pyenlaenla\ 'OFHh{Y{Y.YnBdata_structures/stacks/stack.pyenfenf\ { P#n<>B捱#R7data_structures/stacks/stack_with_doubly_linked_list.pyenfenf\ }1΃c*,S7data_structures/stacks/stack_with_singly_linked_list.pyeqK9OeqK9O\86B<fNH$%E,data_structures/stacks/stock_span_problem.pyeZ``eZ``\⛲CK)wZS data_structures/trie/__init__.pyenfenf\ ~=PIhV[;0R"data_structures/trie/radix_tree.pyenfenf\*F:I,QlXP2data_structures/trie/trie.pyeZ``eZ``\⛲CK)wZS$digital_image_processing/__init__.pyeZ``eZ``\I?9nF!Pc=-digital_image_processing/change_brightness.pyenfenf\B~IiG`WgrJo3+digital_image_processing/change_contrast.pyeZ``eZ``\}A8<(ʬ)-(E3/digital_image_processing/convert_to_negative.pyeZ``eZ``\⛲CK)wZS.digital_image_processing/dithering/__init__.pyenfenf\:455<b|!,digital_image_processing/dithering/burkes.pyeZ``eZ``\⛲CK)wZS3digital_image_processing/edge_detection/__init__.pyenfenf\<"8t'SWN͇\0digital_image_processing/edge_detection/canny.pyeZ``eZ``\⛲CK)wZS,digital_image_processing/filters/__init__.pyeqK9OeqK9O\@ HV]?k^ Ok~Db?4digital_image_processing/filters/bilateral_filter.pyeqK9OeqK9O\c) <cHfV ,digital_image_processing/filters/convolve.pyenfenf\  <Zyߣo$4TˁWG0digital_image_processing/filters/gabor_filter.pyeZ``eZ``\އge<q=={_־3digital_image_processing/filters/gaussian_filter.pyeqK9OeqK9O\  UW na+W5Qw8digital_image_processing/filters/local_binary_pattern.pyeZ``eZ``\@VbfmepӶbgy1digital_image_processing/filters/median_filter.pyeZ``eZ``\3(J2$t䈷y[0digital_image_processing/filters/sobel_filter.pyeZ``eZ``\⛲CK)wZS;digital_image_processing/histogram_equalization/__init__.pyenfenf\Sj^w>2@Y$W-LDdigital_image_processing/histogram_equalization/histogram_stretch.pyeZ``eZ``\⛲CK)wZSFdigital_image_processing/histogram_equalization/image_data/__init__.pyeZ``eZ``\H=F$YP)wDdigital_image_processing/histogram_equalization/image_data/input.jpgeZ``eZ``\⛲CK)wZSGdigital_image_processing/histogram_equalization/output_data/__init__.pyeZ``eZ``\οH } W7OUcFdigital_image_processing/histogram_equalization/output_data/output.jpgeZ``eZ``\⛲CK)wZS/digital_image_processing/image_data/__init__.pyeZ``eZ``\vN孝JSxs,digital_image_processing/image_data/lena.jpgeZ``eZ``\;QD\yœ$2-2digital_image_processing/image_data/lena_small.jpgenfenf\Lgh}Oub-digital_image_processing/index_calculation.pyenfenf\  A䛕\͗0\8 7vGdigital_image_processing/morphological_operations/dilation_operation.pyeqRr+eqRr+\ r i}p2.IIGFdigital_image_processing/morphological_operations/erosion_operation.pyeZ``eZ``\⛲CK)wZS+digital_image_processing/resize/__init__.pyeZ``eZ``\H6RX?GPt)digital_image_processing/resize/resize.pyeZ``eZ``\⛲CK)wZS-digital_image_processing/rotation/__init__.pyeqRr+eqRr+\aS$Z ?L-digital_image_processing/rotation/rotation.pyenfenf\C,mi3׎4M!digital_image_processing/sepia.pyeqK9OeqK9O\' ${UѝKed|9digital_image_processing/test_digital_image_processing.pyeZ``eZ``\⛲CK)wZSdivide_and_conquer/__init__.pyeZ``eZ``\ y{$"1 ,divide_and_conquer/closest_pair_of_points.pyenfenf\o?v,@ !divide_and_conquer/convex_hull.pyeZ``eZ``\0fAlNsk<#%divide_and_conquer/heaps_algorithm.pyeZ``eZ``\LMA96!=:-/divide_and_conquer/heaps_algorithm_iterative.pyenfenf\5y+|k divide_and_conquer/inversions.pyenfenf\fjѣRuUCwCǺ)divide_and_conquer/kth_order_statistic.pyenfenf\NķjqTL)W|\)divide_and_conquer/max_difference_pair.pyenfenf\  !L1t$ީ$R;"divide_and_conquer/max_subarray.pyenfenf\ btB+\U divide_and_conquer/mergesort.pyenfenf\()W_EEARa/divide_and_conquer/peak.pyeZ``eZ``\#6$g_ԭ tw'divide_and_conquer/power.pyeqK9߬OeqK9߬O\2~VU1b Bzw4divide_and_conquer/strassen_matrix_multiplication.pyeZ``eZ``\⛲CK)wZSdynamic_programming/__init__.pyeZ``eZ``\Qu/ ھz.~#dynamic_programming/abbreviation.pyenfenf\ nS˱p7A$dynamic_programming/all_construct.pyenfenf\s 1V]gy~[37dynamic_programming/bitmask.pyenfenf\  {tv=CՒb١^YN&dynamic_programming/catalan_numbers.pyenfenf\Z'=_vZ{c@&dynamic_programming/climbing_stairs.pyenfenf\  Od t |)dynamic_programming/combination_sum_iv.pyenfenf\ kwJG2ns#,$dynamic_programming/edit_distance.pyeZ``eZ``\IZw´]PMD dynamic_programming/factorial.pyeZ``eZ``\`􁆣L%0J^FRNy%dynamic_programming/fast_fibonacci.pyenfenf\GVI: EO!} dynamic_programming/fibonacci.pyenfenf\ Cz>QdK_ji dynamic_programming/fizz_buzz.pyeqK9߬OeqK9߬O\taJ<r0t Ū Y%dynamic_programming/floyd_warshall.pyepEepE\1BԽ4hqň)e(dynamic_programming/integer_partition.pyenfenf\M %4ߕg30n=1dynamic_programming/iterating_through_submasks.pyeq$$eq$$\ 4m?o ˹.vMNJAqZ4dynamic_programming/k_means_clustering_tensorflow.pyenfenf\QHZE xP!i3761'dynamic_programming/knapsack.pyenfenf\Ais. bghd1dynamic_programming/longest_common_subsequence.pyenfenf\ D6|Saia_/dynamic_programming/longest_common_substring.pyenfenf\&'7cS%PxA/!5dynamic_programming/longest_increasing_subsequence.pyenfenf\V^) :;ep>dynamic_programming/longest_increasing_subsequence_o(nlogn).pyeo`hA.eo`hA.\ wf2D(d7)1d(dynamic_programming/longest_sub_array.pyenfenf\o̕\{_;Y)dynamic_programming/matrix_chain_order.pyenfenf\o#>m eQ=+dynamic_programming/max_non_adjacent_sum.pyenfenf\ @BXY'7 qd+dynamic_programming/max_product_subarray.pyenfenf\ iCG+Žc>z,|'dynamic_programming/max_subarray_sum.pyeq$$eq$$\ _HpD_W D-E*L-l-dynamic_programming/min_distance_up_bottom.pyenfenf\DTӹTj+xD*dynamic_programming/minimum_coin_change.pyeZ``eZ``\:KU(-4LN#JC(dynamic_programming/minimum_cost_path.pyeq$$eq$$\4=g{`{t7(dynamic_programming/minimum_partition.pyenfenf\ 8h55褠 0dynamic_programming/minimum_size_subarray_sum.pyenfenf\ XI&tkYT:<dynamic_programming/minimum_squares_to_represent_a_number.pyepEepE\=?qQ<fB4){+dynamic_programming/minimum_steps_to_one.pyenfenf\ {ge4C$!e+dynamic_programming/minimum_tickets_cost.pyenfenf\*h^G>h1dynamic_programming/optimal_binary_search_tree.pyenfenf\ [email protected]ӝ+E~A.dynamic_programming/palindrome_partitioning.pyenfenf\R5@"(Tx$F)"dynamic_programming/rod_cutting.pyepEepE\BŁm7T1NR(dynamic_programming/subset_generation.pyenfenf\*X:K0I[lPʎ$dynamic_programming/sum_of_subset.pyenfenf\ 6:vME,)|*?c{dynamic_programming/viterbi.pyenfenf\  @Mzi .bAF{f2LsS3!dynamic_programming/word_break.pyenfenf\  ª 9X-}Jɶ-electronics/apparent_power.pyenfenf\  |8RMQŃ rLk<electronics/builtin_voltage.pyenfenf\  C ͅOɓ1ч$electronics/carrier_concentration.pyenfenf\  WBDfnf.FFg#electronics/circular_convolution.pyenfenf\  |25[electronics/coulombs_law.pyenfenf\ wR9lT$electronics/electric_conductivity.pyeqZ3eqZ3\R嗕`;:/tgI(electronics/electric_power.pyenfenf\ D=b0oM#electronics/electrical_impedance.pyenfenf\ ?wbC~WH$V<˃Helectronics/ind_reactance.pyenfenf\0f7 tWcpelectronics/ohms_law.pyenfenf\ ܺ5`Z{a=zϚ&electronics/real_and_reactive_power.pyenfenf\ 9Uֵҽ(?V#electronics/resistor_equivalence.pyenfenf\ =O;` E57g'/!electronics/resonant_frequency.pyeZ)i_eZ)i_\ ⛲CK)wZSfile_transfer/__init__.pyeZ)i_eZ)i_\ CTϧfǃ2ptF,."hnfile_transfer/mytext.txtenfenf\rI ,c@%1file_transfer/receive_file.pyenfenf\ VG^IqD&qfile_transfer/send_file.pyeZ)i_eZ)i_\ $⛲CK)wZSfile_transfer/tests/__init__.pyeZ)i_eZ)i_\ %*`Dbownl9[gI;%file_transfer/tests/test_send_file.pyenfenf\ d8WI?<!financial/ABOUT.mdenfenf\ ⛲CK)wZSfinancial/__init__.pyenfenf\ :"I0Bk; KJ )financial/equated_monthly_installments.pyenfenf\ U3.'̳4XM'Tfinancial/interest.pyenfenf\ F# i4kLd$Qfinancial/present_value.pyenfenf\ Cm5|Aj=Ytr< financial/price_plus_tax.pyenfenf\ H.@ZعC$fractals/julia_sets.pyenfenf\ Xkؘ Wufractals/koch_snowflake.pyenfenf\ ڙubB9i}fractals/mandelbrot.pyenfenf\  E7/*yfractals/sierpinski_triangle.pyeZ)i_eZ)i_\ 5⛲CK)wZSfuzzy_logic/__init__.pyeq$$eq$$\ G  g=,5}1j;fuzzy_logic/fuzzy_operations.pyeZ)i_eZ)i_\ 9⛲CK)wZSgenetic_algorithm/__init__.pyenfenf\Df\*s!genetic_algorithm/basic_string.pyeZ)i_eZ)i_\ <⛲CK)wZSgeodesy/__init__.pyenfenf\ T%wD1e̐ 5geodesy/haversine_distance.pyenfenf\ HgNQ317K~=(geodesy/lamberts_ellipsoidal_distance.pyeZ)i_eZ)i_\ B⛲CK)wZSgraphics/__init__.pyenfenf\k||"2شCht'Eqgraphics/bezier_curve.pyenfenf\ 2 ngш=vˁm$graphics/vector3_for_2d_rendering.pyeZ)i_eZ)i_\ F⛲CK)wZSgraphs/__init__.pyepEepE\6 sQy깎1R 2graphs/a_star.pyenfenf\BҀE($%,\2QRh0Jgraphs/articulation_points.pyeo`hA.eo`hA.\7 [a#]WQ).graphs/basic_graphs.pyenfenf\F Ⱥ]O#tQcF4lN>lgraphs/bellman_ford.pyeqZ3eqZ3\ Z H& 'V/[і!graphs/bi_directional_dijkstra.pyenfenf\ 7=g*YhǺV0ugraphs/bidirectional_a_star.pyenfenf\Q 0 'OߣO,graphs/bidirectional_breadth_first_search.pyenfenf\ R'YHuaSz-I0ographs/boruvka.pyenfenf\^ W8ulW>*2xgraphs/breadth_first_search.pyenfenf\ +VCD;79pC graphs/breadth_first_search_2.pyenfenf\P ԉJ̈́*ޡI,graphs/breadth_first_search_shortest_path.pyenfenf\  SB1ɕXmR.graphs/breadth_first_search_shortest_path_2.pyenfenf\ 3x|]"7#cQ5graphs/breadth_first_search_zero_one_shortest_path.pyeo`hA.eo`hA.\ =|x@#Zbj ҥ#graphs/check_bipartite_graph_bfs.pyeq$$eq$$\ bdB0DN {pz#graphs/check_bipartite_graph_dfs.pyenfenf\ ̀uVsgraphs/check_cycle.pyenfenf\ c>qKVT|wdggraphs/connected_components.pyenfenf\o P<ǂk]0graphs/depth_first_search.pyeo`hA.eo`hA.\E0r'vbSsښ~ɤL:graphs/depth_first_search_2.pyenfenf\ IyD@rq!)graphs/dijkstra.pyenfenf\HF?Z/Ye1;9graphs/dijkstra_2.pyeo WYeo WY\{E!8KDS6 1graphs/dijkstra_algorithm.pyenfenf\  ~{M!UHgraphs/dijkstra_alternate.pyenfenf\  4=42I6sW7Jgraphs/dijkstra_binary_grid.pyeZ)i_eZ)i_\ ` R\ڏfLHBKgraphs/dinic.pyenfenf\<ތE@c8kC+aI*2graphs/directed_and_undirected_(weighted)_graph.pyenfenf\\wOK+hJͨc3ߵO/graphs/edmonds_karp_multiple_source_and_sink.pyenfenf\kNfb]8graphs/eulerian_path_and_circuit_for_undirected_graph.pyenfenf\62L`v;vgraphs/even_tree.pyenfenf\9 vtZMI]fgraphs/finding_bridges.pyenfenf\  W/F*'kNv`$&graphs/frequent_pattern_graph_miner.pyeZ)i_eZ)i_\ gwT=QnLXngraphs/g_topological_sort.pyenfenf\^8wiG~n+ cgraphs/gale_shapley_bigraph.pyepEepE\ TtvOX`AI1graphs/graph_adjacency_list.pyepEepE\ W M.7t=7^B/ graphs/graph_adjacency_matrix.pyenf#&enf#&\uq5&*5X graphs/graph_list.pyeZ)i_eZ)i_\ l Vϋ8+5Igraphs/graphs_floyd_warshall.pyeqK9߬OeqK9߬O\5ʟ4Wzz)x)graphs/greedy_best_first.pyenf#&enf#&\  i=]@mU73ø!graphs/greedy_min_vertex_cover.pyenf#&enf#&\ *c됚Rn3jrgraphs/kahns_algorithm_long.pyenf#&enf#&\ 1& սUfЄV8graphs/kahns_algorithm_topo.pyenf#&enf#&\ 2 >\ m2j<j v%graphs/karger.pyenf#&enf#&\ % fY-i!mgzrB(|graphs/markov_chain.pyenf#&enf#&\ ZDi 7}#graphs/matching_min_vertex_cover.pyenf#&enf#&\ ,T]l/ʧ -Wgraphs/minimum_path_sum.pyenf#&enf#&\ +<hyHR9E$k'graphs/minimum_spanning_tree_boruvka.pyenf#&enf#&\I7s\"F7K"{('graphs/minimum_spanning_tree_kruskal.pyenf#&enf#&\ CΎnёym Bs(graphs/minimum_spanning_tree_kruskal2.pyenf#&enf#&\T$ZWMC5 5G%graphs/minimum_spanning_tree_prims.pyenf#&enf#&\#MѴq:  &graphs/minimum_spanning_tree_prims2.pyenf#&enf#&\! A'6V ugraphs/multi_heuristic_astar.pyenf#&enf#&\]Źħ*@À猈graphs/page_rank.pyenf#&enf#&\ lYi[uս)HCgraphs/prim.pyenf#&enf#&\ t~M5c'qy;~ graphs/random_graph_generator.pyenf#&enf#&\  9!d1M+uZgraphs/scc_kosaraju.pyenf#&enf#&\  2^\3)Cp1s cp'graphs/strongly_connected_components.pyeqK9߬OeqK9߬O\  \0ʊ OPym}x]}graphs/tarjans_scc.pyenf#&enf#&\ ⛲CK)wZSgraphs/tests/__init__.pyenf#&enf#&\ 0$.LButJ}O.graphs/tests/test_min_spanning_tree_kruskal.pyeq$$eq$$\e(Pxҟ ѽ+graphs/tests/test_min_spanning_tree_prim.pyeo WYeo WY\ BXm@+p+ᄇ@&Dq%greedy_methods/fractional_knapsack.pyenf#&enf#&\ wm;kOW0||%kTn'greedy_methods/fractional_knapsack_2.pyenf#&enf#&\ w \xgdFGM&greedy_methods/minimum_waiting_time.pyenf#&enf#&\ e4DYv{G&'greedy_methods/optimal_merge_pattern.pyeqeq\ X mc`S!SP.F2hashes/README.mdeZ)i_eZ)i_\ ⛲CK)wZShashes/__init__.pyenf#&enf#&\aO;sB Bhashes/adler32.pyeqK9߬OeqK9߬O\ w#48ud1);Ahashes/chaos_machine.pyenf#&enf#&\ KLcP򞕺>i hashes/djb2.pyenf#&enf#&\ ,"܃_wt hashes/elf.pyenf#&enf#&\T7,4Q73*´hashes/enigma_machine.pyeqK9߬OeqK9߬O\ M%ܓ!&aR,ј=Chashes/hamming_code.pyenf#&enf#&\ ѻwVS%Ho#jhashes/luhn.pyenf#&enf#&\,!nȩsshE4@z hashes/md5.pyenf#&enf#&\zC(t}ռ hashes/sdbm.pyeqK9߬OeqK9߬O\iƳ%>C@Jz *2txhashes/sha1.pyeqK9߬OeqK9߬O\ ?s1TeTTehashes/sha256.pyenf#&enf#&\L_Y : SU,knapsack/README.mdeZ)i_eZ)i_\ ⛲CK)wZSknapsack/__init__.pyeZ)i_eZ)i_\ c@[k◒knapsack/greedy_knapsack.pyenf#&enf#&\ vl;[:Blk{lknapsack/knapsack.pyenf#&enf#&\ шj[s}Vpp\'knapsack/recursive_approach_knapsack.pyeZ)i_eZ)i_\ ⛲CK)wZSknapsack/tests/__init__.pyepEepE\~ +-]@݋xWi&knapsack/tests/test_greedy_knapsack.pyepEepE\5$USq}S2s%knapsack/tests/test_knapsack.pyenf#&enf#&\h i5 ^ ΤGǢfCkalinear_algebra/README.mdeZ)i_eZ)i_\ ⛲CK)wZSlinear_algebra/__init__.pyeZ)i_eZ)i_\ ⛲CK)wZSlinear_algebra/src/__init__.pyenf#&enf#&\Lf6he&&eT(linear_algebra/src/conjugate_gradient.pyepEepE\6UntR!NRlinear_algebra/src/lib.pyeqZ3eqZ3\E ؚ>[U[(linear_algebra/src/polynom_for_points.pyenf#&enf#&\ $٥,GV\ۗL|%linear_algebra/src/power_iteration.pyenf#&enf#&\  Jii{0d$linear_algebra/src/rank_of_matrix.pyenf#&enf#&\ GsB>dW8ȧ 0'linear_algebra/src/rayleigh_quotient.pyepEepE\  uM㗓xh*U[&linear_algebra/src/schur_complement.pyepEepE\wnPyW.@ &K;Ca)linear_algebra/src/test_linear_algebra.pyenf#&enf#&\ "?!F>_wx/(linear_algebra/src/transformations_2d.pyeqK9߬OeqK9߬O\ *d _o{sTlinear_programming/simplex.pyeZfr_eZfr_\ ⛲CK)wZSmachine_learning/__init__.pyenf#&enf#&\ \Rz`"Z-ءS8H\=machine_learning/astar.pyenf#&enf#&\ %;'—t?q&w?i(machine_learning/data_transformations.pyeo WYeo WY\|Ѱ,A6X'D9,"i!machine_learning/decision_tree.pyepEepE\ o0EQ5秦 P,machine_learning/dimensionality_reduction.pyeZfr_eZfr_\ ⛲CK)wZS(machine_learning/forecasting/__init__.pyeqZ3eqZ3\ BdUXY?V~(machine_learning/forecasting/ex_data.csveqZ3eqZ3\ m*g4A-#machine_learning/forecasting/run.pyenf#&enf#&\ *~m`v-dz:3machine_learning/gaussian_naive_bayes.py.broken.txtenf#&enf#&\ / ]>o%Œm :machine_learning/gradient_boosting_regressor.py.broken.txteqK9߬OeqK9߬O\ u+[tЂ{Zx$machine_learning/gradient_descent.pyeq$$eq$$\2.|Bxsd8W1?!machine_learning/k_means_clust.pyeq$$eq$$\t*z)&6+(machine_learning/k_nearest_neighbours.pyeq$$eq$$\ dJbBDϭc\FKEmachine_learning/knn_sklearn.pyenf#&enf#&\BGx_i>0machine_learning/linear_discriminant_analysis.pyeqK9߬OeqK9߬O\u:NhJN,.Ϟ%machine_learning/linear_regression.pyenf#&enf#&\ 4⛲CK)wZS4machine_learning/local_weighted_learning/__init__.pyenf#&enf#&\ 8 `MBȆv$w+NCmachine_learning/local_weighted_learning/local_weighted_learning.mdeq$$eq$$\ lύ]Af EL wCmachine_learning/local_weighted_learning/local_weighted_learning.pyeo WYeo WY\ e fƤJ~?5<'machine_learning/logistic_regression.pyeZfr_eZfr_\ ⛲CK)wZS!machine_learning/lstm/__init__.pyeqK9߬OeqK9߬O\ st|FataC99(machine_learning/lstm/lstm_prediction.pyeZfr_eZfr_\ M!p2u-)%machine_learning/lstm/sample_data.csvenf#&enf#&\ oA1rp89wI։ܾ}4machine_learning/multilayer_perceptron_classifier.pyenf#&enf#&\ ?[Ca`^7;?")machine_learning/polynomial_regression.pyenf#&enf#&\ B|2g `'Bvw-27machine_learning/random_forest_classifier.py.broken.txtenf#&enf#&\ DXk)nRo6machine_learning/random_forest_regressor.py.broken.txteZfr_eZfr_\  Fi\;!.n҂3 %machine_learning/scoring_functions.pyenf#&enf#&\ G2Ҵ[Q$7NN8'machine_learning/self_organizing_map.pyeo`hA.eo`hA.\jN>OVi1[[@i3machine_learning/sequential_minimum_optimization.pyenf`/enf`/\XGz#F< Oc!i~h%machine_learning/similarity_search.pyenf`/enf`/\ R$ałQ@ c:+machine_learning/support_vector_machines.pyenf`/enf`/\ ta1^;Љ,machine_learning/word_frequency_functions.pyenf`/enf`/\ H 3igKK2zѵO&machine_learning/xgboost_classifier.pyeqeq\ 9YOƽB(W#%machine_learning/xgboost_regressor.pyeZfr_eZfr_\ ⛲CK)wZSmaths/__init__.pyenf`/enf`/\.aW鍆c5p5+Q maths/abs.pyeq$$eq$$\ tȒREE+ H6y aQx maths/add.pyenf`/enf`/\ M@WM&3AQ;c%$maths/addition_without_arithmetic.pyeZfr_eZfr_\ ~XaўR &2(maths/aliquot_sum.pyeZfr_eZfr_\ MKlW>f[maths/allocation_number.pyeqK9߬OeqK9߬O\ "8}56's,maths/arc_length.pyenf`/enf`/\(Lr?C@c䞶 maths/area.pyeqZ3eqZ3\ LWW69oYmaths/area_under_curve.pyeo`hA.eo`hA.\  &pBxx=u maths/armstrong_numbers.pyeo`hA.eo`hA.\ (7V2 㳽nsqmaths/automorphic_number.pyenf`/enf`/\ Q[=e4W͌2K#maths/average_absolute_deviation.pyenf`/enf`/\ Y>'LCJ@&NdW:hmaths/average_mean.pyeqRr+eqRr+\ ZWH$$ iJge_Umaths/average_median.pyenf`/enf`/\@Afz:jx,mmaths/average_mode.pyenf`/enf`/\ C8fj^B7v8maths/bailey_borwein_plouffe.pyepEepE\ &,T>ЀIR3SZXmaths/basic_maths.pyeq$$eq$$\ h֐Xr/޴maths/binary_exp_mod.pyeq$$eq$$\ iW{Bv5@Rud maths/binary_exponentiation.pyeq$$eq$$\ QK% ]e[8 maths/binary_exponentiation_2.pyeq$$eq$$\ NpA)Ԓ25h^ӻ maths/binary_exponentiation_3.pyeq$$eq$$\ | K=8l'+(Kmaths/binomial_coefficient.pyenf`/enf`/\ y [VՒD>5Inumaths/binomial_distribution.pyeo WYeo WY\ Em '.NK*Omaths/bisection.pyeq$$eq$$\ HdDuF&){3apmaths/carmichael_number.pyeo`hA.eo`hA.\  ϱ|6 ,e#maths/catalan_number.pyenf`/enf`/\ ~吞8!|oֽ maths/ceil.pyenf`/enf`/\ ZXq5.2}P匋maths/check_polygon.pyeZfr_eZfr_\ tb./?k )E6]maths/chudnovsky_algorithm.pyenf`/enf`/\  pj4Ѵcj3amaths/collatz_sequence.pyeq$$eq$$\2@wn ql(gmaths/combinations.pyenf`/enf`/\ ѻNo?|.,maths/decimal_isolate.pyeqZ3eqZ3\ ebq|dZIFUemaths/decimal_to_fraction.pyenf`/enf`/\ g=bEh`:r+&xAu maths/dodecahedron.pyeq%!#eq%!#\ βL((>2!Tc#maths/double_factorial_iterative.pyeq%!#eq%!#\ ɲ: ock0o#maths/double_factorial_recursive.pyenf`/enf`/\ oȾMMͶD.maths/dual_number_automatic_differentiation.pyeqK9߬OeqK9߬O\ I(x-zw9}maths/entropy.pyenf`/enf`/\ Dr){ bxyMMB1maths/euclidean_distance.pyeq%!#eq%!#\ K%C~po~2M_Gtmaths/euclidean_gcd.pyenf`/enf`/\ t0ڥ:*.Xgmaths/euler_method.pyeqRr+eqRr+\ LSvŔۏGl)maths/euler_modified.pyeqK9߬OeqK9߬O\Vdp7%45 fmaths/eulers_totient.pyenf`/enf`/\ I  ]:-ؒ)S%maths/extended_euclidean_algorithm.pyepEepE\ P.'(晙%maths/factorial.pyenf`/enf`/\ M.SeYo50~~f#maths/factors.pyeq%!#eq%!#\ 0;E˨^W(Kf(dmaths/fermat_little_theorem.pyeo WYeo WY\֝A3cG꫃yxmaths/fibonacci.pyeqRr+eqRr+\,hOaKb66maths/find_max.pyeqRr+eqRr+\ >b2C國5l]Rլӽmaths/find_max_recursion.pyeqRr+eqRr+\K+.|cd5YS_udtz9maths/find_min.pyeqRr+eqRr+\ RM^/%oԘNq>I`maths/find_min_recursion.pyenf`/enf`/\ ㋼D(ޡE&.g_maths/floor.pyeq%!#eq%!#\ ޼XvK umaths/gamma.pyeq%!#eq%!#\ Q=k^8Au(bTz?maths/gamma_recursive.pyenf`/enf`/\ >QXI-M4|maths/gaussian.pyeqRr+eqRr+\ :{_QC;s,#maths/gaussian_error_linear_unit.pyenf`/enf`/\  _c#l#jF(.-M~3*maths/gcd_of_n_numbers.pyeZfr_eZfr_\ ܢJJIv쐎*5 maths/greatest_common_divisor.pyeqK9߬OeqK9߬O\ \ |i˪O8|maths/greedy_coin_change.pyeo`hA.eo`hA.\ Eup\ȫ& maths/hamming_numbers.pyeq%!#eq%!#\ i)S?Éj2!\` maths/hardy_ramanujanalgo.pyeo`hA.eo`hA.\ L6wƉȆ`eJfܻmaths/hexagonal_number.pyeZfr_eZfr_\ ⛲CK)wZSmaths/images/__init__.pyeZfr_eZfr_\ !|~!)lM0Ϣmaths/images/gaussian.pngeo WYeo WY\ 'za5w Bju^h&maths/integration_by_simpson_approx.pyenf`/enf`/\ e.sIZ*H ?maths/interquartile_range.pyenf`/enf`/\ cܞ!8KoEO(8maths/is_int_palindrome.pyenf`/enf`/\ j !.d\iAmQ?maths/is_ip_v4_address_valid.pyenf`/enf`/\ f ,8 4>1ИMmaths/is_square_free.pyeqRr+eqRr+\  2D|rQG #.maths/jaccard_similarity.pyenf`/enf`/\ Se%s}" 6+Umaths/juggler_sequence.pyeq%!#eq%!#\vKhU⬈_O<7N0maths/karatsuba.pyeo`hA.eo`hA.\ 0ب_V{b' Zmaths/krishnamurthy_number.pyenf`/enf`/\ @UX1oU%]1&maths/kth_lexicographic_permutation.pyeo`hA.eo`hA.\ :~IXhkrYsI&maths/largest_of_very_large_numbers.pyeq%!#eq%!#\+ ^cyJ :qqXbmaths/least_common_multiple.pyeqZ3eqZ3\ ٭Mc^Gߘmaths/line_length.pyenf`/enf`/\ X(T4J&v3?ިmaths/liouville_lambda.pyenf`/enf`/\  V!y&=`8>$maths/lucas_lehmer_primality_test.pyenf`/enf`/\ 4Z쮧 RIlf,J>maths/lucas_series.pyeq%!#eq%!#\ "X9$kCmaths/maclaurin_series.pyenf`/enf`/\ A9FIVMQԇB{maths/manhattan_distance.pyenf`/enf`/\  |70SV0H| `maths/matrix_exponentiation.pyenf`/enf`/\ k ׮tղmaths/max_sum_sliding_window.pyenf`/enf`/\ UXzKK:#y2lmaths/median_of_two_arrays.pyeq%!#eq%!#\ &h۫X-ogFo maths/miller_rabin.pyenf`/enf`/\ ̊PB*!6maths/mobius_function.pyeZ{^eZ{^\ ;mB}:$/D6whmaths/modular_exponential.pyenf`/enf`/\ GOe޴vz>>?maths/monte_carlo.pyenf`/enf`/\6/p(f*4K[ymaths/monte_carlo_dice.pyeo WYeo WY\ ?H?"kQ[/ QVe maths/nevilles_method.pyeqRr+eqRr+\ ,ޕ 3maths/newton_raphson.pyepEepE\ \g$QqU!maths/number_of_digits.pyeqZ3eqZ3\ _>TQ0 >maths/numerical_integration.pyenf`/enf`/\  `)!LgcB' ,{maths/odd_sieve.pyeo WYeo WY\ 4҇uq maths/perfect_cube.pyeo`hA.eo`hA.\ Q2ғmhjFDmaths/perfect_number.pyenf`/enf`/\ ~hRhvK14x(R[Kmaths/perfect_square.pyeqK9߬OeqK9߬O\ Io`vAr5Mџmaths/persistence.pyeq%!#eq%!#\ ]  ҪF|C-maths/pi_generator.pyenf`/enf`/\ )yr9U뀭O2/"maths/pi_monte_carlo_estimation.pyenf`/enf`/\ ;MN񰭜 o maths/points_are_collinear_3d.pyenf`/enf`/\ POqd6ker Wvmaths/pollard_rho.pyenf`/enf`/\R07a6maths/polynomial_evaluation.pyenf`/enf`/\ ⛲CK)wZSmaths/polynomials/__init__.pyeqZ3eqZ3\ ۋYJ9JUu4maths/polynomials/single_indeterminate_operations.pyeo`hA.eo`hA.\  Щi<e^0Umaths/power_using_recursion.pyepEepE\;c͗qN8HDmaths/prime_check.pyeZ{^eZ{^\ ^\ :mG=,;jmaths/prime_factors.pyeq%!#eq%!#\ )~&L@^јmaths/prime_numbers.pyenf`/enf`/\2[&W&ˁ!maths/prime_sieve_eratosthenes.pyeq%!#eq%!#\ a8(ȓoY=}maths/primelib.pyenf`/enf`/\ s䤾)~ afUT#maths/print_multiplication_table.pyeo`hA.eo`hA.\ M=.KD W($maths/pronic_number.pyeo`hA.eo`hA.\ Gt~` ZcB`u]maths/proth_number.pyenf`/enf`/\ wpM<6b<Umaths/pythagoras.pyeqRr+eqRr+\ _AO·9_^sKZmaths/qr_decomposition.pyeit.Seit.S\5NÐad/* TY,maths/quadratic_equations_complex_numbers.pyeq%!#eq%!#\ JTFTgJ,Wmaths/radians.pyenf`/enf`/\ !,\M~D.\ QV=`tmaths/radix2_fft.pyeq%!#eq%!#\ =EkÑnU&j|u maths/relu.pyenf`/enf`/\ *o)QI"-Vmaths/remove_digit.pyeo WYeo WY\ L~MF>T:T˫omaths/runge_kutta.pyeq%!#eq%!#\ XP;u*(B}07Omaths/segmented_sieve.pyeZ{^eZ{^\ k⛲CK)wZSmaths/series/__init__.pyenf8enf8\ v(Ǽ_{q"|3Hmaths/series/arithmetic.pyenf8enf8\  {b9X]hf=ၼzbmaths/series/geometric.pyeqZ3eqZ3\  Dw3 ė4 maths/series/geometric_series.pyeqK9߬OeqK9߬O\ N P_SoDmaths/series/harmonic.pyenf8enf8\ -qki&`maths/series/harmonic_series.pyenf8enf8\ !FX+Ɣ4iݲh!maths/series/hexagonal_numbers.pyeqZ3eqZ3\ 4?#M~ ҪDmaths/series/p_series.pyenf8enf8\ R P7>2͓0maths/sieve_of_eratosthenes.pyeqRr+eqRr+\ (uqA~Tmaths/sigmoid.pyeq%!#eq%!#\ x-8(`cIrkNmaths/sigmoid_linear_unit.pyeo`hA.eo`hA.\ "&g3JL O-maths/signum.pyepEepE\ q@mÚqqV\ː=pmaths/simpson_rule.pyenf8enf8\ #-<E,maths/simultaneous_linear_equation_solver.pyenf8enf8\ $|nl]]ʲB[ maths/sin.pyenf8enf8\ %0Nɺ^%)F\ʊmaths/sock_merchant.pyenf8enf8\ wRT V \j.[qmaths/softmax.pyeq%!#eq%!#\ z,A?SK096maths/square_root.pyenf8enf8\ >8 0i\O)ˀi!maths/sum_of_arithmetic_series.pyenf8enf8\HH2W -lmdrmaths/sum_of_digits.pyenf8enf8\ yZ#Xԍf `+%maths/sum_of_geometric_progression.pyenf8enf8\ Lݞ kZ?#D$|Xmaths/sum_of_harmonic_series.pyenf8enf8\ OfKLà [M)Ymaths/sumset.pyenf8enf8\ S4`t$Ʃ U5b Xѝ(maths/sylvester_sequence.pyeqRr+eqRr+\ =eݫ>dh !l4 maths/tanh.pyenf8enf8\ >x wƏ maths/test_prime_check.pyeZ{^eZ{^\  M܊ka1 !maths/trapezoidal_rule.pyenf8enf8\ V w[ U=l`:Jmaths/triplet_sum.pyenf8enf8\ Yk+f8͆4ĄyMmaths/twin_prime.pyenf8enf8\ ]ɐ w&'maths/two_pointer.pyenf8enf8\ _N3-lN"8$vE ۸maths/two_sum.pyeo`hA.eo`hA.\ '~k=C|Maz maths/ugly_numbers.pyeqZ3eqZ3\[AXL>7 maths/volume.pyeo`hA.eo`hA.\ ((48$tBΟmaths/weird_number.pyenf8enf8\ 6H?k lz+maths/zellers_congruence.pyeZ{^eZ{^\ ⛲CK)wZSmatrix/__init__.pyenf8enf8\ `o ;z4ҶBmatrix/binary_search_matrix.pyenf8enf8\ dŕI5<Y3r!matrix/count_islands_in_matrix.pyenf8enf8\ a';EO47׮p1matrix/count_negative_numbers_in_sorted_matrix.pyenf8enf8\ ebHa_Ъ2\q<matrix/count_paths.pyenf8enf8\ g ORFC6e#)zV0matrix/cramers_rule_2x2.pyenf8enf8\ :=߂S"8iRĀFmatrix/inverse_of_matrix.pyenf8enf8\ h3ik\ߏ,l4iLu`'matrix/largest_square_area_in_matrix.pyeqZ3eqZ3\ >,>S |bL/matrix/matrix_class.pyeqZ3eqZ3\3cb(AMmatrix/matrix_operation.pyenf8enf8\ l ]@ 07i.@=R{g4matrix/max_area_of_island.pyenf8enf8\  e zVN33matrix/nth_fibonacci_using_matrix_exponentiation.pyenf8enf8\ p'eUȹ:Gomatrix/pascal_triangle.pyenf8enf8\q lۚ7dNmatrix/rotate_matrix.pyeqZ3eqZ3\  ;灿%?z$matrix/searching_in_sorted_matrix.pyeqZ3eqZ3\  %bq}dOɿematrix/sherman_morrison.pyeqK9߬OeqK9߬O\ ? Rzo?i' matrix/spiral_print.pyeZ{^eZ{^\ ⛲CK)wZSmatrix/tests/__init__.pyeZ{^eZ{^\ <V-_ѹQާlmatrix/tests/pytest.iniepEepE\xe_?VµX%matrix/tests/test_matrix_operation.pyeZ{^eZ{^\ ⛲CK)wZSnetworking_flow/__init__.pyeo WYeo WY\ qnyyA['K!networking_flow/ford_fulkerson.pyenf8enf8\ uKE*dƻ֪aPknetworking_flow/minimum_cut.pyepEepE\-ќWr2aeYwJ#0neural_network/2_hidden_layers_neural_network.pyeZ{^eZ{^\ ⛲CK)wZSneural_network/__init__.pyenf8enf8\ tz<NqmQ?}+>neural_network/activation_functions/exponential_linear_unit.pyeqK9߬OeqK9߬O\ _^#>Ji;{D1neural_network/back_propagation_neural_network.pyeq%!#eq%!#\7o5t;dߵBT,neural_network/convolution_neural_network.pyeq%!#eq%!#\ >ްbčǑut +>U e&neural_network/gan.py_tfeqZ3eqZ3\ .d~EN-V3jneural_network/input_data.pyeo`hA.eo`hA.\ ,HxB|{Ůneural_network/perceptron.pyeo WYeo WY\ <#Hs>-r'o8'neural_network/simple_neural_network.pyeZ]eZ]\ ⛲CK)wZSother/__init__.pyenf8enf8\,b.7vc+rnjother/activity_selection.pyenf8enf8\ [B>Sfx77*U>6̚!other/alternative_list_arrange.pyeo`hA.eo`hA.\ 0-B;(mN?` *other/davisb_putnamb_logemannb_loveland.pyeo WYeo WY\ !{κ]zd6"fNVB#other/dijkstra_bankers_algorithm.pyeZ]eZ]\ !&V_ %h$other/doomsday.pyenf8enf8\ /MΝ(@Ð#.'other/fischer_yates_shuffle.pyeZ]eZ]\ DGԫg 'o%05other/gauss_easter.pyeq%!#eq%!#\ C.fhL:*^ULother/graham_scan.pyenf8enf8\ r_E$7AU other/greedy.pyenf8enf8\ 艋Ւ:~˟=C other/guess_the_number_search.pyenf8enf8\ 3g[&fO?onother/h_index.pyenf8enf8\ qi+H5;\noZother/least_recently_used.pyenf8enf8\ )&;`\s_mB1Bother/lfu_cache.pyeq%!#eq%!#\ 1cI۹Vċ&other/linear_congruential_generator.pyenf8enf8\ O'^[N޷H/ A1$uother/lru_cache.pyeqK9߬OeqK9߬O\ z%t4}'`WALother/magicdiamondpattern.pyenf8enf8\ oYe2q04Νh^<other/maximum_subsequence.pyeo WYeo WY\ SȲ B6oother/nested_brackets.pyenf8enf8\ l 5DOT&i other/number_container_system.pyeq%!#eq%!#\  KaaטlBL[zother/password.pyenf8enf8\ ]P 58ܨ&k0H6pother/quine.pyenf8enf8\23[7fA|d᤬other/scoring_algorithm.pyenf8enf8\  81Ye%2`_v other/sdes.pyenf8enf8\ E =(oQUWother/tower_of_hanoi.pyenf8enf8\ ⛲CK)wZSphysics/__init__.pyenf8enf8\ e0}"?+.g_8physics/altitude_pressure.pyepNepN\ n^ta,{P_܉Fphysics/archimedes_principle.pyenf8enf8\ ^`$ \r6d physics/basic_orbital_capture.pyenf8enf8\ ~[Y?$ůg+ˑR E3physics/casimir_effect.pyenf8enf8\ %dh_nu$Zpphysics/centripetal_force.pyenf8enf8\ +n]u~ԥ<$29.physics/grahams_law.pyenfAenfA\ ZqFί]'physics/horizontal_projectile_motion.pyenfAenfA\  Ҋgfnn2%physics/hubble_parameter.pyeqRr+eqRr+\ }]{yegW|, physics/ideal_gas_law.pyenfAenfA\ wn"<-sWl1zxiphysics/kinetic_energy.pyenfAenfA\ ϗ]'~b-physics/lorentz_transformation_four_vector.pyenfAenfA\  dw\1*%abDAphysics/malus_law.pyeqK9߬OeqK9߬O\ V.%+pf .'dTxphysics/n_body_simulation.pyeq%!#eq%!#\  K[/;%physics/newtons_law_of_gravitation.pyenfAenfA\  SxЄ3|`T2r='physics/newtons_second_law_of_motion.pyenfAenfA\ TOov (pB}physics/potential_energy.pyenfAenfA\ G9RGDC physics/rms_speed_of_molecule.pyenfAenfA\ ^H825.cphysics/shear_stress.pyeq%!#eq%!#\ 8eflm8f%@physics/speed_of_sound.pyenfAenfA\^*g驰zpWG Mproject_euler/README.mdeZ]eZ]\ ⛲CK)wZSproject_euler/__init__.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_001/__init__.pyenfAenfA\ LT1G)ʑd3E!project_euler/problem_001/sol1.pyeZ]eZ]\ SpQ3xX(U{يE!project_euler/problem_001/sol2.pyeZ]eZ]\ gAU:Qz]};=h1!project_euler/problem_001/sol3.pyeZ]eZ]\ d<O$2-/5g!project_euler/problem_001/sol4.pyenfAenfA\ >o$ZgUR#_\!project_euler/problem_001/sol5.pyeZ]eZ]\ Gq7oi!project_euler/problem_001/sol6.pyenfAenfA\ jp-Kiy!project_euler/problem_001/sol7.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_002/__init__.pyeZ]eZ]\ Shk ʆí 8!project_euler/problem_002/sol1.pyeZ]eZ]\ Ґ3Цɰjjn4%!project_euler/problem_002/sol2.pyeZ]eZ]\ :uzL! kd!project_euler/problem_002/sol3.pyeZ]eZ]\ !p֨ &Qqpn<]E!project_euler/problem_002/sol4.pyeZ]eZ]\ "D9ў8Y`4"3bg_r!project_euler/problem_002/sol5.pyeZ]eZ]\ $⛲CK)wZS%project_euler/problem_003/__init__.pyenfAenfA\y gA^ڢnTblN!project_euler/problem_003/sol1.pyeZ]eZ]\ & JFz5,o!project_euler/problem_003/sol2.pyenfAenfA\{:N#u: 8!project_euler/problem_003/sol3.pyeZ]eZ]\ )⛲CK)wZS%project_euler/problem_004/__init__.pyenfAenfA\ k7ݔ-kx@9_T!project_euler/problem_004/sol1.pyeZ]eZ]\ +ȀmX|@Ϻg=0!project_euler/problem_004/sol2.pyeZ]eZ]\ -⛲CK)wZS%project_euler/problem_005/__init__.pyenfAenfA\ c_[X0`<ۤ&!project_euler/problem_005/sol1.pyeq%!#eq%!#\ 0>^W](&#M!project_euler/problem_005/sol2.pyeZ]eZ]\ 1⛲CK)wZS%project_euler/problem_006/__init__.pyenfAenfA\ R-aY,3.(]|!project_euler/problem_006/sol1.pyeZ]eZ]\ 3m{Hqja`!project_euler/problem_006/sol2.pyenfAenfA\ +R#<)k#y!project_euler/problem_006/sol3.pyeZ]eZ]\ 5tVnKtSOf!project_euler/problem_006/sol4.pyeZ]eZ]\ 7⛲CK)wZS%project_euler/problem_007/__init__.pyenfAenfA\ /1pOAZ}R!project_euler/problem_007/sol1.pyenfAenfA\} uQaì1t$td!project_euler/problem_007/sol2.pyenfAenfA\ 7wB`ۙ4WTh癭^Q!project_euler/problem_007/sol3.pyeZ]eZ]\ <⛲CK)wZS%project_euler/problem_008/__init__.pyenfAenfA\  iG6hI `f !project_euler/problem_008/sol1.pyenfAenfA\ :1CSv͠vJ]!project_euler/problem_008/sol2.pyenfAenfA\  ^,p-rdVy !project_euler/problem_008/sol3.pyeZ]eZ]\ A⛲CK)wZS%project_euler/problem_009/__init__.pyenfAenfA\\y%ji+6i !project_euler/problem_009/sol1.pyeZ]eZ]\ Ckr*"E6!project_euler/problem_009/sol2.pyenfAenfA\ 74 0cpsQ1G@/!project_euler/problem_009/sol3.pyeZ]eZ]\ F⛲CK)wZS%project_euler/problem_010/__init__.pyenfAenfA\ P-17(N z!I*;!project_euler/problem_010/sol1.pyenfAenfA\ "$\ gR/m !project_euler/problem_010/sol2.pyenfAenfA\ }`WK@ 740!project_euler/problem_010/sol3.pyeZ]eZ]\ K⛲CK)wZS%project_euler/problem_011/__init__.pyeZ]eZ]\ LJE>)1QNFh:"project_euler/problem_011/grid.txtenfAenfA\ 2 E:|X@L4yhs~j!project_euler/problem_011/sol1.pyenfAenfA\ < ۙ=vDi|6!project_euler/problem_011/sol2.pyeZ]eZ]\ P⛲CK)wZS%project_euler/problem_012/__init__.pyenfAenfA\ c+Aig%7TY|,<+!project_euler/problem_012/sol1.pyenfAenfA\ {8 t@mz|!project_euler/problem_012/sol2.pyeZ]eZ]\ T⛲CK)wZS%project_euler/problem_013/__init__.pyeZ]eZ]\ UChm 1kS-S!project_euler/problem_013/num.txtenfAenfA\ ,zAJyP34R!project_euler/problem_013/sol1.pyeZ]eZ]\ X⛲CK)wZS%project_euler/problem_014/__init__.pyenfAenfA\3IY'>,e24V1=!project_euler/problem_014/sol1.pyenfAenfA\p'$HR[2[xe1B̯ɑ!project_euler/problem_014/sol2.pyeZ]eZ]\ \⛲CK)wZS%project_euler/problem_015/__init__.pyenfAenfA\ z C+~o$m!project_euler/problem_015/sol1.pyeZ]eZ]\ _⛲CK)wZS%project_euler/problem_016/__init__.pyenfAenfA\ bXMCo3ND 2OK!project_euler/problem_016/sol1.pyenfAenfA\ h7!.tŐ|4RX}\%*!project_euler/problem_016/sol2.pyeZ]eZ]\ c⛲CK)wZS%project_euler/problem_017/__init__.pyeZ]eZ]\ diZNZǁR@^7a!project_euler/problem_017/sol1.pyeZ]eZ]\ f⛲CK)wZS%project_euler/problem_018/__init__.pyenfAenfA\ |Bp0aHzpNr8|%project_euler/problem_018/solution.pyeZ]eZ]\ hh6~Є.}yX&project_euler/problem_018/triangle.txteZ]eZ]\ j⛲CK)wZS%project_euler/problem_019/__init__.pyenfAenfA\ 8}O45ĵ!4!project_euler/problem_019/sol1.pyeZ]eZ]\ m⛲CK)wZS%project_euler/problem_020/__init__.pyeZ]eZ]\ nǴrNT<yhdyr2!project_euler/problem_020/sol1.pyenfAenfA\ gnjSRZ\6 !project_euler/problem_020/sol2.pyeZ]eZ]\ pKO(_4,!project_euler/problem_020/sol3.pyeZ]eZ]\ q;, ߦ> e`{s!project_euler/problem_020/sol4.pyeZ]eZ]\ s⛲CK)wZS%project_euler/problem_021/__init__.pyenfAenfA\ 55%>gΛfU!project_euler/problem_021/sol1.pyeZ]eZ]\ v⛲CK)wZS%project_euler/problem_022/__init__.pyeZ]eZ]\ wo{lL.4㭇s(project_euler/problem_022/p022_names.txteZ]eZ]\ x)$^t0p]z;lh!project_euler/problem_022/sol1.pyeZ]eZ]\ yZhn,k_HT3ـ!project_euler/problem_022/sol2.pyeZ]eZ]\ {⛲CK)wZS%project_euler/problem_023/__init__.pyenfAenfA\ Njrp&f!project_euler/problem_023/sol1.pyeZ]eZ]\ ~⛲CK)wZS%project_euler/problem_024/__init__.pyeZ]eZ]\ cx`Y9X*!project_euler/problem_024/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_025/__init__.pyenfAenfA\ 4d׆,x @;S !project_euler/problem_025/sol1.pyenfAenfA\HoI蟴e F !project_euler/problem_025/sol2.pyenfAenfA\ O : 0f5Q׏!project_euler/problem_025/sol3.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_026/__init__.pyenfAenfA\BJͳt c!project_euler/problem_026/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_027/__init__.pyenfAenfA\ . a>+OQE#'nB!project_euler/problem_027/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_028/__init__.pyenfAenfA\ xO-Y3Ɣ!project_euler/problem_028/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_029/__init__.pyenfAenfA\ U٨U5Oc3Tc\!project_euler/problem_029/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_030/__init__.pyenfAenfA\ /,kNNω\S'Dv]z!project_euler/problem_030/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_031/__init__.pyeZ]eZ]\ @81u).y+!project_euler/problem_031/sol1.pyeZ]eZ]\ 8KtʵYDtDZc!project_euler/problem_031/sol2.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_032/__init__.pyenfAenfA\ W)w#`FrcM&"project_euler/problem_032/sol32.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_033/__init__.pyenfAenfA\ ^2BKj{4/ 52Yp !project_euler/problem_033/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_034/__init__.pyenfAenfA\#獄2ۻz`i__"!project_euler/problem_034/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_035/__init__.pyeq%!#eq%!#\ H?g;i &!project_euler/problem_035/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_036/__init__.pyenfAenfA\ '5n|d+ֈ+J!project_euler/problem_036/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_037/__init__.pyenfAenfA\  v˖JrxP!project_euler/problem_037/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_038/__init__.pyenfAenfA\  OП}Dž)8#V!project_euler/problem_038/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_039/__init__.pyeZ]eZ]\ AYHhtoa!project_euler/problem_039/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_040/__init__.pyeZ]eZ]\ ji7w#N2RPc!project_euler/problem_040/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_041/__init__.pyenfAenfA\ S.vtB yV!project_euler/problem_041/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_042/__init__.pyenfAenfA\ RN@@r0r?]B'project_euler/problem_042/solution42.pyeZ]eZ]\ ?گ:BQ ͗t#project_euler/problem_042/words.txteZ]eZ]\ y-`H,oRbQ%project_euler/problem_043/__init__.pyenfAenfA\ 3 ɕ}v|qx(Kpk!project_euler/problem_043/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_044/__init__.pyenfAenfA\ &;ujemω}sT"%!project_euler/problem_044/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_045/__init__.pyenfAenfA\ !,-v 177<!project_euler/problem_045/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_046/__init__.pyenfAenfA\ ' ݛo>akzG!project_euler/problem_046/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_047/__init__.pyeZ]eZ]\  8r .|=v7!project_euler/problem_047/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_048/__init__.pyenfAenfA\ ZE8]N~jSw˲W)!project_euler/problem_048/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_049/__init__.pyenfAenfA\ \u`ݮB);޵{P}!!project_euler/problem_049/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_050/__init__.pyenfAenfA\ no+]x>!project_euler/problem_050/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_051/__init__.pyenfAenfA\ ( ΒDUs3h!project_euler/problem_051/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_052/__init__.pyenfAenfA\ 5!c6(W+ !project_euler/problem_052/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_053/__init__.pyeZ]eZ]\ 7&Y]zAӻX!project_euler/problem_053/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_054/__init__.pyeZ]eZ]\ u0Had׷)project_euler/problem_054/poker_hands.txtenfAenfA\ 6 ߥ֊wpZ<u.!project_euler/problem_054/sol1.pyepNepN\ W5yG&]])n,project_euler/problem_054/test_poker_hand.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_055/__init__.pyeZ]eZ]\  iT:iQ۶ZEu!project_euler/problem_055/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_056/__init__.pyenfAenfA\ rņ/i,l!project_euler/problem_056/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_057/__init__.pyeZ]eZ]\ GX'o'!project_euler/problem_057/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_058/__init__.pyenfAenfA\I jX K6ӆip!project_euler/problem_058/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_059/__init__.pyeZ]eZ]\ ޳$rR,iYo)project_euler/problem_059/p059_cipher.txtenfAenfA\ $;NaFtďY!project_euler/problem_059/sol1.pyeZ]eZ]\ `'7@t9j/ g\Yʸ!)project_euler/problem_059/test_cipher.txteZ]eZ]\ ⛲CK)wZS%project_euler/problem_062/__init__.pyenfJenfJ\ 8N>Q;W4BN!project_euler/problem_062/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_063/__init__.pyenfJenfJ\  .Vp~Bb&ȁsZ!project_euler/problem_063/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_064/__init__.pyenfJenfJ\ v/^Mݿ!project_euler/problem_064/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_065/__init__.pyenfJenfJ\ ` Gs׼,8jw  T"!project_euler/problem_065/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_067/__init__.pyenfJenfJ\ +Ag͈@٭"!project_euler/problem_067/sol1.pyenfJenfJ\ %.qpp+)|39䍁!project_euler/problem_067/sol2.pyeZ]eZ]\ ;.+8-NJYuKZu+&project_euler/problem_067/triangle.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_068/__init__.pyenfJenfJ\ ρKW>pDA.D'!project_euler/problem_068/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_069/__init__.pyenfJenfJ\ ]aVFh*/IƲ!project_euler/problem_069/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_070/__init__.pyeqK9߬OeqK9߬O\  '?7uh<\FpCb!project_euler/problem_070/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_071/__init__.pyeZ]eZ]\  A[~St ]UJ8sk[!project_euler/problem_071/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_072/__init__.pyeq%!#eq%!#\ @1V6h),!project_euler/problem_072/sol1.pyeZ]eZ]\ ,:SP0PO~!project_euler/problem_072/sol2.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_073/__init__.pyenfJenfJ\ `+fv <>?ڭ0!!project_euler/problem_073/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_074/__init__.pyenfJenfJ\  עWON5.BY|5!project_euler/problem_074/sol1.pyenfJenfJ\ -K#㇘-z"eA6G!project_euler/problem_074/sol2.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_075/__init__.pyenfJenfJ\  w!N=;!project_euler/problem_075/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_076/__init__.pyeZ]eZ]\ Q`~ajXel !project_euler/problem_076/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_077/__init__.pyenfJenfJ\ "`P^%ᬃ!project_euler/problem_077/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_078/__init__.pyenfJenfJ\ 4~Y8f;NMPZ~SVnE!project_euler/problem_078/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_079/__init__.pyenfJenfJ\ AVs$U3m87$project_euler/problem_079/keylog.txtenfJenfJ\ @,p$HW٠ ]3[1!)project_euler/problem_079/keylog_test.txtenfJenfJ\ ~JCu Gdǖ!project_euler/problem_079/sol1.pyeZ]eZ]\ +⛲CK)wZS%project_euler/problem_080/__init__.pyenfJenfJ\ .NiحvYZӪ@~!project_euler/problem_080/sol1.pyeZ]eZ]\ .⛲CK)wZS%project_euler/problem_081/__init__.pyeZ]eZ]\ /zIS"A6IƳ4'˜$project_euler/problem_081/matrix.txtenfJenfJ\ $kTNag]v!project_euler/problem_081/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_082/__init__.pyenfJenfJ\ zIS"A6IƳ4'˜#project_euler/problem_082/input.txtenfJenfJ\ 7{P܈w/(Ņ)8!project_euler/problem_082/sol1.pyenfJenfJ\ av}oӕ^8)project_euler/problem_082/test_matrix.txteZZ\eZZ\\ 7⛲CK)wZS%project_euler/problem_085/__init__.pyenfJenfJ\ %IAcR>vA|Z !project_euler/problem_085/sol1.pyeZZ\eZZ\\ :⛲CK)wZS%project_euler/problem_086/__init__.pyenfJenfJ\ JI^z\{DL!project_euler/problem_086/sol1.pyeZZ\eZZ\\ =⛲CK)wZS%project_euler/problem_087/__init__.pyeZZ\eZZ\\ >DH'&u!project_euler/problem_087/sol1.pyeZZ\eZZ\\ @y-`H,oRbQ%project_euler/problem_089/__init__.pyeZZ\eZZ\\ A!B̩-΀⡺K}1project_euler/problem_089/numeralcleanup_test.txteZZ\eZZ\\ B&yPe5Z[h;=kDL>(project_euler/problem_089/p089_roman.txtenfJenfJ\ & 1Y um[ (q!project_euler/problem_089/sol1.pyeZZ\eZZ\\ E⛲CK)wZS%project_euler/problem_091/__init__.pyeZZ\eZZ\\ Fllp"ELTm26s!project_euler/problem_091/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_092/__init__.pyenfJenfJ\  ? {Oa~4LXn!project_euler/problem_092/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_094/__init__.pyenfJenfJ\ &ª09=S$!project_euler/problem_094/sol1.pyeZZ\eZZ\\ Ny-`H,oRbQ%project_euler/problem_097/__init__.pyenfJenfJ\ (`Â4˵C-!project_euler/problem_097/sol1.pyeZZ\eZZ\\ Q⛲CK)wZS%project_euler/problem_099/__init__.pyeZZ\eZZ\\ R6=";&q \ބ &project_euler/problem_099/base_exp.txtenfJenfJ\ =V!X<1ONDG!project_euler/problem_099/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_100/__init__.pyenfJenfJ\ 6sxj߱U<Ur!project_euler/problem_100/sol1.pyeZZ\eZZ\\ X⛲CK)wZS%project_euler/problem_101/__init__.pyenfJenfJ\ @5yjb]q3ik!project_euler/problem_101/sol1.pyeZZ\eZZ\\ [⛲CK)wZS%project_euler/problem_102/__init__.pyeZZ\eZZ\\ \g?Ay,@Dr,project_euler/problem_102/p102_triangles.txtenfJenfJ\ ' UOncaUer0!project_euler/problem_102/sol1.pyeZZ\eZZ\\ ^7\e ,project_euler/problem_102/test_triangles.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_104/__init__.pyenfJenfJ\  `oۏ O-5L0!project_euler/problem_104/sol1.pyeZZ\eZZ\\ c⛲CK)wZS%project_euler/problem_107/__init__.pyeZZ\eZZ\\ d6+hꋐ:f^X*project_euler/problem_107/p107_network.txtenfJenfJ\ *sFYK{39!project_euler/problem_107/sol1.pyeZZ\eZZ\\ fzW ʺd**b})*project_euler/problem_107/test_network.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_109/__init__.pyenfJenfJ\  ]Y Qkq.\!project_euler/problem_109/sol1.pyeZZ\eZZ\\ k⛲CK)wZS%project_euler/problem_112/__init__.pyeqK9߬OeqK9߬O\ k5eJ=$!project_euler/problem_112/sol1.pyeZZ\eZZ\\ n⛲CK)wZS%project_euler/problem_113/__init__.pyenfJenfJ\ gP wbF-`n|!project_euler/problem_113/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_114/__init__.pyenfJenfJ\ A +%R[6LU14!project_euler/problem_114/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_115/__init__.pyenfJenfJ\ 5M_ p!project_euler/problem_115/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_116/__init__.pyenfJenfJ\ >Z@n*@̱!project_euler/problem_116/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_117/__init__.pyenfJenfJ\ !DTšeV+BN!project_euler/problem_117/sol1.pyeZZ\eZZ\\ }⛲CK)wZS%project_euler/problem_119/__init__.pyenfJenfJ\ +`͡y,5cm|!project_euler/problem_119/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_120/__init__.pyeZZ\eZZ\\ :h!!E`qU?dS8!project_euler/problem_120/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_121/__init__.pyenfJenfJ\ gSdZT!project_euler/problem_121/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_123/__init__.pyenfJenfJ\ , LݙRO)ݥ!project_euler/problem_123/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_125/__init__.pyenfJenfJ\ aoo/S H.&!project_euler/problem_125/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_129/__init__.pyeZZ\eZZ\\ .:'-iLB!project_euler/problem_129/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_131/__init__.pyenfJenfJ\ >^tMxi5!project_euler/problem_131/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_135/__init__.pyeq%!#eq%!#\ 9Ehi)q!project_euler/problem_135/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_144/__init__.pyenfJenfJ\ O,fb#i!project_euler/problem_144/sol1.pyenfJenfJ\ "⛲CK)wZS%project_euler/problem_145/__init__.pyepNepN\  Ea/6 8>Wg>ժY!project_euler/problem_145/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_173/__init__.pyenfJenfJ\ TTb̹; x\P!project_euler/problem_173/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_174/__init__.pyeZZ\eZZ\\ %ZekPO^7i2r !project_euler/problem_174/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_180/__init__.pyenfJenfJ\ -MʧkRUE'%!project_euler/problem_180/sol1.pyenfJenfJ\ '⛲CK)wZS%project_euler/problem_187/__init__.pyepNepN\ >*p#+K'yL!4!project_euler/problem_187/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_188/__init__.pyenfJenfJ\ N)'F/H0!project_euler/problem_188/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_191/__init__.pyenfJenfJ\ h kTC`2Jۄ̪!project_euler/problem_191/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_203/__init__.pyenfJenfJ\ Gڔ6$j|0moF:kS!project_euler/problem_203/sol1.pyenfJenfJ\ *⛲CK)wZS%project_euler/problem_205/__init__.pyenfJenfJ\ +c2tOZ^ҠW2!project_euler/problem_205/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_206/__init__.pyeZZ\eZZ\\ +2wp9e!project_euler/problem_206/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_207/__init__.pyenfJenfJ\ s +5ǚ' L|c!project_euler/problem_207/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_234/__init__.pyenfJenfJ\  { n 8%@ !project_euler/problem_234/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_301/__init__.pyenfJenfJ\ 'KI@3-ǑgBz!project_euler/problem_301/sol1.pyenfJenfJ\ -⛲CK)wZS%project_euler/problem_493/__init__.pyeq%!#eq%!#\ ɇR0CȞz $ǚ?!project_euler/problem_493/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_551/__init__.pyenfJenfJ\ t&,^B<LYpe!project_euler/problem_551/sol1.pyenfJenfJ\ 0⛲CK)wZS%project_euler/problem_587/__init__.pyenfJenfJ\ 1 oa]lmUsx@!project_euler/problem_587/sol1.pyenfJenfJ\ 7⛲CK)wZS%project_euler/problem_686/__init__.pyenfJenfJ\ 8uQQ2r6d !project_euler/problem_686/sol1.pyenfJenfJ\ :⛲CK)wZS%project_euler/problem_800/__init__.pyenfJenfJ\ ;\x{q^-˵ !project_euler/problem_800/sol1.pyeq%!#eq%!#\  WC߂/_ȳ$pyproject.tomlenfJenfJ\ um<dWD]$Equantum/README.mdeZZ\eZZ\\ ⛲CK)wZSquantum/__init__.pyeq%!#eq%!#\ l ®A]bٟ+_quantum/bb84.pyeq%!#eq%!#\ i[^5v:_LEWOMLquantum/deutsch_jozsa.pyeq%!#eq%!#\ !}U4WB!df~quantum/half_adder.pyeq%!#eq%!#\ #'-z*:Ϊ`\quantum/not_gate.pyenfJenfJ\ Av*µHuC -rquantum/q_fourier_transform.pyeqK9߬OeqK9߬O\ f1Q`V0\.A*Hquantum/q_full_adder.pyeq%!#eq%!#\ 2䓲CC7Rquantum/quantum_entanglement.pyenfJenfJ\ D*nYizUH&quantum/quantum_random.py.DISABLED.txteq%!#eq%!#\  ]@ quantum/quantum_teleportation.pyeq%!#eq%!#\ "B@D].quantum/ripple_adder_classic.pyeq%!#eq%!#\ >`[1J O u:Cquantum/single_qubit_measure.pyeq%!#eq%!#\  ;1/]שPquantum/superdense_coding.pyeqK9߬OeqK9߬O\l'R=T.@!2t*requirements.txteZZ\eZZ\\ ⛲CK)wZSscheduling/__init__.pyenfJenfJ\ 1͸!b6%Lys%scheduling/first_come_first_served.pyeqK9߬OeqK9߬O\ YPS*:`sI%)scheduling/highest_response_ratio_next.pyenfJenfJ\ K{#W_2C9)2Ea@*scheduling/job_sequencing_with_deadline.pyenfJenfJ\ L0t<ťSyeYW(scheduling/multi_level_feedback_queue.pyenfJenfJ\ M itM0XN~* /scheduling/non_preemptive_shortest_job_first.pyenfJenfJ\ 3M٥S 7m7sIJ''wEscheduling/round_robin.pyenfJenfJ\ ] sd|<qFҫp scheduling/shortest_job_first.pyeZZ\eZZ\\ ⛲CK)wZSscripts/__init__.pyenlaenla\ H$oe Ź,hscripts/build_directory_md.pyenfTSenfTS\ ,CvGf SDo="scripts/project_euler_answers.jsonenfTSenfTS\ So#qҀemant2scripts/validate_filenames.pyenfTSenfTS\ w J&wLX5~scripts/validate_solutions.pyeZ[eZ[\ ⛲CK)wZSsearches/__init__.pyeo`hA.eo`hA.\#Oeb"bߺ_޽<{0searches/binary_search.pyenfTSenfTS\ N#oAB~t D&!searches/binary_tree_traversal.pyeZ[eZ[\ hƅnEE_5^ searches/double_linear_search.pyeZ[eZ[\ H>L I(ۂ*searches/double_linear_search_recursion.pyenfTSenfTS\ 6 UӞؗ2yNc searches/fibonacci_search.pyenfTSenfTS\ zN'Vzݢzsearches/hill_climbing.pyenfTSenfTS\ 2IL&2޾ގ< searches/interpolation_search.pyenfTSenfTS\ 7;x dgh_g^searches/jump_search.pyenfTSenfTS\  Wnֺ?j searches/linear_search.pyeZ[eZ[\ ^ތMj~ lsearches/quick_select.pyeZ[eZ[\ iϟ5\V6%ݚy-<"searches/sentinel_linear_search.pyenfTSenfTS\ =siJd:Hg.e searches/simple_binary_search.pyenfTSenfTS\ n="] "@ l6 ݥk5 searches/simulated_annealing.pyenfTSenfTS\ J*C٘Yvbq"ۨ5searches/tabu_search.pyeZ[eZ[\ FbzEv;searches/tabu_test_data.txtenfTSenfTS\ 6/Ңb+RvM{searches/ternary_search.pyeqeq\ ΰ |+~*=PHбsorts/README.mdeZ[eZ[\ ⛲CK)wZSsorts/__init__.pyenfTSenfTS\ sd=ϴUGxZX1[sorts/bead_sort.pyeo WYeo WY\ =AU * S^a?sorts/binary_insertion_sort.pyenfTSenfTS\  _zE5]5asorts/bitonic_sort.pyenfTSenfTS\ )? U&Sd: 5MOTwsorts/bogo_sort.pyeo`hA.eo`hA.\ *}6*[A_ep@B^2sorts/bubble_sort.pyeqK9߬OeqK9߬O\{E&|n{-sorts/bucket_sort.pyenfTSenfTS\ R'Պ<CB]T53HUsorts/circle_sort.pyeo WYeo WY\.81h*Ϙ ;+?#ͼsorts/cocktail_shaker_sort.pyenfTSenfTS\ :<TmL6z_տsorts/comb_sort.pyeqK9߬OeqK9߬O\ İ2=[.RI*sorts/counting_sort.pyeqK9߬OeqK9߬O\ ߀o@Dy+`c} RMsorts/cycle_sort.pyeqK9߬OeqK9߬O\ \gE lCp~nKsorts/double_sort.pyenfTSenfTS\ TPu:{Q x6 !sorts/dutch_national_flag_sort.pyenfTSenfTS\ U犝 #VI9sorts/exchange_sort.pyenfTSenfTS\ yE+@NrYsorts/external_sort.pyeZ[eZ[\ l&bdKT%jsorts/gnome_sort.pyeZ[eZ[\ Mʇ؜!1_@$M:FLbgRsorts/heap_sort.pyeqK9߬OeqK9߬O\ m[`ә,#>0bJ Bsorts/insertion_sort.pyeo WYeo WY\ IdZ۷|UCsorts/intro_sort.pyenfTSenfTS\ B "2ytat֥o+8X!nsorts/iterative_merge_sort.pyenfTSenfTS\ 2J[ޠ?aᇊdʪ4sorts/merge_insertion_sort.pyeo WYeo WY\ T &n$m|}sorts/merge_sort.pyenfTSenfTS\ VGLuؿy%\6̽sorts/msd_radix_sort.pyeZ[eZ[\ [ACHW)sorts/natural_sort.pyenfTSenfTS\ '@;0- E('sorts/normal_distribution_quick_sort.mdenfTSenfTS\ ]}KñxUt3'sorts/odd_even_sort.pyeqK9߬OeqK9߬O\ Kև Vy;?ML;(sorts/odd_even_transposition_parallel.pyenfTSenfTS\ mx*q^!Rc_W/sorts/odd_even_transposition_single_threaded.pyeZ[eZ[\ Ds5;r@~&@>)Ksorts/pancake_sort.pyenfTSenfTS\ c 2` RO!dsorts/patience_sort.pyenfTSenfTS\ 9>mL oqG:|8~sorts/pigeon_sort.pyeZ[eZ[\ #PEL _S]0sorts/pigeonhole_sort.pyenfTSenfTS\ :>>HC޺wsorts/quick_sort.pyeZ[eZ[\ % (md"L2 sorts/quick_sort_3_partition.pyenfTSenfTS\ +abI >}5tsorts/radix_sort.pyeq%!#eq%!#\ % F6ϯ1n\-sorts/random_normal_distribution_quicksort.pyeq%!#eq%!#\ 'tgA~",u# sorts/random_pivot_quick_sort.pyeo`hA.eo`hA.\ 1Y>[n4}98> K'sorts/recursive_bubble_sort.pyenfTSenfTS\ )}WaC~77׳{!sorts/recursive_insertion_sort.pyenfTSenfTS\ WL,@Ue-qP 6h"sorts/recursive_mergesort_array.pyenfTSenfTS\  s_D ~0hsorts/recursive_quick_sort.pyeq%!#eq%!#\ Lpp "EVp,yVsorts/selection_sort.pyeq%!#eq%!#\ p)Ѽ w`'bsorts/shell_sort.pyenfTSenfTS\ X]{s5A ~Tsorts/shrink_shell_sort.pyenfTSenfTS\ = se='@T2aOsorts/slowsort.pyeq%!#eq%!#\ [͚[4 VyIl/sorts/stooge_sort.pyenfTSenfTS\ cLӖV~c]Dsorts/strand_sort.pyenfTSenfTS\ Q(&(h2sorts/tim_sort.pyeq%!#eq%!#\ YWS 83T<sorts/topological_sort.pyepNepN\ kxθ:?1kZ2T!sorts/tree_sort.pyeZ[eZ[\ 3D/zX+,?d}sorts/unknown_sort.pyeZ[eZ[\ 4<`oV>]暗sorts/wiggle_sort.pyeZ[eZ[\ 6⛲CK)wZSstrings/__init__.pyenfTSenfTS\ *KOhՎ8wJT(strings/aho_corasick.pyenfTSenfTS\ Y݊tM6l}GF),%strings/alternative_string_arrange.pyenfTSenfTS\ ZEߤ8gUpstrings/anagrams.pyenf\enf\\ [ڿRoush5bstrings/anagrams.txtenf\enf\\ \w _K\A tD"strings/autocomplete_using_trie.pyenf\enf\\ ]N&B (-Me 'U}Ystrings/barcode_validator.pyenf\enf\\ @ s/tB|l޴b}strings/boyer_moore_search.pyenf\enf\\ ?!S8|nQ1_W1strings/can_string_be_rearranged_as_palindrome.pyeo WYeo WY\ M|+SM50I.\strings/capitalize.pyenf\enf\\M#Ϲ!ðmT$AnKstrings/check_anagrams.pyenf\enf\\ a xEt cxDDt strings/credit_card_validator.pyeo WYeo WY\ uT1(xN-strings/detecting_english_programmatically.pyenfeenfe\ f<u9v*@dSڪstrings/dictionary.txtenfeenfe\ g+3?A$c9'S~[9strings/dna.pyeo WYeo WY\ az7q܄PFtmstrings/frequency_finder.pyenfeenfe\ j#I*a[d8strings/hamming_distance.pyenfeenfe\ kc{"mg?ZG"%!strings/indian_phone_validator.pyenfeenfe\ lmֶi+O 3h#strings/is_contains_unique_chars.pyenfeenfe\ mh٬-ExuSo~istrings/is_isogram.pyenfeenfe\ n ȸ1Pt6strings/is_pangram.pyenfeenfe\ p V`n:ff{&.vstrings/is_spain_national_id.pyenfeenfe\ tldV^=锈>˵<ʗGR$strings/is_srilankan_phone_number.pyeZZeZZ\ SM:׾8(Z>strings/jaro_winkler.pyeo`hA.eo`hA.\ 20sVwZ֍}&Qstrings/join.pyeq%!#eq%!#\ Tq;5TqgV-tstrings/knuth_morris_pratt.pyenfeenfe\ {MÛes7by#strings/levenshtein_distance.pyepNepN\ t<뫑ȁ1QCVsstrings/lower.pyenfeenfe\ Ō|Ddbյw~strings/manacher.pyenfeenfe\ E pMJi!p_%strings/min_cost_string_conversion.pyenfeenfe\ 1Yr iN&:4strings/naive_string_search.pyenfeenfe\ wq JG2 #Ie}strings/ngram.pyenfeenfe\ x =z#_R절Hstrings/palindrome.pyenfeenfe\ Ye5 !{O]|}7$strings/prefix_function.pyeqK9߬OeqK9߬O\  ,av =4?'strings/rabin_karp.pyeZZeZZ\ `Z'Ro g=Me=SMstrings/remove_duplicate.pyeq%!#eq%!#\ \*|%[\W/Ustrings/reverse_letters.pyeq%!#eq%!#\ .]9Q?@v&Mjstrings/reverse_long_words.pyeZZeZZ\ bVPL \a~Gstrings/reverse_words.pyenfeenfe\ {U3zc/, h+N|*strings/snake_case_to_camel_pascal_case.pyeZZeZZ\ db+@25Ep$3t:strings/split.pyeo WYeo WY\  =G-q:i3i-strings/string_switch_case.pyenfeenfe\  #$9ӯ7)ystrings/text_justification.pyenfeenfe\  [\Rk#&DOSez}-strings/top_k_frequent_words.pyepNepN\ V^@`ަwbstrings/upper.pyenfeenfe\ i4C$ _#|V5Zq strings/wave.pyenfeenfe\  4̩q?]'erJ$strings/wildcard_pattern_matching.pyenfeenfe\ 0fZqekhhJGstrings/word_occurrence.pyeo`hA.eo`hA.\ -&~{5܉K97 Cstrings/word_patterns.pyenfHwenfHw\ & KWcd:}˫v0strings/words.txtenfenf\  {ڵ6V e[Estrings/z_function.pyeZXeZX\ r⛲CK)wZSweb_programming/__init__.pyeZXeZX\ s̗~~Ag^ RVweb_programming/co2_emission.pyeqK9߬OeqK9߬O\ ] | 3ݤY*web_programming/convert_number_to_words.pyeqZ3eqZ3\ C;] |;0Fn#(web_programming/covid_stats_via_xpath.pyenfenf\ ^m1++խ)[$ PIa'web_programming/crawl_google_results.pyenfenf\ #*= 잺SfM1`0web_programming/crawl_google_scholar_citation.pyenfenf\ ;ZL3b#%web_programming/currency_converter.pyeqZ3eqZ3\ DNQٔ]v!n'٧&web_programming/current_stock_price.pyeo WYeo WY\ )>ȩZ IBbFC"web_programming/current_weather.pyeZXeZX\ zY$NpI* ӕ_Z"web_programming/daily_horoscope.pyenfenf\  DGEIuTG$0?E]4web_programming/download_images_from_google_query.pyenfenf\  Nxשd!^ͩL]"web_programming/emails_from_url.pyeqK9߬OeqK9߬O\ `r;ݍ,qfD{tt.web_programming/fetch_anime_and_play.py.BROKENeZXeZX\ ~,{i2ޡJU߷>!web_programming/fetch_bbc_news.pyenfenf\ DЪN{cIӹ^>)aQ$web_programming/fetch_github_info.pyenfenf\ yZ 9']6E.n0web_programming/fetch_jobs.pyenfenf\ jW^tzZ0web_programming/fetch_quotes.pyenfenf\  QrX X&web_programming/fetch_well_rx_price.pyeqK9߬OeqK9߬O\ k ǖy?"y+A9*web_programming/get_amazon_product_data.pyeZXeZX\ KnV)Mw5b .web_programming/get_imdb_top_250_movies_csv.pyenlaenla\ _q29xvRK$6bZweb_programming/get_imdbtop.pyeqK9߬OeqK9߬O\ |jTbpl)Emʢ0web_programming/get_top_billionaires.py.disabledenfenf\ hQJFj-p#web_programming/get_top_hn_posts.pyenfenf\ :iqW',B@?"web_programming/get_user_tweets.pyenfenf\ 3I>f8;wjweb_programming/giphy.pyenfenf\ $Q&ãG/A/8)$web_programming/instagram_crawler.pyeo WYeo WY\ !gM}fi(y(! web_programming/instagram_pic.pyeZXeZX\ /$<PQ=1z-/x"web_programming/instagram_video.pyenfenf\ mѧoNsKweb_programming/nasa_data.pyenfenf\ 6f~Deu?Ix&web_programming/open_google_results.pyenfenf\  Z-8m))web_programming/random_anime_character.pyeZXeZX\  /G/*-`{~lZ B)web_programming/recaptcha_verification.pyeq%!#eq%!#\ 9\(9Ş{bZs+Aweb_programming/reddit.pyenfenf\  )/ ʲfiа;0{['web_programming/search_books_by_isbn.pyenfenf\ ^ֶLu)V4^w$9p web_programming/slack_message.pyeZXeZX\ [-|x-T}wY)web_programming/test_fetch_github_info.pyenfenf\ ʁLvU=(Ipf&web_programming/world_covid19_stats.pyTREE1293 47 Դ ,Ԋk`Tmaths160 3 ,Pa&Vzd7images2 0 )|A(U!series8 0 Ψ `_AZ:{ gpolynomials2 0 BP'%A sBother25 0 Q-tG>8'Osorts54 0 mɢqGZZgσgraphs61 1 KHڰFAo|tests3 0 ,jw;Va$$AtǸBhashes13 0 mZܫZtGIJomatrix20 1 HjRΟCP'P*_tests3 0 t&G+צY*.github11 2 Pk^/-+workflows4 0 ^1iZISSUE_TEMPLATE4 0 ,"VGy"vf 6.vscode1 0 6 ,kHrޖyciphers45 0 JJtjWgeodesy3 0 t6 %Ophysics20 0 `*bX*3+ quantum14 0 [Q;C UEΩdscripts5 0 <'2q@ 6ς*4Ostrings51 0 b\ *WB fractals4 0 WkW( ҽL|Jgraphics3 0 +e+d܅EVknapsack8 1 -;Ĺ'{Qk__tests3 0 -v>g 0searches17 0 +툽#>WZhfinancial6 0 F"X H{h*Qblockchain5 0 Gn ),ȥdscheduling8 0 z}[b[ ޶"*kcompression16 1 xw?9D镯 O>oimage_data7 0 broe\x-.`r`conversions27 0 g8hƚ;ُMelectronics13 0 {q>!> 3fuzzy_logic2 0 (hFNXxbdbacktracking18 0 6Aoj ;ą.devcontainer2 0 7qhg)?yl$`audio_filters7 0 WraNn9afile_transfer6 1 "gzjkx_tests2 0 ָ)0<kཫproject_euler317 127 SS&TMproblem_0018 0 ;|znLWooproblem_0026 0 K'3Ϋ:U(])fproblem_0034 0 )e™7e0{~w$problem_0043 0 L 7քu @problem_0053 0 ,}RrX3Ԥ 9xOproblem_0065 0 Hs7{`1 Iwproblem_0074 0 V٧cէlOVproblem_0084 0 (xFpbHd`problem_0094 0 L3tn.*gW?Z;problem_0104 0 _)· +"0problem_0114 0 m_~ʼnt=6+2problem_0123 0 Y)>ň;5problem_0133 0 #3yyXkNproblem_0143 0 (D74qͧproblem_0152 0 K] CQqnproblem_0163 0 ë}ޤmkLproblem_0172 0 y=,y)шOt" Hbproblem_0183 0 bc6m'US:#problem_0192 0 tg6~|bg16hproblem_0205 0 h(p Ost 3_problem_0212 0 y *,9problem_0224 0 sK[媘M problem_0232 0 -\ƆgRKJproblem_0242 0 t S'Kj"1Rȥ problem_0254 0 R)<4=PƼb3V~ problem_0262 0 (0&`^ YbV problem_0272 0 'Լ϶C_j.o_Dproblem_0282 0 )mOqo &4problem_0292 0 e="ߠ?وzproblem_0302 0 t (SU) 0-wproblem_0313 0 cʸ"@+%problem_0322 0 P0ʤ>u^{nG=problem_0332 0 JyfJ)-L4VZpproblem_0342 0 ؊ ?8udrdGb)problem_0352 0 hr.A%3C"{problem_0362 0 {(CVF\n[<t`problem_0372 0 4"|J S$problem_0382 0 J i3P*Lproblem_0392 0 -} -/ny]problem_0402 0 P *uСXA?(problem_0412 0 9%44މ[problem_0423 0 $Cz\Pɝ!8(problem_0432 0 zuޕHsۖ$problem_0442 0 ~g| $Diproblem_0452 0 qj=lQdsproblem_0462 0 ] 'cP^${ў̤axproblem_0472 0 L ifJ%Q@{Eproblem_0482 0 4x)u9*Âproblem_0492 0 uo^LX6{{h. Rproblem_0502 0 rbtR~problem_0512 0 V Q[T*5*^XV7vproblem_0522 0 ~Hpproblem_0532 0 \vET-" E^problem_0544 0 XNgK eproblem_0552 0 Ggmd~GGw,B_problem_0562 0 접"5ZBihܵ8)problem_0572 0 | K^κV-problem_0582 0 uɢS]ۖd5uproblem_0594 0 &w T", {73}problem_0622 0 yDc}tZIke<iproblem_0632 0 XzZNmXproblem_0642 0 rsݙmproblem_0652 0 nhΛH)OM%problem_0674 0 /8VVv3"L=nN7problem_0682 0 F:X4|wnot+n3\problem_0692 0 H ʫm)wproblem_0702 0 @R8 qMei~TEproblem_0712 0 h."y۾Nf;awproblem_0723 0 C9m|X>U~#dJTϒproblem_0732 0 >=&XXN604=nproblem_0743 0 d&&rc~e.[ƭproblem_0752 0 ~ߕZ/Mproblem_0762 0 v= txZْqproblem_0772 0 DaC!TPzBproblem_0782 0 ncp>(דfDproblem_0794 0 ȘsA62~4BΓlproblem_0802 0 IUb²5problem_0813 0 _g.48c^\$problem_0824 0 ˗/@U(%problem_0852 0 0du UwZproblem_0862 0 N[Txy |d'aproblem_0872 0 WRg\:3iproblem_0894 0 g|P xqGproblem_0912 0 h3c=6}LӞbproblem_0922 0 zi(ŔIgV1Niproblem_0942 0 W5w%8Pޫrproblem_0972 0 (ڎ<m TY]<2 problem_0993 0 J=sq\o-q,problem_1002 0 =wo{ћЙF9_problem_1012 0 ׋@}5tproblem_1024 0 COqdD#_fproblem_1042 0 ֕f9tV y[0Tproblem_1074 0 m~ ЯN*8Jeproblem_1092 0 pEuG=_k9problem_1122 0 @GcBǸrƐ@Cproblem_1132 0 wpJeOIZSD:\kproblem_1142 0 4M} Gs)rR6cproblem_1152 0 ۦf3#aA2݉problem_1162 0 TJ٥$ʂ$(-problem_1172 0 |&TEic ,wLnproblem_1192 0 |{/hck$.J+J>problem_1202 0 2)X')0c6problem_1212 0 \J$@LVproblem_1232 0 [J_xpMlKproblem_1252 0 Z͡^/%sLS#problem_1292 0 f//]@V76problem_1312 0 lkPdPOproblem_1352 0 {\By/Iproblem_1442 0 K9uX!pP񅘍problem_1452 0 v(3Be޳,ؘ0֍problem_1732 0 !$>,ppRpproblem_1742 0 z0Y$A۾RtV8problem_1802 0 p ! yz^Fproblem_1872 0 b@Q=4problem_1882 0 _\Z% $ZAtFgproblem_1912 0 jMd,ѡHWproblem_2032 0 .-fm^*E\)problem_2052 0  n(θh[cKproblem_2062 0 m* ptP[problem_2072 0 U4E9ɯJ{)problem_2342 0 %KpTgO#<problem_3012 0 +ϠmvS6problem_4932 0 H-b[،>/ҧ28problem_5512 0 ~4˜ ]ihÍ~problem_5872 0 6Hn>"٣8s4problem_6862 0 P|j蛗ieðproblem_8002 0 '+0oM:՝greedy_methods4 0 6 RuY<&linear_algebra12 1 "1zh`ugsrc10 0 b$ Kg䯰,&:P$neural_network9 1 )Ul:<af`activation_functions1 0 'ޱqI0\ubsk|@boolean_algebra10 0 w\Ύ۹bj?,T.Ŏcomputer_vision9 0 ^0U#}OkSG Y.fdata_structures90 9 L/|;domd R'*heap8 0 U?8cb~7m#trie3 0 0r"V\Kä0aOBsqueue9 0 /^"`.uZ!Ĝharrays3 0 U7G?s-ûw stacks13 0 _-<341,rFhashing10 2 ~oY T1tests1 0 oFU5@嗯Bnumber_theory2 0 wϔ61roF6@binary_tree26 0 ,3z2*5o̥Mlinked_list14 0 =>54E ; l/tdisjoint_set3 0 K^ꆤ$|50]Unetworking_flow3 0 }zF$PY^Sweb_programming37 0 lpұ;iTbit_manipulation19 0 kVlV iܼȳbmachine_learning35 3 B_$~fڃdmBlstm3 0 B-"3Gn 7'forecasting3 0 &km~$ps<local_weighted_learning3 0 xtjוTJC#cellular_automata6 0 G6S d Ͱbgenetic_algorithm2 0 B~[ػ2Miˈdivide_and_conquer13 0 L[Ac23Wo$?linear_programming1 0 oh'h' -arithmetic_analysis16 1 *haj7>image_data3 0 'PZbwұ-_dynamic_programming41 0 Sb~digital_image_processing34 8 %0:^`|XdyVresize2 0 -$?T =JQJifilters8 0 ?bcq:+rotation2 0 [Սf^&Ԯdithering2 0 H?ejOB pimage_data3 0 Lu(X,aӕm=edge_detection2 0 5 M I$^histogram_equalization6 2 _O/!;G<?DLimage_data2 0 ‰דM>+V\output_data2 0 yMmAmorphological_operations2 0 Eo.N|V{\E L`dwXRyR̳
DIRC eq$%eq$%\L4|fvJU2?[.devcontainer/Dockerfileeq$$eq$$\OCŨUU $c&EzI.devcontainer/devcontainer.jsoneZ3deZ3d\G jERrgk k#.gitattributeseq$$eq$$\I'nImZ`UU.github/CODEOWNERSene;åene;å\\5L͵,$4O7zj%.github/ISSUE_TEMPLATE/bug_report.ymlene;åene;å\cb8| [;{] !.github/ISSUE_TEMPLATE/config.ymlep<ep<\e Y>3UH 7*.github/ISSUE_TEMPLATE/feature_request.ymlene;åene;å\sDT(W%[6 .github/ISSUE_TEMPLATE/other.ymleo`hA.eo`hA.\KV8Ҹ=uԔ ~ U .github/pull_request_template.mdene;åene;å\P 9# rJGW.github/stale.ymleq$$eq$$\E6szIt \`CX.github/workflows/build.ymleqK9OeqK9O\S3bͣOZ&.github/workflows/directory_writer.ymleqK9OeqK9O\TF 8!5Bj?yI\ms #.github/workflows/project_euler.ymleq$$eq$$\R-[s'sb(.github/workflows/ruff.ymlene;åene;å\V׺ꄸn{,) }.ׯI .gitignoreeZ3deZ3d\W4Yu郍nO8 E .gitpod.ymleqRr+eqRr+\5gb;+B\v^.pre-commit-config.yamlene;åene;å\W,q k.vscode/settings.jsoneq$$eq$$\D,(JRs4<fMP}5I7CONTRIBUTING.mdeqZ3eqZ3\F5aClyP;DƢ) DIRECTORY.mdene;åene;å\]D(.*:+F,T. LICENSE.mdene;åene;å\N x5b9n README.mdeo Peo P\E2Se&ηbarithmetic_analysis/README.mdeo Peo P\⛲CK)wZSarithmetic_analysis/__init__.pyeo Peo P\?YrO+ੇL2n arithmetic_analysis/bisection.pyeqK9OeqK9O\  70u+-(:ߵ9+arithmetic_analysis/gaussian_elimination.pyeo Peo P\倈ddûr^7 .arithmetic_analysis/image_data/2D_problems.jpgeo Peo P\ jE6  =PC20arithmetic_analysis/image_data/2D_problems_1.jpgeo Peo P\ ⛲CK)wZS*arithmetic_analysis/image_data/__init__.pyeo Peo P\  zJ^dU#kXL,arithmetic_analysis/in_static_equilibrium.pyeo Peo P\ -l K?k, #arithmetic_analysis/intersection.pyeqK9OeqK9O\zK9&,޺O8.arithmetic_analysis/jacobi_iteration_method.pyeo`hA.eo`hA.\TI;+I.v8'arithmetic_analysis/lu_decomposition.pyeo Peo P\ FolYw$Qg(ȷp^3arithmetic_analysis/newton_forward_interpolation.pyeo Peo P\ fQ'&I3N#$arithmetic_analysis/newton_method.pyeo Peo P\ i,Aw3gL cn[/%arithmetic_analysis/newton_raphson.pyeo Peo P\ r AopaUsYwݮ)arithmetic_analysis/newton_raphson_new.pyeqK9OeqK9O\AҊF m@й`$arithmetic_analysis/secant_method.pyene;åene;å\D?=2#bXaudio_filters/README.mdene;åene;å\⛲CK)wZSaudio_filters/__init__.pyene;åene;å\zh6ΞW./#audio_filters/butterworth_filter.pyene;åene;å\˨S<,vQ*"T1audio_filters/equal_loudness_filter.py.broken.txtene;åene;å\ sC[tM ~Eaudio_filters/iir_filter.pyene;åene;å\,jz=p]CCJY|!audio_filters/loudness_curve.jsonene;åene;å\ {Rva2B+caudio_filters/show_response.pyenf/enf/\vԗ]Z1]Wd Ibacktracking/README.mdeZ3deZ3d\i⛲CK)wZSbacktracking/__init__.pyeo`hA.eo`hA.\jý(sȽPSqk backtracking/all_combinations.pyenf/enf/\kS0lуEDUq backtracking/all_permutations.pyenf/enf/\leT$N*cY [Y}:Q backtracking/all_subsequences.pyenf/enf/\m Sğsl /)backtracking/coloring.pyeq$$eq$$\wUQЇbx_@U{backtracking/combination_sum.pyeqK9OeqK9O\qJAV 2fl(2 d^!backtracking/hamiltonian_cycle.pyenlaenla\r ̈0{uzGh` Ebacktracking/knight_tour.pyeo Peo P\tUn11i]j KLbacktracking/minimax.pyeq$$eq$$\,<79O.&+~qbacktracking/minmax.pyeo Peo P\u ;D6>.^#qUbacktracking/n_queens.pyenf/enf/\vH󰊰_3 ^_Dbacktracking/n_queens_math.pyepEepE\ tBp R-A՜backtracking/power_sum.pyeq$$eq$$\x {ވmXfӞHSm*Nbacktracking/rat_in_maze.pyeqK9OeqK9O\y}i!%BanNy Dbacktracking/sudoku.pyenf/enf/\z 3! ;%(V)backtracking/sum_of_subsets.pyepEepE\B +:2uPfͺXS+backtracking/word_search.pyenf/enf/\`?^뎿\%1uMnbit_manipulation/README.mdeZ<ceZ<c\~⛲CK)wZSbit_manipulation/__init__.pyenf/enf/\b6hٳO!%ԉbw'bit_manipulation/binary_and_operator.pyeZ<ceZ<c\V<iE3M f~-1t(bit_manipulation/binary_count_setbits.pyeZ<ceZ<c\īfD9BHƊ/bit_manipulation/binary_count_trailing_zeros.pyenf/enf/\Nu ?0a&bit_manipulation/binary_or_operator.pyenf/enf/\  KP'/&|m+1}!bit_manipulation/binary_shifts.pyenf/enf/\&a,NB׌ޓXGd*bit_manipulation/binary_twos_complement.pyenf/enf/\b lA6Za[ 'bit_manipulation/binary_xor_operator.pyenf/enf/\>. u}v2Fh%.o3bit_manipulation/count_1s_brian_kernighan_method.pyeo`hA.eo`hA.\ hu'\A69'{,bit_manipulation/count_number_of_one_bits.pyenf/enf/\O cexؗT[yeR#FX.&bit_manipulation/gray_code_sequence.pyenf/enf/\\R!-˔/p.dc=œ#bit_manipulation/highest_set_bit.pyenf/enf/\bf $&i.bit_manipulation/index_of_rightmost_set_bit.pyenlaenla\mBo5.M sbit_manipulation/is_even.pyenf/enf/\n>Q}w6.=Ki#bit_manipulation/is_power_of_two.pyenf/enf/\Xϋm*}\DYj3nd+bit_manipulation/numbers_different_signs.pyeqK9OeqK9O\ +|ݷ#Zw> bit_manipulation/reverse_bits.pyenf/enf/\ ?{wo_ҵMz6bit_manipulation/single_bit_manipulation_operations.pyenf/enf/\\nu/g-z-blockchain/README.mdeZ<ceZ<c\⛲CK)wZSblockchain/__init__.pyeq$$eq$$\)^w*؜†v ]'blockchain/chinese_remainder_theorem.pyeq$$eq$$\ "\cߔ# ["blockchain/diophantine_equation.pyeq$$eq$$\H s\['OGNRO.wblockchain/modular_division.pyenf/enf/\E__tE[pgVboolean_algebra/README.mdeZ<ceZ<c\⛲CK)wZSboolean_algebra/__init__.pyepEepE\{Aw.秎>4wfOboolean_algebra/and_gate.pyepEepE\3k%*׼WuNboolean_algebra/nand_gate.pyeqK9OeqK9O\ %,' 'A“mboolean_algebra/norgate.pyepEepE\TLa$x.vɧhboolean_algebra/not_gate.pyepEepE\~fE?zGgboolean_algebra/or_gate.pyeqK9OeqK9O\{Ag߲(IT^X7#boolean_algebra/quine_mc_cluskey.pyepEepE\E'54lQZuboolean_algebra/xnor_gate.pyepEepE\LO[E QXph忒}boolean_algebra/xor_gate.pyenf/enf/\h3l/E}=zcellular_automata/README.mdeZ<ceZ<c\⛲CK)wZScellular_automata/__init__.pyenf/enf/\ zվ@!Qvд{.V)cellular_automata/conways_game_of_life.pyenf/enf/\ 1 ֑:!XvT.E!cellular_automata/game_of_life.pyenf/enf/\ I6?֯S_C$y'be'cellular_automata/nagel_schrekenberg.pyenf/enf/\ @wDP/p!>+( a$cellular_automata/one_dimensional.pyenf/enf/\ U/ O8E\O8."fciphers/README.mdeZ<ceZ<c\⛲CK)wZSciphers/__init__.pyenf/enf/\Maؓ]3]'* ciphers/a1z26.pyeq$$eq$$\ A3%pT!`"$R)ciphers/affine_cipher.pyenf/enf/\ ј8WrSciphers/atbash.pyenf/enf/\ V{pqA.ciphers/autokey.pyenf/enf/\ WXF5zfٷB7ciphers/baconian_cipher.pyenf/enf/\_ l(Fa6&]ciphers/base16.pyeq$$eq$$\<ĕ.Z)4~ciphers/base32.pyenf/enf/\ [+ })>)< jciphers/base64.pyeq$$eq$$\دѯ]rMTKH{$ciphers/base85.pyenf/enf/\x+\"qh\ciphers/beaufort_cipher.pyenf/enf/\ ^[8@RTQO)GMciphers/bifid.pyenf/enf/\E&(~AR*sZz$ciphers/brute_force_caesar_cipher.pyenf/enf/\ћ3r!NHNZAciphers/caesar_cipher.pyeq$$eq$$\o3~i%6Qciphers/cryptomath_module.pyenf/enf/\a$l6P[0FU4**ciphers/decrypt_caesar_with_chi_squared.pyenf/enf/\V!01WHCH%ciphers/deterministic_miller_rabin.pyepEepE\O 9aF"ciphers/diffie.pyenf/enf/\U/s>X@fFR ciphers/diffie_hellman.pyenf/enf/\xUiXV ciphers/elgamal_key_generator.pyenf/enf/\"z DƇC-tvciphers/enigma_machine2.pyeq$$eq$$\BN)H]F8靅<E|ciphers/hill_cipher.pyenf/enf/\ I*4 qPgciphers/mixed_keyword_cypher.pyenf/enf/\F?I6it=Vt~ KwX"ciphers/mono_alphabetic_ciphers.pyenf/enf/\ apoN!2>/Wqciphers/morse_code.pyeo`hA.eo`hA.\[K5 TLudciphers/onepad_cipher.pyeo Peo P\ ry#YkX@ciphers/playfair_cipher.pyenf/enf/\ d ; NԚ ~4ciphers/polybius.pyenf/enf/\ PI #_')ciphers/porta_cipher.pyeq$$eq$$\n$RS:5}aciphers/prehistoric_men.txtenfl enfl \ A UCS[TGW#ciphers/rabin_miller.pyenfl enfl \ \G}1i,@aBciphers/rail_fence_cipher.pyenfl enfl \Ig!Q']B҉Qhcjm1ciphers/rot13.pyenfl enfl \ZAr-)([.뮰 ciphers/rsa_cipher.pyenfl enfl \'w兙*2Sqciphers/rsa_factorization.pyeqK9OeqK9O\%%s8{]YXM@5:=ciphers/rsa_key_generator.pyenfl enfl \Rʹ|ibۻX ciphers/shuffled_shift_cipher.pyenfl enfl \ 5Gчdtn@ ciphers/simple_keyword_cypher.pyenfl enfl \M)q]eoVwB%ciphers/simple_substitution_cipher.pyeqK9OeqK9O\kRFl|-bciphers/trafid_cipher.pyenfl enfl \/}?5wI"vOIciphers/transposition_cipher.pyeq$$eq$$\pb <„@Wc^4ciphers/transposition_cipher_encrypt_decrypt_file.pyenfl enfl \aa5:ˤz.?ciphers/vigenere_cipher.pyepEepE\68_Q?)PѭS.]Fciphers/xor_cipher.pyeqeq\|Tau=> Kma>compression/README.mdeZ5EbeZ5Eb\⛲CK)wZScompression/__init__.pyenfl enfl \X-R]j-]z`compression/burrows_wheeler.pyenfl enfl \d eS3lB]d?compression/huffman.pyeZ5EbeZ5Eb\DſE |#f|,compression/image_data/PSNR-example-base.pngeZrNbeZrNb\7mwMf?G/compression/image_data/PSNR-example-comp-10.jpgeZrNbeZrNb\⛲CK)wZS"compression/image_data/__init__.pyeZrNbeZrNb\h<u!ǵy쮹V+compression/image_data/compressed_image.pngeZrNbeZrNb\u":X:3 Dי)C#(compression/image_data/example_image.jpgeZrNbeZrNb\nE>VQjHTN 2compression/image_data/example_wikipedia_image.jpgeZrNbeZrNb\G oޛjqt+4Wj)compression/image_data/original_image.pngenfl enfl \fo3JņvF ʀ#compression/lempel_ziv.pyenfl enfl \ ;*9SM8V=$compression/lempel_ziv_decompress.pyenfl enfl \ xr Y \4a'Bcompression/lz77.pyenfl enfl \>(O).2uM)compression/peak_signal_to_noise_ratio.pyenfl enfl \ z-i ]PPC9"compression/run_length_encoding.pyenfl enfl \c/J LE/X>,秞computer_vision/README.mdeZrNbeZrNb\⛲CK)wZScomputer_vision/__init__.pyeq$$eq$$\  x_2W&俊m%computer_vision/cnn_classification.pyeqRr+eqRr+\ 擴yzZ?{EA$computer_vision/flip_augmentation.pyenfl enfl \  R+ï ,@i` computer_vision/harris_corner.pyenfl enfl \ >h)L/Ă10ΰ7computer_vision/horn_schunck.pyenfl enfl \ vey3֩'w |s!computer_vision/mean_threshold.pyeq$$eq$$\ Pmk/O+l]B(&computer_vision/mosaic_augmentation.pyenfl enfl \  ˨-pؽ"""e$computer_vision/pooling_functions.pyenfl enfl \ '=(!YjEf20B6conversions/README.mdeZWaeZWa\⛲CK)wZSconversions/__init__.pyenfl enfl \  A6Dmp[L73conversions/astronomical_length_scale_conversion.pyenfl enfl \J%z]\x9\_ conversions/binary_to_decimal.pyenfl enfl \ [ REr\K $conversions/binary_to_hexadecimal.pyenfl enfl \#Jޕ+Gfuconversions/binary_to_octal.pyenfl enfl \ pqzO*sɦ̦<#Mconversions/decimal_to_any.pyeqK9OeqK9O\<Gȯg֠K^Tq$ conversions/decimal_to_binary.pyeqK9OeqK9O\  <pÀdf֕2+?G*conversions/decimal_to_binary_recursion.pyepEepE\^hc#>׈>mr.%conversions/decimal_to_hexadecimal.pyenfl enfl \L1;Li;Wn?uconversions/decimal_to_octal.pyenfl enfl \ Qk19(ixZT'~\!conversions/energy_conversions.pyenfl enfl \ p1&^H{m=;¨'$conversions/excel_title_to_column.pyenfl enfl \ ܸr\}j̓ժ'conversions/hex_to_bin.pyenfl enfl \ Jh dxE%conversions/hexadecimal_to_decimal.pyeqZ3eqZ3\ Q%^=.i`Z conversions/length_conversion.pyenfl enfl \ }Q4 _[iW"conversions/molecular_chemistry.pyenfl enfl \ ho )sn8M/Tconversions/octal_to_decimal.pyenfl enfl \ YUA81K}:(!!conversions/prefix_conversions.pyenfl enfl \  Dg*/&JQ(conversions/prefix_conversions_string.pyeqZ3eqZ3\  r4RlUy6dy8#conversions/pressure_conversions.pyenfl enfl \ t>I T{.;c!conversions/rgb_hsv_conversion.pyenfl enfl \u*(5YTsMB[*conversions/roman_numerals.pyenfl enfl \ IqMμ4~} conversions/speed_conversions.pyenfl enfl \-al+* $c,1ȣ&conversions/temperature_conversions.pyeqZ3eqZ3\  DҐ  G?|k?3sa!conversions/volume_conversions.pyenfl enfl \(2n hX X'z conversions/weight_conversion.pyeZWaeZWa\%⛲CK)wZSdata_structures/__init__.pyeqK9OeqK9O\ 3EXFڕꊻ&data_structures/arrays/permutations.pyenfl enfl \ >"C077EUë=($data_structures/arrays/prefix_sum.pyenfl enfl \ A OiO7!m-׆B%data_structures/arrays/product_sum.pyeZWaeZWa\5⛲CK)wZS'data_structures/binary_tree/__init__.pyenfl enfl \%LzDR/=,O92U'data_structures/binary_tree/avl_tree.pyeo`hA.eo`hA.\e${Qkke#}0data_structures/binary_tree/basic_binary_tree.pyeqK9OeqK9O\!BL|fH@ba6]1data_structures/binary_tree/binary_search_tree.pyepEepE\@ٵL +7|pgƿ ;data_structures/binary_tree/binary_search_tree_recursive.pyenfl enfl \TN`Z<DӡHF N1data_structures/binary_tree/binary_tree_mirror.pyenfl enfl \ TZN<yUp\r3data_structures/binary_tree/binary_tree_node_sum.pyenfl enfl \ V>_|(s3data_structures/binary_tree/binary_tree_path_sum.pyeo`hA.eo`hA.\ p~'XֱiT5data_structures/binary_tree/binary_tree_traversals.mdeq$$eq$$\*v>+@ő ު&E5data_structures/binary_tree/binary_tree_traversals.pyenfl enfl \ Y1YHHuM1,o8data_structures/binary_tree/diff_views_of_binary_tree.pyeqZ3eqZ3\  Φ:Ho a/data_structures/binary_tree/distribute_coins.pyenfl enfl \":-lFyŒ+data_structures/binary_tree/fenwick_tree.pyenfl enfl \ _WR})STZ ~x = :data_structures/binary_tree/inorder_tree_traversal_2022.pyeo`hA.eo`hA.\ q. .(@X]?,%data_structures/binary_tree/is_bst.pyeqK9OeqK9O\ o/y9nj\k0data_structures/binary_tree/lazy_segment_tree.pyenfl enfl \h e7p;~mKGQ7sb5data_structures/binary_tree/lowest_common_ancestor.pyenfl enfl \ e zpsAI$bYA3data_structures/binary_tree/maximum_fenwick_tree.pyenfenf\31Ņ{p$5data_structures/binary_tree/merge_two_binary_trees.pyenfenf\\N\N~;D՛*,9data_structures/binary_tree/non_recursive_segment_tree.pyenfenf\ XhLQz{<$R3=7A>data_structures/binary_tree/number_of_possible_binary_trees.pyenfenf\ZeN|>e?o+۷-data_structures/binary_tree/red_black_tree.pyeo`hA.eo`hA.\  *_$˴m^^+data_structures/binary_tree/segment_tree.pyenfenf\/wwĕG̪$v1data_structures/binary_tree/segment_tree_other.pyenfenf\ :fT޷8ar$data_structures/binary_tree/treap.pyenfenf\ m[nFsVF+data_structures/binary_tree/wavelet_tree.pyeZWaeZWa\V⛲CK)wZS(data_structures/disjoint_set/__init__.pyeZWaeZWa\WQ3[ ްI kdO6data_structures/disjoint_set/alternate_disjoint_set.pyenfenf\-^ݽN ,data_structures/disjoint_set/disjoint_set.pyeZWaeZWa\Z⛲CK)wZS#data_structures/hashing/__init__.pyenfenf\ n И[3JHkS'M'data_structures/hashing/bloom_filter.pyeo Peo P\[i!LIb_3l_&data_structures/hashing/double_hash.pyeo WYeo WY\ o"Ȼ_ê |M)T#data_structures/hashing/hash_map.pyeo WYeo WY\  |?oI kט*%data_structures/hashing/hash_table.pyenfenf\_N%F<m k0;6data_structures/hashing/hash_table_with_linked_list.pyeZWaeZWa\a⛲CK)wZS1data_structures/hashing/number_theory/__init__.pyenfenf\ %o$fX)YCc6data_structures/hashing/number_theory/prime_numbers.pyeo WYeo WY\c 04 4F ozc {R,data_structures/hashing/quadratic_probing.pyepEepE\ /g1Qk&'*.data_structures/hashing/tests/test_hash_map.pyeZWaeZWa\g⛲CK)wZS data_structures/heap/__init__.pyenfenf\ 1Y ҇#`/3 <%data_structures/heap/binomial_heap.pyeo WYeo WY\O4y/wWdata_structures/heap/heap.pyenfenf\%FϜOJg$data_structures/heap/heap_generic.pyenfenf\ }В&ې'? data_structures/heap/max_heap.pyenfenf\챇d<DNTwJ data_structures/heap/min_heap.pyenlaenla\AC@ګ/c-qX 'data_structures/heap/randomized_heap.pyenfenf\J;'j7=;A `y[!data_structures/heap/skew_heap.pyeqK9OeqK9O\VDdR+\l'data_structures/linked_list/__init__.pyeqK9OeqK9O\2]a74O-a,-;33data_structures/linked_list/circular_linked_list.pyenfenf\+p#ĀsYZH+data_structures/linked_list/deque_doubly.pyeqK9OeqK9O\HlHN$CI Q;=1data_structures/linked_list/doubly_linked_list.pyenfenf\Z ,gvUu,M_5data_structures/linked_list/doubly_linked_list_two.pyeZWaeZWa\vДOd4%uȷ,data_structures/linked_list/from_sequence.pyenfenf\Pcz~;'data_structures/linked_list/has_loop.pyeqK9OeqK9O\*xd$̘,data_structures/linked_list/is_palindrome.pyenfenf\ ;@w,)e.data_structures/linked_list/merge_two_lists.pyenfenf\ִs/%3B<data_structures/linked_list/middle_element_of_linked_list.pyeq$$eq$$\ =V[%@,data_structures/linked_list/print_reverse.pyeqK9OeqK9O\;Ή!ɴ7~rnWR1data_structures/linked_list/singly_linked_list.pyenfenf\18D>RP<Z3&-U(data_structures/linked_list/skip_list.pyeq$$eq$$\&?WVGP!l `d)data_structures/linked_list/swap_nodes.pyeZWaeZWa\⛲CK)wZS!data_structures/queue/__init__.pyeZWaeZWa\ \|)bj~",0'data_structures/queue/circular_queue.pyenfenf\ vb,KΖ {{ ǛP"63data_structures/queue/circular_queue_linked_list.pyeq$$eq$$\2D܆;N+ 5~Cf+data_structures/queue/double_ended_queue.pyenfenf\}:}(5O@aDB%data_structures/queue/linked_queue.pyenfenf\E^fMyf;2data_structures/queue/priority_queue_using_list.pyenfenf\ w $KЎ|MU!e@ &data_structures/queue/queue_by_list.pyenfenf\ x }bU;*9U!u,data_structures/queue/queue_by_two_stacks.pyenfenf\! لQٿP]~'U.data_structures/queue/queue_on_pseudo_stack.pyeZWaeZWa\⛲CK)wZS"data_structures/stacks/__init__.pyenfenf\/<l"\7K.data_structures/stacks/balanced_parentheses.pyenfenf\ RlS1$pqIT:7data_structures/stacks/dijkstras_two_stack_algorithm.pyeqK9OeqK9O\ Q5;r2H*,+4data_structures/stacks/evaluate_postfix_notations.pyenfenf\ 7O_p~;Ck5data_structures/stacks/infix_to_postfix_conversion.pyepEepE\ .om]W \hj0-ѳ4data_structures/stacks/infix_to_prefix_conversion.pyenfenf\} }v}ũ...data_structures/stacks/next_greater_element.pyeqK9OeqK9O\-(Pҋ<`A"G,data_structures/stacks/postfix_evaluation.pyenfenf\d#׵D]}l톍r%m+data_structures/stacks/prefix_evaluation.pyenlaenla\ 'OFHh{Y{Y.YnBdata_structures/stacks/stack.pyenfenf\ { P#n<>B捱#R7data_structures/stacks/stack_with_doubly_linked_list.pyenfenf\ }1΃c*,S7data_structures/stacks/stack_with_singly_linked_list.pyeqK9OeqK9O\86B<fNH$%E,data_structures/stacks/stock_span_problem.pyeZ``eZ``\⛲CK)wZS data_structures/trie/__init__.pyenfenf\ ~=PIhV[;0R"data_structures/trie/radix_tree.pyenfenf\*F:I,QlXP2data_structures/trie/trie.pyeZ``eZ``\⛲CK)wZS$digital_image_processing/__init__.pyeZ``eZ``\I?9nF!Pc=-digital_image_processing/change_brightness.pyenfenf\B~IiG`WgrJo3+digital_image_processing/change_contrast.pyeZ``eZ``\}A8<(ʬ)-(E3/digital_image_processing/convert_to_negative.pyeZ``eZ``\⛲CK)wZS.digital_image_processing/dithering/__init__.pyenfenf\:455<b|!,digital_image_processing/dithering/burkes.pyeZ``eZ``\⛲CK)wZS3digital_image_processing/edge_detection/__init__.pyenfenf\<"8t'SWN͇\0digital_image_processing/edge_detection/canny.pyeZ``eZ``\⛲CK)wZS,digital_image_processing/filters/__init__.pyeqK9OeqK9O\@ HV]?k^ Ok~Db?4digital_image_processing/filters/bilateral_filter.pyeqK9OeqK9O\c) <cHfV ,digital_image_processing/filters/convolve.pyenfenf\  <Zyߣo$4TˁWG0digital_image_processing/filters/gabor_filter.pyeZ``eZ``\އge<q=={_־3digital_image_processing/filters/gaussian_filter.pyeqK9OeqK9O\  UW na+W5Qw8digital_image_processing/filters/local_binary_pattern.pyeZ``eZ``\@VbfmepӶbgy1digital_image_processing/filters/median_filter.pyeZ``eZ``\3(J2$t䈷y[0digital_image_processing/filters/sobel_filter.pyeZ``eZ``\⛲CK)wZS;digital_image_processing/histogram_equalization/__init__.pyenfenf\Sj^w>2@Y$W-LDdigital_image_processing/histogram_equalization/histogram_stretch.pyeZ``eZ``\⛲CK)wZSFdigital_image_processing/histogram_equalization/image_data/__init__.pyeZ``eZ``\H=F$YP)wDdigital_image_processing/histogram_equalization/image_data/input.jpgeZ``eZ``\⛲CK)wZSGdigital_image_processing/histogram_equalization/output_data/__init__.pyeZ``eZ``\οH } W7OUcFdigital_image_processing/histogram_equalization/output_data/output.jpgeZ``eZ``\⛲CK)wZS/digital_image_processing/image_data/__init__.pyeZ``eZ``\vN孝JSxs,digital_image_processing/image_data/lena.jpgeZ``eZ``\;QD\yœ$2-2digital_image_processing/image_data/lena_small.jpgenfenf\Lgh}Oub-digital_image_processing/index_calculation.pyenfenf\  A䛕\͗0\8 7vGdigital_image_processing/morphological_operations/dilation_operation.pyeqRr+eqRr+\ r i}p2.IIGFdigital_image_processing/morphological_operations/erosion_operation.pyeZ``eZ``\⛲CK)wZS+digital_image_processing/resize/__init__.pyeZ``eZ``\H6RX?GPt)digital_image_processing/resize/resize.pyeZ``eZ``\⛲CK)wZS-digital_image_processing/rotation/__init__.pyeqRr+eqRr+\aS$Z ?L-digital_image_processing/rotation/rotation.pyenfenf\C,mi3׎4M!digital_image_processing/sepia.pyeqK9OeqK9O\' ${UѝKed|9digital_image_processing/test_digital_image_processing.pyeZ``eZ``\⛲CK)wZSdivide_and_conquer/__init__.pyeZ``eZ``\ y{$"1 ,divide_and_conquer/closest_pair_of_points.pyenfenf\o?v,@ !divide_and_conquer/convex_hull.pyeZ``eZ``\0fAlNsk<#%divide_and_conquer/heaps_algorithm.pyeZ``eZ``\LMA96!=:-/divide_and_conquer/heaps_algorithm_iterative.pyenfenf\5y+|k divide_and_conquer/inversions.pyenfenf\fjѣRuUCwCǺ)divide_and_conquer/kth_order_statistic.pyenfenf\NķjqTL)W|\)divide_and_conquer/max_difference_pair.pyenfenf\  !L1t$ީ$R;"divide_and_conquer/max_subarray.pyenfenf\ btB+\U divide_and_conquer/mergesort.pyenfenf\()W_EEARa/divide_and_conquer/peak.pyeZ``eZ``\#6$g_ԭ tw'divide_and_conquer/power.pyeqK9߬OeqK9߬O\2~VU1b Bzw4divide_and_conquer/strassen_matrix_multiplication.pyeZ``eZ``\⛲CK)wZSdynamic_programming/__init__.pyeZ``eZ``\Qu/ ھz.~#dynamic_programming/abbreviation.pyenfenf\ nS˱p7A$dynamic_programming/all_construct.pyenfenf\s 1V]gy~[37dynamic_programming/bitmask.pyenfenf\  {tv=CՒb١^YN&dynamic_programming/catalan_numbers.pyenfenf\Z'=_vZ{c@&dynamic_programming/climbing_stairs.pyenfenf\  Od t |)dynamic_programming/combination_sum_iv.pyenfenf\ kwJG2ns#,$dynamic_programming/edit_distance.pyeZ``eZ``\IZw´]PMD dynamic_programming/factorial.pyeZ``eZ``\`􁆣L%0J^FRNy%dynamic_programming/fast_fibonacci.pyenfenf\GVI: EO!} dynamic_programming/fibonacci.pyenfenf\ Cz>QdK_ji dynamic_programming/fizz_buzz.pyeqK9߬OeqK9߬O\taJ<r0t Ū Y%dynamic_programming/floyd_warshall.pyepEepE\1BԽ4hqň)e(dynamic_programming/integer_partition.pyenfenf\M %4ߕg30n=1dynamic_programming/iterating_through_submasks.pyeq$$eq$$\ 4m?o ˹.vMNJAqZ4dynamic_programming/k_means_clustering_tensorflow.pyenfenf\QHZE xP!i3761'dynamic_programming/knapsack.pyenfenf\Ais. bghd1dynamic_programming/longest_common_subsequence.pyenfenf\ D6|Saia_/dynamic_programming/longest_common_substring.pyenfenf\&'7cS%PxA/!5dynamic_programming/longest_increasing_subsequence.pyenfenf\V^) :;ep>dynamic_programming/longest_increasing_subsequence_o(nlogn).pyeo`hA.eo`hA.\ wf2D(d7)1d(dynamic_programming/longest_sub_array.pyenfenf\o̕\{_;Y)dynamic_programming/matrix_chain_order.pyenfenf\o#>m eQ=+dynamic_programming/max_non_adjacent_sum.pyenfenf\ @BXY'7 qd+dynamic_programming/max_product_subarray.pyenfenf\ iCG+Žc>z,|'dynamic_programming/max_subarray_sum.pyeq$$eq$$\ _HpD_W D-E*L-l-dynamic_programming/min_distance_up_bottom.pyenfenf\DTӹTj+xD*dynamic_programming/minimum_coin_change.pyeZ``eZ``\:KU(-4LN#JC(dynamic_programming/minimum_cost_path.pyeq$$eq$$\4=g{`{t7(dynamic_programming/minimum_partition.pyenfenf\ 8h55褠 0dynamic_programming/minimum_size_subarray_sum.pyenfenf\ XI&tkYT:<dynamic_programming/minimum_squares_to_represent_a_number.pyepEepE\=?qQ<fB4){+dynamic_programming/minimum_steps_to_one.pyenfenf\ {ge4C$!e+dynamic_programming/minimum_tickets_cost.pyenfenf\*h^G>h1dynamic_programming/optimal_binary_search_tree.pyenfenf\ [email protected]ӝ+E~A.dynamic_programming/palindrome_partitioning.pyenfenf\R5@"(Tx$F)"dynamic_programming/rod_cutting.pyepEepE\BŁm7T1NR(dynamic_programming/subset_generation.pyenfenf\*X:K0I[lPʎ$dynamic_programming/sum_of_subset.pyenfenf\ 6:vME,)|*?c{dynamic_programming/viterbi.pyenfenf\  @Mzi .bAF{f2LsS3!dynamic_programming/word_break.pyenfenf\  ª 9X-}Jɶ-electronics/apparent_power.pyenfenf\  |8RMQŃ rLk<electronics/builtin_voltage.pyenfenf\  C ͅOɓ1ч$electronics/carrier_concentration.pyenfenf\  WBDfnf.FFg#electronics/circular_convolution.pyenfenf\  |25[electronics/coulombs_law.pyenfenf\ wR9lT$electronics/electric_conductivity.pyeqZ3eqZ3\R嗕`;:/tgI(electronics/electric_power.pyenfenf\ D=b0oM#electronics/electrical_impedance.pyenfenf\ ?wbC~WH$V<˃Helectronics/ind_reactance.pyenfenf\0f7 tWcpelectronics/ohms_law.pyenfenf\ ܺ5`Z{a=zϚ&electronics/real_and_reactive_power.pyenfenf\ 9Uֵҽ(?V#electronics/resistor_equivalence.pyenfenf\ =O;` E57g'/!electronics/resonant_frequency.pyeZ)i_eZ)i_\ ⛲CK)wZSfile_transfer/__init__.pyeZ)i_eZ)i_\ CTϧfǃ2ptF,."hnfile_transfer/mytext.txtenfenf\rI ,c@%1file_transfer/receive_file.pyenfenf\ VG^IqD&qfile_transfer/send_file.pyeZ)i_eZ)i_\ $⛲CK)wZSfile_transfer/tests/__init__.pyeZ)i_eZ)i_\ %*`Dbownl9[gI;%file_transfer/tests/test_send_file.pyenfenf\ d8WI?<!financial/ABOUT.mdenfenf\ ⛲CK)wZSfinancial/__init__.pyenfenf\ :"I0Bk; KJ )financial/equated_monthly_installments.pyenfenf\ U3.'̳4XM'Tfinancial/interest.pyenfenf\ F# i4kLd$Qfinancial/present_value.pyenfenf\ Cm5|Aj=Ytr< financial/price_plus_tax.pyenfenf\ H.@ZعC$fractals/julia_sets.pyenfenf\ Xkؘ Wufractals/koch_snowflake.pyenfenf\ ڙubB9i}fractals/mandelbrot.pyenfenf\  E7/*yfractals/sierpinski_triangle.pyeZ)i_eZ)i_\ 5⛲CK)wZSfuzzy_logic/__init__.pyeq$$eq$$\ G  g=,5}1j;fuzzy_logic/fuzzy_operations.pyeZ)i_eZ)i_\ 9⛲CK)wZSgenetic_algorithm/__init__.pyenfenf\Df\*s!genetic_algorithm/basic_string.pyeZ)i_eZ)i_\ <⛲CK)wZSgeodesy/__init__.pyenfenf\ T%wD1e̐ 5geodesy/haversine_distance.pyenfenf\ HgNQ317K~=(geodesy/lamberts_ellipsoidal_distance.pyeZ)i_eZ)i_\ B⛲CK)wZSgraphics/__init__.pyenfenf\k||"2شCht'Eqgraphics/bezier_curve.pyenfenf\ 2 ngш=vˁm$graphics/vector3_for_2d_rendering.pyeZ)i_eZ)i_\ F⛲CK)wZSgraphs/__init__.pyepEepE\6 sQy깎1R 2graphs/a_star.pyenfenf\BҀE($%,\2QRh0Jgraphs/articulation_points.pyeo`hA.eo`hA.\7 [a#]WQ).graphs/basic_graphs.pyenfenf\F Ⱥ]O#tQcF4lN>lgraphs/bellman_ford.pyeqZ3eqZ3\ Z H& 'V/[і!graphs/bi_directional_dijkstra.pyenfenf\ 7=g*YhǺV0ugraphs/bidirectional_a_star.pyenfenf\Q 0 'OߣO,graphs/bidirectional_breadth_first_search.pyenfenf\ R'YHuaSz-I0ographs/boruvka.pyenfenf\^ W8ulW>*2xgraphs/breadth_first_search.pyenfenf\ +VCD;79pC graphs/breadth_first_search_2.pyenfenf\P ԉJ̈́*ޡI,graphs/breadth_first_search_shortest_path.pyenfenf\  SB1ɕXmR.graphs/breadth_first_search_shortest_path_2.pyenfenf\ 3x|]"7#cQ5graphs/breadth_first_search_zero_one_shortest_path.pyeo`hA.eo`hA.\ =|x@#Zbj ҥ#graphs/check_bipartite_graph_bfs.pyeq$$eq$$\ bdB0DN {pz#graphs/check_bipartite_graph_dfs.pyenfenf\ ̀uVsgraphs/check_cycle.pyenfenf\ c>qKVT|wdggraphs/connected_components.pyenfenf\o P<ǂk]0graphs/depth_first_search.pyeo`hA.eo`hA.\E0r'vbSsښ~ɤL:graphs/depth_first_search_2.pyenfenf\ IyD@rq!)graphs/dijkstra.pyenfenf\HF?Z/Ye1;9graphs/dijkstra_2.pyeo WYeo WY\{E!8KDS6 1graphs/dijkstra_algorithm.pyenfenf\  ~{M!UHgraphs/dijkstra_alternate.pyenfenf\  4=42I6sW7Jgraphs/dijkstra_binary_grid.pyeZ)i_eZ)i_\ ` R\ڏfLHBKgraphs/dinic.pyenfenf\<ތE@c8kC+aI*2graphs/directed_and_undirected_(weighted)_graph.pyenfenf\\wOK+hJͨc3ߵO/graphs/edmonds_karp_multiple_source_and_sink.pyenfenf\kNfb]8graphs/eulerian_path_and_circuit_for_undirected_graph.pyenfenf\62L`v;vgraphs/even_tree.pyenfenf\9 vtZMI]fgraphs/finding_bridges.pyenfenf\  W/F*'kNv`$&graphs/frequent_pattern_graph_miner.pyeZ)i_eZ)i_\ gwT=QnLXngraphs/g_topological_sort.pyenfenf\^8wiG~n+ cgraphs/gale_shapley_bigraph.pyepEepE\ TtvOX`AI1graphs/graph_adjacency_list.pyepEepE\ W M.7t=7^B/ graphs/graph_adjacency_matrix.pyenf#&enf#&\uq5&*5X graphs/graph_list.pyeZ)i_eZ)i_\ l Vϋ8+5Igraphs/graphs_floyd_warshall.pyeqK9߬OeqK9߬O\5ʟ4Wzz)x)graphs/greedy_best_first.pyenf#&enf#&\  i=]@mU73ø!graphs/greedy_min_vertex_cover.pyenf#&enf#&\ *c됚Rn3jrgraphs/kahns_algorithm_long.pyenf#&enf#&\ 1& սUfЄV8graphs/kahns_algorithm_topo.pyenf#&enf#&\ 2 >\ m2j<j v%graphs/karger.pyenf#&enf#&\ % fY-i!mgzrB(|graphs/markov_chain.pyenf#&enf#&\ ZDi 7}#graphs/matching_min_vertex_cover.pyenf#&enf#&\ ,T]l/ʧ -Wgraphs/minimum_path_sum.pyenf#&enf#&\ +<hyHR9E$k'graphs/minimum_spanning_tree_boruvka.pyenf#&enf#&\I7s\"F7K"{('graphs/minimum_spanning_tree_kruskal.pyenf#&enf#&\ CΎnёym Bs(graphs/minimum_spanning_tree_kruskal2.pyenf#&enf#&\T$ZWMC5 5G%graphs/minimum_spanning_tree_prims.pyenf#&enf#&\#MѴq:  &graphs/minimum_spanning_tree_prims2.pyenf#&enf#&\! A'6V ugraphs/multi_heuristic_astar.pyenf#&enf#&\]Źħ*@À猈graphs/page_rank.pyenf#&enf#&\ lYi[uս)HCgraphs/prim.pyenf#&enf#&\ t~M5c'qy;~ graphs/random_graph_generator.pyenf#&enf#&\  9!d1M+uZgraphs/scc_kosaraju.pyenf#&enf#&\  2^\3)Cp1s cp'graphs/strongly_connected_components.pyeqK9߬OeqK9߬O\  \0ʊ OPym}x]}graphs/tarjans_scc.pyenf#&enf#&\ ⛲CK)wZSgraphs/tests/__init__.pyenf#&enf#&\ 0$.LButJ}O.graphs/tests/test_min_spanning_tree_kruskal.pyeq$$eq$$\e(Pxҟ ѽ+graphs/tests/test_min_spanning_tree_prim.pyeo WYeo WY\ BXm@+p+ᄇ@&Dq%greedy_methods/fractional_knapsack.pyenf#&enf#&\ wm;kOW0||%kTn'greedy_methods/fractional_knapsack_2.pyenf#&enf#&\ w \xgdFGM&greedy_methods/minimum_waiting_time.pyenf#&enf#&\ e4DYv{G&'greedy_methods/optimal_merge_pattern.pyeqeq\ X mc`S!SP.F2hashes/README.mdeZ)i_eZ)i_\ ⛲CK)wZShashes/__init__.pyenf#&enf#&\aO;sB Bhashes/adler32.pyeqK9߬OeqK9߬O\ w#48ud1);Ahashes/chaos_machine.pyenf#&enf#&\ KLcP򞕺>i hashes/djb2.pyenf#&enf#&\ ,"܃_wt hashes/elf.pyenf#&enf#&\T7,4Q73*´hashes/enigma_machine.pyeqK9߬OeqK9߬O\ M%ܓ!&aR,ј=Chashes/hamming_code.pyenf#&enf#&\ ѻwVS%Ho#jhashes/luhn.pyenf#&enf#&\,!nȩsshE4@z hashes/md5.pyenf#&enf#&\zC(t}ռ hashes/sdbm.pyeqK9߬OeqK9߬O\iƳ%>C@Jz *2txhashes/sha1.pyeqK9߬OeqK9߬O\ ?s1TeTTehashes/sha256.pyenf#&enf#&\L_Y : SU,knapsack/README.mdeZ)i_eZ)i_\ ⛲CK)wZSknapsack/__init__.pyeZ)i_eZ)i_\ c@[k◒knapsack/greedy_knapsack.pyenf#&enf#&\ vl;[:Blk{lknapsack/knapsack.pyenf#&enf#&\ шj[s}Vpp\'knapsack/recursive_approach_knapsack.pyeZ)i_eZ)i_\ ⛲CK)wZSknapsack/tests/__init__.pyepEepE\~ +-]@݋xWi&knapsack/tests/test_greedy_knapsack.pyepEepE\5$USq}S2s%knapsack/tests/test_knapsack.pyenf#&enf#&\h i5 ^ ΤGǢfCkalinear_algebra/README.mdeZ)i_eZ)i_\ ⛲CK)wZSlinear_algebra/__init__.pyeZ)i_eZ)i_\ ⛲CK)wZSlinear_algebra/src/__init__.pyenf#&enf#&\Lf6he&&eT(linear_algebra/src/conjugate_gradient.pyepEepE\6UntR!NRlinear_algebra/src/lib.pyeqZ3eqZ3\E ؚ>[U[(linear_algebra/src/polynom_for_points.pyenf#&enf#&\ $٥,GV\ۗL|%linear_algebra/src/power_iteration.pyenf#&enf#&\  Jii{0d$linear_algebra/src/rank_of_matrix.pyenf#&enf#&\ GsB>dW8ȧ 0'linear_algebra/src/rayleigh_quotient.pyepEepE\  uM㗓xh*U[&linear_algebra/src/schur_complement.pyepEepE\wnPyW.@ &K;Ca)linear_algebra/src/test_linear_algebra.pyenf#&enf#&\ "?!F>_wx/(linear_algebra/src/transformations_2d.pyeqK9߬OeqK9߬O\ *d _o{sTlinear_programming/simplex.pyeZfr_eZfr_\ ⛲CK)wZSmachine_learning/__init__.pyenf#&enf#&\ \Rz`"Z-ءS8H\=machine_learning/astar.pyenf#&enf#&\ %;'—t?q&w?i(machine_learning/data_transformations.pyeo WYeo WY\|Ѱ,A6X'D9,"i!machine_learning/decision_tree.pyepEepE\ o0EQ5秦 P,machine_learning/dimensionality_reduction.pyeZfr_eZfr_\ ⛲CK)wZS(machine_learning/forecasting/__init__.pyeqZ3eqZ3\ BdUXY?V~(machine_learning/forecasting/ex_data.csveqZ3eqZ3\ m*g4A-#machine_learning/forecasting/run.pyenf#&enf#&\ *~m`v-dz:3machine_learning/gaussian_naive_bayes.py.broken.txtenf#&enf#&\ / ]>o%Œm :machine_learning/gradient_boosting_regressor.py.broken.txteqK9߬OeqK9߬O\ u+[tЂ{Zx$machine_learning/gradient_descent.pyeq$$eq$$\2.|Bxsd8W1?!machine_learning/k_means_clust.pyeq$$eq$$\t*z)&6+(machine_learning/k_nearest_neighbours.pyeq$$eq$$\ dJbBDϭc\FKEmachine_learning/knn_sklearn.pyenf#&enf#&\BGx_i>0machine_learning/linear_discriminant_analysis.pyeqK9߬OeqK9߬O\u:NhJN,.Ϟ%machine_learning/linear_regression.pyenf#&enf#&\ 4⛲CK)wZS4machine_learning/local_weighted_learning/__init__.pyenf#&enf#&\ 8 `MBȆv$w+NCmachine_learning/local_weighted_learning/local_weighted_learning.mdeq$$eq$$\ lύ]Af EL wCmachine_learning/local_weighted_learning/local_weighted_learning.pyeo WYeo WY\ e fƤJ~?5<'machine_learning/logistic_regression.pyeZfr_eZfr_\ ⛲CK)wZS!machine_learning/lstm/__init__.pyeqK9߬OeqK9߬O\ st|FataC99(machine_learning/lstm/lstm_prediction.pyeZfr_eZfr_\ M!p2u-)%machine_learning/lstm/sample_data.csvenf#&enf#&\ oA1rp89wI։ܾ}4machine_learning/multilayer_perceptron_classifier.pyenf#&enf#&\ ?[Ca`^7;?")machine_learning/polynomial_regression.pyenf#&enf#&\ B|2g `'Bvw-27machine_learning/random_forest_classifier.py.broken.txtenf#&enf#&\ DXk)nRo6machine_learning/random_forest_regressor.py.broken.txteZfr_eZfr_\  Fi\;!.n҂3 %machine_learning/scoring_functions.pyenf#&enf#&\ G2Ҵ[Q$7NN8'machine_learning/self_organizing_map.pyeo`hA.eo`hA.\jN>OVi1[[@i3machine_learning/sequential_minimum_optimization.pyenf`/enf`/\XGz#F< Oc!i~h%machine_learning/similarity_search.pyenf`/enf`/\ R$ałQ@ c:+machine_learning/support_vector_machines.pyenf`/enf`/\ ta1^;Љ,machine_learning/word_frequency_functions.pyenf`/enf`/\ H 3igKK2zѵO&machine_learning/xgboost_classifier.pyeqeq\ 9YOƽB(W#%machine_learning/xgboost_regressor.pyeZfr_eZfr_\ ⛲CK)wZSmaths/__init__.pyenf`/enf`/\.aW鍆c5p5+Q maths/abs.pyeq$$eq$$\ tȒREE+ H6y aQx maths/add.pyenf`/enf`/\ M@WM&3AQ;c%$maths/addition_without_arithmetic.pyeZfr_eZfr_\ ~XaўR &2(maths/aliquot_sum.pyeZfr_eZfr_\ MKlW>f[maths/allocation_number.pyeqK9߬OeqK9߬O\ "8}56's,maths/arc_length.pyenf`/enf`/\(Lr?C@c䞶 maths/area.pyeqZ3eqZ3\ LWW69oYmaths/area_under_curve.pyeo`hA.eo`hA.\  &pBxx=u maths/armstrong_numbers.pyeo`hA.eo`hA.\ (7V2 㳽nsqmaths/automorphic_number.pyenf`/enf`/\ Q[=e4W͌2K#maths/average_absolute_deviation.pyenf`/enf`/\ Y>'LCJ@&NdW:hmaths/average_mean.pyeqRr+eqRr+\ ZWH$$ iJge_Umaths/average_median.pyenf`/enf`/\@Afz:jx,mmaths/average_mode.pyenf`/enf`/\ C8fj^B7v8maths/bailey_borwein_plouffe.pyepEepE\ &,T>ЀIR3SZXmaths/basic_maths.pyeq$$eq$$\ h֐Xr/޴maths/binary_exp_mod.pyeq$$eq$$\ iW{Bv5@Rud maths/binary_exponentiation.pyeq$$eq$$\ QK% ]e[8 maths/binary_exponentiation_2.pyeq$$eq$$\ NpA)Ԓ25h^ӻ maths/binary_exponentiation_3.pyeq$$eq$$\ | K=8l'+(Kmaths/binomial_coefficient.pyenf`/enf`/\ y [VՒD>5Inumaths/binomial_distribution.pyeo WYeo WY\ Em '.NK*Omaths/bisection.pyeq$$eq$$\ HdDuF&){3apmaths/carmichael_number.pyeo`hA.eo`hA.\  ϱ|6 ,e#maths/catalan_number.pyenf`/enf`/\ ~吞8!|oֽ maths/ceil.pyenf`/enf`/\ ZXq5.2}P匋maths/check_polygon.pyeZfr_eZfr_\ tb./?k )E6]maths/chudnovsky_algorithm.pyenf`/enf`/\  pj4Ѵcj3amaths/collatz_sequence.pyeq$$eq$$\2@wn ql(gmaths/combinations.pyenf`/enf`/\ ѻNo?|.,maths/decimal_isolate.pyeqZ3eqZ3\ ebq|dZIFUemaths/decimal_to_fraction.pyenf`/enf`/\ g=bEh`:r+&xAu maths/dodecahedron.pyeq%!#eq%!#\ βL((>2!Tc#maths/double_factorial_iterative.pyeq%!#eq%!#\ ɲ: ock0o#maths/double_factorial_recursive.pyenf`/enf`/\ oȾMMͶD.maths/dual_number_automatic_differentiation.pyeqK9߬OeqK9߬O\ I(x-zw9}maths/entropy.pyenf`/enf`/\ Dr){ bxyMMB1maths/euclidean_distance.pyeq%!#eq%!#\ K%C~po~2M_Gtmaths/euclidean_gcd.pyenf`/enf`/\ t0ڥ:*.Xgmaths/euler_method.pyeqRr+eqRr+\ LSvŔۏGl)maths/euler_modified.pyeqK9߬OeqK9߬O\Vdp7%45 fmaths/eulers_totient.pyenf`/enf`/\ I  ]:-ؒ)S%maths/extended_euclidean_algorithm.pyepEepE\ P.'(晙%maths/factorial.pyenf`/enf`/\ M.SeYo50~~f#maths/factors.pyeq%!#eq%!#\ 0;E˨^W(Kf(dmaths/fermat_little_theorem.pyeo WYeo WY\֝A3cG꫃yxmaths/fibonacci.pyeqRr+eqRr+\,hOaKb66maths/find_max.pyeqRr+eqRr+\ >b2C國5l]Rլӽmaths/find_max_recursion.pyeqRr+eqRr+\K+.|cd5YS_udtz9maths/find_min.pyeqRr+eqRr+\ RM^/%oԘNq>I`maths/find_min_recursion.pyenf`/enf`/\ ㋼D(ޡE&.g_maths/floor.pyeq%!#eq%!#\ ޼XvK umaths/gamma.pyeq%!#eq%!#\ Q=k^8Au(bTz?maths/gamma_recursive.pyenf`/enf`/\ >QXI-M4|maths/gaussian.pyeqRr+eqRr+\ :{_QC;s,#maths/gaussian_error_linear_unit.pyenf`/enf`/\  _c#l#jF(.-M~3*maths/gcd_of_n_numbers.pyeZfr_eZfr_\ ܢJJIv쐎*5 maths/greatest_common_divisor.pyeqK9߬OeqK9߬O\ \ |i˪O8|maths/greedy_coin_change.pyeo`hA.eo`hA.\ Eup\ȫ& maths/hamming_numbers.pyeq%!#eq%!#\ i)S?Éj2!\` maths/hardy_ramanujanalgo.pyeo`hA.eo`hA.\ L6wƉȆ`eJfܻmaths/hexagonal_number.pyeZfr_eZfr_\ ⛲CK)wZSmaths/images/__init__.pyeZfr_eZfr_\ !|~!)lM0Ϣmaths/images/gaussian.pngeo WYeo WY\ 'za5w Bju^h&maths/integration_by_simpson_approx.pyenf`/enf`/\ e.sIZ*H ?maths/interquartile_range.pyenf`/enf`/\ cܞ!8KoEO(8maths/is_int_palindrome.pyenf`/enf`/\ j !.d\iAmQ?maths/is_ip_v4_address_valid.pyenf`/enf`/\ f ,8 4>1ИMmaths/is_square_free.pyeqRr+eqRr+\  2D|rQG #.maths/jaccard_similarity.pyenf`/enf`/\ Se%s}" 6+Umaths/juggler_sequence.pyeq%!#eq%!#\vKhU⬈_O<7N0maths/karatsuba.pyeo`hA.eo`hA.\ 0ب_V{b' Zmaths/krishnamurthy_number.pyenf`/enf`/\ @UX1oU%]1&maths/kth_lexicographic_permutation.pyeo`hA.eo`hA.\ :~IXhkrYsI&maths/largest_of_very_large_numbers.pyeq%!#eq%!#\+ ^cyJ :qqXbmaths/least_common_multiple.pyeqZ3eqZ3\ ٭Mc^Gߘmaths/line_length.pyenf`/enf`/\ X(T4J&v3?ިmaths/liouville_lambda.pyenf`/enf`/\  V!y&=`8>$maths/lucas_lehmer_primality_test.pyenf`/enf`/\ 4Z쮧 RIlf,J>maths/lucas_series.pyeq%!#eq%!#\ "X9$kCmaths/maclaurin_series.pyenf`/enf`/\ A9FIVMQԇB{maths/manhattan_distance.pyenf`/enf`/\  |70SV0H| `maths/matrix_exponentiation.pyenf`/enf`/\ k ׮tղmaths/max_sum_sliding_window.pyenf`/enf`/\ UXzKK:#y2lmaths/median_of_two_arrays.pyeq%!#eq%!#\ &h۫X-ogFo maths/miller_rabin.pyenf`/enf`/\ ̊PB*!6maths/mobius_function.pyeZ{^eZ{^\ ;mB}:$/D6whmaths/modular_exponential.pyenf`/enf`/\ GOe޴vz>>?maths/monte_carlo.pyenf`/enf`/\6/p(f*4K[ymaths/monte_carlo_dice.pyeo WYeo WY\ ?H?"kQ[/ QVe maths/nevilles_method.pyeqRr+eqRr+\ ,ޕ 3maths/newton_raphson.pyepEepE\ \g$QqU!maths/number_of_digits.pyeqZ3eqZ3\ _>TQ0 >maths/numerical_integration.pyenf`/enf`/\  `)!LgcB' ,{maths/odd_sieve.pyeo WYeo WY\ 4҇uq maths/perfect_cube.pyeo`hA.eo`hA.\ Q2ғmhjFDmaths/perfect_number.pyenf`/enf`/\ ~hRhvK14x(R[Kmaths/perfect_square.pyeqK9߬OeqK9߬O\ Io`vAr5Mџmaths/persistence.pyeq%!#eq%!#\ ]  ҪF|C-maths/pi_generator.pyenf`/enf`/\ )yr9U뀭O2/"maths/pi_monte_carlo_estimation.pyenf`/enf`/\ ;MN񰭜 o maths/points_are_collinear_3d.pyenf`/enf`/\ POqd6ker Wvmaths/pollard_rho.pyenf`/enf`/\R07a6maths/polynomial_evaluation.pyenf`/enf`/\ ⛲CK)wZSmaths/polynomials/__init__.pyeqZ3eqZ3\ ۋYJ9JUu4maths/polynomials/single_indeterminate_operations.pyeo`hA.eo`hA.\  Щi<e^0Umaths/power_using_recursion.pyepEepE\;c͗qN8HDmaths/prime_check.pyeZ{^eZ{^\ ^\ :mG=,;jmaths/prime_factors.pyeq%!#eq%!#\ )~&L@^јmaths/prime_numbers.pyenf`/enf`/\2[&W&ˁ!maths/prime_sieve_eratosthenes.pyeq%!#eq%!#\ a8(ȓoY=}maths/primelib.pyenf`/enf`/\ s䤾)~ afUT#maths/print_multiplication_table.pyeo`hA.eo`hA.\ M=.KD W($maths/pronic_number.pyeo`hA.eo`hA.\ Gt~` ZcB`u]maths/proth_number.pyenf`/enf`/\ wpM<6b<Umaths/pythagoras.pyeqRr+eqRr+\ _AO·9_^sKZmaths/qr_decomposition.pyeit.Seit.S\5NÐad/* TY,maths/quadratic_equations_complex_numbers.pyeq%!#eq%!#\ JTFTgJ,Wmaths/radians.pyenf`/enf`/\ !,\M~D.\ QV=`tmaths/radix2_fft.pyeq%!#eq%!#\ =EkÑnU&j|u maths/relu.pyenf`/enf`/\ *o)QI"-Vmaths/remove_digit.pyeo WYeo WY\ L~MF>T:T˫omaths/runge_kutta.pyeq%!#eq%!#\ XP;u*(B}07Omaths/segmented_sieve.pyeZ{^eZ{^\ k⛲CK)wZSmaths/series/__init__.pyenf8enf8\ v(Ǽ_{q"|3Hmaths/series/arithmetic.pyenf8enf8\  {b9X]hf=ၼzbmaths/series/geometric.pyeqZ3eqZ3\  Dw3 ė4 maths/series/geometric_series.pyeqK9߬OeqK9߬O\ N P_SoDmaths/series/harmonic.pyenf8enf8\ -qki&`maths/series/harmonic_series.pyenf8enf8\ !FX+Ɣ4iݲh!maths/series/hexagonal_numbers.pyeqZ3eqZ3\ 4?#M~ ҪDmaths/series/p_series.pyenf8enf8\ R P7>2͓0maths/sieve_of_eratosthenes.pyeqRr+eqRr+\ (uqA~Tmaths/sigmoid.pyeq%!#eq%!#\ x-8(`cIrkNmaths/sigmoid_linear_unit.pyeo`hA.eo`hA.\ "&g3JL O-maths/signum.pyepEepE\ q@mÚqqV\ː=pmaths/simpson_rule.pyenf8enf8\ #-<E,maths/simultaneous_linear_equation_solver.pyenf8enf8\ $|nl]]ʲB[ maths/sin.pyenf8enf8\ %0Nɺ^%)F\ʊmaths/sock_merchant.pyenf8enf8\ wRT V \j.[qmaths/softmax.pyeq%!#eq%!#\ z,A?SK096maths/square_root.pyenf8enf8\ >8 0i\O)ˀi!maths/sum_of_arithmetic_series.pyenf8enf8\HH2W -lmdrmaths/sum_of_digits.pyenf8enf8\ yZ#Xԍf `+%maths/sum_of_geometric_progression.pyenf8enf8\ Lݞ kZ?#D$|Xmaths/sum_of_harmonic_series.pyenf8enf8\ OfKLà [M)Ymaths/sumset.pyenf8enf8\ S4`t$Ʃ U5b Xѝ(maths/sylvester_sequence.pyeqRr+eqRr+\ =eݫ>dh !l4 maths/tanh.pyenf8enf8\ >x wƏ maths/test_prime_check.pyeZ{^eZ{^\  M܊ka1 !maths/trapezoidal_rule.pyenf8enf8\ V w[ U=l`:Jmaths/triplet_sum.pyenf8enf8\ Yk+f8͆4ĄyMmaths/twin_prime.pyenf8enf8\ ]ɐ w&'maths/two_pointer.pyenf8enf8\ _N3-lN"8$vE ۸maths/two_sum.pyeo`hA.eo`hA.\ '~k=C|Maz maths/ugly_numbers.pyeqZ3eqZ3\[AXL>7 maths/volume.pyeo`hA.eo`hA.\ ((48$tBΟmaths/weird_number.pyenf8enf8\ 6H?k lz+maths/zellers_congruence.pyeZ{^eZ{^\ ⛲CK)wZSmatrix/__init__.pyenf8enf8\ `o ;z4ҶBmatrix/binary_search_matrix.pyenf8enf8\ dŕI5<Y3r!matrix/count_islands_in_matrix.pyenf8enf8\ a';EO47׮p1matrix/count_negative_numbers_in_sorted_matrix.pyenf8enf8\ ebHa_Ъ2\q<matrix/count_paths.pyenf8enf8\ g ORFC6e#)zV0matrix/cramers_rule_2x2.pyenf8enf8\ :=߂S"8iRĀFmatrix/inverse_of_matrix.pyenf8enf8\ h3ik\ߏ,l4iLu`'matrix/largest_square_area_in_matrix.pyeqZ3eqZ3\ >,>S |bL/matrix/matrix_class.pyeqZ3eqZ3\3cb(AMmatrix/matrix_operation.pyenf8enf8\ l ]@ 07i.@=R{g4matrix/max_area_of_island.pyenf8enf8\  e zVN33matrix/nth_fibonacci_using_matrix_exponentiation.pyenf8enf8\ p'eUȹ:Gomatrix/pascal_triangle.pyenf8enf8\q lۚ7dNmatrix/rotate_matrix.pyeqZ3eqZ3\  ;灿%?z$matrix/searching_in_sorted_matrix.pyeqZ3eqZ3\  %bq}dOɿematrix/sherman_morrison.pyeqK9߬OeqK9߬O\ ? Rzo?i' matrix/spiral_print.pyeZ{^eZ{^\ ⛲CK)wZSmatrix/tests/__init__.pyeZ{^eZ{^\ <V-_ѹQާlmatrix/tests/pytest.iniepEepE\xe_?VµX%matrix/tests/test_matrix_operation.pyeZ{^eZ{^\ ⛲CK)wZSnetworking_flow/__init__.pyeo WYeo WY\ qnyyA['K!networking_flow/ford_fulkerson.pyenf8enf8\ uKE*dƻ֪aPknetworking_flow/minimum_cut.pyepEepE\-ќWr2aeYwJ#0neural_network/2_hidden_layers_neural_network.pyeZ{^eZ{^\ ⛲CK)wZSneural_network/__init__.pyenf8enf8\ tz<NqmQ?}+>neural_network/activation_functions/exponential_linear_unit.pyeqK9߬OeqK9߬O\ _^#>Ji;{D1neural_network/back_propagation_neural_network.pyeq%!#eq%!#\7o5t;dߵBT,neural_network/convolution_neural_network.pyeq%!#eq%!#\ >ްbčǑut +>U e&neural_network/gan.py_tfeqZ3eqZ3\ .d~EN-V3jneural_network/input_data.pyeo`hA.eo`hA.\ ,HxB|{Ůneural_network/perceptron.pyeo WYeo WY\ <#Hs>-r'o8'neural_network/simple_neural_network.pyeZ]eZ]\ ⛲CK)wZSother/__init__.pyenf8enf8\,b.7vc+rnjother/activity_selection.pyenf8enf8\ [B>Sfx77*U>6̚!other/alternative_list_arrange.pyeo`hA.eo`hA.\ 0-B;(mN?` *other/davisb_putnamb_logemannb_loveland.pyeo WYeo WY\ !{κ]zd6"fNVB#other/dijkstra_bankers_algorithm.pyeZ]eZ]\ !&V_ %h$other/doomsday.pyenf8enf8\ /MΝ(@Ð#.'other/fischer_yates_shuffle.pyeZ]eZ]\ DGԫg 'o%05other/gauss_easter.pyeq%!#eq%!#\ C.fhL:*^ULother/graham_scan.pyenf8enf8\ r_E$7AU other/greedy.pyenf8enf8\ 艋Ւ:~˟=C other/guess_the_number_search.pyenf8enf8\ 3g[&fO?onother/h_index.pyenf8enf8\ qi+H5;\noZother/least_recently_used.pyenf8enf8\ )&;`\s_mB1Bother/lfu_cache.pyeq%!#eq%!#\ 1cI۹Vċ&other/linear_congruential_generator.pyenf8enf8\ O'^[N޷H/ A1$uother/lru_cache.pyeqK9߬OeqK9߬O\ z%t4}'`WALother/magicdiamondpattern.pyenf8enf8\ oYe2q04Νh^<other/maximum_subsequence.pyeo WYeo WY\ SȲ B6oother/nested_brackets.pyenf8enf8\ l 5DOT&i other/number_container_system.pyeq%!#eq%!#\  KaaטlBL[zother/password.pyenf8enf8\ ]P 58ܨ&k0H6pother/quine.pyenf8enf8\23[7fA|d᤬other/scoring_algorithm.pyenf8enf8\  81Ye%2`_v other/sdes.pyenf8enf8\ E =(oQUWother/tower_of_hanoi.pyenf8enf8\ ⛲CK)wZSphysics/__init__.pyenf8enf8\ e0}"?+.g_8physics/altitude_pressure.pyepNepN\ n^ta,{P_܉Fphysics/archimedes_principle.pyenf8enf8\ ^`$ \r6d physics/basic_orbital_capture.pyenf8enf8\ ~[Y?$ůg+ˑR E3physics/casimir_effect.pyenf8enf8\ %dh_nu$Zpphysics/centripetal_force.pyenf8enf8\ +n]u~ԥ<$29.physics/grahams_law.pyenfAenfA\ ZqFί]'physics/horizontal_projectile_motion.pyenfAenfA\  Ҋgfnn2%physics/hubble_parameter.pyeqRr+eqRr+\ }]{yegW|, physics/ideal_gas_law.pyenfAenfA\ wn"<-sWl1zxiphysics/kinetic_energy.pyenfAenfA\ ϗ]'~b-physics/lorentz_transformation_four_vector.pyenfAenfA\  dw\1*%abDAphysics/malus_law.pyeqK9߬OeqK9߬O\ V.%+pf .'dTxphysics/n_body_simulation.pyeq%!#eq%!#\  K[/;%physics/newtons_law_of_gravitation.pyenfAenfA\  SxЄ3|`T2r='physics/newtons_second_law_of_motion.pyenfAenfA\ TOov (pB}physics/potential_energy.pyenfAenfA\ G9RGDC physics/rms_speed_of_molecule.pyenfAenfA\ ^H825.cphysics/shear_stress.pyeq%!#eq%!#\ 8eflm8f%@physics/speed_of_sound.pyenfAenfA\^*g驰zpWG Mproject_euler/README.mdeZ]eZ]\ ⛲CK)wZSproject_euler/__init__.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_001/__init__.pyenfAenfA\ LT1G)ʑd3E!project_euler/problem_001/sol1.pyeZ]eZ]\ SpQ3xX(U{يE!project_euler/problem_001/sol2.pyeZ]eZ]\ gAU:Qz]};=h1!project_euler/problem_001/sol3.pyeZ]eZ]\ d<O$2-/5g!project_euler/problem_001/sol4.pyenfAenfA\ >o$ZgUR#_\!project_euler/problem_001/sol5.pyeZ]eZ]\ Gq7oi!project_euler/problem_001/sol6.pyenfAenfA\ jp-Kiy!project_euler/problem_001/sol7.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_002/__init__.pyeZ]eZ]\ Shk ʆí 8!project_euler/problem_002/sol1.pyeZ]eZ]\ Ґ3Цɰjjn4%!project_euler/problem_002/sol2.pyeZ]eZ]\ :uzL! kd!project_euler/problem_002/sol3.pyeZ]eZ]\ !p֨ &Qqpn<]E!project_euler/problem_002/sol4.pyeZ]eZ]\ "D9ў8Y`4"3bg_r!project_euler/problem_002/sol5.pyeZ]eZ]\ $⛲CK)wZS%project_euler/problem_003/__init__.pyenfAenfA\y gA^ڢnTblN!project_euler/problem_003/sol1.pyeZ]eZ]\ & JFz5,o!project_euler/problem_003/sol2.pyenfAenfA\{:N#u: 8!project_euler/problem_003/sol3.pyeZ]eZ]\ )⛲CK)wZS%project_euler/problem_004/__init__.pyenfAenfA\ k7ݔ-kx@9_T!project_euler/problem_004/sol1.pyeZ]eZ]\ +ȀmX|@Ϻg=0!project_euler/problem_004/sol2.pyeZ]eZ]\ -⛲CK)wZS%project_euler/problem_005/__init__.pyenfAenfA\ c_[X0`<ۤ&!project_euler/problem_005/sol1.pyeq%!#eq%!#\ 0>^W](&#M!project_euler/problem_005/sol2.pyeZ]eZ]\ 1⛲CK)wZS%project_euler/problem_006/__init__.pyenfAenfA\ R-aY,3.(]|!project_euler/problem_006/sol1.pyeZ]eZ]\ 3m{Hqja`!project_euler/problem_006/sol2.pyenfAenfA\ +R#<)k#y!project_euler/problem_006/sol3.pyeZ]eZ]\ 5tVnKtSOf!project_euler/problem_006/sol4.pyeZ]eZ]\ 7⛲CK)wZS%project_euler/problem_007/__init__.pyenfAenfA\ /1pOAZ}R!project_euler/problem_007/sol1.pyenfAenfA\} uQaì1t$td!project_euler/problem_007/sol2.pyenfAenfA\ 7wB`ۙ4WTh癭^Q!project_euler/problem_007/sol3.pyeZ]eZ]\ <⛲CK)wZS%project_euler/problem_008/__init__.pyenfAenfA\  iG6hI `f !project_euler/problem_008/sol1.pyenfAenfA\ :1CSv͠vJ]!project_euler/problem_008/sol2.pyenfAenfA\  ^,p-rdVy !project_euler/problem_008/sol3.pyeZ]eZ]\ A⛲CK)wZS%project_euler/problem_009/__init__.pyenfAenfA\\y%ji+6i !project_euler/problem_009/sol1.pyeZ]eZ]\ Ckr*"E6!project_euler/problem_009/sol2.pyenfAenfA\ 74 0cpsQ1G@/!project_euler/problem_009/sol3.pyeZ]eZ]\ F⛲CK)wZS%project_euler/problem_010/__init__.pyenfAenfA\ P-17(N z!I*;!project_euler/problem_010/sol1.pyenfAenfA\ "$\ gR/m !project_euler/problem_010/sol2.pyenfAenfA\ }`WK@ 740!project_euler/problem_010/sol3.pyeZ]eZ]\ K⛲CK)wZS%project_euler/problem_011/__init__.pyeZ]eZ]\ LJE>)1QNFh:"project_euler/problem_011/grid.txtenfAenfA\ 2 E:|X@L4yhs~j!project_euler/problem_011/sol1.pyenfAenfA\ < ۙ=vDi|6!project_euler/problem_011/sol2.pyeZ]eZ]\ P⛲CK)wZS%project_euler/problem_012/__init__.pyenfAenfA\ c+Aig%7TY|,<+!project_euler/problem_012/sol1.pyenfAenfA\ {8 t@mz|!project_euler/problem_012/sol2.pyeZ]eZ]\ T⛲CK)wZS%project_euler/problem_013/__init__.pyeZ]eZ]\ UChm 1kS-S!project_euler/problem_013/num.txtenfAenfA\ ,zAJyP34R!project_euler/problem_013/sol1.pyeZ]eZ]\ X⛲CK)wZS%project_euler/problem_014/__init__.pyenfAenfA\3IY'>,e24V1=!project_euler/problem_014/sol1.pyenfAenfA\p'$HR[2[xe1B̯ɑ!project_euler/problem_014/sol2.pyeZ]eZ]\ \⛲CK)wZS%project_euler/problem_015/__init__.pyenfAenfA\ z C+~o$m!project_euler/problem_015/sol1.pyeZ]eZ]\ _⛲CK)wZS%project_euler/problem_016/__init__.pyenfAenfA\ bXMCo3ND 2OK!project_euler/problem_016/sol1.pyenfAenfA\ h7!.tŐ|4RX}\%*!project_euler/problem_016/sol2.pyeZ]eZ]\ c⛲CK)wZS%project_euler/problem_017/__init__.pyeZ]eZ]\ diZNZǁR@^7a!project_euler/problem_017/sol1.pyeZ]eZ]\ f⛲CK)wZS%project_euler/problem_018/__init__.pyenfAenfA\ |Bp0aHzpNr8|%project_euler/problem_018/solution.pyeZ]eZ]\ hh6~Є.}yX&project_euler/problem_018/triangle.txteZ]eZ]\ j⛲CK)wZS%project_euler/problem_019/__init__.pyenfAenfA\ 8}O45ĵ!4!project_euler/problem_019/sol1.pyeZ]eZ]\ m⛲CK)wZS%project_euler/problem_020/__init__.pyeZ]eZ]\ nǴrNT<yhdyr2!project_euler/problem_020/sol1.pyenfAenfA\ gnjSRZ\6 !project_euler/problem_020/sol2.pyeZ]eZ]\ pKO(_4,!project_euler/problem_020/sol3.pyeZ]eZ]\ q;, ߦ> e`{s!project_euler/problem_020/sol4.pyeZ]eZ]\ s⛲CK)wZS%project_euler/problem_021/__init__.pyenfAenfA\ 55%>gΛfU!project_euler/problem_021/sol1.pyeZ]eZ]\ v⛲CK)wZS%project_euler/problem_022/__init__.pyeZ]eZ]\ wo{lL.4㭇s(project_euler/problem_022/p022_names.txteZ]eZ]\ x)$^t0p]z;lh!project_euler/problem_022/sol1.pyeZ]eZ]\ yZhn,k_HT3ـ!project_euler/problem_022/sol2.pyeZ]eZ]\ {⛲CK)wZS%project_euler/problem_023/__init__.pyenfAenfA\ Njrp&f!project_euler/problem_023/sol1.pyeZ]eZ]\ ~⛲CK)wZS%project_euler/problem_024/__init__.pyeZ]eZ]\ cx`Y9X*!project_euler/problem_024/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_025/__init__.pyenfAenfA\ 4d׆,x @;S !project_euler/problem_025/sol1.pyenfAenfA\HoI蟴e F !project_euler/problem_025/sol2.pyenfAenfA\ O : 0f5Q׏!project_euler/problem_025/sol3.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_026/__init__.pyenfAenfA\BJͳt c!project_euler/problem_026/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_027/__init__.pyenfAenfA\ . a>+OQE#'nB!project_euler/problem_027/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_028/__init__.pyenfAenfA\ xO-Y3Ɣ!project_euler/problem_028/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_029/__init__.pyenfAenfA\ U٨U5Oc3Tc\!project_euler/problem_029/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_030/__init__.pyenfAenfA\ /,kNNω\S'Dv]z!project_euler/problem_030/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_031/__init__.pyeZ]eZ]\ @81u).y+!project_euler/problem_031/sol1.pyeZ]eZ]\ 8KtʵYDtDZc!project_euler/problem_031/sol2.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_032/__init__.pyenfAenfA\ W)w#`FrcM&"project_euler/problem_032/sol32.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_033/__init__.pyenfAenfA\ ^2BKj{4/ 52Yp !project_euler/problem_033/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_034/__init__.pyenfAenfA\#獄2ۻz`i__"!project_euler/problem_034/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_035/__init__.pyeq%!#eq%!#\ H?g;i &!project_euler/problem_035/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_036/__init__.pyenfAenfA\ '5n|d+ֈ+J!project_euler/problem_036/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_037/__init__.pyenfAenfA\  v˖JrxP!project_euler/problem_037/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_038/__init__.pyenfAenfA\  OП}Dž)8#V!project_euler/problem_038/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_039/__init__.pyeZ]eZ]\ AYHhtoa!project_euler/problem_039/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_040/__init__.pyeZ]eZ]\ ji7w#N2RPc!project_euler/problem_040/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_041/__init__.pyenfAenfA\ S.vtB yV!project_euler/problem_041/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_042/__init__.pyenfAenfA\ RN@@r0r?]B'project_euler/problem_042/solution42.pyeZ]eZ]\ ?گ:BQ ͗t#project_euler/problem_042/words.txteZ]eZ]\ y-`H,oRbQ%project_euler/problem_043/__init__.pyenfAenfA\ 3 ɕ}v|qx(Kpk!project_euler/problem_043/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_044/__init__.pyenfAenfA\ &;ujemω}sT"%!project_euler/problem_044/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_045/__init__.pyenfAenfA\ !,-v 177<!project_euler/problem_045/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_046/__init__.pyenfAenfA\ ' ݛo>akzG!project_euler/problem_046/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_047/__init__.pyeZ]eZ]\  8r .|=v7!project_euler/problem_047/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_048/__init__.pyenfAenfA\ ZE8]N~jSw˲W)!project_euler/problem_048/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_049/__init__.pyenfAenfA\ \u`ݮB);޵{P}!!project_euler/problem_049/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_050/__init__.pyenfAenfA\ no+]x>!project_euler/problem_050/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_051/__init__.pyenfAenfA\ ( ΒDUs3h!project_euler/problem_051/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_052/__init__.pyenfAenfA\ 5!c6(W+ !project_euler/problem_052/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_053/__init__.pyeZ]eZ]\ 7&Y]zAӻX!project_euler/problem_053/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_054/__init__.pyeZ]eZ]\ u0Had׷)project_euler/problem_054/poker_hands.txtenfAenfA\ 6 ߥ֊wpZ<u.!project_euler/problem_054/sol1.pyepNepN\ W5yG&]])n,project_euler/problem_054/test_poker_hand.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_055/__init__.pyeZ]eZ]\  iT:iQ۶ZEu!project_euler/problem_055/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_056/__init__.pyenfAenfA\ rņ/i,l!project_euler/problem_056/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_057/__init__.pyeZ]eZ]\ GX'o'!project_euler/problem_057/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_058/__init__.pyenfAenfA\I jX K6ӆip!project_euler/problem_058/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_059/__init__.pyeZ]eZ]\ ޳$rR,iYo)project_euler/problem_059/p059_cipher.txtenfAenfA\ $;NaFtďY!project_euler/problem_059/sol1.pyeZ]eZ]\ `'7@t9j/ g\Yʸ!)project_euler/problem_059/test_cipher.txteZ]eZ]\ ⛲CK)wZS%project_euler/problem_062/__init__.pyenfJenfJ\ 8N>Q;W4BN!project_euler/problem_062/sol1.pyeZ]eZ]\ y-`H,oRbQ%project_euler/problem_063/__init__.pyenfJenfJ\  .Vp~Bb&ȁsZ!project_euler/problem_063/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_064/__init__.pyenfJenfJ\ v/^Mݿ!project_euler/problem_064/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_065/__init__.pyenfJenfJ\ ` Gs׼,8jw  T"!project_euler/problem_065/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_067/__init__.pyenfJenfJ\ +Ag͈@٭"!project_euler/problem_067/sol1.pyenfJenfJ\ %.qpp+)|39䍁!project_euler/problem_067/sol2.pyeZ]eZ]\ ;.+8-NJYuKZu+&project_euler/problem_067/triangle.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_068/__init__.pyenfJenfJ\ ρKW>pDA.D'!project_euler/problem_068/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_069/__init__.pyenfJenfJ\ ]aVFh*/IƲ!project_euler/problem_069/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_070/__init__.pyeqK9߬OeqK9߬O\  '?7uh<\FpCb!project_euler/problem_070/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_071/__init__.pyeZ]eZ]\  A[~St ]UJ8sk[!project_euler/problem_071/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_072/__init__.pyeq%!#eq%!#\ @1V6h),!project_euler/problem_072/sol1.pyeZ]eZ]\ ,:SP0PO~!project_euler/problem_072/sol2.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_073/__init__.pyenfJenfJ\ `+fv <>?ڭ0!!project_euler/problem_073/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_074/__init__.pyenfJenfJ\  עWON5.BY|5!project_euler/problem_074/sol1.pyenfJenfJ\ -K#㇘-z"eA6G!project_euler/problem_074/sol2.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_075/__init__.pyenfJenfJ\  w!N=;!project_euler/problem_075/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_076/__init__.pyeZ]eZ]\ Q`~ajXel !project_euler/problem_076/sol1.pyeZ]eZ]\ ⛲CK)wZS%project_euler/problem_077/__init__.pyenfJenfJ\ "`P^%ᬃ!project_euler/problem_077/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_078/__init__.pyenfJenfJ\ 4~Y8f;NMPZ~SVnE!project_euler/problem_078/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_079/__init__.pyenfJenfJ\ AVs$U3m87$project_euler/problem_079/keylog.txtenfJenfJ\ @,p$HW٠ ]3[1!)project_euler/problem_079/keylog_test.txtenfJenfJ\ ~JCu Gdǖ!project_euler/problem_079/sol1.pyeZ]eZ]\ +⛲CK)wZS%project_euler/problem_080/__init__.pyenfJenfJ\ .NiحvYZӪ@~!project_euler/problem_080/sol1.pyeZ]eZ]\ .⛲CK)wZS%project_euler/problem_081/__init__.pyeZ]eZ]\ /zIS"A6IƳ4'˜$project_euler/problem_081/matrix.txtenfJenfJ\ $kTNag]v!project_euler/problem_081/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_082/__init__.pyenfJenfJ\ zIS"A6IƳ4'˜#project_euler/problem_082/input.txtenfJenfJ\ 7{P܈w/(Ņ)8!project_euler/problem_082/sol1.pyenfJenfJ\ av}oӕ^8)project_euler/problem_082/test_matrix.txteZZ\eZZ\\ 7⛲CK)wZS%project_euler/problem_085/__init__.pyenfJenfJ\ %IAcR>vA|Z !project_euler/problem_085/sol1.pyeZZ\eZZ\\ :⛲CK)wZS%project_euler/problem_086/__init__.pyenfJenfJ\ JI^z\{DL!project_euler/problem_086/sol1.pyeZZ\eZZ\\ =⛲CK)wZS%project_euler/problem_087/__init__.pyeZZ\eZZ\\ >DH'&u!project_euler/problem_087/sol1.pyeZZ\eZZ\\ @y-`H,oRbQ%project_euler/problem_089/__init__.pyeZZ\eZZ\\ A!B̩-΀⡺K}1project_euler/problem_089/numeralcleanup_test.txteZZ\eZZ\\ B&yPe5Z[h;=kDL>(project_euler/problem_089/p089_roman.txtenfJenfJ\ & 1Y um[ (q!project_euler/problem_089/sol1.pyeZZ\eZZ\\ E⛲CK)wZS%project_euler/problem_091/__init__.pyeZZ\eZZ\\ Fllp"ELTm26s!project_euler/problem_091/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_092/__init__.pyenfJenfJ\  ? {Oa~4LXn!project_euler/problem_092/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_094/__init__.pyenfJenfJ\ &ª09=S$!project_euler/problem_094/sol1.pyeZZ\eZZ\\ Ny-`H,oRbQ%project_euler/problem_097/__init__.pyenfJenfJ\ (`Â4˵C-!project_euler/problem_097/sol1.pyeZZ\eZZ\\ Q⛲CK)wZS%project_euler/problem_099/__init__.pyeZZ\eZZ\\ R6=";&q \ބ &project_euler/problem_099/base_exp.txtenfJenfJ\ =V!X<1ONDG!project_euler/problem_099/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_100/__init__.pyenfJenfJ\ 6sxj߱U<Ur!project_euler/problem_100/sol1.pyeZZ\eZZ\\ X⛲CK)wZS%project_euler/problem_101/__init__.pyenfJenfJ\ @5yjb]q3ik!project_euler/problem_101/sol1.pyeZZ\eZZ\\ [⛲CK)wZS%project_euler/problem_102/__init__.pyeZZ\eZZ\\ \g?Ay,@Dr,project_euler/problem_102/p102_triangles.txtenfJenfJ\ ' UOncaUer0!project_euler/problem_102/sol1.pyeZZ\eZZ\\ ^7\e ,project_euler/problem_102/test_triangles.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_104/__init__.pyenfJenfJ\  `oۏ O-5L0!project_euler/problem_104/sol1.pyeZZ\eZZ\\ c⛲CK)wZS%project_euler/problem_107/__init__.pyeZZ\eZZ\\ d6+hꋐ:f^X*project_euler/problem_107/p107_network.txtenfJenfJ\ *sFYK{39!project_euler/problem_107/sol1.pyeZZ\eZZ\\ fzW ʺd**b})*project_euler/problem_107/test_network.txtenfJenfJ\ ⛲CK)wZS%project_euler/problem_109/__init__.pyenfJenfJ\  ]Y Qkq.\!project_euler/problem_109/sol1.pyeZZ\eZZ\\ k⛲CK)wZS%project_euler/problem_112/__init__.pyeqK9߬OeqK9߬O\ k5eJ=$!project_euler/problem_112/sol1.pyeZZ\eZZ\\ n⛲CK)wZS%project_euler/problem_113/__init__.pyenfJenfJ\ gP wbF-`n|!project_euler/problem_113/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_114/__init__.pyenfJenfJ\ A +%R[6LU14!project_euler/problem_114/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_115/__init__.pyenfJenfJ\ 5M_ p!project_euler/problem_115/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_116/__init__.pyenfJenfJ\ >Z@n*@̱!project_euler/problem_116/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_117/__init__.pyenfJenfJ\ !DTšeV+BN!project_euler/problem_117/sol1.pyeZZ\eZZ\\ }⛲CK)wZS%project_euler/problem_119/__init__.pyenfJenfJ\ +`͡y,5cm|!project_euler/problem_119/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_120/__init__.pyeZZ\eZZ\\ :h!!E`qU?dS8!project_euler/problem_120/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_121/__init__.pyenfJenfJ\ gSdZT!project_euler/problem_121/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_123/__init__.pyenfJenfJ\ , LݙRO)ݥ!project_euler/problem_123/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_125/__init__.pyenfJenfJ\ aoo/S H.&!project_euler/problem_125/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_129/__init__.pyeZZ\eZZ\\ .:'-iLB!project_euler/problem_129/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_131/__init__.pyenfJenfJ\ >^tMxi5!project_euler/problem_131/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_135/__init__.pyeq%!#eq%!#\ 9Ehi)q!project_euler/problem_135/sol1.pyenfJenfJ\ ⛲CK)wZS%project_euler/problem_144/__init__.pyenfJenfJ\ O,fb#i!project_euler/problem_144/sol1.pyenfJenfJ\ "⛲CK)wZS%project_euler/problem_145/__init__.pyepNepN\  Ea/6 8>Wg>ժY!project_euler/problem_145/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_173/__init__.pyenfJenfJ\ TTb̹; x\P!project_euler/problem_173/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_174/__init__.pyeZZ\eZZ\\ %ZekPO^7i2r !project_euler/problem_174/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_180/__init__.pyenfJenfJ\ -MʧkRUE'%!project_euler/problem_180/sol1.pyenfJenfJ\ '⛲CK)wZS%project_euler/problem_187/__init__.pyepNepN\ >*p#+K'yL!4!project_euler/problem_187/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_188/__init__.pyenfJenfJ\ N)'F/H0!project_euler/problem_188/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_191/__init__.pyenfJenfJ\ h kTC`2Jۄ̪!project_euler/problem_191/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_203/__init__.pyenfJenfJ\ Gڔ6$j|0moF:kS!project_euler/problem_203/sol1.pyenfJenfJ\ *⛲CK)wZS%project_euler/problem_205/__init__.pyenfJenfJ\ +c2tOZ^ҠW2!project_euler/problem_205/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_206/__init__.pyeZZ\eZZ\\ +2wp9e!project_euler/problem_206/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_207/__init__.pyenfJenfJ\ s +5ǚ' L|c!project_euler/problem_207/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_234/__init__.pyenfJenfJ\  { n 8%@ !project_euler/problem_234/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_301/__init__.pyenfJenfJ\ 'KI@3-ǑgBz!project_euler/problem_301/sol1.pyenfJenfJ\ -⛲CK)wZS%project_euler/problem_493/__init__.pyeq%!#eq%!#\ ɇR0CȞz $ǚ?!project_euler/problem_493/sol1.pyeZZ\eZZ\\ ⛲CK)wZS%project_euler/problem_551/__init__.pyenfJenfJ\ t&,^B<LYpe!project_euler/problem_551/sol1.pyenfJenfJ\ 0⛲CK)wZS%project_euler/problem_587/__init__.pyenfJenfJ\ 1 oa]lmUsx@!project_euler/problem_587/sol1.pyenfJenfJ\ 7⛲CK)wZS%project_euler/problem_686/__init__.pyenfJenfJ\ 8uQQ2r6d !project_euler/problem_686/sol1.pyenfJenfJ\ :⛲CK)wZS%project_euler/problem_800/__init__.pyenfJenfJ\ ;\x{q^-˵ !project_euler/problem_800/sol1.pyeq%!#eq%!#\  WC߂/_ȳ$pyproject.tomlenfJenfJ\ um<dWD]$Equantum/README.mdeZZ\eZZ\\ ⛲CK)wZSquantum/__init__.pyeq%!#eq%!#\ l ®A]bٟ+_quantum/bb84.pyeq%!#eq%!#\ i[^5v:_LEWOMLquantum/deutsch_jozsa.pyeq%!#eq%!#\ !}U4WB!df~quantum/half_adder.pyeq%!#eq%!#\ #'-z*:Ϊ`\quantum/not_gate.pyenfJenfJ\ Av*µHuC -rquantum/q_fourier_transform.pyeqK9߬OeqK9߬O\ f1Q`V0\.A*Hquantum/q_full_adder.pyeq%!#eq%!#\ 2䓲CC7Rquantum/quantum_entanglement.pyenfJenfJ\ D*nYizUH&quantum/quantum_random.py.DISABLED.txteq%!#eq%!#\  ]@ quantum/quantum_teleportation.pyeq%!#eq%!#\ "B@D].quantum/ripple_adder_classic.pyeq%!#eq%!#\ >`[1J O u:Cquantum/single_qubit_measure.pyeq%!#eq%!#\  ;1/]שPquantum/superdense_coding.pyeqK9߬OeqK9߬O\l'R=T.@!2t*requirements.txteZZ\eZZ\\ ⛲CK)wZSscheduling/__init__.pyenfJenfJ\ 1͸!b6%Lys%scheduling/first_come_first_served.pyeqK9߬OeqK9߬O\ YPS*:`sI%)scheduling/highest_response_ratio_next.pyenfJenfJ\ K{#W_2C9)2Ea@*scheduling/job_sequencing_with_deadline.pyenfJenfJ\ L0t<ťSyeYW(scheduling/multi_level_feedback_queue.pyenfJenfJ\ M itM0XN~* /scheduling/non_preemptive_shortest_job_first.pyenfJenfJ\ 3M٥S 7m7sIJ''wEscheduling/round_robin.pyenfJenfJ\ ] sd|<qFҫp scheduling/shortest_job_first.pyeZZ\eZZ\\ ⛲CK)wZSscripts/__init__.pyenlaenla\ H$oe Ź,hscripts/build_directory_md.pyenfTSenfTS\ ,CvGf SDo="scripts/project_euler_answers.jsonenfTSenfTS\ So#qҀemant2scripts/validate_filenames.pyenfTSenfTS\ w J&wLX5~scripts/validate_solutions.pyeZ[eZ[\ ⛲CK)wZSsearches/__init__.pyeo`hA.eo`hA.\#Oeb"bߺ_޽<{0searches/binary_search.pyenfTSenfTS\ N#oAB~t D&!searches/binary_tree_traversal.pyeZ[eZ[\ hƅnEE_5^ searches/double_linear_search.pyeZ[eZ[\ H>L I(ۂ*searches/double_linear_search_recursion.pyenfTSenfTS\ 6 UӞؗ2yNc searches/fibonacci_search.pyenfTSenfTS\ zN'Vzݢzsearches/hill_climbing.pyenfTSenfTS\ 2IL&2޾ގ< searches/interpolation_search.pyenfTSenfTS\ 7;x dgh_g^searches/jump_search.pyenfTSenfTS\  Wnֺ?j searches/linear_search.pyeZ[eZ[\ ^ތMj~ lsearches/quick_select.pyeZ[eZ[\ iϟ5\V6%ݚy-<"searches/sentinel_linear_search.pyenfTSenfTS\ =siJd:Hg.e searches/simple_binary_search.pyenfTSenfTS\ n="] "@ l6 ݥk5 searches/simulated_annealing.pyenfTSenfTS\ J*C٘Yvbq"ۨ5searches/tabu_search.pyeZ[eZ[\ FbzEv;searches/tabu_test_data.txtenfTSenfTS\ 6/Ңb+RvM{searches/ternary_search.pyeqeq\ ΰ |+~*=PHбsorts/README.mdeZ[eZ[\ ⛲CK)wZSsorts/__init__.pyenfTSenfTS\ sd=ϴUGxZX1[sorts/bead_sort.pyeo WYeo WY\ =AU * S^a?sorts/binary_insertion_sort.pyenfTSenfTS\  _zE5]5asorts/bitonic_sort.pyenfTSenfTS\ )? U&Sd: 5MOTwsorts/bogo_sort.pyeo`hA.eo`hA.\ *}6*[A_ep@B^2sorts/bubble_sort.pyeqK9߬OeqK9߬O\{E&|n{-sorts/bucket_sort.pyenfTSenfTS\ R'Պ<CB]T53HUsorts/circle_sort.pyeo WYeo WY\.81h*Ϙ ;+?#ͼsorts/cocktail_shaker_sort.pyenfTSenfTS\ :<TmL6z_տsorts/comb_sort.pyeqK9߬OeqK9߬O\ İ2=[.RI*sorts/counting_sort.pyeqK9߬OeqK9߬O\ ߀o@Dy+`c} RMsorts/cycle_sort.pyeqK9߬OeqK9߬O\ \gE lCp~nKsorts/double_sort.pyenfTSenfTS\ TPu:{Q x6 !sorts/dutch_national_flag_sort.pyenfTSenfTS\ U犝 #VI9sorts/exchange_sort.pyenfTSenfTS\ yE+@NrYsorts/external_sort.pyeZ[eZ[\ l&bdKT%jsorts/gnome_sort.pyeZ[eZ[\ Mʇ؜!1_@$M:FLbgRsorts/heap_sort.pyeqK9߬OeqK9߬O\ m[`ә,#>0bJ Bsorts/insertion_sort.pyeo WYeo WY\ IdZ۷|UCsorts/intro_sort.pyenfTSenfTS\ B "2ytat֥o+8X!nsorts/iterative_merge_sort.pyenfTSenfTS\ 2J[ޠ?aᇊdʪ4sorts/merge_insertion_sort.pyeo WYeo WY\ T &n$m|}sorts/merge_sort.pyenfTSenfTS\ VGLuؿy%\6̽sorts/msd_radix_sort.pyeZ[eZ[\ [ACHW)sorts/natural_sort.pyenfTSenfTS\ '@;0- E('sorts/normal_distribution_quick_sort.mdenfTSenfTS\ ]}KñxUt3'sorts/odd_even_sort.pyeqK9߬OeqK9߬O\ Kև Vy;?ML;(sorts/odd_even_transposition_parallel.pyenfTSenfTS\ mx*q^!Rc_W/sorts/odd_even_transposition_single_threaded.pyeZ[eZ[\ Ds5;r@~&@>)Ksorts/pancake_sort.pyenfTSenfTS\ c 2` RO!dsorts/patience_sort.pyenfTSenfTS\ 9>mL oqG:|8~sorts/pigeon_sort.pyeZ[eZ[\ #PEL _S]0sorts/pigeonhole_sort.pyenfTSenfTS\ :>>HC޺wsorts/quick_sort.pyeZ[eZ[\ % (md"L2 sorts/quick_sort_3_partition.pyenfTSenfTS\ +abI >}5tsorts/radix_sort.pyeq%!#eq%!#\ % F6ϯ1n\-sorts/random_normal_distribution_quicksort.pyeq%!#eq%!#\ 'tgA~",u# sorts/random_pivot_quick_sort.pyeo`hA.eo`hA.\ 1Y>[n4}98> K'sorts/recursive_bubble_sort.pyenfTSenfTS\ )}WaC~77׳{!sorts/recursive_insertion_sort.pyenfTSenfTS\ WL,@Ue-qP 6h"sorts/recursive_mergesort_array.pyenfTSenfTS\  s_D ~0hsorts/recursive_quick_sort.pyeq%!#eq%!#\ Lpp "EVp,yVsorts/selection_sort.pyeq%!#eq%!#\ p)Ѽ w`'bsorts/shell_sort.pyenfTSenfTS\ X]{s5A ~Tsorts/shrink_shell_sort.pyenfTSenfTS\ = se='@T2aOsorts/slowsort.pyeq%!#eq%!#\ [͚[4 VyIl/sorts/stooge_sort.pyenfTSenfTS\ cLӖV~c]Dsorts/strand_sort.pyenfTSenfTS\ Q(&(h2sorts/tim_sort.pyeq%!#eq%!#\ YWS 83T<sorts/topological_sort.pyepNepN\ kxθ:?1kZ2T!sorts/tree_sort.pyeZ[eZ[\ 3D/zX+,?d}sorts/unknown_sort.pyeZ[eZ[\ 4<`oV>]暗sorts/wiggle_sort.pyeZ[eZ[\ 6⛲CK)wZSstrings/__init__.pyenfTSenfTS\ *KOhՎ8wJT(strings/aho_corasick.pyenfTSenfTS\ Y݊tM6l}GF),%strings/alternative_string_arrange.pyenfTSenfTS\ ZEߤ8gUpstrings/anagrams.pyenf\enf\\ [ڿRoush5bstrings/anagrams.txtenf\enf\\ \w _K\A tD"strings/autocomplete_using_trie.pyenf\enf\\ ]N&B (-Me 'U}Ystrings/barcode_validator.pyenf\enf\\ @ s/tB|l޴b}strings/boyer_moore_search.pyenf\enf\\ ?!S8|nQ1_W1strings/can_string_be_rearranged_as_palindrome.pyeo WYeo WY\ M|+SM50I.\strings/capitalize.pyenf\enf\\M#Ϲ!ðmT$AnKstrings/check_anagrams.pyenf\enf\\ a xEt cxDDt strings/credit_card_validator.pyeo WYeo WY\ uT1(xN-strings/detecting_english_programmatically.pyenfeenfe\ f<u9v*@dSڪstrings/dictionary.txtenfeenfe\ g+3?A$c9'S~[9strings/dna.pyeo WYeo WY\ az7q܄PFtmstrings/frequency_finder.pyenfeenfe\ j#I*a[d8strings/hamming_distance.pyenfeenfe\ kc{"mg?ZG"%!strings/indian_phone_validator.pyenfeenfe\ lmֶi+O 3h#strings/is_contains_unique_chars.pyenfeenfe\ mh٬-ExuSo~istrings/is_isogram.pyenfeenfe\ n ȸ1Pt6strings/is_pangram.pyenfeenfe\ p V`n:ff{&.vstrings/is_spain_national_id.pyenfeenfe\ tldV^=锈>˵<ʗGR$strings/is_srilankan_phone_number.pyeZZeZZ\ SM:׾8(Z>strings/jaro_winkler.pyeo`hA.eo`hA.\ 20sVwZ֍}&Qstrings/join.pyeq%!#eq%!#\ Tq;5TqgV-tstrings/knuth_morris_pratt.pyenfeenfe\ {MÛes7by#strings/levenshtein_distance.pyepNepN\ t<뫑ȁ1QCVsstrings/lower.pyenfeenfe\ Ō|Ddbյw~strings/manacher.pyenfeenfe\ E pMJi!p_%strings/min_cost_string_conversion.pyenfeenfe\ 1Yr iN&:4strings/naive_string_search.pyenfeenfe\ wq JG2 #Ie}strings/ngram.pyenfeenfe\ x =z#_R절Hstrings/palindrome.pyenfeenfe\ Ye5 !{O]|}7$strings/prefix_function.pyeqK9߬OeqK9߬O\  ,av =4?'strings/rabin_karp.pyeZZeZZ\ `Z'Ro g=Me=SMstrings/remove_duplicate.pyeq%!#eq%!#\ \*|%[\W/Ustrings/reverse_letters.pyeq%!#eq%!#\ .]9Q?@v&Mjstrings/reverse_long_words.pyeZZeZZ\ bVPL \a~Gstrings/reverse_words.pyenfeenfe\ {U3zc/, h+N|*strings/snake_case_to_camel_pascal_case.pyeZZeZZ\ db+@25Ep$3t:strings/split.pyeo WYeo WY\  =G-q:i3i-strings/string_switch_case.pyenfeenfe\  #$9ӯ7)ystrings/text_justification.pyenfeenfe\  [\Rk#&DOSez}-strings/top_k_frequent_words.pyepNepN\ V^@`ަwbstrings/upper.pyenfeenfe\ i4C$ _#|V5Zq strings/wave.pyenfeenfe\  4̩q?]'erJ$strings/wildcard_pattern_matching.pyenfeenfe\ 0fZqekhhJGstrings/word_occurrence.pyeo`hA.eo`hA.\ -&~{5܉K97 Cstrings/word_patterns.pyenfHwenfHw\ & KWcd:}˫v0strings/words.txtenfenf\  {ڵ6V e[Estrings/z_function.pyeZXeZX\ r⛲CK)wZSweb_programming/__init__.pyeZXeZX\ s̗~~Ag^ RVweb_programming/co2_emission.pyeqK9߬OeqK9߬O\ ] | 3ݤY*web_programming/convert_number_to_words.pyeqZ3eqZ3\ C;] |;0Fn#(web_programming/covid_stats_via_xpath.pyenfenf\ ^m1++խ)[$ PIa'web_programming/crawl_google_results.pyenfenf\ #*= 잺SfM1`0web_programming/crawl_google_scholar_citation.pyenfenf\ ;ZL3b#%web_programming/currency_converter.pyeqZ3eqZ3\ DNQٔ]v!n'٧&web_programming/current_stock_price.pyeo WYeo WY\ )>ȩZ IBbFC"web_programming/current_weather.pyeZXeZX\ zY$NpI* ӕ_Z"web_programming/daily_horoscope.pyenfenf\  DGEIuTG$0?E]4web_programming/download_images_from_google_query.pyenfenf\  Nxשd!^ͩL]"web_programming/emails_from_url.pyeqK9߬OeqK9߬O\ `r;ݍ,qfD{tt.web_programming/fetch_anime_and_play.py.BROKENeZXeZX\ ~,{i2ޡJU߷>!web_programming/fetch_bbc_news.pyenfenf\ DЪN{cIӹ^>)aQ$web_programming/fetch_github_info.pyenfenf\ yZ 9']6E.n0web_programming/fetch_jobs.pyenfenf\ jW^tzZ0web_programming/fetch_quotes.pyenfenf\  QrX X&web_programming/fetch_well_rx_price.pyeqK9߬OeqK9߬O\ k ǖy?"y+A9*web_programming/get_amazon_product_data.pyeZXeZX\ KnV)Mw5b .web_programming/get_imdb_top_250_movies_csv.pyenlaenla\ _q29xvRK$6bZweb_programming/get_imdbtop.pyeqK9߬OeqK9߬O\ |jTbpl)Emʢ0web_programming/get_top_billionaires.py.disabledenfenf\ hQJFj-p#web_programming/get_top_hn_posts.pyenfenf\ :iqW',B@?"web_programming/get_user_tweets.pyenfenf\ 3I>f8;wjweb_programming/giphy.pyenfenf\ $Q&ãG/A/8)$web_programming/instagram_crawler.pyeo WYeo WY\ !gM}fi(y(! web_programming/instagram_pic.pyeZXeZX\ /$<PQ=1z-/x"web_programming/instagram_video.pyenfenf\ mѧoNsKweb_programming/nasa_data.pyenfenf\ 6f~Deu?Ix&web_programming/open_google_results.pyenfenf\  Z-8m))web_programming/random_anime_character.pyeZXeZX\  /G/*-`{~lZ B)web_programming/recaptcha_verification.pyeq%!#eq%!#\ 9\(9Ş{bZs+Aweb_programming/reddit.pyenfenf\  )/ ʲfiа;0{['web_programming/search_books_by_isbn.pyenfenf\ ^ֶLu)V4^w$9p web_programming/slack_message.pyeZXeZX\ [-|x-T}wY)web_programming/test_fetch_github_info.pyenfenf\ ʁLvU=(Ipf&web_programming/world_covid19_stats.pyTREE1293 47 Դ ,Ԋk`Tmaths160 3 ,Pa&Vzd7images2 0 )|A(U!series8 0 Ψ `_AZ:{ gpolynomials2 0 BP'%A sBother25 0 Q-tG>8'Osorts54 0 mɢqGZZgσgraphs61 1 KHڰFAo|tests3 0 ,jw;Va$$AtǸBhashes13 0 mZܫZtGIJomatrix20 1 HjRΟCP'P*_tests3 0 t&G+צY*.github11 2 Pk^/-+workflows4 0 ^1iZISSUE_TEMPLATE4 0 ,"VGy"vf 6.vscode1 0 6 ,kHrޖyciphers45 0 JJtjWgeodesy3 0 t6 %Ophysics20 0 `*bX*3+ quantum14 0 [Q;C UEΩdscripts5 0 <'2q@ 6ς*4Ostrings51 0 b\ *WB fractals4 0 WkW( ҽL|Jgraphics3 0 +e+d܅EVknapsack8 1 -;Ĺ'{Qk__tests3 0 -v>g 0searches17 0 +툽#>WZhfinancial6 0 F"X H{h*Qblockchain5 0 Gn ),ȥdscheduling8 0 z}[b[ ޶"*kcompression16 1 xw?9D镯 O>oimage_data7 0 broe\x-.`r`conversions27 0 g8hƚ;ُMelectronics13 0 {q>!> 3fuzzy_logic2 0 (hFNXxbdbacktracking18 0 6Aoj ;ą.devcontainer2 0 7qhg)?yl$`audio_filters7 0 WraNn9afile_transfer6 1 "gzjkx_tests2 0 ָ)0<kཫproject_euler317 127 SS&TMproblem_0018 0 ;|znLWooproblem_0026 0 K'3Ϋ:U(])fproblem_0034 0 )e™7e0{~w$problem_0043 0 L 7քu @problem_0053 0 ,}RrX3Ԥ 9xOproblem_0065 0 Hs7{`1 Iwproblem_0074 0 V٧cէlOVproblem_0084 0 (xFpbHd`problem_0094 0 L3tn.*gW?Z;problem_0104 0 _)· +"0problem_0114 0 m_~ʼnt=6+2problem_0123 0 Y)>ň;5problem_0133 0 #3yyXkNproblem_0143 0 (D74qͧproblem_0152 0 K] CQqnproblem_0163 0 ë}ޤmkLproblem_0172 0 y=,y)шOt" Hbproblem_0183 0 bc6m'US:#problem_0192 0 tg6~|bg16hproblem_0205 0 h(p Ost 3_problem_0212 0 y *,9problem_0224 0 sK[媘M problem_0232 0 -\ƆgRKJproblem_0242 0 t S'Kj"1Rȥ problem_0254 0 R)<4=PƼb3V~ problem_0262 0 (0&`^ YbV problem_0272 0 'Լ϶C_j.o_Dproblem_0282 0 )mOqo &4problem_0292 0 e="ߠ?وzproblem_0302 0 t (SU) 0-wproblem_0313 0 cʸ"@+%problem_0322 0 P0ʤ>u^{nG=problem_0332 0 JyfJ)-L4VZpproblem_0342 0 ؊ ?8udrdGb)problem_0352 0 hr.A%3C"{problem_0362 0 {(CVF\n[<t`problem_0372 0 4"|J S$problem_0382 0 J i3P*Lproblem_0392 0 -} -/ny]problem_0402 0 P *uСXA?(problem_0412 0 9%44މ[problem_0423 0 $Cz\Pɝ!8(problem_0432 0 zuޕHsۖ$problem_0442 0 ~g| $Diproblem_0452 0 qj=lQdsproblem_0462 0 ] 'cP^${ў̤axproblem_0472 0 L ifJ%Q@{Eproblem_0482 0 4x)u9*Âproblem_0492 0 uo^LX6{{h. Rproblem_0502 0 rbtR~problem_0512 0 V Q[T*5*^XV7vproblem_0522 0 ~Hpproblem_0532 0 \vET-" E^problem_0544 0 XNgK eproblem_0552 0 Ggmd~GGw,B_problem_0562 0 접"5ZBihܵ8)problem_0572 0 | K^κV-problem_0582 0 uɢS]ۖd5uproblem_0594 0 &w T", {73}problem_0622 0 yDc}tZIke<iproblem_0632 0 XzZNmXproblem_0642 0 rsݙmproblem_0652 0 nhΛH)OM%problem_0674 0 /8VVv3"L=nN7problem_0682 0 F:X4|wnot+n3\problem_0692 0 H ʫm)wproblem_0702 0 @R8 qMei~TEproblem_0712 0 h."y۾Nf;awproblem_0723 0 C9m|X>U~#dJTϒproblem_0732 0 >=&XXN604=nproblem_0743 0 d&&rc~e.[ƭproblem_0752 0 ~ߕZ/Mproblem_0762 0 v= txZْqproblem_0772 0 DaC!TPzBproblem_0782 0 ncp>(דfDproblem_0794 0 ȘsA62~4BΓlproblem_0802 0 IUb²5problem_0813 0 _g.48c^\$problem_0824 0 ˗/@U(%problem_0852 0 0du UwZproblem_0862 0 N[Txy |d'aproblem_0872 0 WRg\:3iproblem_0894 0 g|P xqGproblem_0912 0 h3c=6}LӞbproblem_0922 0 zi(ŔIgV1Niproblem_0942 0 W5w%8Pޫrproblem_0972 0 (ڎ<m TY]<2 problem_0993 0 J=sq\o-q,problem_1002 0 =wo{ћЙF9_problem_1012 0 ׋@}5tproblem_1024 0 COqdD#_fproblem_1042 0 ֕f9tV y[0Tproblem_1074 0 m~ ЯN*8Jeproblem_1092 0 pEuG=_k9problem_1122 0 @GcBǸrƐ@Cproblem_1132 0 wpJeOIZSD:\kproblem_1142 0 4M} Gs)rR6cproblem_1152 0 ۦf3#aA2݉problem_1162 0 TJ٥$ʂ$(-problem_1172 0 |&TEic ,wLnproblem_1192 0 |{/hck$.J+J>problem_1202 0 2)X')0c6problem_1212 0 \J$@LVproblem_1232 0 [J_xpMlKproblem_1252 0 Z͡^/%sLS#problem_1292 0 f//]@V76problem_1312 0 lkPdPOproblem_1352 0 {\By/Iproblem_1442 0 K9uX!pP񅘍problem_1452 0 v(3Be޳,ؘ0֍problem_1732 0 !$>,ppRpproblem_1742 0 z0Y$A۾RtV8problem_1802 0 p ! yz^Fproblem_1872 0 b@Q=4problem_1882 0 _\Z% $ZAtFgproblem_1912 0 jMd,ѡHWproblem_2032 0 .-fm^*E\)problem_2052 0  n(θh[cKproblem_2062 0 m* ptP[problem_2072 0 U4E9ɯJ{)problem_2342 0 %KpTgO#<problem_3012 0 +ϠmvS6problem_4932 0 H-b[،>/ҧ28problem_5512 0 ~4˜ ]ihÍ~problem_5872 0 6Hn>"٣8s4problem_6862 0 P|j蛗ieðproblem_8002 0 '+0oM:՝greedy_methods4 0 6 RuY<&linear_algebra12 1 "1zh`ugsrc10 0 b$ Kg䯰,&:P$neural_network9 1 )Ul:<af`activation_functions1 0 'ޱqI0\ubsk|@boolean_algebra10 0 w\Ύ۹bj?,T.Ŏcomputer_vision9 0 ^0U#}OkSG Y.fdata_structures90 9 L/|;domd R'*heap8 0 U?8cb~7m#trie3 0 0r"V\Kä0aOBsqueue9 0 /^"`.uZ!Ĝharrays3 0 U7G?s-ûw stacks13 0 _-<341,rFhashing10 2 ~oY T1tests1 0 oFU5@嗯Bnumber_theory2 0 wϔ61roF6@binary_tree26 0 ,3z2*5o̥Mlinked_list14 0 =>54E ; l/tdisjoint_set3 0 K^ꆤ$|50]Unetworking_flow3 0 }zF$PY^Sweb_programming37 0 lpұ;iTbit_manipulation19 0 kVlV iܼȳbmachine_learning35 3 B_$~fڃdmBlstm3 0 B-"3Gn 7'forecasting3 0 &km~$ps<local_weighted_learning3 0 xtjוTJC#cellular_automata6 0 G6S d Ͱbgenetic_algorithm2 0 B~[ػ2Miˈdivide_and_conquer13 0 L[Ac23Wo$?linear_programming1 0 oh'h' -arithmetic_analysis16 1 *haj7>image_data3 0 'PZbwұ-_dynamic_programming41 0 Sb~digital_image_processing34 8 %0:^`|XdyVresize2 0 -$?T =JQJifilters8 0 ?bcq:+rotation2 0 [Սf^&Ԯdithering2 0 H?ejOB pimage_data3 0 Lu(X,aӕm=edge_detection2 0 5 M I$^histogram_equalization6 2 _O/!;G<?DLimage_data2 0 ‰דM>+V\output_data2 0 yMmAmorphological_operations2 0 Eo.N|V{\E L`dwXRyR̳
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# https://en.wikipedia.org/wiki/LC_circuit """An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit, is an electric circuit consisting of an inductor, represented by the letter L, and a capacitor, represented by the letter C, connected together. The circuit can act as an electrical resonator, an electrical analogue of a tuning fork, storing energy oscillating at the circuit's resonant frequency. Source: https://en.wikipedia.org/wiki/LC_circuit """ from __future__ import annotations from math import pi, sqrt def resonant_frequency(inductance: float, capacitance: float) -> tuple: """ This function can calculate the resonant frequency of LC circuit, for the given value of inductance and capacitnace. Examples are given below: >>> resonant_frequency(inductance=10, capacitance=5) ('Resonant frequency', 0.022507907903927652) >>> resonant_frequency(inductance=0, capacitance=5) Traceback (most recent call last): ... ValueError: Inductance cannot be 0 or negative >>> resonant_frequency(inductance=10, capacitance=0) Traceback (most recent call last): ... ValueError: Capacitance cannot be 0 or negative """ if inductance <= 0: raise ValueError("Inductance cannot be 0 or negative") elif capacitance <= 0: raise ValueError("Capacitance cannot be 0 or negative") else: return ( "Resonant frequency", float(1 / (2 * pi * (sqrt(inductance * capacitance)))), ) if __name__ == "__main__": import doctest doctest.testmod()
# https://en.wikipedia.org/wiki/LC_circuit """An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit, is an electric circuit consisting of an inductor, represented by the letter L, and a capacitor, represented by the letter C, connected together. The circuit can act as an electrical resonator, an electrical analogue of a tuning fork, storing energy oscillating at the circuit's resonant frequency. Source: https://en.wikipedia.org/wiki/LC_circuit """ from __future__ import annotations from math import pi, sqrt def resonant_frequency(inductance: float, capacitance: float) -> tuple: """ This function can calculate the resonant frequency of LC circuit, for the given value of inductance and capacitnace. Examples are given below: >>> resonant_frequency(inductance=10, capacitance=5) ('Resonant frequency', 0.022507907903927652) >>> resonant_frequency(inductance=0, capacitance=5) Traceback (most recent call last): ... ValueError: Inductance cannot be 0 or negative >>> resonant_frequency(inductance=10, capacitance=0) Traceback (most recent call last): ... ValueError: Capacitance cannot be 0 or negative """ if inductance <= 0: raise ValueError("Inductance cannot be 0 or negative") elif capacitance <= 0: raise ValueError("Capacitance cannot be 0 or negative") else: return ( "Resonant frequency", float(1 / (2 * pi * (sqrt(inductance * capacitance)))), ) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" One of the several implementations of Lempel–Ziv–Welch decompression algorithm https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch """ import math import sys def read_file_binary(file_path: str) -> str: """ Reads given file as bytes and returns them as a long string """ result = "" try: with open(file_path, "rb") as binary_file: data = binary_file.read() for dat in data: curr_byte = f"{dat:08b}" result += curr_byte return result except OSError: print("File not accessible") sys.exit() def decompress_data(data_bits: str) -> str: """ Decompresses given data_bits using Lempel–Ziv–Welch compression algorithm and returns the result as a string """ lexicon = {"0": "0", "1": "1"} result, curr_string = "", "" index = len(lexicon) for i in range(len(data_bits)): curr_string += data_bits[i] if curr_string not in lexicon: continue last_match_id = lexicon[curr_string] result += last_match_id lexicon[curr_string] = last_match_id + "0" if math.log2(index).is_integer(): new_lex = {} for curr_key in list(lexicon): new_lex["0" + curr_key] = lexicon.pop(curr_key) lexicon = new_lex lexicon[bin(index)[2:]] = last_match_id + "1" index += 1 curr_string = "" return result def write_file_binary(file_path: str, to_write: str) -> None: """ Writes given to_write string (should only consist of 0's and 1's) as bytes in the file """ byte_length = 8 try: with open(file_path, "wb") as opened_file: result_byte_array = [ to_write[i : i + byte_length] for i in range(0, len(to_write), byte_length) ] if len(result_byte_array[-1]) % byte_length == 0: result_byte_array.append("10000000") else: result_byte_array[-1] += "1" + "0" * ( byte_length - len(result_byte_array[-1]) - 1 ) for elem in result_byte_array[:-1]: opened_file.write(int(elem, 2).to_bytes(1, byteorder="big")) except OSError: print("File not accessible") sys.exit() def remove_prefix(data_bits: str) -> str: """ Removes size prefix, that compressed file should have Returns the result """ counter = 0 for letter in data_bits: if letter == "1": break counter += 1 data_bits = data_bits[counter:] data_bits = data_bits[counter + 1 :] return data_bits def compress(source_path: str, destination_path: str) -> None: """ Reads source file, decompresses it and writes the result in destination file """ data_bits = read_file_binary(source_path) data_bits = remove_prefix(data_bits) decompressed = decompress_data(data_bits) write_file_binary(destination_path, decompressed) if __name__ == "__main__": compress(sys.argv[1], sys.argv[2])
""" One of the several implementations of Lempel–Ziv–Welch decompression algorithm https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch """ import math import sys def read_file_binary(file_path: str) -> str: """ Reads given file as bytes and returns them as a long string """ result = "" try: with open(file_path, "rb") as binary_file: data = binary_file.read() for dat in data: curr_byte = f"{dat:08b}" result += curr_byte return result except OSError: print("File not accessible") sys.exit() def decompress_data(data_bits: str) -> str: """ Decompresses given data_bits using Lempel–Ziv–Welch compression algorithm and returns the result as a string """ lexicon = {"0": "0", "1": "1"} result, curr_string = "", "" index = len(lexicon) for i in range(len(data_bits)): curr_string += data_bits[i] if curr_string not in lexicon: continue last_match_id = lexicon[curr_string] result += last_match_id lexicon[curr_string] = last_match_id + "0" if math.log2(index).is_integer(): new_lex = {} for curr_key in list(lexicon): new_lex["0" + curr_key] = lexicon.pop(curr_key) lexicon = new_lex lexicon[bin(index)[2:]] = last_match_id + "1" index += 1 curr_string = "" return result def write_file_binary(file_path: str, to_write: str) -> None: """ Writes given to_write string (should only consist of 0's and 1's) as bytes in the file """ byte_length = 8 try: with open(file_path, "wb") as opened_file: result_byte_array = [ to_write[i : i + byte_length] for i in range(0, len(to_write), byte_length) ] if len(result_byte_array[-1]) % byte_length == 0: result_byte_array.append("10000000") else: result_byte_array[-1] += "1" + "0" * ( byte_length - len(result_byte_array[-1]) - 1 ) for elem in result_byte_array[:-1]: opened_file.write(int(elem, 2).to_bytes(1, byteorder="big")) except OSError: print("File not accessible") sys.exit() def remove_prefix(data_bits: str) -> str: """ Removes size prefix, that compressed file should have Returns the result """ counter = 0 for letter in data_bits: if letter == "1": break counter += 1 data_bits = data_bits[counter:] data_bits = data_bits[counter + 1 :] return data_bits def compress(source_path: str, destination_path: str) -> None: """ Reads source file, decompresses it and writes the result in destination file """ data_bits = read_file_binary(source_path) data_bits = remove_prefix(data_bits) decompressed = decompress_data(data_bits) write_file_binary(destination_path, decompressed) if __name__ == "__main__": compress(sys.argv[1], sys.argv[2])
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from unittest.mock import Mock, patch from file_transfer.send_file import send_file @patch("socket.socket") @patch("builtins.open") def test_send_file_running_as_expected(file, sock): # ===== initialization ===== conn = Mock() sock.return_value.accept.return_value = conn, Mock() f = iter([1, None]) file.return_value.__enter__.return_value.read.side_effect = lambda _: next(f) # ===== invoke ===== send_file(filename="mytext.txt", testing=True) # ===== ensurance ===== sock.assert_called_once() sock.return_value.bind.assert_called_once() sock.return_value.listen.assert_called_once() sock.return_value.accept.assert_called_once() conn.recv.assert_called_once() file.return_value.__enter__.assert_called_once() file.return_value.__enter__.return_value.read.assert_called() conn.send.assert_called_once() conn.close.assert_called_once() sock.return_value.shutdown.assert_called_once() sock.return_value.close.assert_called_once()
from unittest.mock import Mock, patch from file_transfer.send_file import send_file @patch("socket.socket") @patch("builtins.open") def test_send_file_running_as_expected(file, sock): # ===== initialization ===== conn = Mock() sock.return_value.accept.return_value = conn, Mock() f = iter([1, None]) file.return_value.__enter__.return_value.read.side_effect = lambda _: next(f) # ===== invoke ===== send_file(filename="mytext.txt", testing=True) # ===== ensurance ===== sock.assert_called_once() sock.return_value.bind.assert_called_once() sock.return_value.listen.assert_called_once() sock.return_value.accept.assert_called_once() conn.recv.assert_called_once() file.return_value.__enter__.assert_called_once() file.return_value.__enter__.return_value.read.assert_called() conn.send.assert_called_once() conn.close.assert_called_once() sock.return_value.shutdown.assert_called_once() sock.return_value.close.assert_called_once()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Author: João Gustavo A. Amorim & Gabriel Kunz # Author email: [email protected] and [email protected] # Coding date: apr 2019 # Black: True """ * This code implement the Hamming code: https://en.wikipedia.org/wiki/Hamming_code - In telecommunication, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect up to two-bit errors or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. * the implemented code consists of: * a function responsible for encoding the message (emitterConverter) * return the encoded message * a function responsible for decoding the message (receptorConverter) * return the decoded message and a ack of data integrity * how to use: to be used you must declare how many parity bits (sizePari) you want to include in the message. it is desired (for test purposes) to select a bit to be set as an error. This serves to check whether the code is working correctly. Lastly, the variable of the message/word that must be desired to be encoded (text). * how this work: declaration of variables (sizePari, be, text) converts the message/word (text) to binary using the text_to_bits function encodes the message using the rules of hamming encoding decodes the message using the rules of hamming encoding print the original message, the encoded message and the decoded message forces an error in the coded text variable decodes the message that was forced the error print the original message, the encoded message, the bit changed message and the decoded message """ # Imports import numpy as np # Functions of binary conversion-------------------------------------- def text_to_bits(text, encoding="utf-8", errors="surrogatepass"): """ >>> text_to_bits("msg") '011011010111001101100111' """ bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:] return bits.zfill(8 * ((len(bits) + 7) // 8)) def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"): """ >>> text_from_bits('011011010111001101100111') 'msg' """ n = int(bits, 2) return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0" # Functions of hamming code------------------------------------------- def emitter_converter(size_par, data): """ :param size_par: how many parity bits the message must have :param data: information bits :return: message to be transmitted by unreliable medium - bits of information merged with parity bits >>> emitter_converter(4, "101010111111") ['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1'] """ if size_par + len(data) <= 2**size_par - (len(data) - 1): raise ValueError("size of parity don't match with size of data") data_out = [] parity = [] bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data) + 1)] # sorted information data for the size of the output data data_ord = [] # data position template + parity data_out_gab = [] # parity bit counter qtd_bp = 0 # counter position of data bits cont_data = 0 for x in range(1, size_par + len(data) + 1): # Performs a template of bit positions - who should be given, # and who should be parity if qtd_bp < size_par: if (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_ord.append(data[cont_data]) cont_data += 1 else: data_ord.append(None) # Calculates parity qtd_bp = 0 # parity bit counter for bp in range(1, size_par + 1): # Bit counter one for a given parity cont_bo = 0 # counter to control the loop reading cont_loop = 0 for x in data_ord: if x is not None: try: aux = (bin_pos[cont_loop])[-1 * (bp)] except IndexError: aux = "0" if aux == "1" and x == "1": cont_bo += 1 cont_loop += 1 parity.append(cont_bo % 2) qtd_bp += 1 # Mount the message cont_bp = 0 # parity bit counter for x in range(0, size_par + len(data)): if data_ord[x] is None: data_out.append(str(parity[cont_bp])) cont_bp += 1 else: data_out.append(data_ord[x]) return data_out def receptor_converter(size_par, data): """ >>> receptor_converter(4, "1111010010111111") (['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True) """ # data position template + parity data_out_gab = [] # Parity bit counter qtd_bp = 0 # Counter p data bit reading cont_data = 0 # list of parity received parity_received = [] data_output = [] for x in range(1, len(data) + 1): # Performs a template of bit positions - who should be given, # and who should be parity if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_output.append(data[cont_data]) else: parity_received.append(data[cont_data]) cont_data += 1 # -----------calculates the parity with the data data_out = [] parity = [] bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data_output) + 1)] # sorted information data for the size of the output data data_ord = [] # Data position feedback + parity data_out_gab = [] # Parity bit counter qtd_bp = 0 # Counter p data bit reading cont_data = 0 for x in range(1, size_par + len(data_output) + 1): # Performs a template position of bits - who should be given, # and who should be parity if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_ord.append(data_output[cont_data]) cont_data += 1 else: data_ord.append(None) # Calculates parity qtd_bp = 0 # parity bit counter for bp in range(1, size_par + 1): # Bit counter one for a certain parity cont_bo = 0 # Counter to control loop reading cont_loop = 0 for x in data_ord: if x is not None: try: aux = (bin_pos[cont_loop])[-1 * (bp)] except IndexError: aux = "0" if aux == "1" and x == "1": cont_bo += 1 cont_loop += 1 parity.append(str(cont_bo % 2)) qtd_bp += 1 # Mount the message cont_bp = 0 # Parity bit counter for x in range(0, size_par + len(data_output)): if data_ord[x] is None: data_out.append(str(parity[cont_bp])) cont_bp += 1 else: data_out.append(data_ord[x]) ack = parity_received == parity return data_output, ack # --------------------------------------------------------------------- """ # Example how to use # number of parity bits sizePari = 4 # location of the bit that will be forced an error be = 2 # Message/word to be encoded and decoded with hamming # text = input("Enter the word to be read: ") text = "Message01" # Convert the message to binary binaryText = text_to_bits(text) # Prints the binary of the string print("Text input in binary is '" + binaryText + "'") # total transmitted bits totalBits = len(binaryText) + sizePari print("Size of data is " + str(totalBits)) print("\n --Message exchange--") print("Data to send ------------> " + binaryText) dataOut = emitterConverter(sizePari, binaryText) print("Data converted ----------> " + "".join(dataOut)) dataReceiv, ack = receptorConverter(sizePari, dataOut) print( "Data receive ------------> " + "".join(dataReceiv) + "\t\t -- Data integrity: " + str(ack) ) print("\n --Force error--") print("Data to send ------------> " + binaryText) dataOut = emitterConverter(sizePari, binaryText) print("Data converted ----------> " + "".join(dataOut)) # forces error dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1") print("Data after transmission -> " + "".join(dataOut)) dataReceiv, ack = receptorConverter(sizePari, dataOut) print( "Data receive ------------> " + "".join(dataReceiv) + "\t\t -- Data integrity: " + str(ack) ) """
# Author: João Gustavo A. Amorim & Gabriel Kunz # Author email: [email protected] and [email protected] # Coding date: apr 2019 # Black: True """ * This code implement the Hamming code: https://en.wikipedia.org/wiki/Hamming_code - In telecommunication, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect up to two-bit errors or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. * the implemented code consists of: * a function responsible for encoding the message (emitterConverter) * return the encoded message * a function responsible for decoding the message (receptorConverter) * return the decoded message and a ack of data integrity * how to use: to be used you must declare how many parity bits (sizePari) you want to include in the message. it is desired (for test purposes) to select a bit to be set as an error. This serves to check whether the code is working correctly. Lastly, the variable of the message/word that must be desired to be encoded (text). * how this work: declaration of variables (sizePari, be, text) converts the message/word (text) to binary using the text_to_bits function encodes the message using the rules of hamming encoding decodes the message using the rules of hamming encoding print the original message, the encoded message and the decoded message forces an error in the coded text variable decodes the message that was forced the error print the original message, the encoded message, the bit changed message and the decoded message """ # Imports import numpy as np # Functions of binary conversion-------------------------------------- def text_to_bits(text, encoding="utf-8", errors="surrogatepass"): """ >>> text_to_bits("msg") '011011010111001101100111' """ bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:] return bits.zfill(8 * ((len(bits) + 7) // 8)) def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"): """ >>> text_from_bits('011011010111001101100111') 'msg' """ n = int(bits, 2) return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0" # Functions of hamming code------------------------------------------- def emitter_converter(size_par, data): """ :param size_par: how many parity bits the message must have :param data: information bits :return: message to be transmitted by unreliable medium - bits of information merged with parity bits >>> emitter_converter(4, "101010111111") ['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1'] """ if size_par + len(data) <= 2**size_par - (len(data) - 1): raise ValueError("size of parity don't match with size of data") data_out = [] parity = [] bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data) + 1)] # sorted information data for the size of the output data data_ord = [] # data position template + parity data_out_gab = [] # parity bit counter qtd_bp = 0 # counter position of data bits cont_data = 0 for x in range(1, size_par + len(data) + 1): # Performs a template of bit positions - who should be given, # and who should be parity if qtd_bp < size_par: if (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_ord.append(data[cont_data]) cont_data += 1 else: data_ord.append(None) # Calculates parity qtd_bp = 0 # parity bit counter for bp in range(1, size_par + 1): # Bit counter one for a given parity cont_bo = 0 # counter to control the loop reading cont_loop = 0 for x in data_ord: if x is not None: try: aux = (bin_pos[cont_loop])[-1 * (bp)] except IndexError: aux = "0" if aux == "1" and x == "1": cont_bo += 1 cont_loop += 1 parity.append(cont_bo % 2) qtd_bp += 1 # Mount the message cont_bp = 0 # parity bit counter for x in range(0, size_par + len(data)): if data_ord[x] is None: data_out.append(str(parity[cont_bp])) cont_bp += 1 else: data_out.append(data_ord[x]) return data_out def receptor_converter(size_par, data): """ >>> receptor_converter(4, "1111010010111111") (['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True) """ # data position template + parity data_out_gab = [] # Parity bit counter qtd_bp = 0 # Counter p data bit reading cont_data = 0 # list of parity received parity_received = [] data_output = [] for x in range(1, len(data) + 1): # Performs a template of bit positions - who should be given, # and who should be parity if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_output.append(data[cont_data]) else: parity_received.append(data[cont_data]) cont_data += 1 # -----------calculates the parity with the data data_out = [] parity = [] bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data_output) + 1)] # sorted information data for the size of the output data data_ord = [] # Data position feedback + parity data_out_gab = [] # Parity bit counter qtd_bp = 0 # Counter p data bit reading cont_data = 0 for x in range(1, size_par + len(data_output) + 1): # Performs a template position of bits - who should be given, # and who should be parity if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer(): data_out_gab.append("P") qtd_bp = qtd_bp + 1 else: data_out_gab.append("D") # Sorts the data to the new output size if data_out_gab[-1] == "D": data_ord.append(data_output[cont_data]) cont_data += 1 else: data_ord.append(None) # Calculates parity qtd_bp = 0 # parity bit counter for bp in range(1, size_par + 1): # Bit counter one for a certain parity cont_bo = 0 # Counter to control loop reading cont_loop = 0 for x in data_ord: if x is not None: try: aux = (bin_pos[cont_loop])[-1 * (bp)] except IndexError: aux = "0" if aux == "1" and x == "1": cont_bo += 1 cont_loop += 1 parity.append(str(cont_bo % 2)) qtd_bp += 1 # Mount the message cont_bp = 0 # Parity bit counter for x in range(0, size_par + len(data_output)): if data_ord[x] is None: data_out.append(str(parity[cont_bp])) cont_bp += 1 else: data_out.append(data_ord[x]) ack = parity_received == parity return data_output, ack # --------------------------------------------------------------------- """ # Example how to use # number of parity bits sizePari = 4 # location of the bit that will be forced an error be = 2 # Message/word to be encoded and decoded with hamming # text = input("Enter the word to be read: ") text = "Message01" # Convert the message to binary binaryText = text_to_bits(text) # Prints the binary of the string print("Text input in binary is '" + binaryText + "'") # total transmitted bits totalBits = len(binaryText) + sizePari print("Size of data is " + str(totalBits)) print("\n --Message exchange--") print("Data to send ------------> " + binaryText) dataOut = emitterConverter(sizePari, binaryText) print("Data converted ----------> " + "".join(dataOut)) dataReceiv, ack = receptorConverter(sizePari, dataOut) print( "Data receive ------------> " + "".join(dataReceiv) + "\t\t -- Data integrity: " + str(ack) ) print("\n --Force error--") print("Data to send ------------> " + binaryText) dataOut = emitterConverter(sizePari, binaryText) print("Data converted ----------> " + "".join(dataOut)) # forces error dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1") print("Data after transmission -> " + "".join(dataOut)) dataReceiv, ack = receptorConverter(sizePari, dataOut) print( "Data receive ------------> " + "".join(dataReceiv) + "\t\t -- Data integrity: " + str(ack) ) """
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Problem Statement: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. 3 7 4 2 4 6 8 5 9 3 That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom in triangle.txt (right click and 'Save Link/Target As...'), a 15K text file containing a triangle with one-hundred rows. """ import os def solution(): """ Finds the maximum total in a triangle as described by the problem statement above. >>> solution() 7273 """ script_dir = os.path.dirname(os.path.realpath(__file__)) triangle = os.path.join(script_dir, "triangle.txt") with open(triangle) as f: triangle = f.readlines() a = [] for line in triangle: numbers_from_line = [] for number in line.strip().split(" "): numbers_from_line.append(int(number)) a.append(numbers_from_line) for i in range(1, len(a)): for j in range(len(a[i])): number1 = a[i - 1][j] if j != len(a[i - 1]) else 0 number2 = a[i - 1][j - 1] if j > 0 else 0 a[i][j] += max(number1, number2) return max(a[-1]) if __name__ == "__main__": print(solution())
""" Problem Statement: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. 3 7 4 2 4 6 8 5 9 3 That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom in triangle.txt (right click and 'Save Link/Target As...'), a 15K text file containing a triangle with one-hundred rows. """ import os def solution(): """ Finds the maximum total in a triangle as described by the problem statement above. >>> solution() 7273 """ script_dir = os.path.dirname(os.path.realpath(__file__)) triangle = os.path.join(script_dir, "triangle.txt") with open(triangle) as f: triangle = f.readlines() a = [] for line in triangle: numbers_from_line = [] for number in line.strip().split(" "): numbers_from_line.append(int(number)) a.append(numbers_from_line) for i in range(1, len(a)): for j in range(len(a[i])): number1 = a[i - 1][j] if j != len(a[i - 1]) else 0 number2 = a[i - 1][j - 1] if j > 0 else 0 a[i][j] += max(number1, number2) return max(a[-1]) if __name__ == "__main__": print(solution())
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def lower(word: str) -> str: """ Will convert the entire string to lowercase letters >>> lower("wow") 'wow' >>> lower("HellZo") 'hellzo' >>> lower("WHAT") 'what' >>> lower("wh[]32") 'wh[]32' >>> lower("whAT") 'what' """ # converting to ascii value int value and checking to see if char is a capital # letter if it is a capital letter it is getting shift by 32 which makes it a lower # case letter return "".join(chr(ord(char) + 32) if "A" <= char <= "Z" else char for char in word) if __name__ == "__main__": from doctest import testmod testmod()
def lower(word: str) -> str: """ Will convert the entire string to lowercase letters >>> lower("wow") 'wow' >>> lower("HellZo") 'hellzo' >>> lower("WHAT") 'what' >>> lower("wh[]32") 'wh[]32' >>> lower("whAT") 'what' """ # converting to ascii value int value and checking to see if char is a capital # letter if it is a capital letter it is getting shift by 32 which makes it a lower # case letter return "".join(chr(ord(char) + 32) if "A" <= char <= "Z" else char for char in word) if __name__ == "__main__": from doctest import testmod testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
"""Newton's Method.""" # Newton's Method - https://en.wikipedia.org/wiki/Newton%27s_method from collections.abc import Callable RealFunc = Callable[[float], float] # type alias for a real -> real function # function is the f(x) and derivative is the f'(x) def newton( function: RealFunc, derivative: RealFunc, starting_int: int, ) -> float: """ >>> newton(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, 3) 2.0945514815423474 >>> newton(lambda x: x ** 3 - 1, lambda x: 3 * x ** 2, -2) 1.0 >>> newton(lambda x: x ** 3 - 1, lambda x: 3 * x ** 2, -4) 1.0000000000000102 >>> import math >>> newton(math.sin, math.cos, 1) 0.0 >>> newton(math.sin, math.cos, 2) 3.141592653589793 >>> newton(math.cos, lambda x: -math.sin(x), 2) 1.5707963267948966 >>> newton(math.cos, lambda x: -math.sin(x), 0) Traceback (most recent call last): ... ZeroDivisionError: Could not find root """ prev_guess = float(starting_int) while True: try: next_guess = prev_guess - function(prev_guess) / derivative(prev_guess) except ZeroDivisionError: raise ZeroDivisionError("Could not find root") from None if abs(prev_guess - next_guess) < 10**-5: return next_guess prev_guess = next_guess def f(x: float) -> float: return (x**3) - (2 * x) - 5 def f1(x: float) -> float: return 3 * (x**2) - 2 if __name__ == "__main__": print(newton(f, f1, 3))
"""Newton's Method.""" # Newton's Method - https://en.wikipedia.org/wiki/Newton%27s_method from collections.abc import Callable RealFunc = Callable[[float], float] # type alias for a real -> real function # function is the f(x) and derivative is the f'(x) def newton( function: RealFunc, derivative: RealFunc, starting_int: int, ) -> float: """ >>> newton(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, 3) 2.0945514815423474 >>> newton(lambda x: x ** 3 - 1, lambda x: 3 * x ** 2, -2) 1.0 >>> newton(lambda x: x ** 3 - 1, lambda x: 3 * x ** 2, -4) 1.0000000000000102 >>> import math >>> newton(math.sin, math.cos, 1) 0.0 >>> newton(math.sin, math.cos, 2) 3.141592653589793 >>> newton(math.cos, lambda x: -math.sin(x), 2) 1.5707963267948966 >>> newton(math.cos, lambda x: -math.sin(x), 0) Traceback (most recent call last): ... ZeroDivisionError: Could not find root """ prev_guess = float(starting_int) while True: try: next_guess = prev_guess - function(prev_guess) / derivative(prev_guess) except ZeroDivisionError: raise ZeroDivisionError("Could not find root") from None if abs(prev_guess - next_guess) < 10**-5: return next_guess prev_guess = next_guess def f(x: float) -> float: return (x**3) - (2 * x) - 5 def f1(x: float) -> float: return 3 * (x**2) - 2 if __name__ == "__main__": print(newton(f, f1, 3))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
#!/usr/bin/perl use strict; use warnings; use IPC::Open2; # An example hook script to integrate Watchman # (https://facebook.github.io/watchman/) with git to speed up detecting # new and modified files. # # The hook is passed a version (currently 1) and a time in nanoseconds # formatted as a string and outputs to stdout all files that have been # modified since the given time. Paths must be relative to the root of # the working tree and separated by a single NUL. # # To enable this hook, rename this file to "query-watchman" and set # 'git config core.fsmonitor .git/hooks/query-watchman' # my ($version, $time) = @ARGV; # Check the hook interface version if ($version == 1) { # convert nanoseconds to seconds # subtract one second to make sure watchman will return all changes $time = int ($time / 1000000000) - 1; } else { die "Unsupported query-fsmonitor hook version '$version'.\n" . "Falling back to scanning...\n"; } my $git_work_tree; if ($^O =~ 'msys' || $^O =~ 'cygwin') { $git_work_tree = Win32::GetCwd(); $git_work_tree =~ tr/\\/\//; } else { require Cwd; $git_work_tree = Cwd::cwd(); } my $retry = 1; launch_watchman(); sub launch_watchman { my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty') or die "open2() failed: $!\n" . "Falling back to scanning...\n"; # In the query expression below we're asking for names of files that # changed since $time but were not transient (ie created after # $time but no longer exist). # # To accomplish this, we're using the "since" generator to use the # recency index to select candidate nodes and "fields" to limit the # output to file names only. my $query = <<" END"; ["query", "$git_work_tree", { "since": $time, "fields": ["name"] }] END print CHLD_IN $query; close CHLD_IN; my $response = do {local $/; <CHLD_OUT>}; die "Watchman: command returned no output.\n" . "Falling back to scanning...\n" if $response eq ""; die "Watchman: command returned invalid output: $response\n" . "Falling back to scanning...\n" unless $response =~ /^\{/; my $json_pkg; eval { require JSON::XS; $json_pkg = "JSON::XS"; 1; } or do { require JSON::PP; $json_pkg = "JSON::PP"; }; my $o = $json_pkg->new->utf8->decode($response); if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) { print STDERR "Adding '$git_work_tree' to watchman's watch list.\n"; $retry--; qx/watchman watch "$git_work_tree"/; die "Failed to make watchman watch '$git_work_tree'.\n" . "Falling back to scanning...\n" if $? != 0; # Watchman will always return all files on the first query so # return the fast "everything is dirty" flag to git and do the # Watchman query just to get it over with now so we won't pay # the cost in git to look up each individual file. print "/\0"; eval { launch_watchman() }; exit 0; } die "Watchman: $o->{error}.\n" . "Falling back to scanning...\n" if $o->{error}; binmode STDOUT, ":utf8"; local $, = "\0"; print @{$o->{files}}; }
#!/usr/bin/perl use strict; use warnings; use IPC::Open2; # An example hook script to integrate Watchman # (https://facebook.github.io/watchman/) with git to speed up detecting # new and modified files. # # The hook is passed a version (currently 1) and a time in nanoseconds # formatted as a string and outputs to stdout all files that have been # modified since the given time. Paths must be relative to the root of # the working tree and separated by a single NUL. # # To enable this hook, rename this file to "query-watchman" and set # 'git config core.fsmonitor .git/hooks/query-watchman' # my ($version, $time) = @ARGV; # Check the hook interface version if ($version == 1) { # convert nanoseconds to seconds # subtract one second to make sure watchman will return all changes $time = int ($time / 1000000000) - 1; } else { die "Unsupported query-fsmonitor hook version '$version'.\n" . "Falling back to scanning...\n"; } my $git_work_tree; if ($^O =~ 'msys' || $^O =~ 'cygwin') { $git_work_tree = Win32::GetCwd(); $git_work_tree =~ tr/\\/\//; } else { require Cwd; $git_work_tree = Cwd::cwd(); } my $retry = 1; launch_watchman(); sub launch_watchman { my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty') or die "open2() failed: $!\n" . "Falling back to scanning...\n"; # In the query expression below we're asking for names of files that # changed since $time but were not transient (ie created after # $time but no longer exist). # # To accomplish this, we're using the "since" generator to use the # recency index to select candidate nodes and "fields" to limit the # output to file names only. my $query = <<" END"; ["query", "$git_work_tree", { "since": $time, "fields": ["name"] }] END print CHLD_IN $query; close CHLD_IN; my $response = do {local $/; <CHLD_OUT>}; die "Watchman: command returned no output.\n" . "Falling back to scanning...\n" if $response eq ""; die "Watchman: command returned invalid output: $response\n" . "Falling back to scanning...\n" unless $response =~ /^\{/; my $json_pkg; eval { require JSON::XS; $json_pkg = "JSON::XS"; 1; } or do { require JSON::PP; $json_pkg = "JSON::PP"; }; my $o = $json_pkg->new->utf8->decode($response); if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) { print STDERR "Adding '$git_work_tree' to watchman's watch list.\n"; $retry--; qx/watchman watch "$git_work_tree"/; die "Failed to make watchman watch '$git_work_tree'.\n" . "Falling back to scanning...\n" if $? != 0; # Watchman will always return all files on the first query so # return the fast "everything is dirty" flag to git and do the # Watchman query just to get it over with now so we won't pay # the cost in git to look up each individual file. print "/\0"; eval { launch_watchman() }; exit 0; } die "Watchman: $o->{error}.\n" . "Falling back to scanning...\n" if $o->{error}; binmode STDOUT, ":utf8"; local $, = "\0"; print @{$o->{files}}; }
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# floyd_warshall.py """ The problem is to find the shortest distance between all pairs of vertices in a weighted directed graph that can have negative edge weights. """ def _print_dist(dist, v): print("\nThe shortest path matrix using Floyd Warshall algorithm\n") for i in range(v): for j in range(v): if dist[i][j] != float("inf"): print(int(dist[i][j]), end="\t") else: print("INF", end="\t") print() def floyd_warshall(graph, v): """ :param graph: 2D array calculated from weight[edge[i, j]] :type graph: List[List[float]] :param v: number of vertices :type v: int :return: shortest distance between all vertex pairs distance[u][v] will contain the shortest distance from vertex u to v. 1. For all edges from v to n, distance[i][j] = weight(edge(i, j)). 3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] + distance[k][j]) for each possible pair i, j of vertices. 4. The above is repeated for each vertex k in the graph. 5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is updated to the next vertex[i][k]. """ dist = [[float("inf") for _ in range(v)] for _ in range(v)] for i in range(v): for j in range(v): dist[i][j] = graph[i][j] # check vertex k against all other vertices (i, j) for k in range(v): # looping through rows of graph array for i in range(v): # looping through columns of graph array for j in range(v): if ( dist[i][k] != float("inf") and dist[k][j] != float("inf") and dist[i][k] + dist[k][j] < dist[i][j] ): dist[i][j] = dist[i][k] + dist[k][j] _print_dist(dist, v) return dist, v if __name__ == "__main__": v = int(input("Enter number of vertices: ")) e = int(input("Enter number of edges: ")) graph = [[float("inf") for i in range(v)] for j in range(v)] for i in range(v): graph[i][i] = 0.0 # src and dst are indices that must be within the array size graph[e][v] # failure to follow this will result in an error for i in range(e): print("\nEdge ", i + 1) src = int(input("Enter source:")) dst = int(input("Enter destination:")) weight = float(input("Enter weight:")) graph[src][dst] = weight floyd_warshall(graph, v) # Example Input # Enter number of vertices: 3 # Enter number of edges: 2 # # generated graph from vertex and edge inputs # [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]] # [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]] # specify source, destination and weight for edge #1 # Edge 1 # Enter source:1 # Enter destination:2 # Enter weight:2 # specify source, destination and weight for edge #2 # Edge 2 # Enter source:2 # Enter destination:1 # Enter weight:1 # # Expected Output from the vertice, edge and src, dst, weight inputs!! # 0 INF INF # INF 0 2 # INF 1 0
# floyd_warshall.py """ The problem is to find the shortest distance between all pairs of vertices in a weighted directed graph that can have negative edge weights. """ def _print_dist(dist, v): print("\nThe shortest path matrix using Floyd Warshall algorithm\n") for i in range(v): for j in range(v): if dist[i][j] != float("inf"): print(int(dist[i][j]), end="\t") else: print("INF", end="\t") print() def floyd_warshall(graph, v): """ :param graph: 2D array calculated from weight[edge[i, j]] :type graph: List[List[float]] :param v: number of vertices :type v: int :return: shortest distance between all vertex pairs distance[u][v] will contain the shortest distance from vertex u to v. 1. For all edges from v to n, distance[i][j] = weight(edge(i, j)). 3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] + distance[k][j]) for each possible pair i, j of vertices. 4. The above is repeated for each vertex k in the graph. 5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is updated to the next vertex[i][k]. """ dist = [[float("inf") for _ in range(v)] for _ in range(v)] for i in range(v): for j in range(v): dist[i][j] = graph[i][j] # check vertex k against all other vertices (i, j) for k in range(v): # looping through rows of graph array for i in range(v): # looping through columns of graph array for j in range(v): if ( dist[i][k] != float("inf") and dist[k][j] != float("inf") and dist[i][k] + dist[k][j] < dist[i][j] ): dist[i][j] = dist[i][k] + dist[k][j] _print_dist(dist, v) return dist, v if __name__ == "__main__": v = int(input("Enter number of vertices: ")) e = int(input("Enter number of edges: ")) graph = [[float("inf") for i in range(v)] for j in range(v)] for i in range(v): graph[i][i] = 0.0 # src and dst are indices that must be within the array size graph[e][v] # failure to follow this will result in an error for i in range(e): print("\nEdge ", i + 1) src = int(input("Enter source:")) dst = int(input("Enter destination:")) weight = float(input("Enter weight:")) graph[src][dst] = weight floyd_warshall(graph, v) # Example Input # Enter number of vertices: 3 # Enter number of edges: 2 # # generated graph from vertex and edge inputs # [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]] # [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]] # specify source, destination and weight for edge #1 # Edge 1 # Enter source:1 # Enter destination:2 # Enter weight:2 # specify source, destination and weight for edge #2 # Edge 2 # Enter source:2 # Enter destination:1 # Enter weight:1 # # Expected Output from the vertice, edge and src, dst, weight inputs!! # 0 INF INF # INF 0 2 # INF 1 0
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Code contributed by Honey Sharma Source: https://en.wikipedia.org/wiki/Cycle_sort """ def cycle_sort(array: list) -> list: """ >>> cycle_sort([4, 3, 2, 1]) [1, 2, 3, 4] >>> cycle_sort([-4, 20, 0, -50, 100, -1]) [-50, -4, -1, 0, 20, 100] >>> cycle_sort([-.1, -.2, 1.3, -.8]) [-0.8, -0.2, -0.1, 1.3] >>> cycle_sort([]) [] """ array_len = len(array) for cycle_start in range(0, array_len - 1): item = array[cycle_start] pos = cycle_start for i in range(cycle_start + 1, array_len): if array[i] < item: pos += 1 if pos == cycle_start: continue while item == array[pos]: pos += 1 array[pos], item = item, array[pos] while pos != cycle_start: pos = cycle_start for i in range(cycle_start + 1, array_len): if array[i] < item: pos += 1 while item == array[pos]: pos += 1 array[pos], item = item, array[pos] return array if __name__ == "__main__": assert cycle_sort([4, 5, 3, 2, 1]) == [1, 2, 3, 4, 5] assert cycle_sort([0, 1, -10, 15, 2, -2]) == [-10, -2, 0, 1, 2, 15]
""" Code contributed by Honey Sharma Source: https://en.wikipedia.org/wiki/Cycle_sort """ def cycle_sort(array: list) -> list: """ >>> cycle_sort([4, 3, 2, 1]) [1, 2, 3, 4] >>> cycle_sort([-4, 20, 0, -50, 100, -1]) [-50, -4, -1, 0, 20, 100] >>> cycle_sort([-.1, -.2, 1.3, -.8]) [-0.8, -0.2, -0.1, 1.3] >>> cycle_sort([]) [] """ array_len = len(array) for cycle_start in range(0, array_len - 1): item = array[cycle_start] pos = cycle_start for i in range(cycle_start + 1, array_len): if array[i] < item: pos += 1 if pos == cycle_start: continue while item == array[pos]: pos += 1 array[pos], item = item, array[pos] while pos != cycle_start: pos = cycle_start for i in range(cycle_start + 1, array_len): if array[i] < item: pos += 1 while item == array[pos]: pos += 1 array[pos], item = item, array[pos] return array if __name__ == "__main__": assert cycle_sort([4, 5, 3, 2, 1]) == [1, 2, 3, 4, 5] assert cycle_sort([0, 1, -10, 15, 2, -2]) == [-10, -2, 0, 1, 2, 15]
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Illustrate how to implement inorder traversal in binary search tree. Author: Gurneet Singh https://www.geeksforgeeks.org/tree-traversals-inorder-preorder-and-postorder/ """ class BinaryTreeNode: """Defining the structure of BinaryTreeNode""" def __init__(self, data: int) -> None: self.data = data self.left_child: BinaryTreeNode | None = None self.right_child: BinaryTreeNode | None = None def insert(node: BinaryTreeNode | None, new_value: int) -> BinaryTreeNode | None: """ If the binary search tree is empty, make a new node and declare it as root. >>> node_a = BinaryTreeNode(12345) >>> node_b = insert(node_a, 67890) >>> node_a.left_child == node_b.left_child True >>> node_a.right_child == node_b.right_child True >>> node_a.data == node_b.data True """ if node is None: node = BinaryTreeNode(new_value) return node # binary search tree is not empty, # so we will insert it into the tree # if new_value is less than value of data in node, # add it to left subtree and proceed recursively if new_value < node.data: node.left_child = insert(node.left_child, new_value) else: # if new_value is greater than value of data in node, # add it to right subtree and proceed recursively node.right_child = insert(node.right_child, new_value) return node def inorder(node: None | BinaryTreeNode) -> list[int]: # if node is None,return """ >>> inorder(make_tree()) [6, 10, 14, 15, 20, 25, 60] """ if node: inorder_array = inorder(node.left_child) inorder_array = [*inorder_array, node.data] inorder_array = inorder_array + inorder(node.right_child) else: inorder_array = [] return inorder_array def make_tree() -> BinaryTreeNode | None: root = insert(None, 15) insert(root, 10) insert(root, 25) insert(root, 6) insert(root, 14) insert(root, 20) insert(root, 60) return root def main() -> None: # main function root = make_tree() print("Printing values of binary search tree in Inorder Traversal.") inorder(root) if __name__ == "__main__": import doctest doctest.testmod() main()
""" Illustrate how to implement inorder traversal in binary search tree. Author: Gurneet Singh https://www.geeksforgeeks.org/tree-traversals-inorder-preorder-and-postorder/ """ class BinaryTreeNode: """Defining the structure of BinaryTreeNode""" def __init__(self, data: int) -> None: self.data = data self.left_child: BinaryTreeNode | None = None self.right_child: BinaryTreeNode | None = None def insert(node: BinaryTreeNode | None, new_value: int) -> BinaryTreeNode | None: """ If the binary search tree is empty, make a new node and declare it as root. >>> node_a = BinaryTreeNode(12345) >>> node_b = insert(node_a, 67890) >>> node_a.left_child == node_b.left_child True >>> node_a.right_child == node_b.right_child True >>> node_a.data == node_b.data True """ if node is None: node = BinaryTreeNode(new_value) return node # binary search tree is not empty, # so we will insert it into the tree # if new_value is less than value of data in node, # add it to left subtree and proceed recursively if new_value < node.data: node.left_child = insert(node.left_child, new_value) else: # if new_value is greater than value of data in node, # add it to right subtree and proceed recursively node.right_child = insert(node.right_child, new_value) return node def inorder(node: None | BinaryTreeNode) -> list[int]: # if node is None,return """ >>> inorder(make_tree()) [6, 10, 14, 15, 20, 25, 60] """ if node: inorder_array = inorder(node.left_child) inorder_array = [*inorder_array, node.data] inorder_array = inorder_array + inorder(node.right_child) else: inorder_array = [] return inorder_array def make_tree() -> BinaryTreeNode | None: root = insert(None, 15) insert(root, 10) insert(root, 25) insert(root, 6) insert(root, 14) insert(root, 20) insert(root, 60) return root def main() -> None: # main function root = make_tree() print("Printing values of binary search tree in Inorder Traversal.") inorder(root) if __name__ == "__main__": import doctest doctest.testmod() main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Wavelet tree is a data-structure designed to efficiently answer various range queries for arrays. Wavelets trees are different from other binary trees in the sense that the nodes are split based on the actual values of the elements and not on indices, such as the with segment trees or fenwick trees. You can read more about them here: 1. https://users.dcc.uchile.cl/~jperez/papers/ioiconf16.pdf 2. https://www.youtube.com/watch?v=4aSv9PcecDw&t=811s 3. https://www.youtube.com/watch?v=CybAgVF-MMc&t=1178s """ from __future__ import annotations test_array = [2, 1, 4, 5, 6, 0, 8, 9, 1, 2, 0, 6, 4, 2, 0, 6, 5, 3, 2, 7] class Node: def __init__(self, length: int) -> None: self.minn: int = -1 self.maxx: int = -1 self.map_left: list[int] = [-1] * length self.left: Node | None = None self.right: Node | None = None def __repr__(self) -> str: """ >>> node = Node(length=27) >>> repr(node) 'Node(min_value=-1 max_value=-1)' >>> repr(node) == str(node) True """ return f"Node(min_value={self.minn} max_value={self.maxx})" def build_tree(arr: list[int]) -> Node | None: """ Builds the tree for arr and returns the root of the constructed tree >>> build_tree(test_array) Node(min_value=0 max_value=9) """ root = Node(len(arr)) root.minn, root.maxx = min(arr), max(arr) # Leaf node case where the node contains only one unique value if root.minn == root.maxx: return root """ Take the mean of min and max element of arr as the pivot and partition arr into left_arr and right_arr with all elements <= pivot in the left_arr and the rest in right_arr, maintaining the order of the elements, then recursively build trees for left_arr and right_arr """ pivot = (root.minn + root.maxx) // 2 left_arr: list[int] = [] right_arr: list[int] = [] for index, num in enumerate(arr): if num <= pivot: left_arr.append(num) else: right_arr.append(num) root.map_left[index] = len(left_arr) root.left = build_tree(left_arr) root.right = build_tree(right_arr) return root def rank_till_index(node: Node | None, num: int, index: int) -> int: """ Returns the number of occurrences of num in interval [0, index] in the list >>> root = build_tree(test_array) >>> rank_till_index(root, 6, 6) 1 >>> rank_till_index(root, 2, 0) 1 >>> rank_till_index(root, 1, 10) 2 >>> rank_till_index(root, 17, 7) 0 >>> rank_till_index(root, 0, 9) 1 """ if index < 0 or node is None: return 0 # Leaf node cases if node.minn == node.maxx: return index + 1 if node.minn == num else 0 pivot = (node.minn + node.maxx) // 2 if num <= pivot: # go the left subtree and map index to the left subtree return rank_till_index(node.left, num, node.map_left[index] - 1) else: # go to the right subtree and map index to the right subtree return rank_till_index(node.right, num, index - node.map_left[index]) def rank(node: Node | None, num: int, start: int, end: int) -> int: """ Returns the number of occurrences of num in interval [start, end] in the list >>> root = build_tree(test_array) >>> rank(root, 6, 3, 13) 2 >>> rank(root, 2, 0, 19) 4 >>> rank(root, 9, 2 ,2) 0 >>> rank(root, 0, 5, 10) 2 """ if start > end: return 0 rank_till_end = rank_till_index(node, num, end) rank_before_start = rank_till_index(node, num, start - 1) return rank_till_end - rank_before_start def quantile(node: Node | None, index: int, start: int, end: int) -> int: """ Returns the index'th smallest element in interval [start, end] in the list index is 0-indexed >>> root = build_tree(test_array) >>> quantile(root, 2, 2, 5) 5 >>> quantile(root, 5, 2, 13) 4 >>> quantile(root, 0, 6, 6) 8 >>> quantile(root, 4, 2, 5) -1 """ if index > (end - start) or start > end or node is None: return -1 # Leaf node case if node.minn == node.maxx: return node.minn # Number of elements in the left subtree in interval [start, end] num_elements_in_left_tree = node.map_left[end] - ( node.map_left[start - 1] if start else 0 ) if num_elements_in_left_tree > index: return quantile( node.left, index, (node.map_left[start - 1] if start else 0), node.map_left[end] - 1, ) else: return quantile( node.right, index - num_elements_in_left_tree, start - (node.map_left[start - 1] if start else 0), end - node.map_left[end], ) def range_counting( node: Node | None, start: int, end: int, start_num: int, end_num: int ) -> int: """ Returns the number of elements in range [start_num, end_num] in interval [start, end] in the list >>> root = build_tree(test_array) >>> range_counting(root, 1, 10, 3, 7) 3 >>> range_counting(root, 2, 2, 1, 4) 1 >>> range_counting(root, 0, 19, 0, 100) 20 >>> range_counting(root, 1, 0, 1, 100) 0 >>> range_counting(root, 0, 17, 100, 1) 0 """ if ( start > end or node is None or start_num > end_num or node.minn > end_num or node.maxx < start_num ): return 0 if start_num <= node.minn and node.maxx <= end_num: return end - start + 1 left = range_counting( node.left, (node.map_left[start - 1] if start else 0), node.map_left[end] - 1, start_num, end_num, ) right = range_counting( node.right, start - (node.map_left[start - 1] if start else 0), end - node.map_left[end], start_num, end_num, ) return left + right if __name__ == "__main__": import doctest doctest.testmod()
""" Wavelet tree is a data-structure designed to efficiently answer various range queries for arrays. Wavelets trees are different from other binary trees in the sense that the nodes are split based on the actual values of the elements and not on indices, such as the with segment trees or fenwick trees. You can read more about them here: 1. https://users.dcc.uchile.cl/~jperez/papers/ioiconf16.pdf 2. https://www.youtube.com/watch?v=4aSv9PcecDw&t=811s 3. https://www.youtube.com/watch?v=CybAgVF-MMc&t=1178s """ from __future__ import annotations test_array = [2, 1, 4, 5, 6, 0, 8, 9, 1, 2, 0, 6, 4, 2, 0, 6, 5, 3, 2, 7] class Node: def __init__(self, length: int) -> None: self.minn: int = -1 self.maxx: int = -1 self.map_left: list[int] = [-1] * length self.left: Node | None = None self.right: Node | None = None def __repr__(self) -> str: """ >>> node = Node(length=27) >>> repr(node) 'Node(min_value=-1 max_value=-1)' >>> repr(node) == str(node) True """ return f"Node(min_value={self.minn} max_value={self.maxx})" def build_tree(arr: list[int]) -> Node | None: """ Builds the tree for arr and returns the root of the constructed tree >>> build_tree(test_array) Node(min_value=0 max_value=9) """ root = Node(len(arr)) root.minn, root.maxx = min(arr), max(arr) # Leaf node case where the node contains only one unique value if root.minn == root.maxx: return root """ Take the mean of min and max element of arr as the pivot and partition arr into left_arr and right_arr with all elements <= pivot in the left_arr and the rest in right_arr, maintaining the order of the elements, then recursively build trees for left_arr and right_arr """ pivot = (root.minn + root.maxx) // 2 left_arr: list[int] = [] right_arr: list[int] = [] for index, num in enumerate(arr): if num <= pivot: left_arr.append(num) else: right_arr.append(num) root.map_left[index] = len(left_arr) root.left = build_tree(left_arr) root.right = build_tree(right_arr) return root def rank_till_index(node: Node | None, num: int, index: int) -> int: """ Returns the number of occurrences of num in interval [0, index] in the list >>> root = build_tree(test_array) >>> rank_till_index(root, 6, 6) 1 >>> rank_till_index(root, 2, 0) 1 >>> rank_till_index(root, 1, 10) 2 >>> rank_till_index(root, 17, 7) 0 >>> rank_till_index(root, 0, 9) 1 """ if index < 0 or node is None: return 0 # Leaf node cases if node.minn == node.maxx: return index + 1 if node.minn == num else 0 pivot = (node.minn + node.maxx) // 2 if num <= pivot: # go the left subtree and map index to the left subtree return rank_till_index(node.left, num, node.map_left[index] - 1) else: # go to the right subtree and map index to the right subtree return rank_till_index(node.right, num, index - node.map_left[index]) def rank(node: Node | None, num: int, start: int, end: int) -> int: """ Returns the number of occurrences of num in interval [start, end] in the list >>> root = build_tree(test_array) >>> rank(root, 6, 3, 13) 2 >>> rank(root, 2, 0, 19) 4 >>> rank(root, 9, 2 ,2) 0 >>> rank(root, 0, 5, 10) 2 """ if start > end: return 0 rank_till_end = rank_till_index(node, num, end) rank_before_start = rank_till_index(node, num, start - 1) return rank_till_end - rank_before_start def quantile(node: Node | None, index: int, start: int, end: int) -> int: """ Returns the index'th smallest element in interval [start, end] in the list index is 0-indexed >>> root = build_tree(test_array) >>> quantile(root, 2, 2, 5) 5 >>> quantile(root, 5, 2, 13) 4 >>> quantile(root, 0, 6, 6) 8 >>> quantile(root, 4, 2, 5) -1 """ if index > (end - start) or start > end or node is None: return -1 # Leaf node case if node.minn == node.maxx: return node.minn # Number of elements in the left subtree in interval [start, end] num_elements_in_left_tree = node.map_left[end] - ( node.map_left[start - 1] if start else 0 ) if num_elements_in_left_tree > index: return quantile( node.left, index, (node.map_left[start - 1] if start else 0), node.map_left[end] - 1, ) else: return quantile( node.right, index - num_elements_in_left_tree, start - (node.map_left[start - 1] if start else 0), end - node.map_left[end], ) def range_counting( node: Node | None, start: int, end: int, start_num: int, end_num: int ) -> int: """ Returns the number of elements in range [start_num, end_num] in interval [start, end] in the list >>> root = build_tree(test_array) >>> range_counting(root, 1, 10, 3, 7) 3 >>> range_counting(root, 2, 2, 1, 4) 1 >>> range_counting(root, 0, 19, 0, 100) 20 >>> range_counting(root, 1, 0, 1, 100) 0 >>> range_counting(root, 0, 17, 100, 1) 0 """ if ( start > end or node is None or start_num > end_num or node.minn > end_num or node.maxx < start_num ): return 0 if start_num <= node.minn and node.maxx <= end_num: return end - start + 1 left = range_counting( node.left, (node.map_left[start - 1] if start else 0), node.map_left[end] - 1, start_num, end_num, ) right = range_counting( node.right, start - (node.map_left[start - 1] if start else 0), end - node.map_left[end], start_num, end_num, ) return left + right if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" A Hamming number is a positive integer of the form 2^i*3^j*5^k, for some non-negative integers i, j, and k. They are often referred to as regular numbers. More info at: https://en.wikipedia.org/wiki/Regular_number. """ def hamming(n_element: int) -> list: """ This function creates an ordered list of n length as requested, and afterwards returns the last value of the list. It must be given a positive integer. :param n_element: The number of elements on the list :return: The nth element of the list >>> hamming(5) [1, 2, 3, 4, 5] >>> hamming(10) [1, 2, 3, 4, 5, 6, 8, 9, 10, 12] >>> hamming(15) [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24] """ n_element = int(n_element) if n_element < 1: my_error = ValueError("a should be a positive number") raise my_error hamming_list = [1] i, j, k = (0, 0, 0) index = 1 while index < n_element: while hamming_list[i] * 2 <= hamming_list[-1]: i += 1 while hamming_list[j] * 3 <= hamming_list[-1]: j += 1 while hamming_list[k] * 5 <= hamming_list[-1]: k += 1 hamming_list.append( min(hamming_list[i] * 2, hamming_list[j] * 3, hamming_list[k] * 5) ) index += 1 return hamming_list if __name__ == "__main__": n = input("Enter the last number (nth term) of the Hamming Number Series: ") print("Formula of Hamming Number Series => 2^i * 3^j * 5^k") hamming_numbers = hamming(int(n)) print("-----------------------------------------------------") print(f"The list with nth numbers is: {hamming_numbers}") print("-----------------------------------------------------")
""" A Hamming number is a positive integer of the form 2^i*3^j*5^k, for some non-negative integers i, j, and k. They are often referred to as regular numbers. More info at: https://en.wikipedia.org/wiki/Regular_number. """ def hamming(n_element: int) -> list: """ This function creates an ordered list of n length as requested, and afterwards returns the last value of the list. It must be given a positive integer. :param n_element: The number of elements on the list :return: The nth element of the list >>> hamming(5) [1, 2, 3, 4, 5] >>> hamming(10) [1, 2, 3, 4, 5, 6, 8, 9, 10, 12] >>> hamming(15) [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24] """ n_element = int(n_element) if n_element < 1: my_error = ValueError("a should be a positive number") raise my_error hamming_list = [1] i, j, k = (0, 0, 0) index = 1 while index < n_element: while hamming_list[i] * 2 <= hamming_list[-1]: i += 1 while hamming_list[j] * 3 <= hamming_list[-1]: j += 1 while hamming_list[k] * 5 <= hamming_list[-1]: k += 1 hamming_list.append( min(hamming_list[i] * 2, hamming_list[j] * 3, hamming_list[k] * 5) ) index += 1 return hamming_list if __name__ == "__main__": n = input("Enter the last number (nth term) of the Hamming Number Series: ") print("Formula of Hamming Number Series => 2^i * 3^j * 5^k") hamming_numbers = hamming(int(n)) print("-----------------------------------------------------") print(f"The list with nth numbers is: {hamming_numbers}") print("-----------------------------------------------------")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from doctest import testmod from math import sqrt def factors_of_a_number(num: int) -> list: """ >>> factors_of_a_number(1) [1] >>> factors_of_a_number(5) [1, 5] >>> factors_of_a_number(24) [1, 2, 3, 4, 6, 8, 12, 24] >>> factors_of_a_number(-24) [] """ facs: list[int] = [] if num < 1: return facs facs.append(1) if num == 1: return facs facs.append(num) for i in range(2, int(sqrt(num)) + 1): if num % i == 0: # If i is a factor of num facs.append(i) d = num // i # num//i is the other factor of num if d != i: # If d and i are distinct facs.append(d) # we have found another factor facs.sort() return facs if __name__ == "__main__": testmod(name="factors_of_a_number", verbose=True)
from doctest import testmod from math import sqrt def factors_of_a_number(num: int) -> list: """ >>> factors_of_a_number(1) [1] >>> factors_of_a_number(5) [1, 5] >>> factors_of_a_number(24) [1, 2, 3, 4, 6, 8, 12, 24] >>> factors_of_a_number(-24) [] """ facs: list[int] = [] if num < 1: return facs facs.append(1) if num == 1: return facs facs.append(num) for i in range(2, int(sqrt(num)) + 1): if num % i == 0: # If i is a factor of num facs.append(i) d = num // i # num//i is the other factor of num if d != i: # If d and i are distinct facs.append(d) # we have found another factor facs.sort() return facs if __name__ == "__main__": testmod(name="factors_of_a_number", verbose=True)
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Project Euler Problems are taken from https://projecteuler.net/, the Project Euler. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright). Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical insights to solve. Project Euler is ideal for mathematicians who are learning to code. The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions. ## Solution Guidelines Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first. ### Coding Style * Please maintain consistency in project directory and solution file names. Keep the following points in mind: * Create a new directory only for the problems which do not exist yet. * If you create a new directory, please create an empty `__init__.py` file inside it as well. * Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on. * Please provide a link to the problem and other references, if used, in the **module-level docstring**. * All imports should come ***after*** the module-level docstring. * You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below: * It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`. * When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem. * Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about. * There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1): ```python def solution(limit: int = 1000): """ A brief statement mentioning what the function is about. You can have a detailed explanation about the solution method in the module-level docstring. >>> solution(1) ... >>> solution(16) ... >>> solution(100) ... """ ``` ### Solution Template You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works. Please change the name of the helper functions accordingly, change the parameter names with a descriptive one, replace the content within `[square brackets]` (including the brackets) with the appropriate content. ```python """ Project Euler Problem [problem number]: [link to the original problem] ... [Entire problem statement] ... ... [Solution explanation - Optional] ... References [Optional]: - [Wikipedia link to the topic] - [Stackoverflow link] ... """ import module1 import module2 ... def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]: """ A brief statement explaining what the function is about. ... A more elaborate description ... [Optional] ... [Doctest] ... """ ... # calculations ... return # You can have multiple helper functions but the solution function should be # after all the helper functions ... def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]: """ A brief statement mentioning what the function is about. You can have a detailed explanation about the solution in the module-level docstring. ... [Doctest as mentioned above] ... """ ... # calculations ... return answer if __name__ == "__main__": print(f"{solution() = }") ```
# Project Euler Problems are taken from https://projecteuler.net/, the Project Euler. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright). Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical insights to solve. Project Euler is ideal for mathematicians who are learning to code. The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions. ## Solution Guidelines Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first. ### Coding Style * Please maintain consistency in project directory and solution file names. Keep the following points in mind: * Create a new directory only for the problems which do not exist yet. * If you create a new directory, please create an empty `__init__.py` file inside it as well. * Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on. * Please provide a link to the problem and other references, if used, in the **module-level docstring**. * All imports should come ***after*** the module-level docstring. * You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below: * It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`. * When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem. * Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about. * There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1): ```python def solution(limit: int = 1000): """ A brief statement mentioning what the function is about. You can have a detailed explanation about the solution method in the module-level docstring. >>> solution(1) ... >>> solution(16) ... >>> solution(100) ... """ ``` ### Solution Template You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works. Please change the name of the helper functions accordingly, change the parameter names with a descriptive one, replace the content within `[square brackets]` (including the brackets) with the appropriate content. ```python """ Project Euler Problem [problem number]: [link to the original problem] ... [Entire problem statement] ... ... [Solution explanation - Optional] ... References [Optional]: - [Wikipedia link to the topic] - [Stackoverflow link] ... """ import module1 import module2 ... def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]: """ A brief statement explaining what the function is about. ... A more elaborate description ... [Optional] ... [Doctest] ... """ ... # calculations ... return # You can have multiple helper functions but the solution function should be # after all the helper functions ... def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]: """ A brief statement mentioning what the function is about. You can have a detailed explanation about the solution in the module-level docstring. ... [Doctest as mentioned above] ... """ ... # calculations ... return answer if __name__ == "__main__": print(f"{solution() = }") ```
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Problem 45: https://projecteuler.net/problem=45 Triangle, pentagonal, and hexagonal numbers are generated by the following formulae: Triangle T(n) = (n * (n + 1)) / 2 1, 3, 6, 10, 15, ... Pentagonal P(n) = (n * (3 * n − 1)) / 2 1, 5, 12, 22, 35, ... Hexagonal H(n) = n * (2 * n − 1) 1, 6, 15, 28, 45, ... It can be verified that T(285) = P(165) = H(143) = 40755. Find the next triangle number that is also pentagonal and hexagonal. All triangle numbers are hexagonal numbers. T(2n-1) = n * (2 * n - 1) = H(n) So we shall check only for hexagonal numbers which are also pentagonal. """ def hexagonal_num(n: int) -> int: """ Returns nth hexagonal number >>> hexagonal_num(143) 40755 >>> hexagonal_num(21) 861 >>> hexagonal_num(10) 190 """ return n * (2 * n - 1) def is_pentagonal(n: int) -> bool: """ Returns True if n is pentagonal, False otherwise. >>> is_pentagonal(330) True >>> is_pentagonal(7683) False >>> is_pentagonal(2380) True """ root = (1 + 24 * n) ** 0.5 return ((1 + root) / 6) % 1 == 0 def solution(start: int = 144) -> int: """ Returns the next number which is triangular, pentagonal and hexagonal. >>> solution(144) 1533776805 """ n = start num = hexagonal_num(n) while not is_pentagonal(num): n += 1 num = hexagonal_num(n) return num if __name__ == "__main__": print(f"{solution()} = ")
""" Problem 45: https://projecteuler.net/problem=45 Triangle, pentagonal, and hexagonal numbers are generated by the following formulae: Triangle T(n) = (n * (n + 1)) / 2 1, 3, 6, 10, 15, ... Pentagonal P(n) = (n * (3 * n − 1)) / 2 1, 5, 12, 22, 35, ... Hexagonal H(n) = n * (2 * n − 1) 1, 6, 15, 28, 45, ... It can be verified that T(285) = P(165) = H(143) = 40755. Find the next triangle number that is also pentagonal and hexagonal. All triangle numbers are hexagonal numbers. T(2n-1) = n * (2 * n - 1) = H(n) So we shall check only for hexagonal numbers which are also pentagonal. """ def hexagonal_num(n: int) -> int: """ Returns nth hexagonal number >>> hexagonal_num(143) 40755 >>> hexagonal_num(21) 861 >>> hexagonal_num(10) 190 """ return n * (2 * n - 1) def is_pentagonal(n: int) -> bool: """ Returns True if n is pentagonal, False otherwise. >>> is_pentagonal(330) True >>> is_pentagonal(7683) False >>> is_pentagonal(2380) True """ root = (1 + 24 * n) ** 0.5 return ((1 + root) / 6) % 1 == 0 def solution(start: int = 144) -> int: """ Returns the next number which is triangular, pentagonal and hexagonal. >>> solution(144) 1533776805 """ n = start num = hexagonal_num(n) while not is_pentagonal(num): n += 1 num = hexagonal_num(n) return num if __name__ == "__main__": print(f"{solution()} = ")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
class Things: def __init__(self, name, value, weight): self.name = name self.value = value self.weight = weight def __repr__(self): return f"{self.__class__.__name__}({self.name}, {self.value}, {self.weight})" def get_value(self): return self.value def get_name(self): return self.name def get_weight(self): return self.weight def value_weight(self): return self.value / self.weight def build_menu(name, value, weight): menu = [] for i in range(len(value)): menu.append(Things(name[i], value[i], weight[i])) return menu def greedy(item, max_cost, key_func): items_copy = sorted(item, key=key_func, reverse=True) result = [] total_value, total_cost = 0.0, 0.0 for i in range(len(items_copy)): if (total_cost + items_copy[i].get_weight()) <= max_cost: result.append(items_copy[i]) total_cost += items_copy[i].get_weight() total_value += items_copy[i].get_value() return (result, total_value) def test_greedy(): """ >>> food = ["Burger", "Pizza", "Coca Cola", "Rice", ... "Sambhar", "Chicken", "Fries", "Milk"] >>> value = [80, 100, 60, 70, 50, 110, 90, 60] >>> weight = [40, 60, 40, 70, 100, 85, 55, 70] >>> foods = build_menu(food, value, weight) >>> foods # doctest: +NORMALIZE_WHITESPACE [Things(Burger, 80, 40), Things(Pizza, 100, 60), Things(Coca Cola, 60, 40), Things(Rice, 70, 70), Things(Sambhar, 50, 100), Things(Chicken, 110, 85), Things(Fries, 90, 55), Things(Milk, 60, 70)] >>> greedy(foods, 500, Things.get_value) # doctest: +NORMALIZE_WHITESPACE ([Things(Chicken, 110, 85), Things(Pizza, 100, 60), Things(Fries, 90, 55), Things(Burger, 80, 40), Things(Rice, 70, 70), Things(Coca Cola, 60, 40), Things(Milk, 60, 70)], 570.0) """ if __name__ == "__main__": import doctest doctest.testmod()
class Things: def __init__(self, name, value, weight): self.name = name self.value = value self.weight = weight def __repr__(self): return f"{self.__class__.__name__}({self.name}, {self.value}, {self.weight})" def get_value(self): return self.value def get_name(self): return self.name def get_weight(self): return self.weight def value_weight(self): return self.value / self.weight def build_menu(name, value, weight): menu = [] for i in range(len(value)): menu.append(Things(name[i], value[i], weight[i])) return menu def greedy(item, max_cost, key_func): items_copy = sorted(item, key=key_func, reverse=True) result = [] total_value, total_cost = 0.0, 0.0 for i in range(len(items_copy)): if (total_cost + items_copy[i].get_weight()) <= max_cost: result.append(items_copy[i]) total_cost += items_copy[i].get_weight() total_value += items_copy[i].get_value() return (result, total_value) def test_greedy(): """ >>> food = ["Burger", "Pizza", "Coca Cola", "Rice", ... "Sambhar", "Chicken", "Fries", "Milk"] >>> value = [80, 100, 60, 70, 50, 110, 90, 60] >>> weight = [40, 60, 40, 70, 100, 85, 55, 70] >>> foods = build_menu(food, value, weight) >>> foods # doctest: +NORMALIZE_WHITESPACE [Things(Burger, 80, 40), Things(Pizza, 100, 60), Things(Coca Cola, 60, 40), Things(Rice, 70, 70), Things(Sambhar, 50, 100), Things(Chicken, 110, 85), Things(Fries, 90, 55), Things(Milk, 60, 70)] >>> greedy(foods, 500, Things.get_value) # doctest: +NORMALIZE_WHITESPACE ([Things(Chicken, 110, 85), Things(Pizza, 100, 60), Things(Fries, 90, 55), Things(Burger, 80, 40), Things(Rice, 70, 70), Things(Coca Cola, 60, 40), Things(Milk, 60, 70)], 570.0) """ if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import itertools import string from collections.abc import Generator, Iterable def chunker(seq: Iterable[str], size: int) -> Generator[tuple[str, ...], None, None]: it = iter(seq) while True: chunk = tuple(itertools.islice(it, size)) if not chunk: return yield chunk def prepare_input(dirty: str) -> str: """ Prepare the plaintext by up-casing it and separating repeated letters with X's """ dirty = "".join([c.upper() for c in dirty if c in string.ascii_letters]) clean = "" if len(dirty) < 2: return dirty for i in range(len(dirty) - 1): clean += dirty[i] if dirty[i] == dirty[i + 1]: clean += "X" clean += dirty[-1] if len(clean) & 1: clean += "X" return clean def generate_table(key: str) -> list[str]: # I and J are used interchangeably to allow # us to use a 5x5 table (25 letters) alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ" # we're using a list instead of a '2d' array because it makes the math # for setting up the table and doing the actual encoding/decoding simpler table = [] # copy key chars into the table if they are in `alphabet` ignoring duplicates for char in key.upper(): if char not in table and char in alphabet: table.append(char) # fill the rest of the table in with the remaining alphabet chars for char in alphabet: if char not in table: table.append(char) return table def encode(plaintext: str, key: str) -> str: table = generate_table(key) plaintext = prepare_input(plaintext) ciphertext = "" # https://en.wikipedia.org/wiki/Playfair_cipher#Description for char1, char2 in chunker(plaintext, 2): row1, col1 = divmod(table.index(char1), 5) row2, col2 = divmod(table.index(char2), 5) if row1 == row2: ciphertext += table[row1 * 5 + (col1 + 1) % 5] ciphertext += table[row2 * 5 + (col2 + 1) % 5] elif col1 == col2: ciphertext += table[((row1 + 1) % 5) * 5 + col1] ciphertext += table[((row2 + 1) % 5) * 5 + col2] else: # rectangle ciphertext += table[row1 * 5 + col2] ciphertext += table[row2 * 5 + col1] return ciphertext def decode(ciphertext: str, key: str) -> str: table = generate_table(key) plaintext = "" # https://en.wikipedia.org/wiki/Playfair_cipher#Description for char1, char2 in chunker(ciphertext, 2): row1, col1 = divmod(table.index(char1), 5) row2, col2 = divmod(table.index(char2), 5) if row1 == row2: plaintext += table[row1 * 5 + (col1 - 1) % 5] plaintext += table[row2 * 5 + (col2 - 1) % 5] elif col1 == col2: plaintext += table[((row1 - 1) % 5) * 5 + col1] plaintext += table[((row2 - 1) % 5) * 5 + col2] else: # rectangle plaintext += table[row1 * 5 + col2] plaintext += table[row2 * 5 + col1] return plaintext
import itertools import string from collections.abc import Generator, Iterable def chunker(seq: Iterable[str], size: int) -> Generator[tuple[str, ...], None, None]: it = iter(seq) while True: chunk = tuple(itertools.islice(it, size)) if not chunk: return yield chunk def prepare_input(dirty: str) -> str: """ Prepare the plaintext by up-casing it and separating repeated letters with X's """ dirty = "".join([c.upper() for c in dirty if c in string.ascii_letters]) clean = "" if len(dirty) < 2: return dirty for i in range(len(dirty) - 1): clean += dirty[i] if dirty[i] == dirty[i + 1]: clean += "X" clean += dirty[-1] if len(clean) & 1: clean += "X" return clean def generate_table(key: str) -> list[str]: # I and J are used interchangeably to allow # us to use a 5x5 table (25 letters) alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ" # we're using a list instead of a '2d' array because it makes the math # for setting up the table and doing the actual encoding/decoding simpler table = [] # copy key chars into the table if they are in `alphabet` ignoring duplicates for char in key.upper(): if char not in table and char in alphabet: table.append(char) # fill the rest of the table in with the remaining alphabet chars for char in alphabet: if char not in table: table.append(char) return table def encode(plaintext: str, key: str) -> str: table = generate_table(key) plaintext = prepare_input(plaintext) ciphertext = "" # https://en.wikipedia.org/wiki/Playfair_cipher#Description for char1, char2 in chunker(plaintext, 2): row1, col1 = divmod(table.index(char1), 5) row2, col2 = divmod(table.index(char2), 5) if row1 == row2: ciphertext += table[row1 * 5 + (col1 + 1) % 5] ciphertext += table[row2 * 5 + (col2 + 1) % 5] elif col1 == col2: ciphertext += table[((row1 + 1) % 5) * 5 + col1] ciphertext += table[((row2 + 1) % 5) * 5 + col2] else: # rectangle ciphertext += table[row1 * 5 + col2] ciphertext += table[row2 * 5 + col1] return ciphertext def decode(ciphertext: str, key: str) -> str: table = generate_table(key) plaintext = "" # https://en.wikipedia.org/wiki/Playfair_cipher#Description for char1, char2 in chunker(ciphertext, 2): row1, col1 = divmod(table.index(char1), 5) row2, col2 = divmod(table.index(char2), 5) if row1 == row2: plaintext += table[row1 * 5 + (col1 - 1) % 5] plaintext += table[row2 * 5 + (col2 - 1) % 5] elif col1 == col2: plaintext += table[((row1 - 1) % 5) * 5 + col1] plaintext += table[((row2 - 1) % 5) * 5 + col2] else: # rectangle plaintext += table[row1 * 5 + col2] plaintext += table[row2 * 5 + col1] return plaintext
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Normal Distribution QuickSort QuickSort Algorithm where the pivot element is chosen randomly between first and last elements of the array, and the array elements are taken from Standard Normal Distribution. ## Array elements The array elements are taken from a Standard Normal Distribution, having mean = 0 and standard deviation = 1. ### The code ```python >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> p = 100 # 100 elements are to be sorted >>> mu, sigma = 0, 1 # mean and standard deviation >>> X = np.random.normal(mu, sigma, p) >>> np.save(outfile, X) >>> 'The array is' >>> X ``` ------ #### The distribution of the array elements ```python >>> mu, sigma = 0, 1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, p) >>> count, bins, ignored = plt.hist(s, 30, normed=True) >>> plt.plot(bins , 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r') >>> plt.show() ``` ------ ![normal distribution large](https://upload.wikimedia.org/wikipedia/commons/thumb/2/25/The_Normal_Distribution.svg/1280px-The_Normal_Distribution.svg.png) ------ ## Comparing the numbers of comparisons We can plot the function for Checking 'The Number of Comparisons' taking place between Normal Distribution QuickSort and Ordinary QuickSort: ```python >>> import matplotlib.pyplot as plt # Normal Distribution QuickSort is red >>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,6,15,43,136,340,800,2156,6821,16325],linewidth=2, color='r') # Ordinary QuickSort is green >>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,4,16,67,122,362,949,2131,5086,12866],linewidth=2, color='g') >>> plt.show() ```
# Normal Distribution QuickSort QuickSort Algorithm where the pivot element is chosen randomly between first and last elements of the array, and the array elements are taken from Standard Normal Distribution. ## Array elements The array elements are taken from a Standard Normal Distribution, having mean = 0 and standard deviation = 1. ### The code ```python >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> p = 100 # 100 elements are to be sorted >>> mu, sigma = 0, 1 # mean and standard deviation >>> X = np.random.normal(mu, sigma, p) >>> np.save(outfile, X) >>> 'The array is' >>> X ``` ------ #### The distribution of the array elements ```python >>> mu, sigma = 0, 1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, p) >>> count, bins, ignored = plt.hist(s, 30, normed=True) >>> plt.plot(bins , 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r') >>> plt.show() ``` ------ ![normal distribution large](https://upload.wikimedia.org/wikipedia/commons/thumb/2/25/The_Normal_Distribution.svg/1280px-The_Normal_Distribution.svg.png) ------ ## Comparing the numbers of comparisons We can plot the function for Checking 'The Number of Comparisons' taking place between Normal Distribution QuickSort and Ordinary QuickSort: ```python >>> import matplotlib.pyplot as plt # Normal Distribution QuickSort is red >>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,6,15,43,136,340,800,2156,6821,16325],linewidth=2, color='r') # Ordinary QuickSort is green >>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,4,16,67,122,362,949,2131,5086,12866],linewidth=2, color='g') >>> plt.show() ```
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def elf_hash(data: str) -> int: """ Implementation of ElfHash Algorithm, a variant of PJW hash function. >>> elf_hash('lorem ipsum') 253956621 """ hash_ = x = 0 for letter in data: hash_ = (hash_ << 4) + ord(letter) x = hash_ & 0xF0000000 if x != 0: hash_ ^= x >> 24 hash_ &= ~x return hash_ if __name__ == "__main__": import doctest doctest.testmod()
def elf_hash(data: str) -> int: """ Implementation of ElfHash Algorithm, a variant of PJW hash function. >>> elf_hash('lorem ipsum') 253956621 """ hash_ = x = 0 for letter in data: hash_ = (hash_ << 4) + ord(letter) x = hash_ & 0xF0000000 if x != 0: hash_ ^= x >> 24 hash_ &= ~x return hash_ if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" https://en.wikipedia.org/wiki/Lucas_number """ def recursive_lucas_number(n_th_number: int) -> int: """ Returns the nth lucas number >>> recursive_lucas_number(1) 1 >>> recursive_lucas_number(20) 15127 >>> recursive_lucas_number(0) 2 >>> recursive_lucas_number(25) 167761 >>> recursive_lucas_number(-1.5) Traceback (most recent call last): ... TypeError: recursive_lucas_number accepts only integer arguments. """ if not isinstance(n_th_number, int): raise TypeError("recursive_lucas_number accepts only integer arguments.") if n_th_number == 0: return 2 if n_th_number == 1: return 1 return recursive_lucas_number(n_th_number - 1) + recursive_lucas_number( n_th_number - 2 ) def dynamic_lucas_number(n_th_number: int) -> int: """ Returns the nth lucas number >>> dynamic_lucas_number(1) 1 >>> dynamic_lucas_number(20) 15127 >>> dynamic_lucas_number(0) 2 >>> dynamic_lucas_number(25) 167761 >>> dynamic_lucas_number(-1.5) Traceback (most recent call last): ... TypeError: dynamic_lucas_number accepts only integer arguments. """ if not isinstance(n_th_number, int): raise TypeError("dynamic_lucas_number accepts only integer arguments.") a, b = 2, 1 for _ in range(n_th_number): a, b = b, a + b return a if __name__ == "__main__": from doctest import testmod testmod() n = int(input("Enter the number of terms in lucas series:\n").strip()) print("Using recursive function to calculate lucas series:") print(" ".join(str(recursive_lucas_number(i)) for i in range(n))) print("\nUsing dynamic function to calculate lucas series:") print(" ".join(str(dynamic_lucas_number(i)) for i in range(n)))
""" https://en.wikipedia.org/wiki/Lucas_number """ def recursive_lucas_number(n_th_number: int) -> int: """ Returns the nth lucas number >>> recursive_lucas_number(1) 1 >>> recursive_lucas_number(20) 15127 >>> recursive_lucas_number(0) 2 >>> recursive_lucas_number(25) 167761 >>> recursive_lucas_number(-1.5) Traceback (most recent call last): ... TypeError: recursive_lucas_number accepts only integer arguments. """ if not isinstance(n_th_number, int): raise TypeError("recursive_lucas_number accepts only integer arguments.") if n_th_number == 0: return 2 if n_th_number == 1: return 1 return recursive_lucas_number(n_th_number - 1) + recursive_lucas_number( n_th_number - 2 ) def dynamic_lucas_number(n_th_number: int) -> int: """ Returns the nth lucas number >>> dynamic_lucas_number(1) 1 >>> dynamic_lucas_number(20) 15127 >>> dynamic_lucas_number(0) 2 >>> dynamic_lucas_number(25) 167761 >>> dynamic_lucas_number(-1.5) Traceback (most recent call last): ... TypeError: dynamic_lucas_number accepts only integer arguments. """ if not isinstance(n_th_number, int): raise TypeError("dynamic_lucas_number accepts only integer arguments.") a, b = 2, 1 for _ in range(n_th_number): a, b = b, a + b return a if __name__ == "__main__": from doctest import testmod testmod() n = int(input("Enter the number of terms in lucas series:\n").strip()) print("Using recursive function to calculate lucas series:") print(" ".join(str(recursive_lucas_number(i)) for i in range(n))) print("\nUsing dynamic function to calculate lucas series:") print(" ".join(str(dynamic_lucas_number(i)) for i in range(n)))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
### Interest * Compound Interest: "Compound interest is calculated by multiplying the initial principal amount by one plus the annual interest rate raised to the number of compound periods minus one." [Compound Interest](https://www.investopedia.com/) * Simple Interest: "Simple interest paid or received over a certain period is a fixed percentage of the principal amount that was borrowed or lent. " [Simple Interest](https://www.investopedia.com/)
### Interest * Compound Interest: "Compound interest is calculated by multiplying the initial principal amount by one plus the annual interest rate raised to the number of compound periods minus one." [Compound Interest](https://www.investopedia.com/) * Simple Interest: "Simple interest paid or received over a certain period is a fixed percentage of the principal amount that was borrowed or lent. " [Simple Interest](https://www.investopedia.com/)
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Minimax helps to achieve maximum score in a game by checking all possible moves depth is current depth in game tree. nodeIndex is index of current node in scores[]. if move is of maximizer return true else false leaves of game tree is stored in scores[] height is maximum height of Game tree """ from __future__ import annotations import math def minimax( depth: int, node_index: int, is_max: bool, scores: list[int], height: float ) -> int: """ >>> import math >>> scores = [90, 23, 6, 33, 21, 65, 123, 34423] >>> height = math.log(len(scores), 2) >>> minimax(0, 0, True, scores, height) 65 >>> minimax(-1, 0, True, scores, height) Traceback (most recent call last): ... ValueError: Depth cannot be less than 0 >>> minimax(0, 0, True, [], 2) Traceback (most recent call last): ... ValueError: Scores cannot be empty >>> scores = [3, 5, 2, 9, 12, 5, 23, 23] >>> height = math.log(len(scores), 2) >>> minimax(0, 0, True, scores, height) 12 """ if depth < 0: raise ValueError("Depth cannot be less than 0") if len(scores) == 0: raise ValueError("Scores cannot be empty") if depth == height: return scores[node_index] if is_max: return max( minimax(depth + 1, node_index * 2, False, scores, height), minimax(depth + 1, node_index * 2 + 1, False, scores, height), ) return min( minimax(depth + 1, node_index * 2, True, scores, height), minimax(depth + 1, node_index * 2 + 1, True, scores, height), ) def main() -> None: scores = [90, 23, 6, 33, 21, 65, 123, 34423] height = math.log(len(scores), 2) print("Optimal value : ", end="") print(minimax(0, 0, True, scores, height)) if __name__ == "__main__": import doctest doctest.testmod() main()
""" Minimax helps to achieve maximum score in a game by checking all possible moves depth is current depth in game tree. nodeIndex is index of current node in scores[]. if move is of maximizer return true else false leaves of game tree is stored in scores[] height is maximum height of Game tree """ from __future__ import annotations import math def minimax( depth: int, node_index: int, is_max: bool, scores: list[int], height: float ) -> int: """ >>> import math >>> scores = [90, 23, 6, 33, 21, 65, 123, 34423] >>> height = math.log(len(scores), 2) >>> minimax(0, 0, True, scores, height) 65 >>> minimax(-1, 0, True, scores, height) Traceback (most recent call last): ... ValueError: Depth cannot be less than 0 >>> minimax(0, 0, True, [], 2) Traceback (most recent call last): ... ValueError: Scores cannot be empty >>> scores = [3, 5, 2, 9, 12, 5, 23, 23] >>> height = math.log(len(scores), 2) >>> minimax(0, 0, True, scores, height) 12 """ if depth < 0: raise ValueError("Depth cannot be less than 0") if len(scores) == 0: raise ValueError("Scores cannot be empty") if depth == height: return scores[node_index] if is_max: return max( minimax(depth + 1, node_index * 2, False, scores, height), minimax(depth + 1, node_index * 2 + 1, False, scores, height), ) return min( minimax(depth + 1, node_index * 2, True, scores, height), minimax(depth + 1, node_index * 2 + 1, True, scores, height), ) def main() -> None: scores = [90, 23, 6, 33, 21, 65, 123, 34423] height = math.log(len(scores), 2) print("Optimal value : ", end="") print(minimax(0, 0, True, scores, height)) if __name__ == "__main__": import doctest doctest.testmod() main()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" A pure Python implementation of the quick sort algorithm For doctests run following command: python3 -m doctest -v quick_sort.py For manual testing run: python3 quick_sort.py """ from __future__ import annotations from random import randrange def quick_sort(collection: list) -> list: """A pure Python implementation of quick sort algorithm :param collection: a mutable collection of comparable items :return: the same collection ordered by ascending Examples: >>> quick_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> quick_sort([]) [] >>> quick_sort([-2, 5, 0, -45]) [-45, -2, 0, 5] """ if len(collection) < 2: return collection pivot_index = randrange(len(collection)) # Use random element as pivot pivot = collection[pivot_index] greater: list[int] = [] # All elements greater than pivot lesser: list[int] = [] # All elements less than or equal to pivot for element in collection[:pivot_index]: (greater if element > pivot else lesser).append(element) for element in collection[pivot_index + 1 :]: (greater if element > pivot else lesser).append(element) return [*quick_sort(lesser), pivot, *quick_sort(greater)] if __name__ == "__main__": user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(quick_sort(unsorted))
""" A pure Python implementation of the quick sort algorithm For doctests run following command: python3 -m doctest -v quick_sort.py For manual testing run: python3 quick_sort.py """ from __future__ import annotations from random import randrange def quick_sort(collection: list) -> list: """A pure Python implementation of quick sort algorithm :param collection: a mutable collection of comparable items :return: the same collection ordered by ascending Examples: >>> quick_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> quick_sort([]) [] >>> quick_sort([-2, 5, 0, -45]) [-45, -2, 0, 5] """ if len(collection) < 2: return collection pivot_index = randrange(len(collection)) # Use random element as pivot pivot = collection[pivot_index] greater: list[int] = [] # All elements greater than pivot lesser: list[int] = [] # All elements less than or equal to pivot for element in collection[:pivot_index]: (greater if element > pivot else lesser).append(element) for element in collection[pivot_index + 1 :]: (greater if element > pivot else lesser).append(element) return [*quick_sort(lesser), pivot, *quick_sort(greater)] if __name__ == "__main__": user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(quick_sort(unsorted))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 6: https://projecteuler.net/problem=6 Sum square difference The sum of the squares of the first ten natural numbers is, 1^2 + 2^2 + ... + 10^2 = 385 The square of the sum of the first ten natural numbers is, (1 + 2 + ... + 10)^2 = 55^2 = 3025 Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 - 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. """ def solution(n: int = 100) -> int: """ Returns the difference between the sum of the squares of the first n natural numbers and the square of the sum. >>> solution(10) 2640 >>> solution(15) 13160 >>> solution(20) 41230 >>> solution(50) 1582700 """ sum_of_squares = n * (n + 1) * (2 * n + 1) / 6 square_of_sum = (n * (n + 1) / 2) ** 2 return int(square_of_sum - sum_of_squares) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 6: https://projecteuler.net/problem=6 Sum square difference The sum of the squares of the first ten natural numbers is, 1^2 + 2^2 + ... + 10^2 = 385 The square of the sum of the first ten natural numbers is, (1 + 2 + ... + 10)^2 = 55^2 = 3025 Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 - 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. """ def solution(n: int = 100) -> int: """ Returns the difference between the sum of the squares of the first n natural numbers and the square of the sum. >>> solution(10) 2640 >>> solution(15) 13160 >>> solution(20) 41230 >>> solution(50) 1582700 """ sum_of_squares = n * (n + 1) * (2 * n + 1) / 6 square_of_sum = (n * (n + 1) / 2) ** 2 return int(square_of_sum - sum_of_squares) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import numpy as np def runge_kutta(f, y0, x0, h, x_end): """ Calculate the numeric solution at each step to the ODE f(x, y) using RK4 https://en.wikipedia.org/wiki/Runge-Kutta_methods Arguments: f -- The ode as a function of x and y y0 -- the initial value for y x0 -- the initial value for x h -- the stepsize x_end -- the end value for x >>> # the exact solution is math.exp(x) >>> def f(x, y): ... return y >>> y0 = 1 >>> y = runge_kutta(f, y0, 0.0, 0.01, 5) >>> y[-1] 148.41315904125113 """ n = int(np.ceil((x_end - x0) / h)) y = np.zeros((n + 1,)) y[0] = y0 x = x0 for k in range(n): k1 = f(x, y[k]) k2 = f(x + 0.5 * h, y[k] + 0.5 * h * k1) k3 = f(x + 0.5 * h, y[k] + 0.5 * h * k2) k4 = f(x + h, y[k] + h * k3) y[k + 1] = y[k] + (1 / 6) * h * (k1 + 2 * k2 + 2 * k3 + k4) x += h return y if __name__ == "__main__": import doctest doctest.testmod()
import numpy as np def runge_kutta(f, y0, x0, h, x_end): """ Calculate the numeric solution at each step to the ODE f(x, y) using RK4 https://en.wikipedia.org/wiki/Runge-Kutta_methods Arguments: f -- The ode as a function of x and y y0 -- the initial value for y x0 -- the initial value for x h -- the stepsize x_end -- the end value for x >>> # the exact solution is math.exp(x) >>> def f(x, y): ... return y >>> y0 = 1 >>> y = runge_kutta(f, y0, 0.0, 0.01, 5) >>> y[-1] 148.41315904125113 """ n = int(np.ceil((x_end - x0) / h)) y = np.zeros((n + 1,)) y[0] = y0 x = x0 for k in range(n): k1 = f(x, y[k]) k2 = f(x + 0.5 * h, y[k] + 0.5 * h * k1) k3 = f(x + 0.5 * h, y[k] + 0.5 * h * k2) k4 = f(x + h, y[k] + h * k3) y[k + 1] = y[k] + (1 / 6) * h * (k1 + 2 * k2 + 2 * k3 + k4) x += h return y if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
"""Breath First Search (BFS) can be used when finding the shortest path from a given source node to a target node in an unweighted graph. """ from __future__ import annotations graph = { "A": ["B", "C", "E"], "B": ["A", "D", "E"], "C": ["A", "F", "G"], "D": ["B"], "E": ["A", "B", "D"], "F": ["C"], "G": ["C"], } class Graph: def __init__(self, graph: dict[str, list[str]], source_vertex: str) -> None: """ Graph is implemented as dictionary of adjacency lists. Also, Source vertex have to be defined upon initialization. """ self.graph = graph # mapping node to its parent in resulting breadth first tree self.parent: dict[str, str | None] = {} self.source_vertex = source_vertex def breath_first_search(self) -> None: """ This function is a helper for running breath first search on this graph. >>> g = Graph(graph, "G") >>> g.breath_first_search() >>> g.parent {'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'} """ visited = {self.source_vertex} self.parent[self.source_vertex] = None queue = [self.source_vertex] # first in first out queue while queue: vertex = queue.pop(0) for adjacent_vertex in self.graph[vertex]: if adjacent_vertex not in visited: visited.add(adjacent_vertex) self.parent[adjacent_vertex] = vertex queue.append(adjacent_vertex) def shortest_path(self, target_vertex: str) -> str: """ This shortest path function returns a string, describing the result: 1.) No path is found. The string is a human readable message to indicate this. 2.) The shortest path is found. The string is in the form `v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target vertex, if it exists separately. >>> g = Graph(graph, "G") >>> g.breath_first_search() Case 1 - No path is found. >>> g.shortest_path("Foo") Traceback (most recent call last): ... ValueError: No path from vertex: G to vertex: Foo Case 2 - The path is found. >>> g.shortest_path("D") 'G->C->A->B->D' >>> g.shortest_path("G") 'G' """ if target_vertex == self.source_vertex: return self.source_vertex target_vertex_parent = self.parent.get(target_vertex) if target_vertex_parent is None: msg = ( f"No path from vertex: {self.source_vertex} to vertex: {target_vertex}" ) raise ValueError(msg) return self.shortest_path(target_vertex_parent) + f"->{target_vertex}" if __name__ == "__main__": g = Graph(graph, "G") g.breath_first_search() print(g.shortest_path("D")) print(g.shortest_path("G")) print(g.shortest_path("Foo"))
"""Breath First Search (BFS) can be used when finding the shortest path from a given source node to a target node in an unweighted graph. """ from __future__ import annotations graph = { "A": ["B", "C", "E"], "B": ["A", "D", "E"], "C": ["A", "F", "G"], "D": ["B"], "E": ["A", "B", "D"], "F": ["C"], "G": ["C"], } class Graph: def __init__(self, graph: dict[str, list[str]], source_vertex: str) -> None: """ Graph is implemented as dictionary of adjacency lists. Also, Source vertex have to be defined upon initialization. """ self.graph = graph # mapping node to its parent in resulting breadth first tree self.parent: dict[str, str | None] = {} self.source_vertex = source_vertex def breath_first_search(self) -> None: """ This function is a helper for running breath first search on this graph. >>> g = Graph(graph, "G") >>> g.breath_first_search() >>> g.parent {'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'} """ visited = {self.source_vertex} self.parent[self.source_vertex] = None queue = [self.source_vertex] # first in first out queue while queue: vertex = queue.pop(0) for adjacent_vertex in self.graph[vertex]: if adjacent_vertex not in visited: visited.add(adjacent_vertex) self.parent[adjacent_vertex] = vertex queue.append(adjacent_vertex) def shortest_path(self, target_vertex: str) -> str: """ This shortest path function returns a string, describing the result: 1.) No path is found. The string is a human readable message to indicate this. 2.) The shortest path is found. The string is in the form `v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target vertex, if it exists separately. >>> g = Graph(graph, "G") >>> g.breath_first_search() Case 1 - No path is found. >>> g.shortest_path("Foo") Traceback (most recent call last): ... ValueError: No path from vertex: G to vertex: Foo Case 2 - The path is found. >>> g.shortest_path("D") 'G->C->A->B->D' >>> g.shortest_path("G") 'G' """ if target_vertex == self.source_vertex: return self.source_vertex target_vertex_parent = self.parent.get(target_vertex) if target_vertex_parent is None: msg = ( f"No path from vertex: {self.source_vertex} to vertex: {target_vertex}" ) raise ValueError(msg) return self.shortest_path(target_vertex_parent) + f"->{target_vertex}" if __name__ == "__main__": g = Graph(graph, "G") g.breath_first_search() print(g.shortest_path("D")) print(g.shortest_path("G")) print(g.shortest_path("Foo"))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform The Burrows–Wheeler transform (BWT, also called block-sorting compression) rearranges a character string into runs of similar characters. This is useful for compression, since it tends to be easy to compress a string that has runs of repeated characters by techniques such as move-to-front transform and run-length encoding. More importantly, the transformation is reversible, without needing to store any additional data except the position of the first original character. The BWT is thus a "free" method of improving the efficiency of text compression algorithms, costing only some extra computation. """ from __future__ import annotations from typing import TypedDict class BWTTransformDict(TypedDict): bwt_string: str idx_original_string: int def all_rotations(s: str) -> list[str]: """ :param s: The string that will be rotated len(s) times. :return: A list with the rotations. :raises TypeError: If s is not an instance of str. Examples: >>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE ['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA', 'A|^BANAN', '|^BANANA'] >>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE ['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a', 'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d', '_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca', 'aa_asa_da_cas'] >>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE ['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan', 'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab', 'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan'] >>> all_rotations(5) Traceback (most recent call last): ... TypeError: The parameter s type must be str. """ if not isinstance(s, str): raise TypeError("The parameter s type must be str.") return [s[i:] + s[:i] for i in range(len(s))] def bwt_transform(s: str) -> BWTTransformDict: """ :param s: The string that will be used at bwt algorithm :return: the string composed of the last char of each row of the ordered rotations and the index of the original string at ordered rotations list :raises TypeError: If the s parameter type is not str :raises ValueError: If the s parameter is empty Examples: >>> bwt_transform("^BANANA") {'bwt_string': 'BNN^AAA', 'idx_original_string': 6} >>> bwt_transform("a_asa_da_casa") {'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3} >>> bwt_transform("panamabanana") {'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11} >>> bwt_transform(4) Traceback (most recent call last): ... TypeError: The parameter s type must be str. >>> bwt_transform('') Traceback (most recent call last): ... ValueError: The parameter s must not be empty. """ if not isinstance(s, str): raise TypeError("The parameter s type must be str.") if not s: raise ValueError("The parameter s must not be empty.") rotations = all_rotations(s) rotations.sort() # sort the list of rotations in alphabetically order # make a string composed of the last char of each rotation response: BWTTransformDict = { "bwt_string": "".join([word[-1] for word in rotations]), "idx_original_string": rotations.index(s), } return response def reverse_bwt(bwt_string: str, idx_original_string: int) -> str: """ :param bwt_string: The string returned from bwt algorithm execution :param idx_original_string: A 0-based index of the string that was used to generate bwt_string at ordered rotations list :return: The string used to generate bwt_string when bwt was executed :raises TypeError: If the bwt_string parameter type is not str :raises ValueError: If the bwt_string parameter is empty :raises TypeError: If the idx_original_string type is not int or if not possible to cast it to int :raises ValueError: If the idx_original_string value is lower than 0 or greater than len(bwt_string) - 1 >>> reverse_bwt("BNN^AAA", 6) '^BANANA' >>> reverse_bwt("aaaadss_c__aa", 3) 'a_asa_da_casa' >>> reverse_bwt("mnpbnnaaaaaa", 11) 'panamabanana' >>> reverse_bwt(4, 11) Traceback (most recent call last): ... TypeError: The parameter bwt_string type must be str. >>> reverse_bwt("", 11) Traceback (most recent call last): ... ValueError: The parameter bwt_string must not be empty. >>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... TypeError: The parameter idx_original_string type must be int or passive of cast to int. >>> reverse_bwt("mnpbnnaaaaaa", -1) Traceback (most recent call last): ... ValueError: The parameter idx_original_string must not be lower than 0. >>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... ValueError: The parameter idx_original_string must be lower than len(bwt_string). >>> reverse_bwt("mnpbnnaaaaaa", 11.0) 'panamabanana' >>> reverse_bwt("mnpbnnaaaaaa", 11.4) 'panamabanana' """ if not isinstance(bwt_string, str): raise TypeError("The parameter bwt_string type must be str.") if not bwt_string: raise ValueError("The parameter bwt_string must not be empty.") try: idx_original_string = int(idx_original_string) except ValueError: raise TypeError( "The parameter idx_original_string type must be int or passive" " of cast to int." ) if idx_original_string < 0: raise ValueError("The parameter idx_original_string must not be lower than 0.") if idx_original_string >= len(bwt_string): raise ValueError( "The parameter idx_original_string must be lower than len(bwt_string)." ) ordered_rotations = [""] * len(bwt_string) for _ in range(len(bwt_string)): for i in range(len(bwt_string)): ordered_rotations[i] = bwt_string[i] + ordered_rotations[i] ordered_rotations.sort() return ordered_rotations[idx_original_string] if __name__ == "__main__": entry_msg = "Provide a string that I will generate its BWT transform: " s = input(entry_msg).strip() result = bwt_transform(s) print( f"Burrows Wheeler transform for string '{s}' results " f"in '{result['bwt_string']}'" ) original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"]) print( f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' " f"we get original string '{original_string}'" )
""" https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform The Burrows–Wheeler transform (BWT, also called block-sorting compression) rearranges a character string into runs of similar characters. This is useful for compression, since it tends to be easy to compress a string that has runs of repeated characters by techniques such as move-to-front transform and run-length encoding. More importantly, the transformation is reversible, without needing to store any additional data except the position of the first original character. The BWT is thus a "free" method of improving the efficiency of text compression algorithms, costing only some extra computation. """ from __future__ import annotations from typing import TypedDict class BWTTransformDict(TypedDict): bwt_string: str idx_original_string: int def all_rotations(s: str) -> list[str]: """ :param s: The string that will be rotated len(s) times. :return: A list with the rotations. :raises TypeError: If s is not an instance of str. Examples: >>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE ['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA', 'A|^BANAN', '|^BANANA'] >>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE ['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a', 'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d', '_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca', 'aa_asa_da_cas'] >>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE ['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan', 'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab', 'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan'] >>> all_rotations(5) Traceback (most recent call last): ... TypeError: The parameter s type must be str. """ if not isinstance(s, str): raise TypeError("The parameter s type must be str.") return [s[i:] + s[:i] for i in range(len(s))] def bwt_transform(s: str) -> BWTTransformDict: """ :param s: The string that will be used at bwt algorithm :return: the string composed of the last char of each row of the ordered rotations and the index of the original string at ordered rotations list :raises TypeError: If the s parameter type is not str :raises ValueError: If the s parameter is empty Examples: >>> bwt_transform("^BANANA") {'bwt_string': 'BNN^AAA', 'idx_original_string': 6} >>> bwt_transform("a_asa_da_casa") {'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3} >>> bwt_transform("panamabanana") {'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11} >>> bwt_transform(4) Traceback (most recent call last): ... TypeError: The parameter s type must be str. >>> bwt_transform('') Traceback (most recent call last): ... ValueError: The parameter s must not be empty. """ if not isinstance(s, str): raise TypeError("The parameter s type must be str.") if not s: raise ValueError("The parameter s must not be empty.") rotations = all_rotations(s) rotations.sort() # sort the list of rotations in alphabetically order # make a string composed of the last char of each rotation response: BWTTransformDict = { "bwt_string": "".join([word[-1] for word in rotations]), "idx_original_string": rotations.index(s), } return response def reverse_bwt(bwt_string: str, idx_original_string: int) -> str: """ :param bwt_string: The string returned from bwt algorithm execution :param idx_original_string: A 0-based index of the string that was used to generate bwt_string at ordered rotations list :return: The string used to generate bwt_string when bwt was executed :raises TypeError: If the bwt_string parameter type is not str :raises ValueError: If the bwt_string parameter is empty :raises TypeError: If the idx_original_string type is not int or if not possible to cast it to int :raises ValueError: If the idx_original_string value is lower than 0 or greater than len(bwt_string) - 1 >>> reverse_bwt("BNN^AAA", 6) '^BANANA' >>> reverse_bwt("aaaadss_c__aa", 3) 'a_asa_da_casa' >>> reverse_bwt("mnpbnnaaaaaa", 11) 'panamabanana' >>> reverse_bwt(4, 11) Traceback (most recent call last): ... TypeError: The parameter bwt_string type must be str. >>> reverse_bwt("", 11) Traceback (most recent call last): ... ValueError: The parameter bwt_string must not be empty. >>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... TypeError: The parameter idx_original_string type must be int or passive of cast to int. >>> reverse_bwt("mnpbnnaaaaaa", -1) Traceback (most recent call last): ... ValueError: The parameter idx_original_string must not be lower than 0. >>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... ValueError: The parameter idx_original_string must be lower than len(bwt_string). >>> reverse_bwt("mnpbnnaaaaaa", 11.0) 'panamabanana' >>> reverse_bwt("mnpbnnaaaaaa", 11.4) 'panamabanana' """ if not isinstance(bwt_string, str): raise TypeError("The parameter bwt_string type must be str.") if not bwt_string: raise ValueError("The parameter bwt_string must not be empty.") try: idx_original_string = int(idx_original_string) except ValueError: raise TypeError( "The parameter idx_original_string type must be int or passive" " of cast to int." ) if idx_original_string < 0: raise ValueError("The parameter idx_original_string must not be lower than 0.") if idx_original_string >= len(bwt_string): raise ValueError( "The parameter idx_original_string must be lower than len(bwt_string)." ) ordered_rotations = [""] * len(bwt_string) for _ in range(len(bwt_string)): for i in range(len(bwt_string)): ordered_rotations[i] = bwt_string[i] + ordered_rotations[i] ordered_rotations.sort() return ordered_rotations[idx_original_string] if __name__ == "__main__": entry_msg = "Provide a string that I will generate its BWT transform: " s = input(entry_msg).strip() result = bwt_transform(s) print( f"Burrows Wheeler transform for string '{s}' results " f"in '{result['bwt_string']}'" ) original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"]) print( f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' " f"we get original string '{original_string}'" )
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Checks if a system of forces is in static equilibrium. """ from __future__ import annotations from numpy import array, cos, cross, float64, radians, sin from numpy.typing import NDArray def polar_force( magnitude: float, angle: float, radian_mode: bool = False ) -> list[float]: """ Resolves force along rectangular components. (force, angle) => (force_x, force_y) >>> import math >>> force = polar_force(10, 45) >>> math.isclose(force[0], 7.071067811865477) True >>> math.isclose(force[1], 7.0710678118654755) True >>> force = polar_force(10, 3.14, radian_mode=True) >>> math.isclose(force[0], -9.999987317275396) True >>> math.isclose(force[1], 0.01592652916486828) True """ if radian_mode: return [magnitude * cos(angle), magnitude * sin(angle)] return [magnitude * cos(radians(angle)), magnitude * sin(radians(angle))] def in_static_equilibrium( forces: NDArray[float64], location: NDArray[float64], eps: float = 10**-1 ) -> bool: """ Check if a system is in equilibrium. It takes two numpy.array objects. forces ==> [ [force1_x, force1_y], [force2_x, force2_y], ....] location ==> [ [x1, y1], [x2, y2], ....] >>> force = array([[1, 1], [-1, 2]]) >>> location = array([[1, 0], [10, 0]]) >>> in_static_equilibrium(force, location) False """ # summation of moments is zero moments: NDArray[float64] = cross(location, forces) sum_moments: float = sum(moments) return abs(sum_moments) < eps if __name__ == "__main__": # Test to check if it works forces = array( [ polar_force(718.4, 180 - 30), polar_force(879.54, 45), polar_force(100, -90), ] ) location: NDArray[float64] = array([[0, 0], [0, 0], [0, 0]]) assert in_static_equilibrium(forces, location) # Problem 1 in image_data/2D_problems.jpg forces = array( [ polar_force(30 * 9.81, 15), polar_force(215, 180 - 45), polar_force(264, 90 - 30), ] ) location = array([[0, 0], [0, 0], [0, 0]]) assert in_static_equilibrium(forces, location) # Problem in image_data/2D_problems_1.jpg forces = array([[0, -2000], [0, -1200], [0, 15600], [0, -12400]]) location = array([[0, 0], [6, 0], [10, 0], [12, 0]]) assert in_static_equilibrium(forces, location) import doctest doctest.testmod()
""" Checks if a system of forces is in static equilibrium. """ from __future__ import annotations from numpy import array, cos, cross, float64, radians, sin from numpy.typing import NDArray def polar_force( magnitude: float, angle: float, radian_mode: bool = False ) -> list[float]: """ Resolves force along rectangular components. (force, angle) => (force_x, force_y) >>> import math >>> force = polar_force(10, 45) >>> math.isclose(force[0], 7.071067811865477) True >>> math.isclose(force[1], 7.0710678118654755) True >>> force = polar_force(10, 3.14, radian_mode=True) >>> math.isclose(force[0], -9.999987317275396) True >>> math.isclose(force[1], 0.01592652916486828) True """ if radian_mode: return [magnitude * cos(angle), magnitude * sin(angle)] return [magnitude * cos(radians(angle)), magnitude * sin(radians(angle))] def in_static_equilibrium( forces: NDArray[float64], location: NDArray[float64], eps: float = 10**-1 ) -> bool: """ Check if a system is in equilibrium. It takes two numpy.array objects. forces ==> [ [force1_x, force1_y], [force2_x, force2_y], ....] location ==> [ [x1, y1], [x2, y2], ....] >>> force = array([[1, 1], [-1, 2]]) >>> location = array([[1, 0], [10, 0]]) >>> in_static_equilibrium(force, location) False """ # summation of moments is zero moments: NDArray[float64] = cross(location, forces) sum_moments: float = sum(moments) return abs(sum_moments) < eps if __name__ == "__main__": # Test to check if it works forces = array( [ polar_force(718.4, 180 - 30), polar_force(879.54, 45), polar_force(100, -90), ] ) location: NDArray[float64] = array([[0, 0], [0, 0], [0, 0]]) assert in_static_equilibrium(forces, location) # Problem 1 in image_data/2D_problems.jpg forces = array( [ polar_force(30 * 9.81, 15), polar_force(215, 180 - 45), polar_force(264, 90 - 30), ] ) location = array([[0, 0], [0, 0], [0, 0]]) assert in_static_equilibrium(forces, location) # Problem in image_data/2D_problems_1.jpg forces = array([[0, -2000], [0, -1200], [0, 15600], [0, -12400]]) location = array([[0, 0], [6, 0], [10, 0], [12, 0]]) assert in_static_equilibrium(forces, location) import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# A complete working Python program to demonstrate all # stack operations using a doubly linked list from __future__ import annotations from typing import Generic, TypeVar T = TypeVar("T") class Node(Generic[T]): def __init__(self, data: T): self.data = data # Assign data self.next: Node[T] | None = None # Initialize next as null self.prev: Node[T] | None = None # Initialize prev as null class Stack(Generic[T]): """ >>> stack = Stack() >>> stack.is_empty() True >>> stack.print_stack() stack elements are: >>> for i in range(4): ... stack.push(i) ... >>> stack.is_empty() False >>> stack.print_stack() stack elements are: 3->2->1->0-> >>> stack.top() 3 >>> len(stack) 4 >>> stack.pop() 3 >>> stack.print_stack() stack elements are: 2->1->0-> """ def __init__(self) -> None: self.head: Node[T] | None = None def push(self, data: T) -> None: """add a Node to the stack""" if self.head is None: self.head = Node(data) else: new_node = Node(data) self.head.prev = new_node new_node.next = self.head new_node.prev = None self.head = new_node def pop(self) -> T | None: """pop the top element off the stack""" if self.head is None: return None else: assert self.head is not None temp = self.head.data self.head = self.head.next if self.head is not None: self.head.prev = None return temp def top(self) -> T | None: """return the top element of the stack""" return self.head.data if self.head is not None else None def __len__(self) -> int: temp = self.head count = 0 while temp is not None: count += 1 temp = temp.next return count def is_empty(self) -> bool: return self.head is None def print_stack(self) -> None: print("stack elements are:") temp = self.head while temp is not None: print(temp.data, end="->") temp = temp.next # Code execution starts here if __name__ == "__main__": # Start with the empty stack stack: Stack[int] = Stack() # Insert 4 at the beginning. So stack becomes 4->None print("Stack operations using Doubly LinkedList") stack.push(4) # Insert 5 at the beginning. So stack becomes 4->5->None stack.push(5) # Insert 6 at the beginning. So stack becomes 4->5->6->None stack.push(6) # Insert 7 at the beginning. So stack becomes 4->5->6->7->None stack.push(7) # Print the stack stack.print_stack() # Print the top element print("\nTop element is ", stack.top()) # Print the stack size print("Size of the stack is ", len(stack)) # pop the top element stack.pop() # pop the top element stack.pop() # two elements have now been popped off stack.print_stack() # Print True if the stack is empty else False print("\nstack is empty:", stack.is_empty())
# A complete working Python program to demonstrate all # stack operations using a doubly linked list from __future__ import annotations from typing import Generic, TypeVar T = TypeVar("T") class Node(Generic[T]): def __init__(self, data: T): self.data = data # Assign data self.next: Node[T] | None = None # Initialize next as null self.prev: Node[T] | None = None # Initialize prev as null class Stack(Generic[T]): """ >>> stack = Stack() >>> stack.is_empty() True >>> stack.print_stack() stack elements are: >>> for i in range(4): ... stack.push(i) ... >>> stack.is_empty() False >>> stack.print_stack() stack elements are: 3->2->1->0-> >>> stack.top() 3 >>> len(stack) 4 >>> stack.pop() 3 >>> stack.print_stack() stack elements are: 2->1->0-> """ def __init__(self) -> None: self.head: Node[T] | None = None def push(self, data: T) -> None: """add a Node to the stack""" if self.head is None: self.head = Node(data) else: new_node = Node(data) self.head.prev = new_node new_node.next = self.head new_node.prev = None self.head = new_node def pop(self) -> T | None: """pop the top element off the stack""" if self.head is None: return None else: assert self.head is not None temp = self.head.data self.head = self.head.next if self.head is not None: self.head.prev = None return temp def top(self) -> T | None: """return the top element of the stack""" return self.head.data if self.head is not None else None def __len__(self) -> int: temp = self.head count = 0 while temp is not None: count += 1 temp = temp.next return count def is_empty(self) -> bool: return self.head is None def print_stack(self) -> None: print("stack elements are:") temp = self.head while temp is not None: print(temp.data, end="->") temp = temp.next # Code execution starts here if __name__ == "__main__": # Start with the empty stack stack: Stack[int] = Stack() # Insert 4 at the beginning. So stack becomes 4->None print("Stack operations using Doubly LinkedList") stack.push(4) # Insert 5 at the beginning. So stack becomes 4->5->None stack.push(5) # Insert 6 at the beginning. So stack becomes 4->5->6->None stack.push(6) # Insert 7 at the beginning. So stack becomes 4->5->6->7->None stack.push(7) # Print the stack stack.print_stack() # Print the top element print("\nTop element is ", stack.top()) # Print the stack size print("Size of the stack is ", len(stack)) # pop the top element stack.pop() # pop the top element stack.pop() # two elements have now been popped off stack.print_stack() # Print True if the stack is empty else False print("\nstack is empty:", stack.is_empty())
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 301: https://projecteuler.net/problem=301 Problem Statement: Nim is a game played with heaps of stones, where two players take it in turn to remove any number of stones from any heap until no stones remain. We'll consider the three-heap normal-play version of Nim, which works as follows: - At the start of the game there are three heaps of stones. - On each player's turn, the player may remove any positive number of stones from any single heap. - The first player unable to move (because no stones remain) loses. If (n1, n2, n3) indicates a Nim position consisting of heaps of size n1, n2, and n3, then there is a simple function, which you may look up or attempt to deduce for yourself, X(n1, n2, n3) that returns: - zero if, with perfect strategy, the player about to move will eventually lose; or - non-zero if, with perfect strategy, the player about to move will eventually win. For example X(1,2,3) = 0 because, no matter what the current player does, the opponent can respond with a move that leaves two heaps of equal size, at which point every move by the current player can be mirrored by the opponent until no stones remain; so the current player loses. To illustrate: - current player moves to (1,2,1) - opponent moves to (1,0,1) - current player moves to (0,0,1) - opponent moves to (0,0,0), and so wins. For how many positive integers n <= 2^30 does X(n,2n,3n) = 0? """ def solution(exponent: int = 30) -> int: """ For any given exponent x >= 0, 1 <= n <= 2^x. This function returns how many Nim games are lost given that each Nim game has three heaps of the form (n, 2*n, 3*n). >>> solution(0) 1 >>> solution(2) 3 >>> solution(10) 144 """ # To find how many total games were lost for a given exponent x, # we need to find the Fibonacci number F(x+2). fibonacci_index = exponent + 2 phi = (1 + 5**0.5) / 2 fibonacci = (phi**fibonacci_index - (phi - 1) ** fibonacci_index) / 5**0.5 return int(fibonacci) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 301: https://projecteuler.net/problem=301 Problem Statement: Nim is a game played with heaps of stones, where two players take it in turn to remove any number of stones from any heap until no stones remain. We'll consider the three-heap normal-play version of Nim, which works as follows: - At the start of the game there are three heaps of stones. - On each player's turn, the player may remove any positive number of stones from any single heap. - The first player unable to move (because no stones remain) loses. If (n1, n2, n3) indicates a Nim position consisting of heaps of size n1, n2, and n3, then there is a simple function, which you may look up or attempt to deduce for yourself, X(n1, n2, n3) that returns: - zero if, with perfect strategy, the player about to move will eventually lose; or - non-zero if, with perfect strategy, the player about to move will eventually win. For example X(1,2,3) = 0 because, no matter what the current player does, the opponent can respond with a move that leaves two heaps of equal size, at which point every move by the current player can be mirrored by the opponent until no stones remain; so the current player loses. To illustrate: - current player moves to (1,2,1) - opponent moves to (1,0,1) - current player moves to (0,0,1) - opponent moves to (0,0,0), and so wins. For how many positive integers n <= 2^30 does X(n,2n,3n) = 0? """ def solution(exponent: int = 30) -> int: """ For any given exponent x >= 0, 1 <= n <= 2^x. This function returns how many Nim games are lost given that each Nim game has three heaps of the form (n, 2*n, 3*n). >>> solution(0) 1 >>> solution(2) 3 >>> solution(10) 144 """ # To find how many total games were lost for a given exponent x, # we need to find the Fibonacci number F(x+2). fibonacci_index = exponent + 2 phi = (1 + 5**0.5) / 2 fibonacci = (phi**fibonacci_index - (phi - 1) ** fibonacci_index) / 5**0.5 return int(fibonacci) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" This is pure Python implementation of comb sort algorithm. Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort algorithm. In bubble sort, distance (or gap) between two compared elements is always one. Comb sort improvement is that gap can be much more than 1, in order to prevent slowing down by small values at the end of a list. More info on: https://en.wikipedia.org/wiki/Comb_sort For doctests run following command: python -m doctest -v comb_sort.py or python3 -m doctest -v comb_sort.py For manual testing run: python comb_sort.py """ def comb_sort(data: list) -> list: """Pure implementation of comb sort algorithm in Python :param data: mutable collection with comparable items :return: the same collection in ascending order Examples: >>> comb_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> comb_sort([]) [] >>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3]) [-15, -7, 0, 2, 3, 8, 45, 99] """ shrink_factor = 1.3 gap = len(data) completed = False while not completed: # Update the gap value for a next comb gap = int(gap / shrink_factor) if gap <= 1: completed = True index = 0 while index + gap < len(data): if data[index] > data[index + gap]: # Swap values data[index], data[index + gap] = data[index + gap], data[index] completed = False index += 1 return data if __name__ == "__main__": import doctest doctest.testmod() user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(comb_sort(unsorted))
""" This is pure Python implementation of comb sort algorithm. Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort algorithm. In bubble sort, distance (or gap) between two compared elements is always one. Comb sort improvement is that gap can be much more than 1, in order to prevent slowing down by small values at the end of a list. More info on: https://en.wikipedia.org/wiki/Comb_sort For doctests run following command: python -m doctest -v comb_sort.py or python3 -m doctest -v comb_sort.py For manual testing run: python comb_sort.py """ def comb_sort(data: list) -> list: """Pure implementation of comb sort algorithm in Python :param data: mutable collection with comparable items :return: the same collection in ascending order Examples: >>> comb_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> comb_sort([]) [] >>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3]) [-15, -7, 0, 2, 3, 8, 45, 99] """ shrink_factor = 1.3 gap = len(data) completed = False while not completed: # Update the gap value for a next comb gap = int(gap / shrink_factor) if gap <= 1: completed = True index = 0 while index + gap < len(data): if data[index] > data[index + gap]: # Swap values data[index], data[index + gap] = data[index + gap], data[index] completed = False index += 1 return data if __name__ == "__main__": import doctest doctest.testmod() user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(comb_sort(unsorted))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def bin_to_decimal(bin_string: str) -> int: """ Convert a binary value to its decimal equivalent >>> bin_to_decimal("101") 5 >>> bin_to_decimal(" 1010 ") 10 >>> bin_to_decimal("-11101") -29 >>> bin_to_decimal("0") 0 >>> bin_to_decimal("a") Traceback (most recent call last): ... ValueError: Non-binary value was passed to the function >>> bin_to_decimal("") Traceback (most recent call last): ... ValueError: Empty string was passed to the function >>> bin_to_decimal("39") Traceback (most recent call last): ... ValueError: Non-binary value was passed to the function """ bin_string = str(bin_string).strip() if not bin_string: raise ValueError("Empty string was passed to the function") is_negative = bin_string[0] == "-" if is_negative: bin_string = bin_string[1:] if not all(char in "01" for char in bin_string): raise ValueError("Non-binary value was passed to the function") decimal_number = 0 for char in bin_string: decimal_number = 2 * decimal_number + int(char) return -decimal_number if is_negative else decimal_number if __name__ == "__main__": from doctest import testmod testmod()
def bin_to_decimal(bin_string: str) -> int: """ Convert a binary value to its decimal equivalent >>> bin_to_decimal("101") 5 >>> bin_to_decimal(" 1010 ") 10 >>> bin_to_decimal("-11101") -29 >>> bin_to_decimal("0") 0 >>> bin_to_decimal("a") Traceback (most recent call last): ... ValueError: Non-binary value was passed to the function >>> bin_to_decimal("") Traceback (most recent call last): ... ValueError: Empty string was passed to the function >>> bin_to_decimal("39") Traceback (most recent call last): ... ValueError: Non-binary value was passed to the function """ bin_string = str(bin_string).strip() if not bin_string: raise ValueError("Empty string was passed to the function") is_negative = bin_string[0] == "-" if is_negative: bin_string = bin_string[1:] if not all(char in "01" for char in bin_string): raise ValueError("Non-binary value was passed to the function") decimal_number = 0 for char in bin_string: decimal_number = 2 * decimal_number + int(char) return -decimal_number if is_negative else decimal_number if __name__ == "__main__": from doctest import testmod testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Program to list all the ways a target string can be constructed from the given list of substrings """ from __future__ import annotations def all_construct(target: str, word_bank: list[str] | None = None) -> list[list[str]]: """ returns the list containing all the possible combinations a string(target) can be constructed from the given list of substrings(word_bank) >>> all_construct("hello", ["he", "l", "o"]) [['he', 'l', 'l', 'o']] >>> all_construct("purple",["purp","p","ur","le","purpl"]) [['purp', 'le'], ['p', 'ur', 'p', 'le']] """ word_bank = word_bank or [] # create a table table_size: int = len(target) + 1 table: list[list[list[str]]] = [] for _ in range(table_size): table.append([]) # seed value table[0] = [[]] # because empty string has empty combination # iterate through the indices for i in range(table_size): # condition if table[i] != []: for word in word_bank: # slice condition if target[i : i + len(word)] == word: new_combinations: list[list[str]] = [ [word, *way] for way in table[i] ] # adds the word to every combination the current position holds # now,push that combination to the table[i+len(word)] table[i + len(word)] += new_combinations # combinations are in reverse order so reverse for better output for combination in table[len(target)]: combination.reverse() return table[len(target)] if __name__ == "__main__": print(all_construct("jwajalapa", ["jwa", "j", "w", "a", "la", "lapa"])) print(all_construct("rajamati", ["s", "raj", "amat", "raja", "ma", "i", "t"])) print( all_construct( "hexagonosaurus", ["h", "ex", "hex", "ag", "ago", "ru", "auru", "rus", "go", "no", "o", "s"], ) )
""" Program to list all the ways a target string can be constructed from the given list of substrings """ from __future__ import annotations def all_construct(target: str, word_bank: list[str] | None = None) -> list[list[str]]: """ returns the list containing all the possible combinations a string(target) can be constructed from the given list of substrings(word_bank) >>> all_construct("hello", ["he", "l", "o"]) [['he', 'l', 'l', 'o']] >>> all_construct("purple",["purp","p","ur","le","purpl"]) [['purp', 'le'], ['p', 'ur', 'p', 'le']] """ word_bank = word_bank or [] # create a table table_size: int = len(target) + 1 table: list[list[list[str]]] = [] for _ in range(table_size): table.append([]) # seed value table[0] = [[]] # because empty string has empty combination # iterate through the indices for i in range(table_size): # condition if table[i] != []: for word in word_bank: # slice condition if target[i : i + len(word)] == word: new_combinations: list[list[str]] = [ [word, *way] for way in table[i] ] # adds the word to every combination the current position holds # now,push that combination to the table[i+len(word)] table[i + len(word)] += new_combinations # combinations are in reverse order so reverse for better output for combination in table[len(target)]: combination.reverse() return table[len(target)] if __name__ == "__main__": print(all_construct("jwajalapa", ["jwa", "j", "w", "a", "la", "lapa"])) print(all_construct("rajamati", ["s", "raj", "amat", "raja", "ma", "i", "t"])) print( all_construct( "hexagonosaurus", ["h", "ex", "hex", "ag", "ago", "ru", "auru", "rus", "go", "no", "o", "s"], ) )
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from typing import Any def viterbi( observations_space: list, states_space: list, initial_probabilities: dict, transition_probabilities: dict, emission_probabilities: dict, ) -> list: """ Viterbi Algorithm, to find the most likely path of states from the start and the expected output. https://en.wikipedia.org/wiki/Viterbi_algorithm sdafads Wikipedia example >>> observations = ["normal", "cold", "dizzy"] >>> states = ["Healthy", "Fever"] >>> start_p = {"Healthy": 0.6, "Fever": 0.4} >>> trans_p = { ... "Healthy": {"Healthy": 0.7, "Fever": 0.3}, ... "Fever": {"Healthy": 0.4, "Fever": 0.6}, ... } >>> emit_p = { ... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1}, ... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}, ... } >>> viterbi(observations, states, start_p, trans_p, emit_p) ['Healthy', 'Healthy', 'Fever'] >>> viterbi((), states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, (), start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, {}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, start_p, {}, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, start_p, trans_p, {}) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi("invalid", states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: observations_space must be a list >>> viterbi(["valid", 123], states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: observations_space must be a list of strings >>> viterbi(observations, "invalid", start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: states_space must be a list >>> viterbi(observations, ["valid", 123], start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: states_space must be a list of strings >>> viterbi(observations, states, "invalid", trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities must be a dict >>> viterbi(observations, states, {2:2}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities all keys must be strings >>> viterbi(observations, states, {"a":2}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities all values must be float >>> viterbi(observations, states, start_p, "invalid", emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities must be a dict >>> viterbi(observations, states, start_p, {"a":2}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all values must be dict >>> viterbi(observations, states, start_p, {2:{2:2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> viterbi(observations, states, start_p, {"a":{2:2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> viterbi(observations, states, start_p, {"a":{"b":2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities nested dictionary all values must be float >>> viterbi(observations, states, start_p, trans_p, "invalid") Traceback (most recent call last): ... ValueError: emission_probabilities must be a dict >>> viterbi(observations, states, start_p, trans_p, None) Traceback (most recent call last): ... ValueError: There's an empty parameter """ _validation( observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ) # Creates data structures and fill initial step probabilities: dict = {} pointers: dict = {} for state in states_space: observation = observations_space[0] probabilities[(state, observation)] = ( initial_probabilities[state] * emission_probabilities[state][observation] ) pointers[(state, observation)] = None # Fills the data structure with the probabilities of # different transitions and pointers to previous states for o in range(1, len(observations_space)): observation = observations_space[o] prior_observation = observations_space[o - 1] for state in states_space: # Calculates the argmax for probability function arg_max = "" max_probability = -1 for k_state in states_space: probability = ( probabilities[(k_state, prior_observation)] * transition_probabilities[k_state][state] * emission_probabilities[state][observation] ) if probability > max_probability: max_probability = probability arg_max = k_state # Update probabilities and pointers dicts probabilities[(state, observation)] = ( probabilities[(arg_max, prior_observation)] * transition_probabilities[arg_max][state] * emission_probabilities[state][observation] ) pointers[(state, observation)] = arg_max # The final observation final_observation = observations_space[len(observations_space) - 1] # argmax for given final observation arg_max = "" max_probability = -1 for k_state in states_space: probability = probabilities[(k_state, final_observation)] if probability > max_probability: max_probability = probability arg_max = k_state last_state = arg_max # Process pointers backwards previous = last_state result = [] for o in range(len(observations_space) - 1, -1, -1): result.append(previous) previous = pointers[previous, observations_space[o]] result.reverse() return result def _validation( observations_space: Any, states_space: Any, initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> observations = ["normal", "cold", "dizzy"] >>> states = ["Healthy", "Fever"] >>> start_p = {"Healthy": 0.6, "Fever": 0.4} >>> trans_p = { ... "Healthy": {"Healthy": 0.7, "Fever": 0.3}, ... "Fever": {"Healthy": 0.4, "Fever": 0.6}, ... } >>> emit_p = { ... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1}, ... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}, ... } >>> _validation(observations, states, start_p, trans_p, emit_p) >>> _validation([], states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter """ _validate_not_empty( observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ) _validate_lists(observations_space, states_space) _validate_dicts( initial_probabilities, transition_probabilities, emission_probabilities ) def _validate_not_empty( observations_space: Any, states_space: Any, initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> _validate_not_empty(["a"], ["b"], {"c":0.5}, ... {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) >>> _validate_not_empty(["a"], ["b"], {"c":0.5}, {}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> _validate_not_empty(["a"], ["b"], None, {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: There's an empty parameter """ if not all( [ observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ] ): raise ValueError("There's an empty parameter") def _validate_lists(observations_space: Any, states_space: Any) -> None: """ >>> _validate_lists(["a"], ["b"]) >>> _validate_lists(1234, ["b"]) Traceback (most recent call last): ... ValueError: observations_space must be a list >>> _validate_lists(["a"], [3]) Traceback (most recent call last): ... ValueError: states_space must be a list of strings """ _validate_list(observations_space, "observations_space") _validate_list(states_space, "states_space") def _validate_list(_object: Any, var_name: str) -> None: """ >>> _validate_list(["a"], "mock_name") >>> _validate_list("a", "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a list >>> _validate_list([0.5], "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a list of strings """ if not isinstance(_object, list): msg = f"{var_name} must be a list" raise ValueError(msg) else: for x in _object: if not isinstance(x, str): msg = f"{var_name} must be a list of strings" raise ValueError(msg) def _validate_dicts( initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) >>> _validate_dicts("invalid", {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: initial_probabilities must be a dict >>> _validate_dicts({"c":0.5}, {2: {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {2: 0.7}}) Traceback (most recent call last): ... ValueError: emission_probabilities all keys must be strings >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": "h"}}) Traceback (most recent call last): ... ValueError: emission_probabilities nested dictionary all values must be float """ _validate_dict(initial_probabilities, "initial_probabilities", float) _validate_nested_dict(transition_probabilities, "transition_probabilities") _validate_nested_dict(emission_probabilities, "emission_probabilities") def _validate_nested_dict(_object: Any, var_name: str) -> None: """ >>> _validate_nested_dict({"a":{"b": 0.5}}, "mock_name") >>> _validate_nested_dict("invalid", "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a dict >>> _validate_nested_dict({"a": 8}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name all values must be dict >>> _validate_nested_dict({"a":{2: 0.5}}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name all keys must be strings >>> _validate_nested_dict({"a":{"b": 4}}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name nested dictionary all values must be float """ _validate_dict(_object, var_name, dict) for x in _object.values(): _validate_dict(x, var_name, float, True) def _validate_dict( _object: Any, var_name: str, value_type: type, nested: bool = False ) -> None: """ >>> _validate_dict({"b": 0.5}, "mock_name", float) >>> _validate_dict("invalid", "mock_name", float) Traceback (most recent call last): ... ValueError: mock_name must be a dict >>> _validate_dict({"a": 8}, "mock_name", dict) Traceback (most recent call last): ... ValueError: mock_name all values must be dict >>> _validate_dict({2: 0.5}, "mock_name",float, True) Traceback (most recent call last): ... ValueError: mock_name all keys must be strings >>> _validate_dict({"b": 4}, "mock_name", float,True) Traceback (most recent call last): ... ValueError: mock_name nested dictionary all values must be float """ if not isinstance(_object, dict): msg = f"{var_name} must be a dict" raise ValueError(msg) if not all(isinstance(x, str) for x in _object): msg = f"{var_name} all keys must be strings" raise ValueError(msg) if not all(isinstance(x, value_type) for x in _object.values()): nested_text = "nested dictionary " if nested else "" msg = f"{var_name} {nested_text}all values must be {value_type.__name__}" raise ValueError(msg) if __name__ == "__main__": from doctest import testmod testmod()
from typing import Any def viterbi( observations_space: list, states_space: list, initial_probabilities: dict, transition_probabilities: dict, emission_probabilities: dict, ) -> list: """ Viterbi Algorithm, to find the most likely path of states from the start and the expected output. https://en.wikipedia.org/wiki/Viterbi_algorithm sdafads Wikipedia example >>> observations = ["normal", "cold", "dizzy"] >>> states = ["Healthy", "Fever"] >>> start_p = {"Healthy": 0.6, "Fever": 0.4} >>> trans_p = { ... "Healthy": {"Healthy": 0.7, "Fever": 0.3}, ... "Fever": {"Healthy": 0.4, "Fever": 0.6}, ... } >>> emit_p = { ... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1}, ... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}, ... } >>> viterbi(observations, states, start_p, trans_p, emit_p) ['Healthy', 'Healthy', 'Fever'] >>> viterbi((), states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, (), start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, {}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, start_p, {}, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi(observations, states, start_p, trans_p, {}) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> viterbi("invalid", states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: observations_space must be a list >>> viterbi(["valid", 123], states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: observations_space must be a list of strings >>> viterbi(observations, "invalid", start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: states_space must be a list >>> viterbi(observations, ["valid", 123], start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: states_space must be a list of strings >>> viterbi(observations, states, "invalid", trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities must be a dict >>> viterbi(observations, states, {2:2}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities all keys must be strings >>> viterbi(observations, states, {"a":2}, trans_p, emit_p) Traceback (most recent call last): ... ValueError: initial_probabilities all values must be float >>> viterbi(observations, states, start_p, "invalid", emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities must be a dict >>> viterbi(observations, states, start_p, {"a":2}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all values must be dict >>> viterbi(observations, states, start_p, {2:{2:2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> viterbi(observations, states, start_p, {"a":{2:2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> viterbi(observations, states, start_p, {"a":{"b":2}}, emit_p) Traceback (most recent call last): ... ValueError: transition_probabilities nested dictionary all values must be float >>> viterbi(observations, states, start_p, trans_p, "invalid") Traceback (most recent call last): ... ValueError: emission_probabilities must be a dict >>> viterbi(observations, states, start_p, trans_p, None) Traceback (most recent call last): ... ValueError: There's an empty parameter """ _validation( observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ) # Creates data structures and fill initial step probabilities: dict = {} pointers: dict = {} for state in states_space: observation = observations_space[0] probabilities[(state, observation)] = ( initial_probabilities[state] * emission_probabilities[state][observation] ) pointers[(state, observation)] = None # Fills the data structure with the probabilities of # different transitions and pointers to previous states for o in range(1, len(observations_space)): observation = observations_space[o] prior_observation = observations_space[o - 1] for state in states_space: # Calculates the argmax for probability function arg_max = "" max_probability = -1 for k_state in states_space: probability = ( probabilities[(k_state, prior_observation)] * transition_probabilities[k_state][state] * emission_probabilities[state][observation] ) if probability > max_probability: max_probability = probability arg_max = k_state # Update probabilities and pointers dicts probabilities[(state, observation)] = ( probabilities[(arg_max, prior_observation)] * transition_probabilities[arg_max][state] * emission_probabilities[state][observation] ) pointers[(state, observation)] = arg_max # The final observation final_observation = observations_space[len(observations_space) - 1] # argmax for given final observation arg_max = "" max_probability = -1 for k_state in states_space: probability = probabilities[(k_state, final_observation)] if probability > max_probability: max_probability = probability arg_max = k_state last_state = arg_max # Process pointers backwards previous = last_state result = [] for o in range(len(observations_space) - 1, -1, -1): result.append(previous) previous = pointers[previous, observations_space[o]] result.reverse() return result def _validation( observations_space: Any, states_space: Any, initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> observations = ["normal", "cold", "dizzy"] >>> states = ["Healthy", "Fever"] >>> start_p = {"Healthy": 0.6, "Fever": 0.4} >>> trans_p = { ... "Healthy": {"Healthy": 0.7, "Fever": 0.3}, ... "Fever": {"Healthy": 0.4, "Fever": 0.6}, ... } >>> emit_p = { ... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1}, ... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}, ... } >>> _validation(observations, states, start_p, trans_p, emit_p) >>> _validation([], states, start_p, trans_p, emit_p) Traceback (most recent call last): ... ValueError: There's an empty parameter """ _validate_not_empty( observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ) _validate_lists(observations_space, states_space) _validate_dicts( initial_probabilities, transition_probabilities, emission_probabilities ) def _validate_not_empty( observations_space: Any, states_space: Any, initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> _validate_not_empty(["a"], ["b"], {"c":0.5}, ... {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) >>> _validate_not_empty(["a"], ["b"], {"c":0.5}, {}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: There's an empty parameter >>> _validate_not_empty(["a"], ["b"], None, {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: There's an empty parameter """ if not all( [ observations_space, states_space, initial_probabilities, transition_probabilities, emission_probabilities, ] ): raise ValueError("There's an empty parameter") def _validate_lists(observations_space: Any, states_space: Any) -> None: """ >>> _validate_lists(["a"], ["b"]) >>> _validate_lists(1234, ["b"]) Traceback (most recent call last): ... ValueError: observations_space must be a list >>> _validate_lists(["a"], [3]) Traceback (most recent call last): ... ValueError: states_space must be a list of strings """ _validate_list(observations_space, "observations_space") _validate_list(states_space, "states_space") def _validate_list(_object: Any, var_name: str) -> None: """ >>> _validate_list(["a"], "mock_name") >>> _validate_list("a", "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a list >>> _validate_list([0.5], "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a list of strings """ if not isinstance(_object, list): msg = f"{var_name} must be a list" raise ValueError(msg) else: for x in _object: if not isinstance(x, str): msg = f"{var_name} must be a list of strings" raise ValueError(msg) def _validate_dicts( initial_probabilities: Any, transition_probabilities: Any, emission_probabilities: Any, ) -> None: """ >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) >>> _validate_dicts("invalid", {"d": {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: initial_probabilities must be a dict >>> _validate_dicts({"c":0.5}, {2: {"e": 0.6}}, {"f": {"g": 0.7}}) Traceback (most recent call last): ... ValueError: transition_probabilities all keys must be strings >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {2: 0.7}}) Traceback (most recent call last): ... ValueError: emission_probabilities all keys must be strings >>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": "h"}}) Traceback (most recent call last): ... ValueError: emission_probabilities nested dictionary all values must be float """ _validate_dict(initial_probabilities, "initial_probabilities", float) _validate_nested_dict(transition_probabilities, "transition_probabilities") _validate_nested_dict(emission_probabilities, "emission_probabilities") def _validate_nested_dict(_object: Any, var_name: str) -> None: """ >>> _validate_nested_dict({"a":{"b": 0.5}}, "mock_name") >>> _validate_nested_dict("invalid", "mock_name") Traceback (most recent call last): ... ValueError: mock_name must be a dict >>> _validate_nested_dict({"a": 8}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name all values must be dict >>> _validate_nested_dict({"a":{2: 0.5}}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name all keys must be strings >>> _validate_nested_dict({"a":{"b": 4}}, "mock_name") Traceback (most recent call last): ... ValueError: mock_name nested dictionary all values must be float """ _validate_dict(_object, var_name, dict) for x in _object.values(): _validate_dict(x, var_name, float, True) def _validate_dict( _object: Any, var_name: str, value_type: type, nested: bool = False ) -> None: """ >>> _validate_dict({"b": 0.5}, "mock_name", float) >>> _validate_dict("invalid", "mock_name", float) Traceback (most recent call last): ... ValueError: mock_name must be a dict >>> _validate_dict({"a": 8}, "mock_name", dict) Traceback (most recent call last): ... ValueError: mock_name all values must be dict >>> _validate_dict({2: 0.5}, "mock_name",float, True) Traceback (most recent call last): ... ValueError: mock_name all keys must be strings >>> _validate_dict({"b": 4}, "mock_name", float,True) Traceback (most recent call last): ... ValueError: mock_name nested dictionary all values must be float """ if not isinstance(_object, dict): msg = f"{var_name} must be a dict" raise ValueError(msg) if not all(isinstance(x, str) for x in _object): msg = f"{var_name} all keys must be strings" raise ValueError(msg) if not all(isinstance(x, value_type) for x in _object.values()): nested_text = "nested dictionary " if nested else "" msg = f"{var_name} {nested_text}all values must be {value_type.__name__}" raise ValueError(msg) if __name__ == "__main__": from doctest import testmod testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
"""For more information about the Binomial Distribution - https://en.wikipedia.org/wiki/Binomial_distribution""" from math import factorial def binomial_distribution(successes: int, trials: int, prob: float) -> float: """ Return probability of k successes out of n tries, with p probability for one success The function uses the factorial function in order to calculate the binomial coefficient >>> binomial_distribution(3, 5, 0.7) 0.30870000000000003 >>> binomial_distribution (2, 4, 0.5) 0.375 """ if successes > trials: raise ValueError("""successes must be lower or equal to trials""") if trials < 0 or successes < 0: raise ValueError("the function is defined for non-negative integers") if not isinstance(successes, int) or not isinstance(trials, int): raise ValueError("the function is defined for non-negative integers") if not 0 < prob < 1: raise ValueError("prob has to be in range of 1 - 0") probability = (prob**successes) * ((1 - prob) ** (trials - successes)) # Calculate the binomial coefficient: n! / k!(n-k)! coefficient = float(factorial(trials)) coefficient /= factorial(successes) * factorial(trials - successes) return probability * coefficient if __name__ == "__main__": from doctest import testmod testmod() print("Probability of 2 successes out of 4 trails") print("with probability of 0.75 is:", end=" ") print(binomial_distribution(2, 4, 0.75))
"""For more information about the Binomial Distribution - https://en.wikipedia.org/wiki/Binomial_distribution""" from math import factorial def binomial_distribution(successes: int, trials: int, prob: float) -> float: """ Return probability of k successes out of n tries, with p probability for one success The function uses the factorial function in order to calculate the binomial coefficient >>> binomial_distribution(3, 5, 0.7) 0.30870000000000003 >>> binomial_distribution (2, 4, 0.5) 0.375 """ if successes > trials: raise ValueError("""successes must be lower or equal to trials""") if trials < 0 or successes < 0: raise ValueError("the function is defined for non-negative integers") if not isinstance(successes, int) or not isinstance(trials, int): raise ValueError("the function is defined for non-negative integers") if not 0 < prob < 1: raise ValueError("prob has to be in range of 1 - 0") probability = (prob**successes) * ((1 - prob) ** (trials - successes)) # Calculate the binomial coefficient: n! / k!(n-k)! coefficient = float(factorial(trials)) coefficient /= factorial(successes) * factorial(trials - successes) return probability * coefficient if __name__ == "__main__": from doctest import testmod testmod() print("Probability of 2 successes out of 4 trails") print("with probability of 0.75 is:", end=" ") print(binomial_distribution(2, 4, 0.75))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
from collections.abc import Sequence def max_subsequence_sum(nums: Sequence[int] | None = None) -> int: """Return the maximum possible sum amongst all non - empty subsequences. Raises: ValueError: when nums is empty. >>> max_subsequence_sum([1,2,3,4,-2]) 10 >>> max_subsequence_sum([-2, -3, -1, -4, -6]) -1 >>> max_subsequence_sum([]) Traceback (most recent call last): . . . ValueError: Input sequence should not be empty >>> max_subsequence_sum() Traceback (most recent call last): . . . ValueError: Input sequence should not be empty """ if nums is None or not nums: raise ValueError("Input sequence should not be empty") ans = nums[0] for i in range(1, len(nums)): num = nums[i] ans = max(ans, ans + num, num) return ans if __name__ == "__main__": import doctest doctest.testmod() # Try on a sample input from the user n = int(input("Enter number of elements : ").strip()) array = list(map(int, input("\nEnter the numbers : ").strip().split()))[:n] print(max_subsequence_sum(array))
from collections.abc import Sequence def max_subsequence_sum(nums: Sequence[int] | None = None) -> int: """Return the maximum possible sum amongst all non - empty subsequences. Raises: ValueError: when nums is empty. >>> max_subsequence_sum([1,2,3,4,-2]) 10 >>> max_subsequence_sum([-2, -3, -1, -4, -6]) -1 >>> max_subsequence_sum([]) Traceback (most recent call last): . . . ValueError: Input sequence should not be empty >>> max_subsequence_sum() Traceback (most recent call last): . . . ValueError: Input sequence should not be empty """ if nums is None or not nums: raise ValueError("Input sequence should not be empty") ans = nums[0] for i in range(1, len(nums)): num = nums[i] ans = max(ans, ans + num, num) return ans if __name__ == "__main__": import doctest doctest.testmod() # Try on a sample input from the user n = int(input("Enter number of elements : ").strip()) array = list(map(int, input("\nEnter the numbers : ").strip().split()))[:n] print(max_subsequence_sum(array))
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Given a sorted array of integers, return indices of the two numbers such that they add up to a specific target using the two pointers technique. You may assume that each input would have exactly one solution, and you may not use the same element twice. This is an alternative solution of the two-sum problem, which uses a map to solve the problem. Hence can not solve the issue if there is a constraint not use the same index twice. [1] Example: Given nums = [2, 7, 11, 15], target = 9, Because nums[0] + nums[1] = 2 + 7 = 9, return [0, 1]. [1]: https://github.com/TheAlgorithms/Python/blob/master/other/two_sum.py """ from __future__ import annotations def two_pointer(nums: list[int], target: int) -> list[int]: """ >>> two_pointer([2, 7, 11, 15], 9) [0, 1] >>> two_pointer([2, 7, 11, 15], 17) [0, 3] >>> two_pointer([2, 7, 11, 15], 18) [1, 2] >>> two_pointer([2, 7, 11, 15], 26) [2, 3] >>> two_pointer([1, 3, 3], 6) [1, 2] >>> two_pointer([2, 7, 11, 15], 8) [] >>> two_pointer([3 * i for i in range(10)], 19) [] >>> two_pointer([1, 2, 3], 6) [] """ i = 0 j = len(nums) - 1 while i < j: if nums[i] + nums[j] == target: return [i, j] elif nums[i] + nums[j] < target: i = i + 1 else: j = j - 1 return [] if __name__ == "__main__": import doctest doctest.testmod() print(f"{two_pointer([2, 7, 11, 15], 9) = }")
""" Given a sorted array of integers, return indices of the two numbers such that they add up to a specific target using the two pointers technique. You may assume that each input would have exactly one solution, and you may not use the same element twice. This is an alternative solution of the two-sum problem, which uses a map to solve the problem. Hence can not solve the issue if there is a constraint not use the same index twice. [1] Example: Given nums = [2, 7, 11, 15], target = 9, Because nums[0] + nums[1] = 2 + 7 = 9, return [0, 1]. [1]: https://github.com/TheAlgorithms/Python/blob/master/other/two_sum.py """ from __future__ import annotations def two_pointer(nums: list[int], target: int) -> list[int]: """ >>> two_pointer([2, 7, 11, 15], 9) [0, 1] >>> two_pointer([2, 7, 11, 15], 17) [0, 3] >>> two_pointer([2, 7, 11, 15], 18) [1, 2] >>> two_pointer([2, 7, 11, 15], 26) [2, 3] >>> two_pointer([1, 3, 3], 6) [1, 2] >>> two_pointer([2, 7, 11, 15], 8) [] >>> two_pointer([3 * i for i in range(10)], 19) [] >>> two_pointer([1, 2, 3], 6) [] """ i = 0 j = len(nums) - 1 while i < j: if nums[i] + nums[j] == target: return [i, j] elif nums[i] + nums[j] < target: i = i + 1 else: j = j - 1 return [] if __name__ == "__main__": import doctest doctest.testmod() print(f"{two_pointer([2, 7, 11, 15], 9) = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Author: Alexander Joslin GitHub: github.com/echoaj Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack- algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea We can use Dijkstra's two stack algorithm to solve an equation such as: (5 + ((4 * 2) * (2 + 3))) THESE ARE THE ALGORITHM'S RULES: RULE 1: Scan the expression from left to right. When an operand is encountered, push it onto the operand stack. RULE 2: When an operator is encountered in the expression, push it onto the operator stack. RULE 3: When a left parenthesis is encountered in the expression, ignore it. RULE 4: When a right parenthesis is encountered in the expression, pop an operator off the operator stack. The two operands it must operate on must be the last two operands pushed onto the operand stack. We therefore pop the operand stack twice, perform the operation, and push the result back onto the operand stack so it will be available for use as an operand of the next operator popped off the operator stack. RULE 5: When the entire infix expression has been scanned, the value left on the operand stack represents the value of the expression. NOTE: It only works with whole numbers. """ __author__ = "Alexander Joslin" import operator as op from .stack import Stack def dijkstras_two_stack_algorithm(equation: str) -> int: """ DocTests >>> dijkstras_two_stack_algorithm("(5 + 3)") 8 >>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))") 5 >>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)") -3 :param equation: a string :return: result: an integer """ operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub} operand_stack: Stack[int] = Stack() operator_stack: Stack[str] = Stack() for i in equation: if i.isdigit(): # RULE 1 operand_stack.push(int(i)) elif i in operators: # RULE 2 operator_stack.push(i) elif i == ")": # RULE 4 opr = operator_stack.peek() operator_stack.pop() num1 = operand_stack.peek() operand_stack.pop() num2 = operand_stack.peek() operand_stack.pop() total = operators[opr](num2, num1) operand_stack.push(total) # RULE 5 return operand_stack.peek() if __name__ == "__main__": equation = "(5 + ((4 * 2) * (2 + 3)))" # answer = 45 print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
""" Author: Alexander Joslin GitHub: github.com/echoaj Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack- algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea We can use Dijkstra's two stack algorithm to solve an equation such as: (5 + ((4 * 2) * (2 + 3))) THESE ARE THE ALGORITHM'S RULES: RULE 1: Scan the expression from left to right. When an operand is encountered, push it onto the operand stack. RULE 2: When an operator is encountered in the expression, push it onto the operator stack. RULE 3: When a left parenthesis is encountered in the expression, ignore it. RULE 4: When a right parenthesis is encountered in the expression, pop an operator off the operator stack. The two operands it must operate on must be the last two operands pushed onto the operand stack. We therefore pop the operand stack twice, perform the operation, and push the result back onto the operand stack so it will be available for use as an operand of the next operator popped off the operator stack. RULE 5: When the entire infix expression has been scanned, the value left on the operand stack represents the value of the expression. NOTE: It only works with whole numbers. """ __author__ = "Alexander Joslin" import operator as op from .stack import Stack def dijkstras_two_stack_algorithm(equation: str) -> int: """ DocTests >>> dijkstras_two_stack_algorithm("(5 + 3)") 8 >>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))") 5 >>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)") -3 :param equation: a string :return: result: an integer """ operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub} operand_stack: Stack[int] = Stack() operator_stack: Stack[str] = Stack() for i in equation: if i.isdigit(): # RULE 1 operand_stack.push(int(i)) elif i in operators: # RULE 2 operator_stack.push(i) elif i == ")": # RULE 4 opr = operator_stack.peek() operator_stack.pop() num1 = operand_stack.peek() operand_stack.pop() num2 = operand_stack.peek() operand_stack.pop() total = operators[opr](num2, num1) operand_stack.push(total) # RULE 5 return operand_stack.peek() if __name__ == "__main__": equation = "(5 + ((4 * 2) * (2 + 3)))" # answer = 45 print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Lychrel numbers Problem 55: https://projecteuler.net/problem=55 If we take 47, reverse and add, 47 + 74 = 121, which is palindromic. Not all numbers produce palindromes so quickly. For example, 349 + 943 = 1292, 1292 + 2921 = 4213 4213 + 3124 = 7337 That is, 349 took three iterations to arrive at a palindrome. Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome: 4668731596684224866951378664 (53 iterations, 28-digits). Surprisingly, there are palindromic numbers that are themselves Lychrel numbers; the first example is 4994. How many Lychrel numbers are there below ten-thousand? """ def is_palindrome(n: int) -> bool: """ Returns True if a number is palindrome. >>> is_palindrome(12567321) False >>> is_palindrome(1221) True >>> is_palindrome(9876789) True """ return str(n) == str(n)[::-1] def sum_reverse(n: int) -> int: """ Returns the sum of n and reverse of n. >>> sum_reverse(123) 444 >>> sum_reverse(3478) 12221 >>> sum_reverse(12) 33 """ return int(n) + int(str(n)[::-1]) def solution(limit: int = 10000) -> int: """ Returns the count of all lychrel numbers below limit. >>> solution(10000) 249 >>> solution(5000) 76 >>> solution(1000) 13 """ lychrel_nums = [] for num in range(1, limit): iterations = 0 a = num while iterations < 50: num = sum_reverse(num) iterations += 1 if is_palindrome(num): break else: lychrel_nums.append(a) return len(lychrel_nums) if __name__ == "__main__": print(f"{solution() = }")
""" Lychrel numbers Problem 55: https://projecteuler.net/problem=55 If we take 47, reverse and add, 47 + 74 = 121, which is palindromic. Not all numbers produce palindromes so quickly. For example, 349 + 943 = 1292, 1292 + 2921 = 4213 4213 + 3124 = 7337 That is, 349 took three iterations to arrive at a palindrome. Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome: 4668731596684224866951378664 (53 iterations, 28-digits). Surprisingly, there are palindromic numbers that are themselves Lychrel numbers; the first example is 4994. How many Lychrel numbers are there below ten-thousand? """ def is_palindrome(n: int) -> bool: """ Returns True if a number is palindrome. >>> is_palindrome(12567321) False >>> is_palindrome(1221) True >>> is_palindrome(9876789) True """ return str(n) == str(n)[::-1] def sum_reverse(n: int) -> int: """ Returns the sum of n and reverse of n. >>> sum_reverse(123) 444 >>> sum_reverse(3478) 12221 >>> sum_reverse(12) 33 """ return int(n) + int(str(n)[::-1]) def solution(limit: int = 10000) -> int: """ Returns the count of all lychrel numbers below limit. >>> solution(10000) 249 >>> solution(5000) 76 >>> solution(1000) 13 """ lychrel_nums = [] for num in range(1, limit): iterations = 0 a = num while iterations < 50: num = sum_reverse(num) iterations += 1 if is_palindrome(num): break else: lychrel_nums.append(a) return len(lychrel_nums) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" This script demonstrates the implementation of the ReLU function. It's a kind of activation function defined as the positive part of its argument in the context of neural network. The function takes a vector of K real numbers as input and then argmax(x, 0). After through ReLU, the element of the vector always 0 or real number. Script inspired from its corresponding Wikipedia article https://en.wikipedia.org/wiki/Rectifier_(neural_networks) """ from __future__ import annotations import numpy as np def relu(vector: list[float]): """ Implements the relu function Parameters: vector (np.array,list,tuple): A numpy array of shape (1,n) consisting of real values or a similar list,tuple Returns: relu_vec (np.array): The input numpy array, after applying relu. >>> vec = np.array([-1, 0, 5]) >>> relu(vec) array([0, 0, 5]) """ # compare two arrays and then return element-wise maxima. return np.maximum(0, vector) if __name__ == "__main__": print(np.array(relu([-1, 0, 5]))) # --> [0, 0, 5]
""" This script demonstrates the implementation of the ReLU function. It's a kind of activation function defined as the positive part of its argument in the context of neural network. The function takes a vector of K real numbers as input and then argmax(x, 0). After through ReLU, the element of the vector always 0 or real number. Script inspired from its corresponding Wikipedia article https://en.wikipedia.org/wiki/Rectifier_(neural_networks) """ from __future__ import annotations import numpy as np def relu(vector: list[float]): """ Implements the relu function Parameters: vector (np.array,list,tuple): A numpy array of shape (1,n) consisting of real values or a similar list,tuple Returns: relu_vec (np.array): The input numpy array, after applying relu. >>> vec = np.array([-1, 0, 5]) >>> relu(vec) array([0, 0, 5]) """ # compare two arrays and then return element-wise maxima. return np.maximum(0, vector) if __name__ == "__main__": print(np.array(relu([-1, 0, 5]))) # --> [0, 0, 5]
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# https://en.wikipedia.org/wiki/Fizz_buzz#Programming def fizz_buzz(number: int, iterations: int) -> str: """ Plays FizzBuzz. Prints Fizz if number is a multiple of 3. Prints Buzz if its a multiple of 5. Prints FizzBuzz if its a multiple of both 3 and 5 or 15. Else Prints The Number Itself. >>> fizz_buzz(1,7) '1 2 Fizz 4 Buzz Fizz 7 ' >>> fizz_buzz(1,0) Traceback (most recent call last): ... ValueError: Iterations must be done more than 0 times to play FizzBuzz >>> fizz_buzz(-5,5) Traceback (most recent call last): ... ValueError: starting number must be and integer and be more than 0 >>> fizz_buzz(10,-5) Traceback (most recent call last): ... ValueError: Iterations must be done more than 0 times to play FizzBuzz >>> fizz_buzz(1.5,5) Traceback (most recent call last): ... ValueError: starting number must be and integer and be more than 0 >>> fizz_buzz(1,5.5) Traceback (most recent call last): ... ValueError: iterations must be defined as integers """ if not isinstance(iterations, int): raise ValueError("iterations must be defined as integers") if not isinstance(number, int) or not number >= 1: raise ValueError( """starting number must be and integer and be more than 0""" ) if not iterations >= 1: raise ValueError("Iterations must be done more than 0 times to play FizzBuzz") out = "" while number <= iterations: if number % 3 == 0: out += "Fizz" if number % 5 == 0: out += "Buzz" if 0 not in (number % 3, number % 5): out += str(number) # print(out) number += 1 out += " " return out if __name__ == "__main__": import doctest doctest.testmod()
# https://en.wikipedia.org/wiki/Fizz_buzz#Programming def fizz_buzz(number: int, iterations: int) -> str: """ Plays FizzBuzz. Prints Fizz if number is a multiple of 3. Prints Buzz if its a multiple of 5. Prints FizzBuzz if its a multiple of both 3 and 5 or 15. Else Prints The Number Itself. >>> fizz_buzz(1,7) '1 2 Fizz 4 Buzz Fizz 7 ' >>> fizz_buzz(1,0) Traceback (most recent call last): ... ValueError: Iterations must be done more than 0 times to play FizzBuzz >>> fizz_buzz(-5,5) Traceback (most recent call last): ... ValueError: starting number must be and integer and be more than 0 >>> fizz_buzz(10,-5) Traceback (most recent call last): ... ValueError: Iterations must be done more than 0 times to play FizzBuzz >>> fizz_buzz(1.5,5) Traceback (most recent call last): ... ValueError: starting number must be and integer and be more than 0 >>> fizz_buzz(1,5.5) Traceback (most recent call last): ... ValueError: iterations must be defined as integers """ if not isinstance(iterations, int): raise ValueError("iterations must be defined as integers") if not isinstance(number, int) or not number >= 1: raise ValueError( """starting number must be and integer and be more than 0""" ) if not iterations >= 1: raise ValueError("Iterations must be done more than 0 times to play FizzBuzz") out = "" while number <= iterations: if number % 3 == 0: out += "Fizz" if number % 5 == 0: out += "Buzz" if 0 not in (number % 3, number % 5): out += str(number) # print(out) number += 1 out += " " return out if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Harmonic mean Reference: https://en.wikipedia.org/wiki/Harmonic_mean Harmonic series Reference: https://en.wikipedia.org/wiki/Harmonic_series(mathematics) """ def is_harmonic_series(series: list) -> bool: """ checking whether the input series is arithmetic series or not >>> is_harmonic_series([ 1, 2/3, 1/2, 2/5, 1/3]) True >>> is_harmonic_series([ 1, 2/3, 2/5, 1/3]) False >>> is_harmonic_series([1, 2, 3]) False >>> is_harmonic_series([1/2, 1/3, 1/4]) True >>> is_harmonic_series([2/5, 2/10, 2/15, 2/20, 2/25]) True >>> is_harmonic_series(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [1, 2/3, 2] >>> is_harmonic_series([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list >>> is_harmonic_series([0]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element >>> is_harmonic_series([1,2,0,6]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [1, 2/3, 2]") if len(series) == 0: raise ValueError("Input list must be a non empty list") if len(series) == 1 and series[0] != 0: return True rec_series = [] series_len = len(series) for i in range(0, series_len): if series[i] == 0: raise ValueError("Input series cannot have 0 as an element") rec_series.append(1 / series[i]) common_diff = rec_series[1] - rec_series[0] for index in range(2, series_len): if rec_series[index] - rec_series[index - 1] != common_diff: return False return True def harmonic_mean(series: list) -> float: """ return the harmonic mean of series >>> harmonic_mean([1, 4, 4]) 2.0 >>> harmonic_mean([3, 6, 9, 12]) 5.759999999999999 >>> harmonic_mean(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [2, 4, 6] >>> harmonic_mean([1, 2, 3]) 1.6363636363636365 >>> harmonic_mean([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [2, 4, 6]") if len(series) == 0: raise ValueError("Input list must be a non empty list") answer = 0 for val in series: answer += 1 / val return len(series) / answer if __name__ == "__main__": import doctest doctest.testmod()
""" Harmonic mean Reference: https://en.wikipedia.org/wiki/Harmonic_mean Harmonic series Reference: https://en.wikipedia.org/wiki/Harmonic_series(mathematics) """ def is_harmonic_series(series: list) -> bool: """ checking whether the input series is arithmetic series or not >>> is_harmonic_series([ 1, 2/3, 1/2, 2/5, 1/3]) True >>> is_harmonic_series([ 1, 2/3, 2/5, 1/3]) False >>> is_harmonic_series([1, 2, 3]) False >>> is_harmonic_series([1/2, 1/3, 1/4]) True >>> is_harmonic_series([2/5, 2/10, 2/15, 2/20, 2/25]) True >>> is_harmonic_series(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [1, 2/3, 2] >>> is_harmonic_series([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list >>> is_harmonic_series([0]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element >>> is_harmonic_series([1,2,0,6]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [1, 2/3, 2]") if len(series) == 0: raise ValueError("Input list must be a non empty list") if len(series) == 1 and series[0] != 0: return True rec_series = [] series_len = len(series) for i in range(0, series_len): if series[i] == 0: raise ValueError("Input series cannot have 0 as an element") rec_series.append(1 / series[i]) common_diff = rec_series[1] - rec_series[0] for index in range(2, series_len): if rec_series[index] - rec_series[index - 1] != common_diff: return False return True def harmonic_mean(series: list) -> float: """ return the harmonic mean of series >>> harmonic_mean([1, 4, 4]) 2.0 >>> harmonic_mean([3, 6, 9, 12]) 5.759999999999999 >>> harmonic_mean(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [2, 4, 6] >>> harmonic_mean([1, 2, 3]) 1.6363636363636365 >>> harmonic_mean([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [2, 4, 6]") if len(series) == 0: raise ValueError("Input list must be a non empty list") answer = 0 for val in series: answer += 1 / val return len(series) / answer if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Min heap data structure # with decrease key functionality - in O(log(n)) time class Node: def __init__(self, name, val): self.name = name self.val = val def __str__(self): return f"{self.__class__.__name__}({self.name}, {self.val})" def __lt__(self, other): return self.val < other.val class MinHeap: """ >>> r = Node("R", -1) >>> b = Node("B", 6) >>> a = Node("A", 3) >>> x = Node("X", 1) >>> e = Node("E", 4) >>> print(b) Node(B, 6) >>> myMinHeap = MinHeap([r, b, a, x, e]) >>> myMinHeap.decrease_key(b, -17) >>> print(b) Node(B, -17) >>> myMinHeap["B"] -17 """ def __init__(self, array): self.idx_of_element = {} self.heap_dict = {} self.heap = self.build_heap(array) def __getitem__(self, key): return self.get_value(key) def get_parent_idx(self, idx): return (idx - 1) // 2 def get_left_child_idx(self, idx): return idx * 2 + 1 def get_right_child_idx(self, idx): return idx * 2 + 2 def get_value(self, key): return self.heap_dict[key] def build_heap(self, array): last_idx = len(array) - 1 start_from = self.get_parent_idx(last_idx) for idx, i in enumerate(array): self.idx_of_element[i] = idx self.heap_dict[i.name] = i.val for i in range(start_from, -1, -1): self.sift_down(i, array) return array # this is min-heapify method def sift_down(self, idx, array): while True: l = self.get_left_child_idx(idx) # noqa: E741 r = self.get_right_child_idx(idx) smallest = idx if l < len(array) and array[l] < array[idx]: smallest = l if r < len(array) and array[r] < array[smallest]: smallest = r if smallest != idx: array[idx], array[smallest] = array[smallest], array[idx] ( self.idx_of_element[array[idx]], self.idx_of_element[array[smallest]], ) = ( self.idx_of_element[array[smallest]], self.idx_of_element[array[idx]], ) idx = smallest else: break def sift_up(self, idx): p = self.get_parent_idx(idx) while p >= 0 and self.heap[p] > self.heap[idx]: self.heap[p], self.heap[idx] = self.heap[idx], self.heap[p] self.idx_of_element[self.heap[p]], self.idx_of_element[self.heap[idx]] = ( self.idx_of_element[self.heap[idx]], self.idx_of_element[self.heap[p]], ) idx = p p = self.get_parent_idx(idx) def peek(self): return self.heap[0] def remove(self): self.heap[0], self.heap[-1] = self.heap[-1], self.heap[0] self.idx_of_element[self.heap[0]], self.idx_of_element[self.heap[-1]] = ( self.idx_of_element[self.heap[-1]], self.idx_of_element[self.heap[0]], ) x = self.heap.pop() del self.idx_of_element[x] self.sift_down(0, self.heap) return x def insert(self, node): self.heap.append(node) self.idx_of_element[node] = len(self.heap) - 1 self.heap_dict[node.name] = node.val self.sift_up(len(self.heap) - 1) def is_empty(self): return len(self.heap) == 0 def decrease_key(self, node, new_value): assert ( self.heap[self.idx_of_element[node]].val > new_value ), "newValue must be less that current value" node.val = new_value self.heap_dict[node.name] = new_value self.sift_up(self.idx_of_element[node]) # USAGE r = Node("R", -1) b = Node("B", 6) a = Node("A", 3) x = Node("X", 1) e = Node("E", 4) # Use one of these two ways to generate Min-Heap # Generating Min-Heap from array my_min_heap = MinHeap([r, b, a, x, e]) # Generating Min-Heap by Insert method # myMinHeap.insert(a) # myMinHeap.insert(b) # myMinHeap.insert(x) # myMinHeap.insert(r) # myMinHeap.insert(e) # Before print("Min Heap - before decrease key") for i in my_min_heap.heap: print(i) print("Min Heap - After decrease key of node [B -> -17]") my_min_heap.decrease_key(b, -17) # After for i in my_min_heap.heap: print(i) if __name__ == "__main__": import doctest doctest.testmod()
# Min heap data structure # with decrease key functionality - in O(log(n)) time class Node: def __init__(self, name, val): self.name = name self.val = val def __str__(self): return f"{self.__class__.__name__}({self.name}, {self.val})" def __lt__(self, other): return self.val < other.val class MinHeap: """ >>> r = Node("R", -1) >>> b = Node("B", 6) >>> a = Node("A", 3) >>> x = Node("X", 1) >>> e = Node("E", 4) >>> print(b) Node(B, 6) >>> myMinHeap = MinHeap([r, b, a, x, e]) >>> myMinHeap.decrease_key(b, -17) >>> print(b) Node(B, -17) >>> myMinHeap["B"] -17 """ def __init__(self, array): self.idx_of_element = {} self.heap_dict = {} self.heap = self.build_heap(array) def __getitem__(self, key): return self.get_value(key) def get_parent_idx(self, idx): return (idx - 1) // 2 def get_left_child_idx(self, idx): return idx * 2 + 1 def get_right_child_idx(self, idx): return idx * 2 + 2 def get_value(self, key): return self.heap_dict[key] def build_heap(self, array): last_idx = len(array) - 1 start_from = self.get_parent_idx(last_idx) for idx, i in enumerate(array): self.idx_of_element[i] = idx self.heap_dict[i.name] = i.val for i in range(start_from, -1, -1): self.sift_down(i, array) return array # this is min-heapify method def sift_down(self, idx, array): while True: l = self.get_left_child_idx(idx) # noqa: E741 r = self.get_right_child_idx(idx) smallest = idx if l < len(array) and array[l] < array[idx]: smallest = l if r < len(array) and array[r] < array[smallest]: smallest = r if smallest != idx: array[idx], array[smallest] = array[smallest], array[idx] ( self.idx_of_element[array[idx]], self.idx_of_element[array[smallest]], ) = ( self.idx_of_element[array[smallest]], self.idx_of_element[array[idx]], ) idx = smallest else: break def sift_up(self, idx): p = self.get_parent_idx(idx) while p >= 0 and self.heap[p] > self.heap[idx]: self.heap[p], self.heap[idx] = self.heap[idx], self.heap[p] self.idx_of_element[self.heap[p]], self.idx_of_element[self.heap[idx]] = ( self.idx_of_element[self.heap[idx]], self.idx_of_element[self.heap[p]], ) idx = p p = self.get_parent_idx(idx) def peek(self): return self.heap[0] def remove(self): self.heap[0], self.heap[-1] = self.heap[-1], self.heap[0] self.idx_of_element[self.heap[0]], self.idx_of_element[self.heap[-1]] = ( self.idx_of_element[self.heap[-1]], self.idx_of_element[self.heap[0]], ) x = self.heap.pop() del self.idx_of_element[x] self.sift_down(0, self.heap) return x def insert(self, node): self.heap.append(node) self.idx_of_element[node] = len(self.heap) - 1 self.heap_dict[node.name] = node.val self.sift_up(len(self.heap) - 1) def is_empty(self): return len(self.heap) == 0 def decrease_key(self, node, new_value): assert ( self.heap[self.idx_of_element[node]].val > new_value ), "newValue must be less that current value" node.val = new_value self.heap_dict[node.name] = new_value self.sift_up(self.idx_of_element[node]) # USAGE r = Node("R", -1) b = Node("B", 6) a = Node("A", 3) x = Node("X", 1) e = Node("E", 4) # Use one of these two ways to generate Min-Heap # Generating Min-Heap from array my_min_heap = MinHeap([r, b, a, x, e]) # Generating Min-Heap by Insert method # myMinHeap.insert(a) # myMinHeap.insert(b) # myMinHeap.insert(x) # myMinHeap.insert(r) # myMinHeap.insert(e) # Before print("Min Heap - before decrease key") for i in my_min_heap.heap: print(i) print("Min Heap - After decrease key of node [B -> -17]") my_min_heap.decrease_key(b, -17) # After for i in my_min_heap.heap: print(i) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" - - - - - -- - - - - - - - - - - - - - - - - - - - - - - Name - - CNN - Convolution Neural Network For Photo Recognizing Goal - - Recognize Handing Writing Word Photo Detail:Total 5 layers neural network * Convolution layer * Pooling layer * Input layer layer of BP * Hidden layer of BP * Output layer of BP Author: Stephen Lee Github: [email protected] Date: 2017.9.20 - - - - - -- - - - - - - - - - - - - - - - - - - - - - - """ import pickle import numpy as np from matplotlib import pyplot as plt class CNN: def __init__( self, conv1_get, size_p1, bp_num1, bp_num2, bp_num3, rate_w=0.2, rate_t=0.2 ): """ :param conv1_get: [a,c,d],size, number, step of convolution kernel :param size_p1: pooling size :param bp_num1: units number of flatten layer :param bp_num2: units number of hidden layer :param bp_num3: units number of output layer :param rate_w: rate of weight learning :param rate_t: rate of threshold learning """ self.num_bp1 = bp_num1 self.num_bp2 = bp_num2 self.num_bp3 = bp_num3 self.conv1 = conv1_get[:2] self.step_conv1 = conv1_get[2] self.size_pooling1 = size_p1 self.rate_weight = rate_w self.rate_thre = rate_t self.w_conv1 = [ np.mat(-1 * np.random.rand(self.conv1[0], self.conv1[0]) + 0.5) for i in range(self.conv1[1]) ] self.wkj = np.mat(-1 * np.random.rand(self.num_bp3, self.num_bp2) + 0.5) self.vji = np.mat(-1 * np.random.rand(self.num_bp2, self.num_bp1) + 0.5) self.thre_conv1 = -2 * np.random.rand(self.conv1[1]) + 1 self.thre_bp2 = -2 * np.random.rand(self.num_bp2) + 1 self.thre_bp3 = -2 * np.random.rand(self.num_bp3) + 1 def save_model(self, save_path): # save model dict with pickle model_dic = { "num_bp1": self.num_bp1, "num_bp2": self.num_bp2, "num_bp3": self.num_bp3, "conv1": self.conv1, "step_conv1": self.step_conv1, "size_pooling1": self.size_pooling1, "rate_weight": self.rate_weight, "rate_thre": self.rate_thre, "w_conv1": self.w_conv1, "wkj": self.wkj, "vji": self.vji, "thre_conv1": self.thre_conv1, "thre_bp2": self.thre_bp2, "thre_bp3": self.thre_bp3, } with open(save_path, "wb") as f: pickle.dump(model_dic, f) print(f"Model saved: {save_path}") @classmethod def read_model(cls, model_path): # read saved model with open(model_path, "rb") as f: model_dic = pickle.load(f) # noqa: S301 conv_get = model_dic.get("conv1") conv_get.append(model_dic.get("step_conv1")) size_p1 = model_dic.get("size_pooling1") bp1 = model_dic.get("num_bp1") bp2 = model_dic.get("num_bp2") bp3 = model_dic.get("num_bp3") r_w = model_dic.get("rate_weight") r_t = model_dic.get("rate_thre") # create model instance conv_ins = CNN(conv_get, size_p1, bp1, bp2, bp3, r_w, r_t) # modify model parameter conv_ins.w_conv1 = model_dic.get("w_conv1") conv_ins.wkj = model_dic.get("wkj") conv_ins.vji = model_dic.get("vji") conv_ins.thre_conv1 = model_dic.get("thre_conv1") conv_ins.thre_bp2 = model_dic.get("thre_bp2") conv_ins.thre_bp3 = model_dic.get("thre_bp3") return conv_ins def sig(self, x): return 1 / (1 + np.exp(-1 * x)) def do_round(self, x): return round(x, 3) def convolute(self, data, convs, w_convs, thre_convs, conv_step): # convolution process size_conv = convs[0] num_conv = convs[1] size_data = np.shape(data)[0] # get the data slice of original image data, data_focus data_focus = [] for i_focus in range(0, size_data - size_conv + 1, conv_step): for j_focus in range(0, size_data - size_conv + 1, conv_step): focus = data[ i_focus : i_focus + size_conv, j_focus : j_focus + size_conv ] data_focus.append(focus) # calculate the feature map of every single kernel, and saved as list of matrix data_featuremap = [] size_feature_map = int((size_data - size_conv) / conv_step + 1) for i_map in range(num_conv): featuremap = [] for i_focus in range(len(data_focus)): net_focus = ( np.sum(np.multiply(data_focus[i_focus], w_convs[i_map])) - thre_convs[i_map] ) featuremap.append(self.sig(net_focus)) featuremap = np.asmatrix(featuremap).reshape( size_feature_map, size_feature_map ) data_featuremap.append(featuremap) # expanding the data slice to One dimenssion focus1_list = [] for each_focus in data_focus: focus1_list.extend(self.Expand_Mat(each_focus)) focus_list = np.asarray(focus1_list) return focus_list, data_featuremap def pooling(self, featuremaps, size_pooling, pooling_type="average_pool"): # pooling process size_map = len(featuremaps[0]) size_pooled = int(size_map / size_pooling) featuremap_pooled = [] for i_map in range(len(featuremaps)): feature_map = featuremaps[i_map] map_pooled = [] for i_focus in range(0, size_map, size_pooling): for j_focus in range(0, size_map, size_pooling): focus = feature_map[ i_focus : i_focus + size_pooling, j_focus : j_focus + size_pooling, ] if pooling_type == "average_pool": # average pooling map_pooled.append(np.average(focus)) elif pooling_type == "max_pooling": # max pooling map_pooled.append(np.max(focus)) map_pooled = np.asmatrix(map_pooled).reshape(size_pooled, size_pooled) featuremap_pooled.append(map_pooled) return featuremap_pooled def _expand(self, data): # expanding three dimension data to one dimension list data_expanded = [] for i in range(len(data)): shapes = np.shape(data[i]) data_listed = data[i].reshape(1, shapes[0] * shapes[1]) data_listed = data_listed.getA().tolist()[0] data_expanded.extend(data_listed) data_expanded = np.asarray(data_expanded) return data_expanded def _expand_mat(self, data_mat): # expanding matrix to one dimension list data_mat = np.asarray(data_mat) shapes = np.shape(data_mat) data_expanded = data_mat.reshape(1, shapes[0] * shapes[1]) return data_expanded def _calculate_gradient_from_pool( self, out_map, pd_pool, num_map, size_map, size_pooling ): """ calculate the gradient from the data slice of pool layer pd_pool: list of matrix out_map: the shape of data slice(size_map*size_map) return: pd_all: list of matrix, [num, size_map, size_map] """ pd_all = [] i_pool = 0 for i_map in range(num_map): pd_conv1 = np.ones((size_map, size_map)) for i in range(0, size_map, size_pooling): for j in range(0, size_map, size_pooling): pd_conv1[i : i + size_pooling, j : j + size_pooling] = pd_pool[ i_pool ] i_pool = i_pool + 1 pd_conv2 = np.multiply( pd_conv1, np.multiply(out_map[i_map], (1 - out_map[i_map])) ) pd_all.append(pd_conv2) return pd_all def train( self, patterns, datas_train, datas_teach, n_repeat, error_accuracy, draw_e=bool ): # model traning print("----------------------Start Training-------------------------") print((" - - Shape: Train_Data ", np.shape(datas_train))) print((" - - Shape: Teach_Data ", np.shape(datas_teach))) rp = 0 all_mse = [] mse = 10000 while rp < n_repeat and mse >= error_accuracy: error_count = 0 print(f"-------------Learning Time {rp}--------------") for p in range(len(datas_train)): # print('------------Learning Image: %d--------------'%p) data_train = np.asmatrix(datas_train[p]) data_teach = np.asarray(datas_teach[p]) data_focus1, data_conved1 = self.convolute( data_train, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) shape_featuremap1 = np.shape(data_conved1) """ print(' -----original shape ', np.shape(data_train)) print(' ---- after convolution ',np.shape(data_conv1)) print(' -----after pooling ',np.shape(data_pooled1)) """ data_bp_input = self._expand(data_pooled1) bp_out1 = data_bp_input bp_net_j = np.dot(bp_out1, self.vji.T) - self.thre_bp2 bp_out2 = self.sig(bp_net_j) bp_net_k = np.dot(bp_out2, self.wkj.T) - self.thre_bp3 bp_out3 = self.sig(bp_net_k) # --------------Model Leaning ------------------------ # calculate error and gradient--------------- pd_k_all = np.multiply( (data_teach - bp_out3), np.multiply(bp_out3, (1 - bp_out3)) ) pd_j_all = np.multiply( np.dot(pd_k_all, self.wkj), np.multiply(bp_out2, (1 - bp_out2)) ) pd_i_all = np.dot(pd_j_all, self.vji) pd_conv1_pooled = pd_i_all / (self.size_pooling1 * self.size_pooling1) pd_conv1_pooled = pd_conv1_pooled.T.getA().tolist() pd_conv1_all = self._calculate_gradient_from_pool( data_conved1, pd_conv1_pooled, shape_featuremap1[0], shape_featuremap1[1], self.size_pooling1, ) # weight and threshold learning process--------- # convolution layer for k_conv in range(self.conv1[1]): pd_conv_list = self._expand_mat(pd_conv1_all[k_conv]) delta_w = self.rate_weight * np.dot(pd_conv_list, data_focus1) self.w_conv1[k_conv] = self.w_conv1[k_conv] + delta_w.reshape( (self.conv1[0], self.conv1[0]) ) self.thre_conv1[k_conv] = ( self.thre_conv1[k_conv] - np.sum(pd_conv1_all[k_conv]) * self.rate_thre ) # all connected layer self.wkj = self.wkj + pd_k_all.T * bp_out2 * self.rate_weight self.vji = self.vji + pd_j_all.T * bp_out1 * self.rate_weight self.thre_bp3 = self.thre_bp3 - pd_k_all * self.rate_thre self.thre_bp2 = self.thre_bp2 - pd_j_all * self.rate_thre # calculate the sum error of all single image errors = np.sum(abs(data_teach - bp_out3)) error_count += errors # print(' ----Teach ',data_teach) # print(' ----BP_output ',bp_out3) rp = rp + 1 mse = error_count / patterns all_mse.append(mse) def draw_error(): yplot = [error_accuracy for i in range(int(n_repeat * 1.2))] plt.plot(all_mse, "+-") plt.plot(yplot, "r--") plt.xlabel("Learning Times") plt.ylabel("All_mse") plt.grid(True, alpha=0.5) plt.show() print("------------------Training Complished---------------------") print((" - - Training epoch: ", rp, f" - - Mse: {mse:.6f}")) if draw_e: draw_error() return mse def predict(self, datas_test): # model predict produce_out = [] print("-------------------Start Testing-------------------------") print((" - - Shape: Test_Data ", np.shape(datas_test))) for p in range(len(datas_test)): data_test = np.asmatrix(datas_test[p]) data_focus1, data_conved1 = self.convolute( data_test, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) data_bp_input = self._expand(data_pooled1) bp_out1 = data_bp_input bp_net_j = bp_out1 * self.vji.T - self.thre_bp2 bp_out2 = self.sig(bp_net_j) bp_net_k = bp_out2 * self.wkj.T - self.thre_bp3 bp_out3 = self.sig(bp_net_k) produce_out.extend(bp_out3.getA().tolist()) res = [list(map(self.do_round, each)) for each in produce_out] return np.asarray(res) def convolution(self, data): # return the data of image after convoluting process so we can check it out data_test = np.asmatrix(data) data_focus1, data_conved1 = self.convolute( data_test, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) return data_conved1, data_pooled1 if __name__ == "__main__": """ I will put the example on other file """
""" - - - - - -- - - - - - - - - - - - - - - - - - - - - - - Name - - CNN - Convolution Neural Network For Photo Recognizing Goal - - Recognize Handing Writing Word Photo Detail:Total 5 layers neural network * Convolution layer * Pooling layer * Input layer layer of BP * Hidden layer of BP * Output layer of BP Author: Stephen Lee Github: [email protected] Date: 2017.9.20 - - - - - -- - - - - - - - - - - - - - - - - - - - - - - """ import pickle import numpy as np from matplotlib import pyplot as plt class CNN: def __init__( self, conv1_get, size_p1, bp_num1, bp_num2, bp_num3, rate_w=0.2, rate_t=0.2 ): """ :param conv1_get: [a,c,d],size, number, step of convolution kernel :param size_p1: pooling size :param bp_num1: units number of flatten layer :param bp_num2: units number of hidden layer :param bp_num3: units number of output layer :param rate_w: rate of weight learning :param rate_t: rate of threshold learning """ self.num_bp1 = bp_num1 self.num_bp2 = bp_num2 self.num_bp3 = bp_num3 self.conv1 = conv1_get[:2] self.step_conv1 = conv1_get[2] self.size_pooling1 = size_p1 self.rate_weight = rate_w self.rate_thre = rate_t self.w_conv1 = [ np.mat(-1 * np.random.rand(self.conv1[0], self.conv1[0]) + 0.5) for i in range(self.conv1[1]) ] self.wkj = np.mat(-1 * np.random.rand(self.num_bp3, self.num_bp2) + 0.5) self.vji = np.mat(-1 * np.random.rand(self.num_bp2, self.num_bp1) + 0.5) self.thre_conv1 = -2 * np.random.rand(self.conv1[1]) + 1 self.thre_bp2 = -2 * np.random.rand(self.num_bp2) + 1 self.thre_bp3 = -2 * np.random.rand(self.num_bp3) + 1 def save_model(self, save_path): # save model dict with pickle model_dic = { "num_bp1": self.num_bp1, "num_bp2": self.num_bp2, "num_bp3": self.num_bp3, "conv1": self.conv1, "step_conv1": self.step_conv1, "size_pooling1": self.size_pooling1, "rate_weight": self.rate_weight, "rate_thre": self.rate_thre, "w_conv1": self.w_conv1, "wkj": self.wkj, "vji": self.vji, "thre_conv1": self.thre_conv1, "thre_bp2": self.thre_bp2, "thre_bp3": self.thre_bp3, } with open(save_path, "wb") as f: pickle.dump(model_dic, f) print(f"Model saved: {save_path}") @classmethod def read_model(cls, model_path): # read saved model with open(model_path, "rb") as f: model_dic = pickle.load(f) # noqa: S301 conv_get = model_dic.get("conv1") conv_get.append(model_dic.get("step_conv1")) size_p1 = model_dic.get("size_pooling1") bp1 = model_dic.get("num_bp1") bp2 = model_dic.get("num_bp2") bp3 = model_dic.get("num_bp3") r_w = model_dic.get("rate_weight") r_t = model_dic.get("rate_thre") # create model instance conv_ins = CNN(conv_get, size_p1, bp1, bp2, bp3, r_w, r_t) # modify model parameter conv_ins.w_conv1 = model_dic.get("w_conv1") conv_ins.wkj = model_dic.get("wkj") conv_ins.vji = model_dic.get("vji") conv_ins.thre_conv1 = model_dic.get("thre_conv1") conv_ins.thre_bp2 = model_dic.get("thre_bp2") conv_ins.thre_bp3 = model_dic.get("thre_bp3") return conv_ins def sig(self, x): return 1 / (1 + np.exp(-1 * x)) def do_round(self, x): return round(x, 3) def convolute(self, data, convs, w_convs, thre_convs, conv_step): # convolution process size_conv = convs[0] num_conv = convs[1] size_data = np.shape(data)[0] # get the data slice of original image data, data_focus data_focus = [] for i_focus in range(0, size_data - size_conv + 1, conv_step): for j_focus in range(0, size_data - size_conv + 1, conv_step): focus = data[ i_focus : i_focus + size_conv, j_focus : j_focus + size_conv ] data_focus.append(focus) # calculate the feature map of every single kernel, and saved as list of matrix data_featuremap = [] size_feature_map = int((size_data - size_conv) / conv_step + 1) for i_map in range(num_conv): featuremap = [] for i_focus in range(len(data_focus)): net_focus = ( np.sum(np.multiply(data_focus[i_focus], w_convs[i_map])) - thre_convs[i_map] ) featuremap.append(self.sig(net_focus)) featuremap = np.asmatrix(featuremap).reshape( size_feature_map, size_feature_map ) data_featuremap.append(featuremap) # expanding the data slice to One dimenssion focus1_list = [] for each_focus in data_focus: focus1_list.extend(self.Expand_Mat(each_focus)) focus_list = np.asarray(focus1_list) return focus_list, data_featuremap def pooling(self, featuremaps, size_pooling, pooling_type="average_pool"): # pooling process size_map = len(featuremaps[0]) size_pooled = int(size_map / size_pooling) featuremap_pooled = [] for i_map in range(len(featuremaps)): feature_map = featuremaps[i_map] map_pooled = [] for i_focus in range(0, size_map, size_pooling): for j_focus in range(0, size_map, size_pooling): focus = feature_map[ i_focus : i_focus + size_pooling, j_focus : j_focus + size_pooling, ] if pooling_type == "average_pool": # average pooling map_pooled.append(np.average(focus)) elif pooling_type == "max_pooling": # max pooling map_pooled.append(np.max(focus)) map_pooled = np.asmatrix(map_pooled).reshape(size_pooled, size_pooled) featuremap_pooled.append(map_pooled) return featuremap_pooled def _expand(self, data): # expanding three dimension data to one dimension list data_expanded = [] for i in range(len(data)): shapes = np.shape(data[i]) data_listed = data[i].reshape(1, shapes[0] * shapes[1]) data_listed = data_listed.getA().tolist()[0] data_expanded.extend(data_listed) data_expanded = np.asarray(data_expanded) return data_expanded def _expand_mat(self, data_mat): # expanding matrix to one dimension list data_mat = np.asarray(data_mat) shapes = np.shape(data_mat) data_expanded = data_mat.reshape(1, shapes[0] * shapes[1]) return data_expanded def _calculate_gradient_from_pool( self, out_map, pd_pool, num_map, size_map, size_pooling ): """ calculate the gradient from the data slice of pool layer pd_pool: list of matrix out_map: the shape of data slice(size_map*size_map) return: pd_all: list of matrix, [num, size_map, size_map] """ pd_all = [] i_pool = 0 for i_map in range(num_map): pd_conv1 = np.ones((size_map, size_map)) for i in range(0, size_map, size_pooling): for j in range(0, size_map, size_pooling): pd_conv1[i : i + size_pooling, j : j + size_pooling] = pd_pool[ i_pool ] i_pool = i_pool + 1 pd_conv2 = np.multiply( pd_conv1, np.multiply(out_map[i_map], (1 - out_map[i_map])) ) pd_all.append(pd_conv2) return pd_all def train( self, patterns, datas_train, datas_teach, n_repeat, error_accuracy, draw_e=bool ): # model traning print("----------------------Start Training-------------------------") print((" - - Shape: Train_Data ", np.shape(datas_train))) print((" - - Shape: Teach_Data ", np.shape(datas_teach))) rp = 0 all_mse = [] mse = 10000 while rp < n_repeat and mse >= error_accuracy: error_count = 0 print(f"-------------Learning Time {rp}--------------") for p in range(len(datas_train)): # print('------------Learning Image: %d--------------'%p) data_train = np.asmatrix(datas_train[p]) data_teach = np.asarray(datas_teach[p]) data_focus1, data_conved1 = self.convolute( data_train, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) shape_featuremap1 = np.shape(data_conved1) """ print(' -----original shape ', np.shape(data_train)) print(' ---- after convolution ',np.shape(data_conv1)) print(' -----after pooling ',np.shape(data_pooled1)) """ data_bp_input = self._expand(data_pooled1) bp_out1 = data_bp_input bp_net_j = np.dot(bp_out1, self.vji.T) - self.thre_bp2 bp_out2 = self.sig(bp_net_j) bp_net_k = np.dot(bp_out2, self.wkj.T) - self.thre_bp3 bp_out3 = self.sig(bp_net_k) # --------------Model Leaning ------------------------ # calculate error and gradient--------------- pd_k_all = np.multiply( (data_teach - bp_out3), np.multiply(bp_out3, (1 - bp_out3)) ) pd_j_all = np.multiply( np.dot(pd_k_all, self.wkj), np.multiply(bp_out2, (1 - bp_out2)) ) pd_i_all = np.dot(pd_j_all, self.vji) pd_conv1_pooled = pd_i_all / (self.size_pooling1 * self.size_pooling1) pd_conv1_pooled = pd_conv1_pooled.T.getA().tolist() pd_conv1_all = self._calculate_gradient_from_pool( data_conved1, pd_conv1_pooled, shape_featuremap1[0], shape_featuremap1[1], self.size_pooling1, ) # weight and threshold learning process--------- # convolution layer for k_conv in range(self.conv1[1]): pd_conv_list = self._expand_mat(pd_conv1_all[k_conv]) delta_w = self.rate_weight * np.dot(pd_conv_list, data_focus1) self.w_conv1[k_conv] = self.w_conv1[k_conv] + delta_w.reshape( (self.conv1[0], self.conv1[0]) ) self.thre_conv1[k_conv] = ( self.thre_conv1[k_conv] - np.sum(pd_conv1_all[k_conv]) * self.rate_thre ) # all connected layer self.wkj = self.wkj + pd_k_all.T * bp_out2 * self.rate_weight self.vji = self.vji + pd_j_all.T * bp_out1 * self.rate_weight self.thre_bp3 = self.thre_bp3 - pd_k_all * self.rate_thre self.thre_bp2 = self.thre_bp2 - pd_j_all * self.rate_thre # calculate the sum error of all single image errors = np.sum(abs(data_teach - bp_out3)) error_count += errors # print(' ----Teach ',data_teach) # print(' ----BP_output ',bp_out3) rp = rp + 1 mse = error_count / patterns all_mse.append(mse) def draw_error(): yplot = [error_accuracy for i in range(int(n_repeat * 1.2))] plt.plot(all_mse, "+-") plt.plot(yplot, "r--") plt.xlabel("Learning Times") plt.ylabel("All_mse") plt.grid(True, alpha=0.5) plt.show() print("------------------Training Complished---------------------") print((" - - Training epoch: ", rp, f" - - Mse: {mse:.6f}")) if draw_e: draw_error() return mse def predict(self, datas_test): # model predict produce_out = [] print("-------------------Start Testing-------------------------") print((" - - Shape: Test_Data ", np.shape(datas_test))) for p in range(len(datas_test)): data_test = np.asmatrix(datas_test[p]) data_focus1, data_conved1 = self.convolute( data_test, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) data_bp_input = self._expand(data_pooled1) bp_out1 = data_bp_input bp_net_j = bp_out1 * self.vji.T - self.thre_bp2 bp_out2 = self.sig(bp_net_j) bp_net_k = bp_out2 * self.wkj.T - self.thre_bp3 bp_out3 = self.sig(bp_net_k) produce_out.extend(bp_out3.getA().tolist()) res = [list(map(self.do_round, each)) for each in produce_out] return np.asarray(res) def convolution(self, data): # return the data of image after convoluting process so we can check it out data_test = np.asmatrix(data) data_focus1, data_conved1 = self.convolute( data_test, self.conv1, self.w_conv1, self.thre_conv1, conv_step=self.step_conv1, ) data_pooled1 = self.pooling(data_conved1, self.size_pooling1) return data_conved1, data_pooled1 if __name__ == "__main__": """ I will put the example on other file """
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
#!/bin/python3 # Doomsday algorithm info: https://en.wikipedia.org/wiki/Doomsday_rule DOOMSDAY_LEAP = [4, 1, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5] DOOMSDAY_NOT_LEAP = [3, 7, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5] WEEK_DAY_NAMES = { 0: "Sunday", 1: "Monday", 2: "Tuesday", 3: "Wednesday", 4: "Thursday", 5: "Friday", 6: "Saturday", } def get_week_day(year: int, month: int, day: int) -> str: """Returns the week-day name out of a given date. >>> get_week_day(2020, 10, 24) 'Saturday' >>> get_week_day(2017, 10, 24) 'Tuesday' >>> get_week_day(2019, 5, 3) 'Friday' >>> get_week_day(1970, 9, 16) 'Wednesday' >>> get_week_day(1870, 8, 13) 'Saturday' >>> get_week_day(2040, 3, 14) 'Wednesday' """ # minimal input check: assert len(str(year)) > 2, "year should be in YYYY format" assert 1 <= month <= 12, "month should be between 1 to 12" assert 1 <= day <= 31, "day should be between 1 to 31" # Doomsday algorithm: century = year // 100 century_anchor = (5 * (century % 4) + 2) % 7 centurian = year % 100 centurian_m = centurian % 12 dooms_day = ( (centurian // 12) + centurian_m + (centurian_m // 4) + century_anchor ) % 7 day_anchor = ( DOOMSDAY_NOT_LEAP[month - 1] if (year % 4 != 0) or (centurian == 0 and (year % 400) == 0) else DOOMSDAY_LEAP[month - 1] ) week_day = (dooms_day + day - day_anchor) % 7 return WEEK_DAY_NAMES[week_day] if __name__ == "__main__": import doctest doctest.testmod()
#!/bin/python3 # Doomsday algorithm info: https://en.wikipedia.org/wiki/Doomsday_rule DOOMSDAY_LEAP = [4, 1, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5] DOOMSDAY_NOT_LEAP = [3, 7, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5] WEEK_DAY_NAMES = { 0: "Sunday", 1: "Monday", 2: "Tuesday", 3: "Wednesday", 4: "Thursday", 5: "Friday", 6: "Saturday", } def get_week_day(year: int, month: int, day: int) -> str: """Returns the week-day name out of a given date. >>> get_week_day(2020, 10, 24) 'Saturday' >>> get_week_day(2017, 10, 24) 'Tuesday' >>> get_week_day(2019, 5, 3) 'Friday' >>> get_week_day(1970, 9, 16) 'Wednesday' >>> get_week_day(1870, 8, 13) 'Saturday' >>> get_week_day(2040, 3, 14) 'Wednesday' """ # minimal input check: assert len(str(year)) > 2, "year should be in YYYY format" assert 1 <= month <= 12, "month should be between 1 to 12" assert 1 <= day <= 31, "day should be between 1 to 31" # Doomsday algorithm: century = year // 100 century_anchor = (5 * (century % 4) + 2) % 7 centurian = year % 100 centurian_m = centurian % 12 dooms_day = ( (centurian // 12) + centurian_m + (centurian_m // 4) + century_anchor ) % 7 day_anchor = ( DOOMSDAY_NOT_LEAP[month - 1] if (year % 4 != 0) or (centurian == 0 and (year % 400) == 0) else DOOMSDAY_LEAP[month - 1] ) week_day = (dooms_day + day - day_anchor) % 7 return WEEK_DAY_NAMES[week_day] if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def max_difference(a: list[int]) -> tuple[int, int]: """ We are given an array A[1..n] of integers, n >= 1. We want to find a pair of indices (i, j) such that 1 <= i <= j <= n and A[j] - A[i] is as large as possible. Explanation: https://www.geeksforgeeks.org/maximum-difference-between-two-elements/ >>> max_difference([5, 11, 2, 1, 7, 9, 0, 7]) (1, 9) """ # base case if len(a) == 1: return a[0], a[0] else: # split A into half. first = a[: len(a) // 2] second = a[len(a) // 2 :] # 2 sub problems, 1/2 of original size. small1, big1 = max_difference(first) small2, big2 = max_difference(second) # get min of first and max of second # linear time min_first = min(first) max_second = max(second) # 3 cases, either (small1, big1), # (min_first, max_second), (small2, big2) # constant comparisons if big2 - small2 > max_second - min_first and big2 - small2 > big1 - small1: return small2, big2 elif big1 - small1 > max_second - min_first: return small1, big1 else: return min_first, max_second if __name__ == "__main__": import doctest doctest.testmod()
def max_difference(a: list[int]) -> tuple[int, int]: """ We are given an array A[1..n] of integers, n >= 1. We want to find a pair of indices (i, j) such that 1 <= i <= j <= n and A[j] - A[i] is as large as possible. Explanation: https://www.geeksforgeeks.org/maximum-difference-between-two-elements/ >>> max_difference([5, 11, 2, 1, 7, 9, 0, 7]) (1, 9) """ # base case if len(a) == 1: return a[0], a[0] else: # split A into half. first = a[: len(a) // 2] second = a[len(a) // 2 :] # 2 sub problems, 1/2 of original size. small1, big1 = max_difference(first) small2, big2 = max_difference(second) # get min of first and max of second # linear time min_first = min(first) max_second = max(second) # 3 cases, either (small1, big1), # (min_first, max_second), (small2, big2) # constant comparisons if big2 - small2 > max_second - min_first and big2 - small2 > big1 - small1: return small2, big2 elif big1 - small1 > max_second - min_first: return small1, big1 else: return min_first, max_second if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Calculates buoyant force on object submerged within static fluid. Discovered by greek mathematician, Archimedes. The principle is named after him. Equation for calculating buoyant force: Fb = ρ * V * g Source: - https://en.wikipedia.org/wiki/Archimedes%27_principle """ # Acceleration Constant on Earth (unit m/s^2) g = 9.80665 def archimedes_principle( fluid_density: float, volume: float, gravity: float = g ) -> float: """ Args: fluid_density: density of fluid (kg/m^3) volume: volume of object / liquid being displaced by object gravity: Acceleration from gravity. Gravitational force on system, Default is Earth Gravity returns: buoyant force on object in Newtons >>> archimedes_principle(fluid_density=997, volume=0.5, gravity=9.8) 4885.3 >>> archimedes_principle(fluid_density=997, volume=0.7) 6844.061035 """ if fluid_density <= 0: raise ValueError("Impossible fluid density") if volume < 0: raise ValueError("Impossible Object volume") if gravity <= 0: raise ValueError("Impossible Gravity") return fluid_density * gravity * volume if __name__ == "__main__": import doctest # run doctest doctest.testmod()
""" Calculates buoyant force on object submerged within static fluid. Discovered by greek mathematician, Archimedes. The principle is named after him. Equation for calculating buoyant force: Fb = ρ * V * g Source: - https://en.wikipedia.org/wiki/Archimedes%27_principle """ # Acceleration Constant on Earth (unit m/s^2) g = 9.80665 def archimedes_principle( fluid_density: float, volume: float, gravity: float = g ) -> float: """ Args: fluid_density: density of fluid (kg/m^3) volume: volume of object / liquid being displaced by object gravity: Acceleration from gravity. Gravitational force on system, Default is Earth Gravity returns: buoyant force on object in Newtons >>> archimedes_principle(fluid_density=997, volume=0.5, gravity=9.8) 4885.3 >>> archimedes_principle(fluid_density=997, volume=0.7) 6844.061035 """ if fluid_density <= 0: raise ValueError("Impossible fluid density") if volume < 0: raise ValueError("Impossible Object volume") if gravity <= 0: raise ValueError("Impossible Gravity") return fluid_density * gravity * volume if __name__ == "__main__": import doctest # run doctest doctest.testmod()
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") i = 2 ans = 0 if n == 2: return 2 while n > 2: while n % i != 0: i += 1 ans = i while n % i == 0: n = n // i i += 1 return int(ans) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") i = 2 ans = 0 if n == 2: return 2 while n > 2: while n % i != 0: i += 1 ans = i while n % i == 0: n = n // i i += 1 return int(ans) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,936
Fix ruff errors
### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-08-09T07:13:45Z"
"2023-08-09T07:55:31Z"
842d03fb2ab7d83e4d4081c248d71e89bb520809
ae0fc85401efd9816193a06e554a66600cc09a97
Fix ruff errors. ### Describe your change: Fixes #8935 Fixing ruff errors again due to the recent version update Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons: 1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow. 2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway. * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Hashes Hashing is the process of mapping any amount of data to a specified size using an algorithm. This is known as a hash value (or, if you're feeling fancy, a hash code, hash sums, or even a hash digest). Hashing is a one-way function, whereas encryption is a two-way function. While it is functionally conceivable to reverse-hash stuff, the required computing power makes it impractical. Hashing is a one-way street. Unlike encryption, which is intended to protect data in transit, hashing is intended to authenticate that a file or piece of data has not been altered—that it is authentic. In other words, it functions as a checksum. ## Common hashing algorithms ### MD5 This is one of the first algorithms that has gained widespread acceptance. MD5 is hashing algorithm made by Ray Rivest that is known to suffer vulnerabilities. It was created in 1992 as the successor to MD4. Currently MD6 is in the works, but as of 2009 Rivest had removed it from NIST consideration for SHA-3. ### SHA SHA stands for Security Hashing Algorithm and it’s probably best known as the hashing algorithm used in most SSL/TLS cipher suites. A cipher suite is a collection of ciphers and algorithms that are used for SSL/TLS connections. SHA handles the hashing aspects. SHA-1, as we mentioned earlier, is now deprecated. SHA-2 is now mandatory. SHA-2 is sometimes known has SHA-256, though variants with longer bit lengths are also available. ### SHA256 SHA 256 is a member of the SHA 2 algorithm family, under which SHA stands for Secure Hash Algorithm. It was a collaborative effort between both the NSA and NIST to implement a successor to the SHA 1 family, which was beginning to lose potency against brute force attacks. It was published in 2001. The importance of the 256 in the name refers to the final hash digest value, i.e. the hash value will remain 256 bits regardless of the size of the plaintext/cleartext. Other algorithms in the SHA family are similar to SHA 256 in some ways. ### Luhn The Luhn algorithm, also renowned as the modulus 10 or mod 10 algorithm, is a straightforward checksum formula used to validate a wide range of identification numbers, including credit card numbers, IMEI numbers, and Canadian Social Insurance Numbers. A community of mathematicians developed the LUHN formula in the late 1960s. Companies offering credit cards quickly followed suit. Since the algorithm is in the public interest, anyone can use it. The algorithm is used by most credit cards and many government identification numbers as a simple method of differentiating valid figures from mistyped or otherwise incorrect numbers. It was created to guard against unintentional errors, not malicious attacks.
# Hashes Hashing is the process of mapping any amount of data to a specified size using an algorithm. This is known as a hash value (or, if you're feeling fancy, a hash code, hash sums, or even a hash digest). Hashing is a one-way function, whereas encryption is a two-way function. While it is functionally conceivable to reverse-hash stuff, the required computing power makes it impractical. Hashing is a one-way street. Unlike encryption, which is intended to protect data in transit, hashing is intended to authenticate that a file or piece of data has not been altered—that it is authentic. In other words, it functions as a checksum. ## Common hashing algorithms ### MD5 This is one of the first algorithms that has gained widespread acceptance. MD5 is hashing algorithm made by Ray Rivest that is known to suffer vulnerabilities. It was created in 1992 as the successor to MD4. Currently MD6 is in the works, but as of 2009 Rivest had removed it from NIST consideration for SHA-3. ### SHA SHA stands for Security Hashing Algorithm and it’s probably best known as the hashing algorithm used in most SSL/TLS cipher suites. A cipher suite is a collection of ciphers and algorithms that are used for SSL/TLS connections. SHA handles the hashing aspects. SHA-1, as we mentioned earlier, is now deprecated. SHA-2 is now mandatory. SHA-2 is sometimes known has SHA-256, though variants with longer bit lengths are also available. ### SHA256 SHA 256 is a member of the SHA 2 algorithm family, under which SHA stands for Secure Hash Algorithm. It was a collaborative effort between both the NSA and NIST to implement a successor to the SHA 1 family, which was beginning to lose potency against brute force attacks. It was published in 2001. The importance of the 256 in the name refers to the final hash digest value, i.e. the hash value will remain 256 bits regardless of the size of the plaintext/cleartext. Other algorithms in the SHA family are similar to SHA 256 in some ways. ### Luhn The Luhn algorithm, also renowned as the modulus 10 or mod 10 algorithm, is a straightforward checksum formula used to validate a wide range of identification numbers, including credit card numbers, IMEI numbers, and Canadian Social Insurance Numbers. A community of mathematicians developed the LUHN formula in the late 1960s. Companies offering credit cards quickly followed suit. Since the algorithm is in the public interest, anyone can use it. The algorithm is used by most credit cards and many government identification numbers as a simple method of differentiating valid figures from mistyped or otherwise incorrect numbers. It was created to guard against unintentional errors, not malicious attacks.
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
## Arithmetic Analysis * [Bisection](arithmetic_analysis/bisection.py) * [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py) * [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py) * [Intersection](arithmetic_analysis/intersection.py) * [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py) * [Lu Decomposition](arithmetic_analysis/lu_decomposition.py) * [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py) * [Newton Method](arithmetic_analysis/newton_method.py) * [Newton Raphson](arithmetic_analysis/newton_raphson.py) * [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py) * [Secant Method](arithmetic_analysis/secant_method.py) ## Audio Filters * [Butterworth Filter](audio_filters/butterworth_filter.py) * [Iir Filter](audio_filters/iir_filter.py) * [Show Response](audio_filters/show_response.py) ## Backtracking * [All Combinations](backtracking/all_combinations.py) * [All Permutations](backtracking/all_permutations.py) * [All Subsequences](backtracking/all_subsequences.py) * [Coloring](backtracking/coloring.py) * [Combination Sum](backtracking/combination_sum.py) * [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py) * [Knight Tour](backtracking/knight_tour.py) * [Minimax](backtracking/minimax.py) * [Minmax](backtracking/minmax.py) * [N Queens](backtracking/n_queens.py) * [N Queens Math](backtracking/n_queens_math.py) * [Power Sum](backtracking/power_sum.py) * [Rat In Maze](backtracking/rat_in_maze.py) * [Sudoku](backtracking/sudoku.py) * [Sum Of Subsets](backtracking/sum_of_subsets.py) * [Word Search](backtracking/word_search.py) ## Bit Manipulation * [Binary And Operator](bit_manipulation/binary_and_operator.py) * [Binary Count Setbits](bit_manipulation/binary_count_setbits.py) * [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py) * [Binary Or Operator](bit_manipulation/binary_or_operator.py) * [Binary Shifts](bit_manipulation/binary_shifts.py) * [Binary Twos Complement](bit_manipulation/binary_twos_complement.py) * [Binary Xor Operator](bit_manipulation/binary_xor_operator.py) * [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py) * [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py) * [Gray Code Sequence](bit_manipulation/gray_code_sequence.py) * [Highest Set Bit](bit_manipulation/highest_set_bit.py) * [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py) * [Is Even](bit_manipulation/is_even.py) * [Is Power Of Two](bit_manipulation/is_power_of_two.py) * [Numbers Different Signs](bit_manipulation/numbers_different_signs.py) * [Reverse Bits](bit_manipulation/reverse_bits.py) * [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py) ## Blockchain * [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py) * [Diophantine Equation](blockchain/diophantine_equation.py) * [Modular Division](blockchain/modular_division.py) ## Boolean Algebra * [And Gate](boolean_algebra/and_gate.py) * [Nand Gate](boolean_algebra/nand_gate.py) * [Norgate](boolean_algebra/norgate.py) * [Not Gate](boolean_algebra/not_gate.py) * [Or Gate](boolean_algebra/or_gate.py) * [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py) * [Xnor Gate](boolean_algebra/xnor_gate.py) * [Xor Gate](boolean_algebra/xor_gate.py) ## Cellular Automata * [Conways Game Of Life](cellular_automata/conways_game_of_life.py) * [Game Of Life](cellular_automata/game_of_life.py) * [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py) * [One Dimensional](cellular_automata/one_dimensional.py) ## Ciphers * [A1Z26](ciphers/a1z26.py) * [Affine Cipher](ciphers/affine_cipher.py) * [Atbash](ciphers/atbash.py) * [Autokey](ciphers/autokey.py) * [Baconian Cipher](ciphers/baconian_cipher.py) * [Base16](ciphers/base16.py) * [Base32](ciphers/base32.py) * [Base64](ciphers/base64.py) * [Base85](ciphers/base85.py) * [Beaufort Cipher](ciphers/beaufort_cipher.py) * [Bifid](ciphers/bifid.py) * [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py) * [Caesar Cipher](ciphers/caesar_cipher.py) * [Cryptomath Module](ciphers/cryptomath_module.py) * [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py) * [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py) * [Diffie](ciphers/diffie.py) * [Diffie Hellman](ciphers/diffie_hellman.py) * [Elgamal Key Generator](ciphers/elgamal_key_generator.py) * [Enigma Machine2](ciphers/enigma_machine2.py) * [Hill Cipher](ciphers/hill_cipher.py) * [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py) * [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py) * [Morse Code](ciphers/morse_code.py) * [Onepad Cipher](ciphers/onepad_cipher.py) * [Playfair Cipher](ciphers/playfair_cipher.py) * [Polybius](ciphers/polybius.py) * [Porta Cipher](ciphers/porta_cipher.py) * [Rabin Miller](ciphers/rabin_miller.py) * [Rail Fence Cipher](ciphers/rail_fence_cipher.py) * [Rot13](ciphers/rot13.py) * [Rsa Cipher](ciphers/rsa_cipher.py) * [Rsa Factorization](ciphers/rsa_factorization.py) * [Rsa Key Generator](ciphers/rsa_key_generator.py) * [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py) * [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py) * [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py) * [Trafid Cipher](ciphers/trafid_cipher.py) * [Transposition Cipher](ciphers/transposition_cipher.py) * [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py) * [Vigenere Cipher](ciphers/vigenere_cipher.py) * [Xor Cipher](ciphers/xor_cipher.py) ## Compression * [Burrows Wheeler](compression/burrows_wheeler.py) * [Huffman](compression/huffman.py) * [Lempel Ziv](compression/lempel_ziv.py) * [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py) * [Lz77](compression/lz77.py) * [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py) * [Run Length Encoding](compression/run_length_encoding.py) ## Computer Vision * [Cnn Classification](computer_vision/cnn_classification.py) * [Flip Augmentation](computer_vision/flip_augmentation.py) * [Harris Corner](computer_vision/harris_corner.py) * [Horn Schunck](computer_vision/horn_schunck.py) * [Mean Threshold](computer_vision/mean_threshold.py) * [Mosaic Augmentation](computer_vision/mosaic_augmentation.py) * [Pooling Functions](computer_vision/pooling_functions.py) ## Conversions * [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py) * [Binary To Decimal](conversions/binary_to_decimal.py) * [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py) * [Binary To Octal](conversions/binary_to_octal.py) * [Decimal To Any](conversions/decimal_to_any.py) * [Decimal To Binary](conversions/decimal_to_binary.py) * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py) * [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py) * [Decimal To Octal](conversions/decimal_to_octal.py) * [Energy Conversions](conversions/energy_conversions.py) * [Excel Title To Column](conversions/excel_title_to_column.py) * [Hex To Bin](conversions/hex_to_bin.py) * [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py) * [Length Conversion](conversions/length_conversion.py) * [Molecular Chemistry](conversions/molecular_chemistry.py) * [Octal To Decimal](conversions/octal_to_decimal.py) * [Prefix Conversions](conversions/prefix_conversions.py) * [Prefix Conversions String](conversions/prefix_conversions_string.py) * [Pressure Conversions](conversions/pressure_conversions.py) * [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py) * [Roman Numerals](conversions/roman_numerals.py) * [Speed Conversions](conversions/speed_conversions.py) * [Temperature Conversions](conversions/temperature_conversions.py) * [Volume Conversions](conversions/volume_conversions.py) * [Weight Conversion](conversions/weight_conversion.py) ## Data Structures * Arrays * [Permutations](data_structures/arrays/permutations.py) * [Prefix Sum](data_structures/arrays/prefix_sum.py) * [Product Sum](data_structures/arrays/product_sum.py) * Binary Tree * [Avl Tree](data_structures/binary_tree/avl_tree.py) * [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py) * [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py) * [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py) * [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py) * [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py) * [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py) * [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py) * [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py) * [Distribute Coins](data_structures/binary_tree/distribute_coins.py) * [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py) * [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py) * [Is Bst](data_structures/binary_tree/is_bst.py) * [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py) * [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py) * [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py) * [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py) * [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py) * [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py) * [Red Black Tree](data_structures/binary_tree/red_black_tree.py) * [Segment Tree](data_structures/binary_tree/segment_tree.py) * [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py) * [Treap](data_structures/binary_tree/treap.py) * [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py) * Disjoint Set * [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py) * [Disjoint Set](data_structures/disjoint_set/disjoint_set.py) * Hashing * [Bloom Filter](data_structures/hashing/bloom_filter.py) * [Double Hash](data_structures/hashing/double_hash.py) * [Hash Map](data_structures/hashing/hash_map.py) * [Hash Table](data_structures/hashing/hash_table.py) * [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py) * Number Theory * [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py) * [Quadratic Probing](data_structures/hashing/quadratic_probing.py) * Tests * [Test Hash Map](data_structures/hashing/tests/test_hash_map.py) * Heap * [Binomial Heap](data_structures/heap/binomial_heap.py) * [Heap](data_structures/heap/heap.py) * [Heap Generic](data_structures/heap/heap_generic.py) * [Max Heap](data_structures/heap/max_heap.py) * [Min Heap](data_structures/heap/min_heap.py) * [Randomized Heap](data_structures/heap/randomized_heap.py) * [Skew Heap](data_structures/heap/skew_heap.py) * Linked List * [Circular Linked List](data_structures/linked_list/circular_linked_list.py) * [Deque Doubly](data_structures/linked_list/deque_doubly.py) * [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py) * [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py) * [From Sequence](data_structures/linked_list/from_sequence.py) * [Has Loop](data_structures/linked_list/has_loop.py) * [Is Palindrome](data_structures/linked_list/is_palindrome.py) * [Merge Two Lists](data_structures/linked_list/merge_two_lists.py) * [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py) * [Print Reverse](data_structures/linked_list/print_reverse.py) * [Singly Linked List](data_structures/linked_list/singly_linked_list.py) * [Skip List](data_structures/linked_list/skip_list.py) * [Swap Nodes](data_structures/linked_list/swap_nodes.py) * Queue * [Circular Queue](data_structures/queue/circular_queue.py) * [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py) * [Double Ended Queue](data_structures/queue/double_ended_queue.py) * [Linked Queue](data_structures/queue/linked_queue.py) * [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py) * [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py) * [Queue On List](data_structures/queue/queue_on_list.py) * [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py) * Stacks * [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py) * [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py) * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py) * [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py) * [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py) * [Next Greater Element](data_structures/stacks/next_greater_element.py) * [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py) * [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py) * [Stack](data_structures/stacks/stack.py) * [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py) * [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py) * [Stock Span Problem](data_structures/stacks/stock_span_problem.py) * Trie * [Radix Tree](data_structures/trie/radix_tree.py) * [Trie](data_structures/trie/trie.py) ## Digital Image Processing * [Change Brightness](digital_image_processing/change_brightness.py) * [Change Contrast](digital_image_processing/change_contrast.py) * [Convert To Negative](digital_image_processing/convert_to_negative.py) * Dithering * [Burkes](digital_image_processing/dithering/burkes.py) * Edge Detection * [Canny](digital_image_processing/edge_detection/canny.py) * Filters * [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py) * [Convolve](digital_image_processing/filters/convolve.py) * [Gabor Filter](digital_image_processing/filters/gabor_filter.py) * [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py) * [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py) * [Median Filter](digital_image_processing/filters/median_filter.py) * [Sobel Filter](digital_image_processing/filters/sobel_filter.py) * Histogram Equalization * [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py) * [Index Calculation](digital_image_processing/index_calculation.py) * Morphological Operations * [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py) * [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py) * Resize * [Resize](digital_image_processing/resize/resize.py) * Rotation * [Rotation](digital_image_processing/rotation/rotation.py) * [Sepia](digital_image_processing/sepia.py) * [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py) ## Divide And Conquer * [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py) * [Convex Hull](divide_and_conquer/convex_hull.py) * [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py) * [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py) * [Inversions](divide_and_conquer/inversions.py) * [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py) * [Max Difference Pair](divide_and_conquer/max_difference_pair.py) * [Max Subarray](divide_and_conquer/max_subarray.py) * [Mergesort](divide_and_conquer/mergesort.py) * [Peak](divide_and_conquer/peak.py) * [Power](divide_and_conquer/power.py) * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py) ## Dynamic Programming * [Abbreviation](dynamic_programming/abbreviation.py) * [All Construct](dynamic_programming/all_construct.py) * [Bitmask](dynamic_programming/bitmask.py) * [Catalan Numbers](dynamic_programming/catalan_numbers.py) * [Climbing Stairs](dynamic_programming/climbing_stairs.py) * [Combination Sum Iv](dynamic_programming/combination_sum_iv.py) * [Edit Distance](dynamic_programming/edit_distance.py) * [Factorial](dynamic_programming/factorial.py) * [Fast Fibonacci](dynamic_programming/fast_fibonacci.py) * [Fibonacci](dynamic_programming/fibonacci.py) * [Fizz Buzz](dynamic_programming/fizz_buzz.py) * [Floyd Warshall](dynamic_programming/floyd_warshall.py) * [Integer Partition](dynamic_programming/integer_partition.py) * [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py) * [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py) * [Knapsack](dynamic_programming/knapsack.py) * [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py) * [Longest Common Substring](dynamic_programming/longest_common_substring.py) * [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py) * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py) * [Longest Sub Array](dynamic_programming/longest_sub_array.py) * [Matrix Chain Order](dynamic_programming/matrix_chain_order.py) * [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py) * [Max Product Subarray](dynamic_programming/max_product_subarray.py) * [Max Subarray Sum](dynamic_programming/max_subarray_sum.py) * [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py) * [Minimum Coin Change](dynamic_programming/minimum_coin_change.py) * [Minimum Cost Path](dynamic_programming/minimum_cost_path.py) * [Minimum Partition](dynamic_programming/minimum_partition.py) * [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py) * [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py) * [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py) * [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py) * [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py) * [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py) * [Rod Cutting](dynamic_programming/rod_cutting.py) * [Subset Generation](dynamic_programming/subset_generation.py) * [Sum Of Subset](dynamic_programming/sum_of_subset.py) * [Viterbi](dynamic_programming/viterbi.py) * [Word Break](dynamic_programming/word_break.py) ## Electronics * [Apparent Power](electronics/apparent_power.py) * [Builtin Voltage](electronics/builtin_voltage.py) * [Carrier Concentration](electronics/carrier_concentration.py) * [Circular Convolution](electronics/circular_convolution.py) * [Coulombs Law](electronics/coulombs_law.py) * [Electric Conductivity](electronics/electric_conductivity.py) * [Electric Power](electronics/electric_power.py) * [Electrical Impedance](electronics/electrical_impedance.py) * [Ind Reactance](electronics/ind_reactance.py) * [Ohms Law](electronics/ohms_law.py) * [Real And Reactive Power](electronics/real_and_reactive_power.py) * [Resistor Equivalence](electronics/resistor_equivalence.py) * [Resonant Frequency](electronics/resonant_frequency.py) ## File Transfer * [Receive File](file_transfer/receive_file.py) * [Send File](file_transfer/send_file.py) * Tests * [Test Send File](file_transfer/tests/test_send_file.py) ## Financial * [Equated Monthly Installments](financial/equated_monthly_installments.py) * [Interest](financial/interest.py) * [Present Value](financial/present_value.py) * [Price Plus Tax](financial/price_plus_tax.py) ## Fractals * [Julia Sets](fractals/julia_sets.py) * [Koch Snowflake](fractals/koch_snowflake.py) * [Mandelbrot](fractals/mandelbrot.py) * [Sierpinski Triangle](fractals/sierpinski_triangle.py) ## Fuzzy Logic * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py) ## Genetic Algorithm * [Basic String](genetic_algorithm/basic_string.py) ## Geodesy * [Haversine Distance](geodesy/haversine_distance.py) * [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py) ## Graphics * [Bezier Curve](graphics/bezier_curve.py) * [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py) ## Graphs * [A Star](graphs/a_star.py) * [Articulation Points](graphs/articulation_points.py) * [Basic Graphs](graphs/basic_graphs.py) * [Bellman Ford](graphs/bellman_ford.py) * [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py) * [Bidirectional A Star](graphs/bidirectional_a_star.py) * [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py) * [Boruvka](graphs/boruvka.py) * [Breadth First Search](graphs/breadth_first_search.py) * [Breadth First Search 2](graphs/breadth_first_search_2.py) * [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py) * [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py) * [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py) * [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py) * [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py) * [Check Cycle](graphs/check_cycle.py) * [Connected Components](graphs/connected_components.py) * [Depth First Search](graphs/depth_first_search.py) * [Depth First Search 2](graphs/depth_first_search_2.py) * [Dijkstra](graphs/dijkstra.py) * [Dijkstra 2](graphs/dijkstra_2.py) * [Dijkstra Algorithm](graphs/dijkstra_algorithm.py) * [Dijkstra Alternate](graphs/dijkstra_alternate.py) * [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py) * [Dinic](graphs/dinic.py) * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py) * [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py) * [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py) * [Even Tree](graphs/even_tree.py) * [Finding Bridges](graphs/finding_bridges.py) * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py) * [G Topological Sort](graphs/g_topological_sort.py) * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py) * [Graph Adjacency List](graphs/graph_adjacency_list.py) * [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py) * [Graph List](graphs/graph_list.py) * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py) * [Greedy Best First](graphs/greedy_best_first.py) * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py) * [Kahns Algorithm Long](graphs/kahns_algorithm_long.py) * [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py) * [Karger](graphs/karger.py) * [Markov Chain](graphs/markov_chain.py) * [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py) * [Minimum Path Sum](graphs/minimum_path_sum.py) * [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py) * [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py) * [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py) * [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py) * [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py) * [Multi Heuristic Astar](graphs/multi_heuristic_astar.py) * [Page Rank](graphs/page_rank.py) * [Prim](graphs/prim.py) * [Random Graph Generator](graphs/random_graph_generator.py) * [Scc Kosaraju](graphs/scc_kosaraju.py) * [Strongly Connected Components](graphs/strongly_connected_components.py) * [Tarjans Scc](graphs/tarjans_scc.py) * Tests * [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py) * [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py) ## Greedy Methods * [Fractional Knapsack](greedy_methods/fractional_knapsack.py) * [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py) * [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py) * [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py) ## Hashes * [Adler32](hashes/adler32.py) * [Chaos Machine](hashes/chaos_machine.py) * [Djb2](hashes/djb2.py) * [Elf](hashes/elf.py) * [Enigma Machine](hashes/enigma_machine.py) * [Hamming Code](hashes/hamming_code.py) * [Luhn](hashes/luhn.py) * [Md5](hashes/md5.py) * [Sdbm](hashes/sdbm.py) * [Sha1](hashes/sha1.py) * [Sha256](hashes/sha256.py) ## Knapsack * [Greedy Knapsack](knapsack/greedy_knapsack.py) * [Knapsack](knapsack/knapsack.py) * [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py) * Tests * [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py) * [Test Knapsack](knapsack/tests/test_knapsack.py) ## Linear Algebra * Src * [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py) * [Lib](linear_algebra/src/lib.py) * [Polynom For Points](linear_algebra/src/polynom_for_points.py) * [Power Iteration](linear_algebra/src/power_iteration.py) * [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py) * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py) * [Schur Complement](linear_algebra/src/schur_complement.py) * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py) * [Transformations 2D](linear_algebra/src/transformations_2d.py) ## Linear Programming * [Simplex](linear_programming/simplex.py) ## Machine Learning * [Astar](machine_learning/astar.py) * [Data Transformations](machine_learning/data_transformations.py) * [Decision Tree](machine_learning/decision_tree.py) * [Dimensionality Reduction](machine_learning/dimensionality_reduction.py) * Forecasting * [Run](machine_learning/forecasting/run.py) * [Gradient Descent](machine_learning/gradient_descent.py) * [K Means Clust](machine_learning/k_means_clust.py) * [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py) * [Knn Sklearn](machine_learning/knn_sklearn.py) * [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py) * [Linear Regression](machine_learning/linear_regression.py) * Local Weighted Learning * [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py) * [Logistic Regression](machine_learning/logistic_regression.py) * Lstm * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py) * [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py) * [Polynomial Regression](machine_learning/polynomial_regression.py) * [Scoring Functions](machine_learning/scoring_functions.py) * [Self Organizing Map](machine_learning/self_organizing_map.py) * [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py) * [Similarity Search](machine_learning/similarity_search.py) * [Support Vector Machines](machine_learning/support_vector_machines.py) * [Word Frequency Functions](machine_learning/word_frequency_functions.py) * [Xgboost Classifier](machine_learning/xgboost_classifier.py) * [Xgboost Regressor](machine_learning/xgboost_regressor.py) ## Maths * [Abs](maths/abs.py) * [Add](maths/add.py) * [Addition Without Arithmetic](maths/addition_without_arithmetic.py) * [Aliquot Sum](maths/aliquot_sum.py) * [Allocation Number](maths/allocation_number.py) * [Arc Length](maths/arc_length.py) * [Area](maths/area.py) * [Area Under Curve](maths/area_under_curve.py) * [Armstrong Numbers](maths/armstrong_numbers.py) * [Automorphic Number](maths/automorphic_number.py) * [Average Absolute Deviation](maths/average_absolute_deviation.py) * [Average Mean](maths/average_mean.py) * [Average Median](maths/average_median.py) * [Average Mode](maths/average_mode.py) * [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py) * [Basic Maths](maths/basic_maths.py) * [Binary Exp Mod](maths/binary_exp_mod.py) * [Binary Exponentiation](maths/binary_exponentiation.py) * [Binary Exponentiation 2](maths/binary_exponentiation_2.py) * [Binary Exponentiation 3](maths/binary_exponentiation_3.py) * [Binomial Coefficient](maths/binomial_coefficient.py) * [Binomial Distribution](maths/binomial_distribution.py) * [Bisection](maths/bisection.py) * [Carmichael Number](maths/carmichael_number.py) * [Catalan Number](maths/catalan_number.py) * [Ceil](maths/ceil.py) * [Check Polygon](maths/check_polygon.py) * [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py) * [Collatz Sequence](maths/collatz_sequence.py) * [Combinations](maths/combinations.py) * [Decimal Isolate](maths/decimal_isolate.py) * [Decimal To Fraction](maths/decimal_to_fraction.py) * [Dodecahedron](maths/dodecahedron.py) * [Double Factorial Iterative](maths/double_factorial_iterative.py) * [Double Factorial Recursive](maths/double_factorial_recursive.py) * [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py) * [Entropy](maths/entropy.py) * [Euclidean Distance](maths/euclidean_distance.py) * [Euclidean Gcd](maths/euclidean_gcd.py) * [Euler Method](maths/euler_method.py) * [Euler Modified](maths/euler_modified.py) * [Eulers Totient](maths/eulers_totient.py) * [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py) * [Factorial](maths/factorial.py) * [Factors](maths/factors.py) * [Fermat Little Theorem](maths/fermat_little_theorem.py) * [Fibonacci](maths/fibonacci.py) * [Find Max](maths/find_max.py) * [Find Max Recursion](maths/find_max_recursion.py) * [Find Min](maths/find_min.py) * [Find Min Recursion](maths/find_min_recursion.py) * [Floor](maths/floor.py) * [Gamma](maths/gamma.py) * [Gamma Recursive](maths/gamma_recursive.py) * [Gaussian](maths/gaussian.py) * [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py) * [Gcd Of N Numbers](maths/gcd_of_n_numbers.py) * [Greatest Common Divisor](maths/greatest_common_divisor.py) * [Greedy Coin Change](maths/greedy_coin_change.py) * [Hamming Numbers](maths/hamming_numbers.py) * [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py) * [Hexagonal Number](maths/hexagonal_number.py) * [Integration By Simpson Approx](maths/integration_by_simpson_approx.py) * [Is Int Palindrome](maths/is_int_palindrome.py) * [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py) * [Is Square Free](maths/is_square_free.py) * [Jaccard Similarity](maths/jaccard_similarity.py) * [Juggler Sequence](maths/juggler_sequence.py) * [Karatsuba](maths/karatsuba.py) * [Krishnamurthy Number](maths/krishnamurthy_number.py) * [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py) * [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py) * [Least Common Multiple](maths/least_common_multiple.py) * [Line Length](maths/line_length.py) * [Liouville Lambda](maths/liouville_lambda.py) * [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py) * [Lucas Series](maths/lucas_series.py) * [Maclaurin Series](maths/maclaurin_series.py) * [Manhattan Distance](maths/manhattan_distance.py) * [Matrix Exponentiation](maths/matrix_exponentiation.py) * [Max Sum Sliding Window](maths/max_sum_sliding_window.py) * [Median Of Two Arrays](maths/median_of_two_arrays.py) * [Miller Rabin](maths/miller_rabin.py) * [Mobius Function](maths/mobius_function.py) * [Modular Exponential](maths/modular_exponential.py) * [Monte Carlo](maths/monte_carlo.py) * [Monte Carlo Dice](maths/monte_carlo_dice.py) * [Nevilles Method](maths/nevilles_method.py) * [Newton Raphson](maths/newton_raphson.py) * [Number Of Digits](maths/number_of_digits.py) * [Numerical Integration](maths/numerical_integration.py) * [Odd Sieve](maths/odd_sieve.py) * [Perfect Cube](maths/perfect_cube.py) * [Perfect Number](maths/perfect_number.py) * [Perfect Square](maths/perfect_square.py) * [Persistence](maths/persistence.py) * [Pi Generator](maths/pi_generator.py) * [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py) * [Points Are Collinear 3D](maths/points_are_collinear_3d.py) * [Pollard Rho](maths/pollard_rho.py) * [Polynomial Evaluation](maths/polynomial_evaluation.py) * Polynomials * [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py) * [Power Using Recursion](maths/power_using_recursion.py) * [Prime Check](maths/prime_check.py) * [Prime Factors](maths/prime_factors.py) * [Prime Numbers](maths/prime_numbers.py) * [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py) * [Primelib](maths/primelib.py) * [Print Multiplication Table](maths/print_multiplication_table.py) * [Pronic Number](maths/pronic_number.py) * [Proth Number](maths/proth_number.py) * [Pythagoras](maths/pythagoras.py) * [Qr Decomposition](maths/qr_decomposition.py) * [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py) * [Radians](maths/radians.py) * [Radix2 Fft](maths/radix2_fft.py) * [Relu](maths/relu.py) * [Remove Digit](maths/remove_digit.py) * [Runge Kutta](maths/runge_kutta.py) * [Segmented Sieve](maths/segmented_sieve.py) * Series * [Arithmetic](maths/series/arithmetic.py) * [Geometric](maths/series/geometric.py) * [Geometric Series](maths/series/geometric_series.py) * [Harmonic](maths/series/harmonic.py) * [Harmonic Series](maths/series/harmonic_series.py) * [Hexagonal Numbers](maths/series/hexagonal_numbers.py) * [P Series](maths/series/p_series.py) * [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py) * [Sigmoid](maths/sigmoid.py) * [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py) * [Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) * [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) * [Square Root](maths/square_root.py) * [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py) * [Sum Of Digits](maths/sum_of_digits.py) * [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py) * [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py) * [Sumset](maths/sumset.py) * [Sylvester Sequence](maths/sylvester_sequence.py) * [Tanh](maths/tanh.py) * [Test Prime Check](maths/test_prime_check.py) * [Trapezoidal Rule](maths/trapezoidal_rule.py) * [Triplet Sum](maths/triplet_sum.py) * [Twin Prime](maths/twin_prime.py) * [Two Pointer](maths/two_pointer.py) * [Two Sum](maths/two_sum.py) * [Ugly Numbers](maths/ugly_numbers.py) * [Volume](maths/volume.py) * [Weird Number](maths/weird_number.py) * [Zellers Congruence](maths/zellers_congruence.py) ## Matrix * [Binary Search Matrix](matrix/binary_search_matrix.py) * [Count Islands In Matrix](matrix/count_islands_in_matrix.py) * [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py) * [Count Paths](matrix/count_paths.py) * [Cramers Rule 2X2](matrix/cramers_rule_2x2.py) * [Inverse Of Matrix](matrix/inverse_of_matrix.py) * [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py) * [Matrix Class](matrix/matrix_class.py) * [Matrix Operation](matrix/matrix_operation.py) * [Max Area Of Island](matrix/max_area_of_island.py) * [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py) * [Pascal Triangle](matrix/pascal_triangle.py) * [Rotate Matrix](matrix/rotate_matrix.py) * [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py) * [Sherman Morrison](matrix/sherman_morrison.py) * [Spiral Print](matrix/spiral_print.py) * Tests * [Test Matrix Operation](matrix/tests/test_matrix_operation.py) ## Networking Flow * [Ford Fulkerson](networking_flow/ford_fulkerson.py) * [Minimum Cut](networking_flow/minimum_cut.py) ## Neural Network * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py) * Activation Functions * [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py) * [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py) * [Convolution Neural Network](neural_network/convolution_neural_network.py) * [Input Data](neural_network/input_data.py) * [Perceptron](neural_network/perceptron.py) * [Simple Neural Network](neural_network/simple_neural_network.py) ## Other * [Activity Selection](other/activity_selection.py) * [Alternative List Arrange](other/alternative_list_arrange.py) * [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py) * [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py) * [Doomsday](other/doomsday.py) * [Fischer Yates Shuffle](other/fischer_yates_shuffle.py) * [Gauss Easter](other/gauss_easter.py) * [Graham Scan](other/graham_scan.py) * [Greedy](other/greedy.py) * [Guess The Number Search](other/guess_the_number_search.py) * [H Index](other/h_index.py) * [Least Recently Used](other/least_recently_used.py) * [Lfu Cache](other/lfu_cache.py) * [Linear Congruential Generator](other/linear_congruential_generator.py) * [Lru Cache](other/lru_cache.py) * [Magicdiamondpattern](other/magicdiamondpattern.py) * [Maximum Subsequence](other/maximum_subsequence.py) * [Nested Brackets](other/nested_brackets.py) * [Number Container System](other/number_container_system.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) * [Sdes](other/sdes.py) * [Tower Of Hanoi](other/tower_of_hanoi.py) ## Physics * [Altitude Pressure](physics/altitude_pressure.py) * [Archimedes Principle](physics/archimedes_principle.py) * [Basic Orbital Capture](physics/basic_orbital_capture.py) * [Casimir Effect](physics/casimir_effect.py) * [Centripetal Force](physics/centripetal_force.py) * [Grahams Law](physics/grahams_law.py) * [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py) * [Hubble Parameter](physics/hubble_parameter.py) * [Ideal Gas Law](physics/ideal_gas_law.py) * [Kinetic Energy](physics/kinetic_energy.py) * [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py) * [Malus Law](physics/malus_law.py) * [N Body Simulation](physics/n_body_simulation.py) * [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py) * [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py) * [Potential Energy](physics/potential_energy.py) * [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py) * [Shear Stress](physics/shear_stress.py) * [Speed Of Sound](physics/speed_of_sound.py) ## Project Euler * Problem 001 * [Sol1](project_euler/problem_001/sol1.py) * [Sol2](project_euler/problem_001/sol2.py) * [Sol3](project_euler/problem_001/sol3.py) * [Sol4](project_euler/problem_001/sol4.py) * [Sol5](project_euler/problem_001/sol5.py) * [Sol6](project_euler/problem_001/sol6.py) * [Sol7](project_euler/problem_001/sol7.py) * Problem 002 * [Sol1](project_euler/problem_002/sol1.py) * [Sol2](project_euler/problem_002/sol2.py) * [Sol3](project_euler/problem_002/sol3.py) * [Sol4](project_euler/problem_002/sol4.py) * [Sol5](project_euler/problem_002/sol5.py) * Problem 003 * [Sol1](project_euler/problem_003/sol1.py) * [Sol2](project_euler/problem_003/sol2.py) * [Sol3](project_euler/problem_003/sol3.py) * Problem 004 * [Sol1](project_euler/problem_004/sol1.py) * [Sol2](project_euler/problem_004/sol2.py) * Problem 005 * [Sol1](project_euler/problem_005/sol1.py) * [Sol2](project_euler/problem_005/sol2.py) * Problem 006 * [Sol1](project_euler/problem_006/sol1.py) * [Sol2](project_euler/problem_006/sol2.py) * [Sol3](project_euler/problem_006/sol3.py) * [Sol4](project_euler/problem_006/sol4.py) * Problem 007 * [Sol1](project_euler/problem_007/sol1.py) * [Sol2](project_euler/problem_007/sol2.py) * [Sol3](project_euler/problem_007/sol3.py) * Problem 008 * [Sol1](project_euler/problem_008/sol1.py) * [Sol2](project_euler/problem_008/sol2.py) * [Sol3](project_euler/problem_008/sol3.py) * Problem 009 * [Sol1](project_euler/problem_009/sol1.py) * [Sol2](project_euler/problem_009/sol2.py) * [Sol3](project_euler/problem_009/sol3.py) * Problem 010 * [Sol1](project_euler/problem_010/sol1.py) * [Sol2](project_euler/problem_010/sol2.py) * [Sol3](project_euler/problem_010/sol3.py) * Problem 011 * [Sol1](project_euler/problem_011/sol1.py) * [Sol2](project_euler/problem_011/sol2.py) * Problem 012 * [Sol1](project_euler/problem_012/sol1.py) * [Sol2](project_euler/problem_012/sol2.py) * Problem 013 * [Sol1](project_euler/problem_013/sol1.py) * Problem 014 * [Sol1](project_euler/problem_014/sol1.py) * [Sol2](project_euler/problem_014/sol2.py) * Problem 015 * [Sol1](project_euler/problem_015/sol1.py) * Problem 016 * [Sol1](project_euler/problem_016/sol1.py) * [Sol2](project_euler/problem_016/sol2.py) * Problem 017 * [Sol1](project_euler/problem_017/sol1.py) * Problem 018 * [Solution](project_euler/problem_018/solution.py) * Problem 019 * [Sol1](project_euler/problem_019/sol1.py) * Problem 020 * [Sol1](project_euler/problem_020/sol1.py) * [Sol2](project_euler/problem_020/sol2.py) * [Sol3](project_euler/problem_020/sol3.py) * [Sol4](project_euler/problem_020/sol4.py) * Problem 021 * [Sol1](project_euler/problem_021/sol1.py) * Problem 022 * [Sol1](project_euler/problem_022/sol1.py) * [Sol2](project_euler/problem_022/sol2.py) * Problem 023 * [Sol1](project_euler/problem_023/sol1.py) * Problem 024 * [Sol1](project_euler/problem_024/sol1.py) * Problem 025 * [Sol1](project_euler/problem_025/sol1.py) * [Sol2](project_euler/problem_025/sol2.py) * [Sol3](project_euler/problem_025/sol3.py) * Problem 026 * [Sol1](project_euler/problem_026/sol1.py) * Problem 027 * [Sol1](project_euler/problem_027/sol1.py) * Problem 028 * [Sol1](project_euler/problem_028/sol1.py) * Problem 029 * [Sol1](project_euler/problem_029/sol1.py) * Problem 030 * [Sol1](project_euler/problem_030/sol1.py) * Problem 031 * [Sol1](project_euler/problem_031/sol1.py) * [Sol2](project_euler/problem_031/sol2.py) * Problem 032 * [Sol32](project_euler/problem_032/sol32.py) * Problem 033 * [Sol1](project_euler/problem_033/sol1.py) * Problem 034 * [Sol1](project_euler/problem_034/sol1.py) * Problem 035 * [Sol1](project_euler/problem_035/sol1.py) * Problem 036 * [Sol1](project_euler/problem_036/sol1.py) * Problem 037 * [Sol1](project_euler/problem_037/sol1.py) * Problem 038 * [Sol1](project_euler/problem_038/sol1.py) * Problem 039 * [Sol1](project_euler/problem_039/sol1.py) * Problem 040 * [Sol1](project_euler/problem_040/sol1.py) * Problem 041 * [Sol1](project_euler/problem_041/sol1.py) * Problem 042 * [Solution42](project_euler/problem_042/solution42.py) * Problem 043 * [Sol1](project_euler/problem_043/sol1.py) * Problem 044 * [Sol1](project_euler/problem_044/sol1.py) * Problem 045 * [Sol1](project_euler/problem_045/sol1.py) * Problem 046 * [Sol1](project_euler/problem_046/sol1.py) * Problem 047 * [Sol1](project_euler/problem_047/sol1.py) * Problem 048 * [Sol1](project_euler/problem_048/sol1.py) * Problem 049 * [Sol1](project_euler/problem_049/sol1.py) * Problem 050 * [Sol1](project_euler/problem_050/sol1.py) * Problem 051 * [Sol1](project_euler/problem_051/sol1.py) * Problem 052 * [Sol1](project_euler/problem_052/sol1.py) * Problem 053 * [Sol1](project_euler/problem_053/sol1.py) * Problem 054 * [Sol1](project_euler/problem_054/sol1.py) * [Test Poker Hand](project_euler/problem_054/test_poker_hand.py) * Problem 055 * [Sol1](project_euler/problem_055/sol1.py) * Problem 056 * [Sol1](project_euler/problem_056/sol1.py) * Problem 057 * [Sol1](project_euler/problem_057/sol1.py) * Problem 058 * [Sol1](project_euler/problem_058/sol1.py) * Problem 059 * [Sol1](project_euler/problem_059/sol1.py) * Problem 062 * [Sol1](project_euler/problem_062/sol1.py) * Problem 063 * [Sol1](project_euler/problem_063/sol1.py) * Problem 064 * [Sol1](project_euler/problem_064/sol1.py) * Problem 065 * [Sol1](project_euler/problem_065/sol1.py) * Problem 067 * [Sol1](project_euler/problem_067/sol1.py) * [Sol2](project_euler/problem_067/sol2.py) * Problem 068 * [Sol1](project_euler/problem_068/sol1.py) * Problem 069 * [Sol1](project_euler/problem_069/sol1.py) * Problem 070 * [Sol1](project_euler/problem_070/sol1.py) * Problem 071 * [Sol1](project_euler/problem_071/sol1.py) * Problem 072 * [Sol1](project_euler/problem_072/sol1.py) * [Sol2](project_euler/problem_072/sol2.py) * Problem 073 * [Sol1](project_euler/problem_073/sol1.py) * Problem 074 * [Sol1](project_euler/problem_074/sol1.py) * [Sol2](project_euler/problem_074/sol2.py) * Problem 075 * [Sol1](project_euler/problem_075/sol1.py) * Problem 076 * [Sol1](project_euler/problem_076/sol1.py) * Problem 077 * [Sol1](project_euler/problem_077/sol1.py) * Problem 078 * [Sol1](project_euler/problem_078/sol1.py) * Problem 079 * [Sol1](project_euler/problem_079/sol1.py) * Problem 080 * [Sol1](project_euler/problem_080/sol1.py) * Problem 081 * [Sol1](project_euler/problem_081/sol1.py) * Problem 082 * [Sol1](project_euler/problem_082/sol1.py) * Problem 085 * [Sol1](project_euler/problem_085/sol1.py) * Problem 086 * [Sol1](project_euler/problem_086/sol1.py) * Problem 087 * [Sol1](project_euler/problem_087/sol1.py) * Problem 089 * [Sol1](project_euler/problem_089/sol1.py) * Problem 091 * [Sol1](project_euler/problem_091/sol1.py) * Problem 092 * [Sol1](project_euler/problem_092/sol1.py) * Problem 094 * [Sol1](project_euler/problem_094/sol1.py) * Problem 097 * [Sol1](project_euler/problem_097/sol1.py) * Problem 099 * [Sol1](project_euler/problem_099/sol1.py) * Problem 100 * [Sol1](project_euler/problem_100/sol1.py) * Problem 101 * [Sol1](project_euler/problem_101/sol1.py) * Problem 102 * [Sol1](project_euler/problem_102/sol1.py) * Problem 104 * [Sol1](project_euler/problem_104/sol1.py) * Problem 107 * [Sol1](project_euler/problem_107/sol1.py) * Problem 109 * [Sol1](project_euler/problem_109/sol1.py) * Problem 112 * [Sol1](project_euler/problem_112/sol1.py) * Problem 113 * [Sol1](project_euler/problem_113/sol1.py) * Problem 114 * [Sol1](project_euler/problem_114/sol1.py) * Problem 115 * [Sol1](project_euler/problem_115/sol1.py) * Problem 116 * [Sol1](project_euler/problem_116/sol1.py) * Problem 117 * [Sol1](project_euler/problem_117/sol1.py) * Problem 119 * [Sol1](project_euler/problem_119/sol1.py) * Problem 120 * [Sol1](project_euler/problem_120/sol1.py) * Problem 121 * [Sol1](project_euler/problem_121/sol1.py) * Problem 123 * [Sol1](project_euler/problem_123/sol1.py) * Problem 125 * [Sol1](project_euler/problem_125/sol1.py) * Problem 129 * [Sol1](project_euler/problem_129/sol1.py) * Problem 131 * [Sol1](project_euler/problem_131/sol1.py) * Problem 135 * [Sol1](project_euler/problem_135/sol1.py) * Problem 144 * [Sol1](project_euler/problem_144/sol1.py) * Problem 145 * [Sol1](project_euler/problem_145/sol1.py) * Problem 173 * [Sol1](project_euler/problem_173/sol1.py) * Problem 174 * [Sol1](project_euler/problem_174/sol1.py) * Problem 180 * [Sol1](project_euler/problem_180/sol1.py) * Problem 187 * [Sol1](project_euler/problem_187/sol1.py) * Problem 188 * [Sol1](project_euler/problem_188/sol1.py) * Problem 191 * [Sol1](project_euler/problem_191/sol1.py) * Problem 203 * [Sol1](project_euler/problem_203/sol1.py) * Problem 205 * [Sol1](project_euler/problem_205/sol1.py) * Problem 206 * [Sol1](project_euler/problem_206/sol1.py) * Problem 207 * [Sol1](project_euler/problem_207/sol1.py) * Problem 234 * [Sol1](project_euler/problem_234/sol1.py) * Problem 301 * [Sol1](project_euler/problem_301/sol1.py) * Problem 493 * [Sol1](project_euler/problem_493/sol1.py) * Problem 551 * [Sol1](project_euler/problem_551/sol1.py) * Problem 587 * [Sol1](project_euler/problem_587/sol1.py) * Problem 686 * [Sol1](project_euler/problem_686/sol1.py) * Problem 800 * [Sol1](project_euler/problem_800/sol1.py) ## Quantum * [Bb84](quantum/bb84.py) * [Deutsch Jozsa](quantum/deutsch_jozsa.py) * [Half Adder](quantum/half_adder.py) * [Not Gate](quantum/not_gate.py) * [Q Fourier Transform](quantum/q_fourier_transform.py) * [Q Full Adder](quantum/q_full_adder.py) * [Quantum Entanglement](quantum/quantum_entanglement.py) * [Quantum Teleportation](quantum/quantum_teleportation.py) * [Ripple Adder Classic](quantum/ripple_adder_classic.py) * [Single Qubit Measure](quantum/single_qubit_measure.py) * [Superdense Coding](quantum/superdense_coding.py) ## Scheduling * [First Come First Served](scheduling/first_come_first_served.py) * [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py) * [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py) * [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py) * [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py) * [Round Robin](scheduling/round_robin.py) * [Shortest Job First](scheduling/shortest_job_first.py) ## Searches * [Binary Search](searches/binary_search.py) * [Binary Tree Traversal](searches/binary_tree_traversal.py) * [Double Linear Search](searches/double_linear_search.py) * [Double Linear Search Recursion](searches/double_linear_search_recursion.py) * [Fibonacci Search](searches/fibonacci_search.py) * [Hill Climbing](searches/hill_climbing.py) * [Interpolation Search](searches/interpolation_search.py) * [Jump Search](searches/jump_search.py) * [Linear Search](searches/linear_search.py) * [Quick Select](searches/quick_select.py) * [Sentinel Linear Search](searches/sentinel_linear_search.py) * [Simple Binary Search](searches/simple_binary_search.py) * [Simulated Annealing](searches/simulated_annealing.py) * [Tabu Search](searches/tabu_search.py) * [Ternary Search](searches/ternary_search.py) ## Sorts * [Bead Sort](sorts/bead_sort.py) * [Binary Insertion Sort](sorts/binary_insertion_sort.py) * [Bitonic Sort](sorts/bitonic_sort.py) * [Bogo Sort](sorts/bogo_sort.py) * [Bubble Sort](sorts/bubble_sort.py) * [Bucket Sort](sorts/bucket_sort.py) * [Circle Sort](sorts/circle_sort.py) * [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py) * [Comb Sort](sorts/comb_sort.py) * [Counting Sort](sorts/counting_sort.py) * [Cycle Sort](sorts/cycle_sort.py) * [Double Sort](sorts/double_sort.py) * [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py) * [Exchange Sort](sorts/exchange_sort.py) * [External Sort](sorts/external_sort.py) * [Gnome Sort](sorts/gnome_sort.py) * [Heap Sort](sorts/heap_sort.py) * [Insertion Sort](sorts/insertion_sort.py) * [Intro Sort](sorts/intro_sort.py) * [Iterative Merge Sort](sorts/iterative_merge_sort.py) * [Merge Insertion Sort](sorts/merge_insertion_sort.py) * [Merge Sort](sorts/merge_sort.py) * [Msd Radix Sort](sorts/msd_radix_sort.py) * [Natural Sort](sorts/natural_sort.py) * [Odd Even Sort](sorts/odd_even_sort.py) * [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py) * [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py) * [Pancake Sort](sorts/pancake_sort.py) * [Patience Sort](sorts/patience_sort.py) * [Pigeon Sort](sorts/pigeon_sort.py) * [Pigeonhole Sort](sorts/pigeonhole_sort.py) * [Quick Sort](sorts/quick_sort.py) * [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py) * [Radix Sort](sorts/radix_sort.py) * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py) * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py) * [Recursive Bubble Sort](sorts/recursive_bubble_sort.py) * [Recursive Insertion Sort](sorts/recursive_insertion_sort.py) * [Recursive Mergesort Array](sorts/recursive_mergesort_array.py) * [Recursive Quick Sort](sorts/recursive_quick_sort.py) * [Selection Sort](sorts/selection_sort.py) * [Shell Sort](sorts/shell_sort.py) * [Shrink Shell Sort](sorts/shrink_shell_sort.py) * [Slowsort](sorts/slowsort.py) * [Stooge Sort](sorts/stooge_sort.py) * [Strand Sort](sorts/strand_sort.py) * [Tim Sort](sorts/tim_sort.py) * [Topological Sort](sorts/topological_sort.py) * [Tree Sort](sorts/tree_sort.py) * [Unknown Sort](sorts/unknown_sort.py) * [Wiggle Sort](sorts/wiggle_sort.py) ## Strings * [Aho Corasick](strings/aho_corasick.py) * [Alternative String Arrange](strings/alternative_string_arrange.py) * [Anagrams](strings/anagrams.py) * [Autocomplete Using Trie](strings/autocomplete_using_trie.py) * [Barcode Validator](strings/barcode_validator.py) * [Boyer Moore Search](strings/boyer_moore_search.py) * [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py) * [Capitalize](strings/capitalize.py) * [Check Anagrams](strings/check_anagrams.py) * [Credit Card Validator](strings/credit_card_validator.py) * [Detecting English Programmatically](strings/detecting_english_programmatically.py) * [Dna](strings/dna.py) * [Frequency Finder](strings/frequency_finder.py) * [Hamming Distance](strings/hamming_distance.py) * [Indian Phone Validator](strings/indian_phone_validator.py) * [Is Contains Unique Chars](strings/is_contains_unique_chars.py) * [Is Isogram](strings/is_isogram.py) * [Is Pangram](strings/is_pangram.py) * [Is Spain National Id](strings/is_spain_national_id.py) * [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py) * [Jaro Winkler](strings/jaro_winkler.py) * [Join](strings/join.py) * [Knuth Morris Pratt](strings/knuth_morris_pratt.py) * [Levenshtein Distance](strings/levenshtein_distance.py) * [Lower](strings/lower.py) * [Manacher](strings/manacher.py) * [Min Cost String Conversion](strings/min_cost_string_conversion.py) * [Naive String Search](strings/naive_string_search.py) * [Ngram](strings/ngram.py) * [Palindrome](strings/palindrome.py) * [Prefix Function](strings/prefix_function.py) * [Rabin Karp](strings/rabin_karp.py) * [Remove Duplicate](strings/remove_duplicate.py) * [Reverse Letters](strings/reverse_letters.py) * [Reverse Long Words](strings/reverse_long_words.py) * [Reverse Words](strings/reverse_words.py) * [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py) * [Split](strings/split.py) * [String Switch Case](strings/string_switch_case.py) * [Text Justification](strings/text_justification.py) * [Top K Frequent Words](strings/top_k_frequent_words.py) * [Upper](strings/upper.py) * [Wave](strings/wave.py) * [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py) * [Word Occurrence](strings/word_occurrence.py) * [Word Patterns](strings/word_patterns.py) * [Z Function](strings/z_function.py) ## Web Programming * [Co2 Emission](web_programming/co2_emission.py) * [Convert Number To Words](web_programming/convert_number_to_words.py) * [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py) * [Crawl Google Results](web_programming/crawl_google_results.py) * [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py) * [Currency Converter](web_programming/currency_converter.py) * [Current Stock Price](web_programming/current_stock_price.py) * [Current Weather](web_programming/current_weather.py) * [Daily Horoscope](web_programming/daily_horoscope.py) * [Download Images From Google Query](web_programming/download_images_from_google_query.py) * [Emails From Url](web_programming/emails_from_url.py) * [Fetch Bbc News](web_programming/fetch_bbc_news.py) * [Fetch Github Info](web_programming/fetch_github_info.py) * [Fetch Jobs](web_programming/fetch_jobs.py) * [Fetch Quotes](web_programming/fetch_quotes.py) * [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py) * [Get Amazon Product Data](web_programming/get_amazon_product_data.py) * [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py) * [Get Imdbtop](web_programming/get_imdbtop.py) * [Get Top Hn Posts](web_programming/get_top_hn_posts.py) * [Get User Tweets](web_programming/get_user_tweets.py) * [Giphy](web_programming/giphy.py) * [Instagram Crawler](web_programming/instagram_crawler.py) * [Instagram Pic](web_programming/instagram_pic.py) * [Instagram Video](web_programming/instagram_video.py) * [Nasa Data](web_programming/nasa_data.py) * [Open Google Results](web_programming/open_google_results.py) * [Random Anime Character](web_programming/random_anime_character.py) * [Recaptcha Verification](web_programming/recaptcha_verification.py) * [Reddit](web_programming/reddit.py) * [Search Books By Isbn](web_programming/search_books_by_isbn.py) * [Slack Message](web_programming/slack_message.py) * [Test Fetch Github Info](web_programming/test_fetch_github_info.py) * [World Covid19 Stats](web_programming/world_covid19_stats.py)
## Arithmetic Analysis * [Bisection](arithmetic_analysis/bisection.py) * [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py) * [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py) * [Intersection](arithmetic_analysis/intersection.py) * [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py) * [Lu Decomposition](arithmetic_analysis/lu_decomposition.py) * [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py) * [Newton Method](arithmetic_analysis/newton_method.py) * [Newton Raphson](arithmetic_analysis/newton_raphson.py) * [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py) * [Secant Method](arithmetic_analysis/secant_method.py) ## Audio Filters * [Butterworth Filter](audio_filters/butterworth_filter.py) * [Iir Filter](audio_filters/iir_filter.py) * [Show Response](audio_filters/show_response.py) ## Backtracking * [All Combinations](backtracking/all_combinations.py) * [All Permutations](backtracking/all_permutations.py) * [All Subsequences](backtracking/all_subsequences.py) * [Coloring](backtracking/coloring.py) * [Combination Sum](backtracking/combination_sum.py) * [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py) * [Knight Tour](backtracking/knight_tour.py) * [Minimax](backtracking/minimax.py) * [Minmax](backtracking/minmax.py) * [N Queens](backtracking/n_queens.py) * [N Queens Math](backtracking/n_queens_math.py) * [Power Sum](backtracking/power_sum.py) * [Rat In Maze](backtracking/rat_in_maze.py) * [Sudoku](backtracking/sudoku.py) * [Sum Of Subsets](backtracking/sum_of_subsets.py) * [Word Search](backtracking/word_search.py) ## Bit Manipulation * [Binary And Operator](bit_manipulation/binary_and_operator.py) * [Binary Count Setbits](bit_manipulation/binary_count_setbits.py) * [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py) * [Binary Or Operator](bit_manipulation/binary_or_operator.py) * [Binary Shifts](bit_manipulation/binary_shifts.py) * [Binary Twos Complement](bit_manipulation/binary_twos_complement.py) * [Binary Xor Operator](bit_manipulation/binary_xor_operator.py) * [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py) * [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py) * [Gray Code Sequence](bit_manipulation/gray_code_sequence.py) * [Highest Set Bit](bit_manipulation/highest_set_bit.py) * [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py) * [Is Even](bit_manipulation/is_even.py) * [Is Power Of Two](bit_manipulation/is_power_of_two.py) * [Numbers Different Signs](bit_manipulation/numbers_different_signs.py) * [Reverse Bits](bit_manipulation/reverse_bits.py) * [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py) ## Blockchain * [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py) * [Diophantine Equation](blockchain/diophantine_equation.py) * [Modular Division](blockchain/modular_division.py) ## Boolean Algebra * [And Gate](boolean_algebra/and_gate.py) * [Nand Gate](boolean_algebra/nand_gate.py) * [Norgate](boolean_algebra/norgate.py) * [Not Gate](boolean_algebra/not_gate.py) * [Or Gate](boolean_algebra/or_gate.py) * [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py) * [Xnor Gate](boolean_algebra/xnor_gate.py) * [Xor Gate](boolean_algebra/xor_gate.py) ## Cellular Automata * [Conways Game Of Life](cellular_automata/conways_game_of_life.py) * [Game Of Life](cellular_automata/game_of_life.py) * [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py) * [One Dimensional](cellular_automata/one_dimensional.py) ## Ciphers * [A1Z26](ciphers/a1z26.py) * [Affine Cipher](ciphers/affine_cipher.py) * [Atbash](ciphers/atbash.py) * [Autokey](ciphers/autokey.py) * [Baconian Cipher](ciphers/baconian_cipher.py) * [Base16](ciphers/base16.py) * [Base32](ciphers/base32.py) * [Base64](ciphers/base64.py) * [Base85](ciphers/base85.py) * [Beaufort Cipher](ciphers/beaufort_cipher.py) * [Bifid](ciphers/bifid.py) * [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py) * [Caesar Cipher](ciphers/caesar_cipher.py) * [Cryptomath Module](ciphers/cryptomath_module.py) * [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py) * [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py) * [Diffie](ciphers/diffie.py) * [Diffie Hellman](ciphers/diffie_hellman.py) * [Elgamal Key Generator](ciphers/elgamal_key_generator.py) * [Enigma Machine2](ciphers/enigma_machine2.py) * [Hill Cipher](ciphers/hill_cipher.py) * [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py) * [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py) * [Morse Code](ciphers/morse_code.py) * [Onepad Cipher](ciphers/onepad_cipher.py) * [Playfair Cipher](ciphers/playfair_cipher.py) * [Polybius](ciphers/polybius.py) * [Porta Cipher](ciphers/porta_cipher.py) * [Rabin Miller](ciphers/rabin_miller.py) * [Rail Fence Cipher](ciphers/rail_fence_cipher.py) * [Rot13](ciphers/rot13.py) * [Rsa Cipher](ciphers/rsa_cipher.py) * [Rsa Factorization](ciphers/rsa_factorization.py) * [Rsa Key Generator](ciphers/rsa_key_generator.py) * [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py) * [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py) * [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py) * [Trafid Cipher](ciphers/trafid_cipher.py) * [Transposition Cipher](ciphers/transposition_cipher.py) * [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py) * [Vigenere Cipher](ciphers/vigenere_cipher.py) * [Xor Cipher](ciphers/xor_cipher.py) ## Compression * [Burrows Wheeler](compression/burrows_wheeler.py) * [Huffman](compression/huffman.py) * [Lempel Ziv](compression/lempel_ziv.py) * [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py) * [Lz77](compression/lz77.py) * [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py) * [Run Length Encoding](compression/run_length_encoding.py) ## Computer Vision * [Cnn Classification](computer_vision/cnn_classification.py) * [Flip Augmentation](computer_vision/flip_augmentation.py) * [Harris Corner](computer_vision/harris_corner.py) * [Horn Schunck](computer_vision/horn_schunck.py) * [Mean Threshold](computer_vision/mean_threshold.py) * [Mosaic Augmentation](computer_vision/mosaic_augmentation.py) * [Pooling Functions](computer_vision/pooling_functions.py) ## Conversions * [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py) * [Binary To Decimal](conversions/binary_to_decimal.py) * [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py) * [Binary To Octal](conversions/binary_to_octal.py) * [Decimal To Any](conversions/decimal_to_any.py) * [Decimal To Binary](conversions/decimal_to_binary.py) * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py) * [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py) * [Decimal To Octal](conversions/decimal_to_octal.py) * [Energy Conversions](conversions/energy_conversions.py) * [Excel Title To Column](conversions/excel_title_to_column.py) * [Hex To Bin](conversions/hex_to_bin.py) * [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py) * [Length Conversion](conversions/length_conversion.py) * [Molecular Chemistry](conversions/molecular_chemistry.py) * [Octal To Decimal](conversions/octal_to_decimal.py) * [Prefix Conversions](conversions/prefix_conversions.py) * [Prefix Conversions String](conversions/prefix_conversions_string.py) * [Pressure Conversions](conversions/pressure_conversions.py) * [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py) * [Roman Numerals](conversions/roman_numerals.py) * [Speed Conversions](conversions/speed_conversions.py) * [Temperature Conversions](conversions/temperature_conversions.py) * [Volume Conversions](conversions/volume_conversions.py) * [Weight Conversion](conversions/weight_conversion.py) ## Data Structures * Arrays * [Permutations](data_structures/arrays/permutations.py) * [Prefix Sum](data_structures/arrays/prefix_sum.py) * [Product Sum](data_structures/arrays/product_sum.py) * Binary Tree * [Avl Tree](data_structures/binary_tree/avl_tree.py) * [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py) * [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py) * [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py) * [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py) * [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py) * [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py) * [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py) * [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py) * [Distribute Coins](data_structures/binary_tree/distribute_coins.py) * [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py) * [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py) * [Is Bst](data_structures/binary_tree/is_bst.py) * [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py) * [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py) * [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py) * [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py) * [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py) * [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py) * [Red Black Tree](data_structures/binary_tree/red_black_tree.py) * [Segment Tree](data_structures/binary_tree/segment_tree.py) * [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py) * [Treap](data_structures/binary_tree/treap.py) * [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py) * Disjoint Set * [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py) * [Disjoint Set](data_structures/disjoint_set/disjoint_set.py) * Hashing * [Bloom Filter](data_structures/hashing/bloom_filter.py) * [Double Hash](data_structures/hashing/double_hash.py) * [Hash Map](data_structures/hashing/hash_map.py) * [Hash Table](data_structures/hashing/hash_table.py) * [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py) * Number Theory * [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py) * [Quadratic Probing](data_structures/hashing/quadratic_probing.py) * Tests * [Test Hash Map](data_structures/hashing/tests/test_hash_map.py) * Heap * [Binomial Heap](data_structures/heap/binomial_heap.py) * [Heap](data_structures/heap/heap.py) * [Heap Generic](data_structures/heap/heap_generic.py) * [Max Heap](data_structures/heap/max_heap.py) * [Min Heap](data_structures/heap/min_heap.py) * [Randomized Heap](data_structures/heap/randomized_heap.py) * [Skew Heap](data_structures/heap/skew_heap.py) * Linked List * [Circular Linked List](data_structures/linked_list/circular_linked_list.py) * [Deque Doubly](data_structures/linked_list/deque_doubly.py) * [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py) * [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py) * [From Sequence](data_structures/linked_list/from_sequence.py) * [Has Loop](data_structures/linked_list/has_loop.py) * [Is Palindrome](data_structures/linked_list/is_palindrome.py) * [Merge Two Lists](data_structures/linked_list/merge_two_lists.py) * [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py) * [Print Reverse](data_structures/linked_list/print_reverse.py) * [Singly Linked List](data_structures/linked_list/singly_linked_list.py) * [Skip List](data_structures/linked_list/skip_list.py) * [Swap Nodes](data_structures/linked_list/swap_nodes.py) * Queue * [Circular Queue](data_structures/queue/circular_queue.py) * [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py) * [Double Ended Queue](data_structures/queue/double_ended_queue.py) * [Linked Queue](data_structures/queue/linked_queue.py) * [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py) * [Queue By List](data_structures/queue/queue_by_list.py) * [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py) * [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py) * Stacks * [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py) * [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py) * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py) * [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py) * [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py) * [Next Greater Element](data_structures/stacks/next_greater_element.py) * [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py) * [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py) * [Stack](data_structures/stacks/stack.py) * [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py) * [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py) * [Stock Span Problem](data_structures/stacks/stock_span_problem.py) * Trie * [Radix Tree](data_structures/trie/radix_tree.py) * [Trie](data_structures/trie/trie.py) ## Digital Image Processing * [Change Brightness](digital_image_processing/change_brightness.py) * [Change Contrast](digital_image_processing/change_contrast.py) * [Convert To Negative](digital_image_processing/convert_to_negative.py) * Dithering * [Burkes](digital_image_processing/dithering/burkes.py) * Edge Detection * [Canny](digital_image_processing/edge_detection/canny.py) * Filters * [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py) * [Convolve](digital_image_processing/filters/convolve.py) * [Gabor Filter](digital_image_processing/filters/gabor_filter.py) * [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py) * [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py) * [Median Filter](digital_image_processing/filters/median_filter.py) * [Sobel Filter](digital_image_processing/filters/sobel_filter.py) * Histogram Equalization * [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py) * [Index Calculation](digital_image_processing/index_calculation.py) * Morphological Operations * [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py) * [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py) * Resize * [Resize](digital_image_processing/resize/resize.py) * Rotation * [Rotation](digital_image_processing/rotation/rotation.py) * [Sepia](digital_image_processing/sepia.py) * [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py) ## Divide And Conquer * [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py) * [Convex Hull](divide_and_conquer/convex_hull.py) * [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py) * [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py) * [Inversions](divide_and_conquer/inversions.py) * [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py) * [Max Difference Pair](divide_and_conquer/max_difference_pair.py) * [Max Subarray](divide_and_conquer/max_subarray.py) * [Mergesort](divide_and_conquer/mergesort.py) * [Peak](divide_and_conquer/peak.py) * [Power](divide_and_conquer/power.py) * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py) ## Dynamic Programming * [Abbreviation](dynamic_programming/abbreviation.py) * [All Construct](dynamic_programming/all_construct.py) * [Bitmask](dynamic_programming/bitmask.py) * [Catalan Numbers](dynamic_programming/catalan_numbers.py) * [Climbing Stairs](dynamic_programming/climbing_stairs.py) * [Combination Sum Iv](dynamic_programming/combination_sum_iv.py) * [Edit Distance](dynamic_programming/edit_distance.py) * [Factorial](dynamic_programming/factorial.py) * [Fast Fibonacci](dynamic_programming/fast_fibonacci.py) * [Fibonacci](dynamic_programming/fibonacci.py) * [Fizz Buzz](dynamic_programming/fizz_buzz.py) * [Floyd Warshall](dynamic_programming/floyd_warshall.py) * [Integer Partition](dynamic_programming/integer_partition.py) * [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py) * [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py) * [Knapsack](dynamic_programming/knapsack.py) * [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py) * [Longest Common Substring](dynamic_programming/longest_common_substring.py) * [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py) * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py) * [Longest Sub Array](dynamic_programming/longest_sub_array.py) * [Matrix Chain Order](dynamic_programming/matrix_chain_order.py) * [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py) * [Max Product Subarray](dynamic_programming/max_product_subarray.py) * [Max Subarray Sum](dynamic_programming/max_subarray_sum.py) * [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py) * [Minimum Coin Change](dynamic_programming/minimum_coin_change.py) * [Minimum Cost Path](dynamic_programming/minimum_cost_path.py) * [Minimum Partition](dynamic_programming/minimum_partition.py) * [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py) * [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py) * [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py) * [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py) * [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py) * [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py) * [Rod Cutting](dynamic_programming/rod_cutting.py) * [Subset Generation](dynamic_programming/subset_generation.py) * [Sum Of Subset](dynamic_programming/sum_of_subset.py) * [Viterbi](dynamic_programming/viterbi.py) * [Word Break](dynamic_programming/word_break.py) ## Electronics * [Apparent Power](electronics/apparent_power.py) * [Builtin Voltage](electronics/builtin_voltage.py) * [Carrier Concentration](electronics/carrier_concentration.py) * [Circular Convolution](electronics/circular_convolution.py) * [Coulombs Law](electronics/coulombs_law.py) * [Electric Conductivity](electronics/electric_conductivity.py) * [Electric Power](electronics/electric_power.py) * [Electrical Impedance](electronics/electrical_impedance.py) * [Ind Reactance](electronics/ind_reactance.py) * [Ohms Law](electronics/ohms_law.py) * [Real And Reactive Power](electronics/real_and_reactive_power.py) * [Resistor Equivalence](electronics/resistor_equivalence.py) * [Resonant Frequency](electronics/resonant_frequency.py) ## File Transfer * [Receive File](file_transfer/receive_file.py) * [Send File](file_transfer/send_file.py) * Tests * [Test Send File](file_transfer/tests/test_send_file.py) ## Financial * [Equated Monthly Installments](financial/equated_monthly_installments.py) * [Interest](financial/interest.py) * [Present Value](financial/present_value.py) * [Price Plus Tax](financial/price_plus_tax.py) ## Fractals * [Julia Sets](fractals/julia_sets.py) * [Koch Snowflake](fractals/koch_snowflake.py) * [Mandelbrot](fractals/mandelbrot.py) * [Sierpinski Triangle](fractals/sierpinski_triangle.py) ## Fuzzy Logic * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py) ## Genetic Algorithm * [Basic String](genetic_algorithm/basic_string.py) ## Geodesy * [Haversine Distance](geodesy/haversine_distance.py) * [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py) ## Graphics * [Bezier Curve](graphics/bezier_curve.py) * [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py) ## Graphs * [A Star](graphs/a_star.py) * [Articulation Points](graphs/articulation_points.py) * [Basic Graphs](graphs/basic_graphs.py) * [Bellman Ford](graphs/bellman_ford.py) * [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py) * [Bidirectional A Star](graphs/bidirectional_a_star.py) * [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py) * [Boruvka](graphs/boruvka.py) * [Breadth First Search](graphs/breadth_first_search.py) * [Breadth First Search 2](graphs/breadth_first_search_2.py) * [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py) * [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py) * [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py) * [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py) * [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py) * [Check Cycle](graphs/check_cycle.py) * [Connected Components](graphs/connected_components.py) * [Depth First Search](graphs/depth_first_search.py) * [Depth First Search 2](graphs/depth_first_search_2.py) * [Dijkstra](graphs/dijkstra.py) * [Dijkstra 2](graphs/dijkstra_2.py) * [Dijkstra Algorithm](graphs/dijkstra_algorithm.py) * [Dijkstra Alternate](graphs/dijkstra_alternate.py) * [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py) * [Dinic](graphs/dinic.py) * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py) * [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py) * [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py) * [Even Tree](graphs/even_tree.py) * [Finding Bridges](graphs/finding_bridges.py) * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py) * [G Topological Sort](graphs/g_topological_sort.py) * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py) * [Graph Adjacency List](graphs/graph_adjacency_list.py) * [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py) * [Graph List](graphs/graph_list.py) * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py) * [Greedy Best First](graphs/greedy_best_first.py) * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py) * [Kahns Algorithm Long](graphs/kahns_algorithm_long.py) * [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py) * [Karger](graphs/karger.py) * [Markov Chain](graphs/markov_chain.py) * [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py) * [Minimum Path Sum](graphs/minimum_path_sum.py) * [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py) * [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py) * [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py) * [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py) * [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py) * [Multi Heuristic Astar](graphs/multi_heuristic_astar.py) * [Page Rank](graphs/page_rank.py) * [Prim](graphs/prim.py) * [Random Graph Generator](graphs/random_graph_generator.py) * [Scc Kosaraju](graphs/scc_kosaraju.py) * [Strongly Connected Components](graphs/strongly_connected_components.py) * [Tarjans Scc](graphs/tarjans_scc.py) * Tests * [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py) * [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py) ## Greedy Methods * [Fractional Knapsack](greedy_methods/fractional_knapsack.py) * [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py) * [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py) * [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py) ## Hashes * [Adler32](hashes/adler32.py) * [Chaos Machine](hashes/chaos_machine.py) * [Djb2](hashes/djb2.py) * [Elf](hashes/elf.py) * [Enigma Machine](hashes/enigma_machine.py) * [Hamming Code](hashes/hamming_code.py) * [Luhn](hashes/luhn.py) * [Md5](hashes/md5.py) * [Sdbm](hashes/sdbm.py) * [Sha1](hashes/sha1.py) * [Sha256](hashes/sha256.py) ## Knapsack * [Greedy Knapsack](knapsack/greedy_knapsack.py) * [Knapsack](knapsack/knapsack.py) * [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py) * Tests * [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py) * [Test Knapsack](knapsack/tests/test_knapsack.py) ## Linear Algebra * Src * [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py) * [Lib](linear_algebra/src/lib.py) * [Polynom For Points](linear_algebra/src/polynom_for_points.py) * [Power Iteration](linear_algebra/src/power_iteration.py) * [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py) * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py) * [Schur Complement](linear_algebra/src/schur_complement.py) * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py) * [Transformations 2D](linear_algebra/src/transformations_2d.py) ## Linear Programming * [Simplex](linear_programming/simplex.py) ## Machine Learning * [Astar](machine_learning/astar.py) * [Data Transformations](machine_learning/data_transformations.py) * [Decision Tree](machine_learning/decision_tree.py) * [Dimensionality Reduction](machine_learning/dimensionality_reduction.py) * Forecasting * [Run](machine_learning/forecasting/run.py) * [Gradient Descent](machine_learning/gradient_descent.py) * [K Means Clust](machine_learning/k_means_clust.py) * [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py) * [Knn Sklearn](machine_learning/knn_sklearn.py) * [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py) * [Linear Regression](machine_learning/linear_regression.py) * Local Weighted Learning * [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py) * [Logistic Regression](machine_learning/logistic_regression.py) * Lstm * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py) * [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py) * [Polynomial Regression](machine_learning/polynomial_regression.py) * [Scoring Functions](machine_learning/scoring_functions.py) * [Self Organizing Map](machine_learning/self_organizing_map.py) * [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py) * [Similarity Search](machine_learning/similarity_search.py) * [Support Vector Machines](machine_learning/support_vector_machines.py) * [Word Frequency Functions](machine_learning/word_frequency_functions.py) * [Xgboost Classifier](machine_learning/xgboost_classifier.py) * [Xgboost Regressor](machine_learning/xgboost_regressor.py) ## Maths * [Abs](maths/abs.py) * [Add](maths/add.py) * [Addition Without Arithmetic](maths/addition_without_arithmetic.py) * [Aliquot Sum](maths/aliquot_sum.py) * [Allocation Number](maths/allocation_number.py) * [Arc Length](maths/arc_length.py) * [Area](maths/area.py) * [Area Under Curve](maths/area_under_curve.py) * [Armstrong Numbers](maths/armstrong_numbers.py) * [Automorphic Number](maths/automorphic_number.py) * [Average Absolute Deviation](maths/average_absolute_deviation.py) * [Average Mean](maths/average_mean.py) * [Average Median](maths/average_median.py) * [Average Mode](maths/average_mode.py) * [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py) * [Basic Maths](maths/basic_maths.py) * [Binary Exp Mod](maths/binary_exp_mod.py) * [Binary Exponentiation](maths/binary_exponentiation.py) * [Binary Exponentiation 2](maths/binary_exponentiation_2.py) * [Binary Exponentiation 3](maths/binary_exponentiation_3.py) * [Binomial Coefficient](maths/binomial_coefficient.py) * [Binomial Distribution](maths/binomial_distribution.py) * [Bisection](maths/bisection.py) * [Carmichael Number](maths/carmichael_number.py) * [Catalan Number](maths/catalan_number.py) * [Ceil](maths/ceil.py) * [Check Polygon](maths/check_polygon.py) * [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py) * [Collatz Sequence](maths/collatz_sequence.py) * [Combinations](maths/combinations.py) * [Decimal Isolate](maths/decimal_isolate.py) * [Decimal To Fraction](maths/decimal_to_fraction.py) * [Dodecahedron](maths/dodecahedron.py) * [Double Factorial Iterative](maths/double_factorial_iterative.py) * [Double Factorial Recursive](maths/double_factorial_recursive.py) * [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py) * [Entropy](maths/entropy.py) * [Euclidean Distance](maths/euclidean_distance.py) * [Euclidean Gcd](maths/euclidean_gcd.py) * [Euler Method](maths/euler_method.py) * [Euler Modified](maths/euler_modified.py) * [Eulers Totient](maths/eulers_totient.py) * [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py) * [Factorial](maths/factorial.py) * [Factors](maths/factors.py) * [Fermat Little Theorem](maths/fermat_little_theorem.py) * [Fibonacci](maths/fibonacci.py) * [Find Max](maths/find_max.py) * [Find Max Recursion](maths/find_max_recursion.py) * [Find Min](maths/find_min.py) * [Find Min Recursion](maths/find_min_recursion.py) * [Floor](maths/floor.py) * [Gamma](maths/gamma.py) * [Gamma Recursive](maths/gamma_recursive.py) * [Gaussian](maths/gaussian.py) * [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py) * [Gcd Of N Numbers](maths/gcd_of_n_numbers.py) * [Greatest Common Divisor](maths/greatest_common_divisor.py) * [Greedy Coin Change](maths/greedy_coin_change.py) * [Hamming Numbers](maths/hamming_numbers.py) * [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py) * [Hexagonal Number](maths/hexagonal_number.py) * [Integration By Simpson Approx](maths/integration_by_simpson_approx.py) * [Is Int Palindrome](maths/is_int_palindrome.py) * [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py) * [Is Square Free](maths/is_square_free.py) * [Jaccard Similarity](maths/jaccard_similarity.py) * [Juggler Sequence](maths/juggler_sequence.py) * [Karatsuba](maths/karatsuba.py) * [Krishnamurthy Number](maths/krishnamurthy_number.py) * [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py) * [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py) * [Least Common Multiple](maths/least_common_multiple.py) * [Line Length](maths/line_length.py) * [Liouville Lambda](maths/liouville_lambda.py) * [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py) * [Lucas Series](maths/lucas_series.py) * [Maclaurin Series](maths/maclaurin_series.py) * [Manhattan Distance](maths/manhattan_distance.py) * [Matrix Exponentiation](maths/matrix_exponentiation.py) * [Max Sum Sliding Window](maths/max_sum_sliding_window.py) * [Median Of Two Arrays](maths/median_of_two_arrays.py) * [Miller Rabin](maths/miller_rabin.py) * [Mobius Function](maths/mobius_function.py) * [Modular Exponential](maths/modular_exponential.py) * [Monte Carlo](maths/monte_carlo.py) * [Monte Carlo Dice](maths/monte_carlo_dice.py) * [Nevilles Method](maths/nevilles_method.py) * [Newton Raphson](maths/newton_raphson.py) * [Number Of Digits](maths/number_of_digits.py) * [Numerical Integration](maths/numerical_integration.py) * [Odd Sieve](maths/odd_sieve.py) * [Perfect Cube](maths/perfect_cube.py) * [Perfect Number](maths/perfect_number.py) * [Perfect Square](maths/perfect_square.py) * [Persistence](maths/persistence.py) * [Pi Generator](maths/pi_generator.py) * [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py) * [Points Are Collinear 3D](maths/points_are_collinear_3d.py) * [Pollard Rho](maths/pollard_rho.py) * [Polynomial Evaluation](maths/polynomial_evaluation.py) * Polynomials * [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py) * [Power Using Recursion](maths/power_using_recursion.py) * [Prime Check](maths/prime_check.py) * [Prime Factors](maths/prime_factors.py) * [Prime Numbers](maths/prime_numbers.py) * [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py) * [Primelib](maths/primelib.py) * [Print Multiplication Table](maths/print_multiplication_table.py) * [Pronic Number](maths/pronic_number.py) * [Proth Number](maths/proth_number.py) * [Pythagoras](maths/pythagoras.py) * [Qr Decomposition](maths/qr_decomposition.py) * [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py) * [Radians](maths/radians.py) * [Radix2 Fft](maths/radix2_fft.py) * [Relu](maths/relu.py) * [Remove Digit](maths/remove_digit.py) * [Runge Kutta](maths/runge_kutta.py) * [Segmented Sieve](maths/segmented_sieve.py) * Series * [Arithmetic](maths/series/arithmetic.py) * [Geometric](maths/series/geometric.py) * [Geometric Series](maths/series/geometric_series.py) * [Harmonic](maths/series/harmonic.py) * [Harmonic Series](maths/series/harmonic_series.py) * [Hexagonal Numbers](maths/series/hexagonal_numbers.py) * [P Series](maths/series/p_series.py) * [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py) * [Sigmoid](maths/sigmoid.py) * [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py) * [Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) * [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) * [Square Root](maths/square_root.py) * [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py) * [Sum Of Digits](maths/sum_of_digits.py) * [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py) * [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py) * [Sumset](maths/sumset.py) * [Sylvester Sequence](maths/sylvester_sequence.py) * [Tanh](maths/tanh.py) * [Test Prime Check](maths/test_prime_check.py) * [Trapezoidal Rule](maths/trapezoidal_rule.py) * [Triplet Sum](maths/triplet_sum.py) * [Twin Prime](maths/twin_prime.py) * [Two Pointer](maths/two_pointer.py) * [Two Sum](maths/two_sum.py) * [Ugly Numbers](maths/ugly_numbers.py) * [Volume](maths/volume.py) * [Weird Number](maths/weird_number.py) * [Zellers Congruence](maths/zellers_congruence.py) ## Matrix * [Binary Search Matrix](matrix/binary_search_matrix.py) * [Count Islands In Matrix](matrix/count_islands_in_matrix.py) * [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py) * [Count Paths](matrix/count_paths.py) * [Cramers Rule 2X2](matrix/cramers_rule_2x2.py) * [Inverse Of Matrix](matrix/inverse_of_matrix.py) * [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py) * [Matrix Class](matrix/matrix_class.py) * [Matrix Operation](matrix/matrix_operation.py) * [Max Area Of Island](matrix/max_area_of_island.py) * [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py) * [Pascal Triangle](matrix/pascal_triangle.py) * [Rotate Matrix](matrix/rotate_matrix.py) * [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py) * [Sherman Morrison](matrix/sherman_morrison.py) * [Spiral Print](matrix/spiral_print.py) * Tests * [Test Matrix Operation](matrix/tests/test_matrix_operation.py) ## Networking Flow * [Ford Fulkerson](networking_flow/ford_fulkerson.py) * [Minimum Cut](networking_flow/minimum_cut.py) ## Neural Network * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py) * Activation Functions * [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py) * [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py) * [Convolution Neural Network](neural_network/convolution_neural_network.py) * [Input Data](neural_network/input_data.py) * [Perceptron](neural_network/perceptron.py) * [Simple Neural Network](neural_network/simple_neural_network.py) ## Other * [Activity Selection](other/activity_selection.py) * [Alternative List Arrange](other/alternative_list_arrange.py) * [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py) * [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py) * [Doomsday](other/doomsday.py) * [Fischer Yates Shuffle](other/fischer_yates_shuffle.py) * [Gauss Easter](other/gauss_easter.py) * [Graham Scan](other/graham_scan.py) * [Greedy](other/greedy.py) * [Guess The Number Search](other/guess_the_number_search.py) * [H Index](other/h_index.py) * [Least Recently Used](other/least_recently_used.py) * [Lfu Cache](other/lfu_cache.py) * [Linear Congruential Generator](other/linear_congruential_generator.py) * [Lru Cache](other/lru_cache.py) * [Magicdiamondpattern](other/magicdiamondpattern.py) * [Maximum Subsequence](other/maximum_subsequence.py) * [Nested Brackets](other/nested_brackets.py) * [Number Container System](other/number_container_system.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) * [Sdes](other/sdes.py) * [Tower Of Hanoi](other/tower_of_hanoi.py) ## Physics * [Altitude Pressure](physics/altitude_pressure.py) * [Archimedes Principle](physics/archimedes_principle.py) * [Basic Orbital Capture](physics/basic_orbital_capture.py) * [Casimir Effect](physics/casimir_effect.py) * [Centripetal Force](physics/centripetal_force.py) * [Grahams Law](physics/grahams_law.py) * [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py) * [Hubble Parameter](physics/hubble_parameter.py) * [Ideal Gas Law](physics/ideal_gas_law.py) * [Kinetic Energy](physics/kinetic_energy.py) * [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py) * [Malus Law](physics/malus_law.py) * [N Body Simulation](physics/n_body_simulation.py) * [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py) * [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py) * [Potential Energy](physics/potential_energy.py) * [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py) * [Shear Stress](physics/shear_stress.py) * [Speed Of Sound](physics/speed_of_sound.py) ## Project Euler * Problem 001 * [Sol1](project_euler/problem_001/sol1.py) * [Sol2](project_euler/problem_001/sol2.py) * [Sol3](project_euler/problem_001/sol3.py) * [Sol4](project_euler/problem_001/sol4.py) * [Sol5](project_euler/problem_001/sol5.py) * [Sol6](project_euler/problem_001/sol6.py) * [Sol7](project_euler/problem_001/sol7.py) * Problem 002 * [Sol1](project_euler/problem_002/sol1.py) * [Sol2](project_euler/problem_002/sol2.py) * [Sol3](project_euler/problem_002/sol3.py) * [Sol4](project_euler/problem_002/sol4.py) * [Sol5](project_euler/problem_002/sol5.py) * Problem 003 * [Sol1](project_euler/problem_003/sol1.py) * [Sol2](project_euler/problem_003/sol2.py) * [Sol3](project_euler/problem_003/sol3.py) * Problem 004 * [Sol1](project_euler/problem_004/sol1.py) * [Sol2](project_euler/problem_004/sol2.py) * Problem 005 * [Sol1](project_euler/problem_005/sol1.py) * [Sol2](project_euler/problem_005/sol2.py) * Problem 006 * [Sol1](project_euler/problem_006/sol1.py) * [Sol2](project_euler/problem_006/sol2.py) * [Sol3](project_euler/problem_006/sol3.py) * [Sol4](project_euler/problem_006/sol4.py) * Problem 007 * [Sol1](project_euler/problem_007/sol1.py) * [Sol2](project_euler/problem_007/sol2.py) * [Sol3](project_euler/problem_007/sol3.py) * Problem 008 * [Sol1](project_euler/problem_008/sol1.py) * [Sol2](project_euler/problem_008/sol2.py) * [Sol3](project_euler/problem_008/sol3.py) * Problem 009 * [Sol1](project_euler/problem_009/sol1.py) * [Sol2](project_euler/problem_009/sol2.py) * [Sol3](project_euler/problem_009/sol3.py) * Problem 010 * [Sol1](project_euler/problem_010/sol1.py) * [Sol2](project_euler/problem_010/sol2.py) * [Sol3](project_euler/problem_010/sol3.py) * Problem 011 * [Sol1](project_euler/problem_011/sol1.py) * [Sol2](project_euler/problem_011/sol2.py) * Problem 012 * [Sol1](project_euler/problem_012/sol1.py) * [Sol2](project_euler/problem_012/sol2.py) * Problem 013 * [Sol1](project_euler/problem_013/sol1.py) * Problem 014 * [Sol1](project_euler/problem_014/sol1.py) * [Sol2](project_euler/problem_014/sol2.py) * Problem 015 * [Sol1](project_euler/problem_015/sol1.py) * Problem 016 * [Sol1](project_euler/problem_016/sol1.py) * [Sol2](project_euler/problem_016/sol2.py) * Problem 017 * [Sol1](project_euler/problem_017/sol1.py) * Problem 018 * [Solution](project_euler/problem_018/solution.py) * Problem 019 * [Sol1](project_euler/problem_019/sol1.py) * Problem 020 * [Sol1](project_euler/problem_020/sol1.py) * [Sol2](project_euler/problem_020/sol2.py) * [Sol3](project_euler/problem_020/sol3.py) * [Sol4](project_euler/problem_020/sol4.py) * Problem 021 * [Sol1](project_euler/problem_021/sol1.py) * Problem 022 * [Sol1](project_euler/problem_022/sol1.py) * [Sol2](project_euler/problem_022/sol2.py) * Problem 023 * [Sol1](project_euler/problem_023/sol1.py) * Problem 024 * [Sol1](project_euler/problem_024/sol1.py) * Problem 025 * [Sol1](project_euler/problem_025/sol1.py) * [Sol2](project_euler/problem_025/sol2.py) * [Sol3](project_euler/problem_025/sol3.py) * Problem 026 * [Sol1](project_euler/problem_026/sol1.py) * Problem 027 * [Sol1](project_euler/problem_027/sol1.py) * Problem 028 * [Sol1](project_euler/problem_028/sol1.py) * Problem 029 * [Sol1](project_euler/problem_029/sol1.py) * Problem 030 * [Sol1](project_euler/problem_030/sol1.py) * Problem 031 * [Sol1](project_euler/problem_031/sol1.py) * [Sol2](project_euler/problem_031/sol2.py) * Problem 032 * [Sol32](project_euler/problem_032/sol32.py) * Problem 033 * [Sol1](project_euler/problem_033/sol1.py) * Problem 034 * [Sol1](project_euler/problem_034/sol1.py) * Problem 035 * [Sol1](project_euler/problem_035/sol1.py) * Problem 036 * [Sol1](project_euler/problem_036/sol1.py) * Problem 037 * [Sol1](project_euler/problem_037/sol1.py) * Problem 038 * [Sol1](project_euler/problem_038/sol1.py) * Problem 039 * [Sol1](project_euler/problem_039/sol1.py) * Problem 040 * [Sol1](project_euler/problem_040/sol1.py) * Problem 041 * [Sol1](project_euler/problem_041/sol1.py) * Problem 042 * [Solution42](project_euler/problem_042/solution42.py) * Problem 043 * [Sol1](project_euler/problem_043/sol1.py) * Problem 044 * [Sol1](project_euler/problem_044/sol1.py) * Problem 045 * [Sol1](project_euler/problem_045/sol1.py) * Problem 046 * [Sol1](project_euler/problem_046/sol1.py) * Problem 047 * [Sol1](project_euler/problem_047/sol1.py) * Problem 048 * [Sol1](project_euler/problem_048/sol1.py) * Problem 049 * [Sol1](project_euler/problem_049/sol1.py) * Problem 050 * [Sol1](project_euler/problem_050/sol1.py) * Problem 051 * [Sol1](project_euler/problem_051/sol1.py) * Problem 052 * [Sol1](project_euler/problem_052/sol1.py) * Problem 053 * [Sol1](project_euler/problem_053/sol1.py) * Problem 054 * [Sol1](project_euler/problem_054/sol1.py) * [Test Poker Hand](project_euler/problem_054/test_poker_hand.py) * Problem 055 * [Sol1](project_euler/problem_055/sol1.py) * Problem 056 * [Sol1](project_euler/problem_056/sol1.py) * Problem 057 * [Sol1](project_euler/problem_057/sol1.py) * Problem 058 * [Sol1](project_euler/problem_058/sol1.py) * Problem 059 * [Sol1](project_euler/problem_059/sol1.py) * Problem 062 * [Sol1](project_euler/problem_062/sol1.py) * Problem 063 * [Sol1](project_euler/problem_063/sol1.py) * Problem 064 * [Sol1](project_euler/problem_064/sol1.py) * Problem 065 * [Sol1](project_euler/problem_065/sol1.py) * Problem 067 * [Sol1](project_euler/problem_067/sol1.py) * [Sol2](project_euler/problem_067/sol2.py) * Problem 068 * [Sol1](project_euler/problem_068/sol1.py) * Problem 069 * [Sol1](project_euler/problem_069/sol1.py) * Problem 070 * [Sol1](project_euler/problem_070/sol1.py) * Problem 071 * [Sol1](project_euler/problem_071/sol1.py) * Problem 072 * [Sol1](project_euler/problem_072/sol1.py) * [Sol2](project_euler/problem_072/sol2.py) * Problem 073 * [Sol1](project_euler/problem_073/sol1.py) * Problem 074 * [Sol1](project_euler/problem_074/sol1.py) * [Sol2](project_euler/problem_074/sol2.py) * Problem 075 * [Sol1](project_euler/problem_075/sol1.py) * Problem 076 * [Sol1](project_euler/problem_076/sol1.py) * Problem 077 * [Sol1](project_euler/problem_077/sol1.py) * Problem 078 * [Sol1](project_euler/problem_078/sol1.py) * Problem 079 * [Sol1](project_euler/problem_079/sol1.py) * Problem 080 * [Sol1](project_euler/problem_080/sol1.py) * Problem 081 * [Sol1](project_euler/problem_081/sol1.py) * Problem 082 * [Sol1](project_euler/problem_082/sol1.py) * Problem 085 * [Sol1](project_euler/problem_085/sol1.py) * Problem 086 * [Sol1](project_euler/problem_086/sol1.py) * Problem 087 * [Sol1](project_euler/problem_087/sol1.py) * Problem 089 * [Sol1](project_euler/problem_089/sol1.py) * Problem 091 * [Sol1](project_euler/problem_091/sol1.py) * Problem 092 * [Sol1](project_euler/problem_092/sol1.py) * Problem 094 * [Sol1](project_euler/problem_094/sol1.py) * Problem 097 * [Sol1](project_euler/problem_097/sol1.py) * Problem 099 * [Sol1](project_euler/problem_099/sol1.py) * Problem 100 * [Sol1](project_euler/problem_100/sol1.py) * Problem 101 * [Sol1](project_euler/problem_101/sol1.py) * Problem 102 * [Sol1](project_euler/problem_102/sol1.py) * Problem 104 * [Sol1](project_euler/problem_104/sol1.py) * Problem 107 * [Sol1](project_euler/problem_107/sol1.py) * Problem 109 * [Sol1](project_euler/problem_109/sol1.py) * Problem 112 * [Sol1](project_euler/problem_112/sol1.py) * Problem 113 * [Sol1](project_euler/problem_113/sol1.py) * Problem 114 * [Sol1](project_euler/problem_114/sol1.py) * Problem 115 * [Sol1](project_euler/problem_115/sol1.py) * Problem 116 * [Sol1](project_euler/problem_116/sol1.py) * Problem 117 * [Sol1](project_euler/problem_117/sol1.py) * Problem 119 * [Sol1](project_euler/problem_119/sol1.py) * Problem 120 * [Sol1](project_euler/problem_120/sol1.py) * Problem 121 * [Sol1](project_euler/problem_121/sol1.py) * Problem 123 * [Sol1](project_euler/problem_123/sol1.py) * Problem 125 * [Sol1](project_euler/problem_125/sol1.py) * Problem 129 * [Sol1](project_euler/problem_129/sol1.py) * Problem 131 * [Sol1](project_euler/problem_131/sol1.py) * Problem 135 * [Sol1](project_euler/problem_135/sol1.py) * Problem 144 * [Sol1](project_euler/problem_144/sol1.py) * Problem 145 * [Sol1](project_euler/problem_145/sol1.py) * Problem 173 * [Sol1](project_euler/problem_173/sol1.py) * Problem 174 * [Sol1](project_euler/problem_174/sol1.py) * Problem 180 * [Sol1](project_euler/problem_180/sol1.py) * Problem 187 * [Sol1](project_euler/problem_187/sol1.py) * Problem 188 * [Sol1](project_euler/problem_188/sol1.py) * Problem 191 * [Sol1](project_euler/problem_191/sol1.py) * Problem 203 * [Sol1](project_euler/problem_203/sol1.py) * Problem 205 * [Sol1](project_euler/problem_205/sol1.py) * Problem 206 * [Sol1](project_euler/problem_206/sol1.py) * Problem 207 * [Sol1](project_euler/problem_207/sol1.py) * Problem 234 * [Sol1](project_euler/problem_234/sol1.py) * Problem 301 * [Sol1](project_euler/problem_301/sol1.py) * Problem 493 * [Sol1](project_euler/problem_493/sol1.py) * Problem 551 * [Sol1](project_euler/problem_551/sol1.py) * Problem 587 * [Sol1](project_euler/problem_587/sol1.py) * Problem 686 * [Sol1](project_euler/problem_686/sol1.py) * Problem 800 * [Sol1](project_euler/problem_800/sol1.py) ## Quantum * [Bb84](quantum/bb84.py) * [Deutsch Jozsa](quantum/deutsch_jozsa.py) * [Half Adder](quantum/half_adder.py) * [Not Gate](quantum/not_gate.py) * [Q Fourier Transform](quantum/q_fourier_transform.py) * [Q Full Adder](quantum/q_full_adder.py) * [Quantum Entanglement](quantum/quantum_entanglement.py) * [Quantum Teleportation](quantum/quantum_teleportation.py) * [Ripple Adder Classic](quantum/ripple_adder_classic.py) * [Single Qubit Measure](quantum/single_qubit_measure.py) * [Superdense Coding](quantum/superdense_coding.py) ## Scheduling * [First Come First Served](scheduling/first_come_first_served.py) * [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py) * [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py) * [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py) * [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py) * [Round Robin](scheduling/round_robin.py) * [Shortest Job First](scheduling/shortest_job_first.py) ## Searches * [Binary Search](searches/binary_search.py) * [Binary Tree Traversal](searches/binary_tree_traversal.py) * [Double Linear Search](searches/double_linear_search.py) * [Double Linear Search Recursion](searches/double_linear_search_recursion.py) * [Fibonacci Search](searches/fibonacci_search.py) * [Hill Climbing](searches/hill_climbing.py) * [Interpolation Search](searches/interpolation_search.py) * [Jump Search](searches/jump_search.py) * [Linear Search](searches/linear_search.py) * [Quick Select](searches/quick_select.py) * [Sentinel Linear Search](searches/sentinel_linear_search.py) * [Simple Binary Search](searches/simple_binary_search.py) * [Simulated Annealing](searches/simulated_annealing.py) * [Tabu Search](searches/tabu_search.py) * [Ternary Search](searches/ternary_search.py) ## Sorts * [Bead Sort](sorts/bead_sort.py) * [Binary Insertion Sort](sorts/binary_insertion_sort.py) * [Bitonic Sort](sorts/bitonic_sort.py) * [Bogo Sort](sorts/bogo_sort.py) * [Bubble Sort](sorts/bubble_sort.py) * [Bucket Sort](sorts/bucket_sort.py) * [Circle Sort](sorts/circle_sort.py) * [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py) * [Comb Sort](sorts/comb_sort.py) * [Counting Sort](sorts/counting_sort.py) * [Cycle Sort](sorts/cycle_sort.py) * [Double Sort](sorts/double_sort.py) * [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py) * [Exchange Sort](sorts/exchange_sort.py) * [External Sort](sorts/external_sort.py) * [Gnome Sort](sorts/gnome_sort.py) * [Heap Sort](sorts/heap_sort.py) * [Insertion Sort](sorts/insertion_sort.py) * [Intro Sort](sorts/intro_sort.py) * [Iterative Merge Sort](sorts/iterative_merge_sort.py) * [Merge Insertion Sort](sorts/merge_insertion_sort.py) * [Merge Sort](sorts/merge_sort.py) * [Msd Radix Sort](sorts/msd_radix_sort.py) * [Natural Sort](sorts/natural_sort.py) * [Odd Even Sort](sorts/odd_even_sort.py) * [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py) * [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py) * [Pancake Sort](sorts/pancake_sort.py) * [Patience Sort](sorts/patience_sort.py) * [Pigeon Sort](sorts/pigeon_sort.py) * [Pigeonhole Sort](sorts/pigeonhole_sort.py) * [Quick Sort](sorts/quick_sort.py) * [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py) * [Radix Sort](sorts/radix_sort.py) * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py) * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py) * [Recursive Bubble Sort](sorts/recursive_bubble_sort.py) * [Recursive Insertion Sort](sorts/recursive_insertion_sort.py) * [Recursive Mergesort Array](sorts/recursive_mergesort_array.py) * [Recursive Quick Sort](sorts/recursive_quick_sort.py) * [Selection Sort](sorts/selection_sort.py) * [Shell Sort](sorts/shell_sort.py) * [Shrink Shell Sort](sorts/shrink_shell_sort.py) * [Slowsort](sorts/slowsort.py) * [Stooge Sort](sorts/stooge_sort.py) * [Strand Sort](sorts/strand_sort.py) * [Tim Sort](sorts/tim_sort.py) * [Topological Sort](sorts/topological_sort.py) * [Tree Sort](sorts/tree_sort.py) * [Unknown Sort](sorts/unknown_sort.py) * [Wiggle Sort](sorts/wiggle_sort.py) ## Strings * [Aho Corasick](strings/aho_corasick.py) * [Alternative String Arrange](strings/alternative_string_arrange.py) * [Anagrams](strings/anagrams.py) * [Autocomplete Using Trie](strings/autocomplete_using_trie.py) * [Barcode Validator](strings/barcode_validator.py) * [Boyer Moore Search](strings/boyer_moore_search.py) * [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py) * [Capitalize](strings/capitalize.py) * [Check Anagrams](strings/check_anagrams.py) * [Credit Card Validator](strings/credit_card_validator.py) * [Detecting English Programmatically](strings/detecting_english_programmatically.py) * [Dna](strings/dna.py) * [Frequency Finder](strings/frequency_finder.py) * [Hamming Distance](strings/hamming_distance.py) * [Indian Phone Validator](strings/indian_phone_validator.py) * [Is Contains Unique Chars](strings/is_contains_unique_chars.py) * [Is Isogram](strings/is_isogram.py) * [Is Pangram](strings/is_pangram.py) * [Is Spain National Id](strings/is_spain_national_id.py) * [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py) * [Jaro Winkler](strings/jaro_winkler.py) * [Join](strings/join.py) * [Knuth Morris Pratt](strings/knuth_morris_pratt.py) * [Levenshtein Distance](strings/levenshtein_distance.py) * [Lower](strings/lower.py) * [Manacher](strings/manacher.py) * [Min Cost String Conversion](strings/min_cost_string_conversion.py) * [Naive String Search](strings/naive_string_search.py) * [Ngram](strings/ngram.py) * [Palindrome](strings/palindrome.py) * [Prefix Function](strings/prefix_function.py) * [Rabin Karp](strings/rabin_karp.py) * [Remove Duplicate](strings/remove_duplicate.py) * [Reverse Letters](strings/reverse_letters.py) * [Reverse Long Words](strings/reverse_long_words.py) * [Reverse Words](strings/reverse_words.py) * [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py) * [Split](strings/split.py) * [String Switch Case](strings/string_switch_case.py) * [Text Justification](strings/text_justification.py) * [Top K Frequent Words](strings/top_k_frequent_words.py) * [Upper](strings/upper.py) * [Wave](strings/wave.py) * [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py) * [Word Occurrence](strings/word_occurrence.py) * [Word Patterns](strings/word_patterns.py) * [Z Function](strings/z_function.py) ## Web Programming * [Co2 Emission](web_programming/co2_emission.py) * [Convert Number To Words](web_programming/convert_number_to_words.py) * [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py) * [Crawl Google Results](web_programming/crawl_google_results.py) * [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py) * [Currency Converter](web_programming/currency_converter.py) * [Current Stock Price](web_programming/current_stock_price.py) * [Current Weather](web_programming/current_weather.py) * [Daily Horoscope](web_programming/daily_horoscope.py) * [Download Images From Google Query](web_programming/download_images_from_google_query.py) * [Emails From Url](web_programming/emails_from_url.py) * [Fetch Bbc News](web_programming/fetch_bbc_news.py) * [Fetch Github Info](web_programming/fetch_github_info.py) * [Fetch Jobs](web_programming/fetch_jobs.py) * [Fetch Quotes](web_programming/fetch_quotes.py) * [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py) * [Get Amazon Product Data](web_programming/get_amazon_product_data.py) * [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py) * [Get Imdbtop](web_programming/get_imdbtop.py) * [Get Top Hn Posts](web_programming/get_top_hn_posts.py) * [Get User Tweets](web_programming/get_user_tweets.py) * [Giphy](web_programming/giphy.py) * [Instagram Crawler](web_programming/instagram_crawler.py) * [Instagram Pic](web_programming/instagram_pic.py) * [Instagram Video](web_programming/instagram_video.py) * [Nasa Data](web_programming/nasa_data.py) * [Open Google Results](web_programming/open_google_results.py) * [Random Anime Character](web_programming/random_anime_character.py) * [Recaptcha Verification](web_programming/recaptcha_verification.py) * [Reddit](web_programming/reddit.py) * [Search Books By Isbn](web_programming/search_books_by_isbn.py) * [Slack Message](web_programming/slack_message.py) * [Test Fetch Github Info](web_programming/test_fetch_github_info.py) * [World Covid19 Stats](web_programming/world_covid19_stats.py)
1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Eulerian Path is a path in graph that visits every edge exactly once. # Eulerian Circuit is an Eulerian Path which starts and ends on the same # vertex. # time complexity is O(V+E) # space complexity is O(VE) # using dfs for finding eulerian path traversal def dfs(u, graph, visited_edge, path=None): path = (path or []) + [u] for v in graph[u]: if visited_edge[u][v] is False: visited_edge[u][v], visited_edge[v][u] = True, True path = dfs(v, graph, visited_edge, path) return path # for checking in graph has euler path or circuit def check_circuit_or_path(graph, max_node): odd_degree_nodes = 0 odd_node = -1 for i in range(max_node): if i not in graph.keys(): continue if len(graph[i]) % 2 == 1: odd_degree_nodes += 1 odd_node = i if odd_degree_nodes == 0: return 1, odd_node if odd_degree_nodes == 2: return 2, odd_node return 3, odd_node def check_euler(graph, max_node): visited_edge = [[False for _ in range(max_node + 1)] for _ in range(max_node + 1)] check, odd_node = check_circuit_or_path(graph, max_node) if check == 3: print("graph is not Eulerian") print("no path") return start_node = 1 if check == 2: start_node = odd_node print("graph has a Euler path") if check == 1: print("graph has a Euler cycle") path = dfs(start_node, graph, visited_edge) print(path) def main(): g1 = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [4]} g2 = {1: [2, 3, 4, 5], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [1, 4]} g3 = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2], 4: [1, 2, 5], 5: [4]} g4 = {1: [2, 3], 2: [1, 3], 3: [1, 2]} g5 = { 1: [], 2: [] # all degree is zero } max_node = 10 check_euler(g1, max_node) check_euler(g2, max_node) check_euler(g3, max_node) check_euler(g4, max_node) check_euler(g5, max_node) if __name__ == "__main__": main()
# Eulerian Path is a path in graph that visits every edge exactly once. # Eulerian Circuit is an Eulerian Path which starts and ends on the same # vertex. # time complexity is O(V+E) # space complexity is O(VE) # using dfs for finding eulerian path traversal def dfs(u, graph, visited_edge, path=None): path = (path or []) + [u] for v in graph[u]: if visited_edge[u][v] is False: visited_edge[u][v], visited_edge[v][u] = True, True path = dfs(v, graph, visited_edge, path) return path # for checking in graph has euler path or circuit def check_circuit_or_path(graph, max_node): odd_degree_nodes = 0 odd_node = -1 for i in range(max_node): if i not in graph: continue if len(graph[i]) % 2 == 1: odd_degree_nodes += 1 odd_node = i if odd_degree_nodes == 0: return 1, odd_node if odd_degree_nodes == 2: return 2, odd_node return 3, odd_node def check_euler(graph, max_node): visited_edge = [[False for _ in range(max_node + 1)] for _ in range(max_node + 1)] check, odd_node = check_circuit_or_path(graph, max_node) if check == 3: print("graph is not Eulerian") print("no path") return start_node = 1 if check == 2: start_node = odd_node print("graph has a Euler path") if check == 1: print("graph has a Euler cycle") path = dfs(start_node, graph, visited_edge) print(path) def main(): g1 = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [4]} g2 = {1: [2, 3, 4, 5], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [1, 4]} g3 = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2], 4: [1, 2, 5], 5: [4]} g4 = {1: [2, 3], 2: [1, 3], 3: [1, 2]} g5 = { 1: [], 2: [] # all degree is zero } max_node = 10 check_euler(g1, max_node) check_euler(g2, max_node) check_euler(g3, max_node) check_euler(g4, max_node) check_euler(g5, max_node) if __name__ == "__main__": main()
1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Description : Newton's second law of motion pertains to the behavior of objects for which all existing forces are not balanced. The second law states that the acceleration of an object is dependent upon two variables - the net force acting upon the object and the mass of the object. The acceleration of an object depends directly upon the net force acting upon the object, and inversely upon the mass of the object. As the force acting upon an object is increased, the acceleration of the object is increased. As the mass of an object is increased, the acceleration of the object is decreased. Source: https://www.physicsclassroom.com/class/newtlaws/Lesson-3/Newton-s-Second-Law Formulation: Fnet = m • a Diagrammatic Explanation: Forces are unbalanced | | | V There is acceleration /\ / \ / \ / \ / \ / \ / \ __________________ ____ ________________ |The acceleration | |The acceleration | |depends directly | |depends inversely | |on the net Force | |upon the object's | |_________________| |mass_______________| Units: 1 Newton = 1 kg X meters / (seconds^2) How to use? Inputs: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |mass | (in kgs) | float | |-------------|-------------------------|-----------| |acceleration | (in meters/(seconds^2)) | float | |_____________|_________________________|___________| Output: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |force | (in Newtons) | float | |_____________|_________________________|___________| """ def newtons_second_law_of_motion(mass: float, acceleration: float) -> float: """ >>> newtons_second_law_of_motion(10, 10) 100 >>> newtons_second_law_of_motion(2.0, 1) 2.0 """ force = float() try: force = mass * acceleration except Exception: return -0.0 return force if __name__ == "__main__": import doctest # run doctest doctest.testmod() # demo mass = 12.5 acceleration = 10 force = newtons_second_law_of_motion(mass, acceleration) print("The force is ", force, "N")
""" Description : Newton's second law of motion pertains to the behavior of objects for which all existing forces are not balanced. The second law states that the acceleration of an object is dependent upon two variables - the net force acting upon the object and the mass of the object. The acceleration of an object depends directly upon the net force acting upon the object, and inversely upon the mass of the object. As the force acting upon an object is increased, the acceleration of the object is increased. As the mass of an object is increased, the acceleration of the object is decreased. Source: https://www.physicsclassroom.com/class/newtlaws/Lesson-3/Newton-s-Second-Law Formulation: Fnet = m • a Diagrammatic Explanation: Forces are unbalanced | | | V There is acceleration /\ / \ / \ / \ / \ / \ / \ __________________ ____ ________________ |The acceleration | |The acceleration | |depends directly | |depends inversely | |on the net Force | |upon the object's | |_________________| |mass_______________| Units: 1 Newton = 1 kg X meters / (seconds^2) How to use? Inputs: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |mass | (in kgs) | float | |-------------|-------------------------|-----------| |acceleration | (in meters/(seconds^2)) | float | |_____________|_________________________|___________| Output: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |force | (in Newtons) | float | |_____________|_________________________|___________| """ def newtons_second_law_of_motion(mass: float, acceleration: float) -> float: """ >>> newtons_second_law_of_motion(10, 10) 100 >>> newtons_second_law_of_motion(2.0, 1) 2.0 """ force = 0.0 try: force = mass * acceleration except Exception: return -0.0 return force if __name__ == "__main__": import doctest # run doctest doctest.testmod() # demo mass = 12.5 acceleration = 10 force = newtons_second_law_of_motion(mass, acceleration) print("The force is ", force, "N")
1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
#!/usr/bin/env python3 import requests giphy_api_key = "YOUR API KEY" # Can be fetched from https://developers.giphy.com/dashboard/ def get_gifs(query: str, api_key: str = giphy_api_key) -> list: """ Get a list of URLs of GIFs based on a given query.. """ formatted_query = "+".join(query.split()) url = f"https://api.giphy.com/v1/gifs/search?q={formatted_query}&api_key={api_key}" gifs = requests.get(url).json()["data"] return [gif["url"] for gif in gifs] if __name__ == "__main__": print("\n".join(get_gifs("space ship")))
#!/usr/bin/env python3 import requests giphy_api_key = "YOUR API KEY" # Can be fetched from https://developers.giphy.com/dashboard/ def get_gifs(query: str, api_key: str = giphy_api_key) -> list: """ Get a list of URLs of GIFs based on a given query.. """ formatted_query = "+".join(query.split()) url = f"https://api.giphy.com/v1/gifs/search?q={formatted_query}&api_key={api_key}" gifs = requests.get(url).json()["data"] return [gif["url"] for gif in gifs] if __name__ == "__main__": print("\n".join(get_gifs("space ship")))
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Author : Alexander Pantyukhin Date : October 14, 2022 This is implementation Dynamic Programming up bottom approach to find edit distance. The aim is to demonstate up bottom approach for solving the task. The implementation was tested on the leetcode: https://leetcode.com/problems/edit-distance/ Levinstein distance Dynamic Programming: up -> down. """ import functools def min_distance_up_bottom(word1: str, word2: str) -> int: """ >>> min_distance_up_bottom("intention", "execution") 5 >>> min_distance_up_bottom("intention", "") 9 >>> min_distance_up_bottom("", "") 0 >>> min_distance_up_bottom("zooicoarchaeologist", "zoologist") 10 """ len_word1 = len(word1) len_word2 = len(word2) @functools.cache def min_distance(index1: int, index2: int) -> int: # if first word index is overflow - delete all from the second word if index1 >= len_word1: return len_word2 - index2 # if second word index is overflow - delete all from the first word if index2 >= len_word2: return len_word1 - index1 diff = int(word1[index1] != word2[index2]) # current letters not identical return min( 1 + min_distance(index1 + 1, index2), 1 + min_distance(index1, index2 + 1), diff + min_distance(index1 + 1, index2 + 1), ) return min_distance(0, 0) if __name__ == "__main__": import doctest doctest.testmod()
""" Author : Alexander Pantyukhin Date : October 14, 2022 This is implementation Dynamic Programming up bottom approach to find edit distance. The aim is to demonstate up bottom approach for solving the task. The implementation was tested on the leetcode: https://leetcode.com/problems/edit-distance/ Levinstein distance Dynamic Programming: up -> down. """ import functools def min_distance_up_bottom(word1: str, word2: str) -> int: """ >>> min_distance_up_bottom("intention", "execution") 5 >>> min_distance_up_bottom("intention", "") 9 >>> min_distance_up_bottom("", "") 0 >>> min_distance_up_bottom("zooicoarchaeologist", "zoologist") 10 """ len_word1 = len(word1) len_word2 = len(word2) @functools.cache def min_distance(index1: int, index2: int) -> int: # if first word index is overflow - delete all from the second word if index1 >= len_word1: return len_word2 - index2 # if second word index is overflow - delete all from the first word if index2 >= len_word2: return len_word1 - index1 diff = int(word1[index1] != word2[index2]) # current letters not identical return min( 1 + min_distance(index1 + 1, index2), 1 + min_distance(index1, index2 + 1), diff + min_distance(index1 + 1, index2 + 1), ) return min_distance(0, 0) if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
""" Harmonic mean Reference: https://en.wikipedia.org/wiki/Harmonic_mean Harmonic series Reference: https://en.wikipedia.org/wiki/Harmonic_series(mathematics) """ def is_harmonic_series(series: list) -> bool: """ checking whether the input series is arithmetic series or not >>> is_harmonic_series([ 1, 2/3, 1/2, 2/5, 1/3]) True >>> is_harmonic_series([ 1, 2/3, 2/5, 1/3]) False >>> is_harmonic_series([1, 2, 3]) False >>> is_harmonic_series([1/2, 1/3, 1/4]) True >>> is_harmonic_series([2/5, 2/10, 2/15, 2/20, 2/25]) True >>> is_harmonic_series(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [1, 2/3, 2] >>> is_harmonic_series([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list >>> is_harmonic_series([0]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element >>> is_harmonic_series([1,2,0,6]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [1, 2/3, 2]") if len(series) == 0: raise ValueError("Input list must be a non empty list") if len(series) == 1 and series[0] != 0: return True rec_series = [] series_len = len(series) for i in range(0, series_len): if series[i] == 0: raise ValueError("Input series cannot have 0 as an element") rec_series.append(1 / series[i]) common_diff = rec_series[1] - rec_series[0] for index in range(2, series_len): if rec_series[index] - rec_series[index - 1] != common_diff: return False return True def harmonic_mean(series: list) -> float: """ return the harmonic mean of series >>> harmonic_mean([1, 4, 4]) 2.0 >>> harmonic_mean([3, 6, 9, 12]) 5.759999999999999 >>> harmonic_mean(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [2, 4, 6] >>> harmonic_mean([1, 2, 3]) 1.6363636363636365 >>> harmonic_mean([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [2, 4, 6]") if len(series) == 0: raise ValueError("Input list must be a non empty list") answer = 0 for val in series: answer += 1 / val return len(series) / answer if __name__ == "__main__": import doctest doctest.testmod()
""" Harmonic mean Reference: https://en.wikipedia.org/wiki/Harmonic_mean Harmonic series Reference: https://en.wikipedia.org/wiki/Harmonic_series(mathematics) """ def is_harmonic_series(series: list) -> bool: """ checking whether the input series is arithmetic series or not >>> is_harmonic_series([ 1, 2/3, 1/2, 2/5, 1/3]) True >>> is_harmonic_series([ 1, 2/3, 2/5, 1/3]) False >>> is_harmonic_series([1, 2, 3]) False >>> is_harmonic_series([1/2, 1/3, 1/4]) True >>> is_harmonic_series([2/5, 2/10, 2/15, 2/20, 2/25]) True >>> is_harmonic_series(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [1, 2/3, 2] >>> is_harmonic_series([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list >>> is_harmonic_series([0]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element >>> is_harmonic_series([1,2,0,6]) Traceback (most recent call last): ... ValueError: Input series cannot have 0 as an element """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [1, 2/3, 2]") if len(series) == 0: raise ValueError("Input list must be a non empty list") if len(series) == 1 and series[0] != 0: return True rec_series = [] series_len = len(series) for i in range(0, series_len): if series[i] == 0: raise ValueError("Input series cannot have 0 as an element") rec_series.append(1 / series[i]) common_diff = rec_series[1] - rec_series[0] for index in range(2, series_len): if rec_series[index] - rec_series[index - 1] != common_diff: return False return True def harmonic_mean(series: list) -> float: """ return the harmonic mean of series >>> harmonic_mean([1, 4, 4]) 2.0 >>> harmonic_mean([3, 6, 9, 12]) 5.759999999999999 >>> harmonic_mean(4) Traceback (most recent call last): ... ValueError: Input series is not valid, valid series - [2, 4, 6] >>> harmonic_mean([1, 2, 3]) 1.6363636363636365 >>> harmonic_mean([]) Traceback (most recent call last): ... ValueError: Input list must be a non empty list """ if not isinstance(series, list): raise ValueError("Input series is not valid, valid series - [2, 4, 6]") if len(series) == 0: raise ValueError("Input list must be a non empty list") answer = 0 for val in series: answer += 1 / val return len(series) / answer if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
import os import random import sys from . import cryptomath_module as cryptomath from . import rabin_miller min_primitive_root = 3 # I have written my code naively same as definition of primitive root # however every time I run this program, memory exceeded... # so I used 4.80 Algorithm in # Handbook of Applied Cryptography(CRC Press, ISBN : 0-8493-8523-7, October 1996) # and it seems to run nicely! def primitive_root(p_val: int) -> int: print("Generating primitive root of p") while True: g = random.randrange(3, p_val) if pow(g, 2, p_val) == 1: continue if pow(g, p_val, p_val) == 1: continue return g def generate_key(key_size: int) -> tuple[tuple[int, int, int, int], tuple[int, int]]: print("Generating prime p...") p = rabin_miller.generate_large_prime(key_size) # select large prime number. e_1 = primitive_root(p) # one primitive root on modulo p. d = random.randrange(3, p) # private_key -> have to be greater than 2 for safety. e_2 = cryptomath.find_mod_inverse(pow(e_1, d, p), p) public_key = (key_size, e_1, e_2, p) private_key = (key_size, d) return public_key, private_key def make_key_files(name: str, key_size: int) -> None: if os.path.exists(f"{name}_pubkey.txt") or os.path.exists(f"{name}_privkey.txt"): print("\nWARNING:") print( f'"{name}_pubkey.txt" or "{name}_privkey.txt" already exists. \n' "Use a different name or delete these files and re-run this program." ) sys.exit() public_key, private_key = generate_key(key_size) print(f"\nWriting public key to file {name}_pubkey.txt...") with open(f"{name}_pubkey.txt", "w") as fo: fo.write(f"{public_key[0]},{public_key[1]},{public_key[2]},{public_key[3]}") print(f"Writing private key to file {name}_privkey.txt...") with open(f"{name}_privkey.txt", "w") as fo: fo.write(f"{private_key[0]},{private_key[1]}") def main() -> None: print("Making key files...") make_key_files("elgamal", 2048) print("Key files generation successful") if __name__ == "__main__": main()
import os import random import sys from . import cryptomath_module as cryptomath from . import rabin_miller min_primitive_root = 3 # I have written my code naively same as definition of primitive root # however every time I run this program, memory exceeded... # so I used 4.80 Algorithm in # Handbook of Applied Cryptography(CRC Press, ISBN : 0-8493-8523-7, October 1996) # and it seems to run nicely! def primitive_root(p_val: int) -> int: print("Generating primitive root of p") while True: g = random.randrange(3, p_val) if pow(g, 2, p_val) == 1: continue if pow(g, p_val, p_val) == 1: continue return g def generate_key(key_size: int) -> tuple[tuple[int, int, int, int], tuple[int, int]]: print("Generating prime p...") p = rabin_miller.generate_large_prime(key_size) # select large prime number. e_1 = primitive_root(p) # one primitive root on modulo p. d = random.randrange(3, p) # private_key -> have to be greater than 2 for safety. e_2 = cryptomath.find_mod_inverse(pow(e_1, d, p), p) public_key = (key_size, e_1, e_2, p) private_key = (key_size, d) return public_key, private_key def make_key_files(name: str, key_size: int) -> None: if os.path.exists(f"{name}_pubkey.txt") or os.path.exists(f"{name}_privkey.txt"): print("\nWARNING:") print( f'"{name}_pubkey.txt" or "{name}_privkey.txt" already exists. \n' "Use a different name or delete these files and re-run this program." ) sys.exit() public_key, private_key = generate_key(key_size) print(f"\nWriting public key to file {name}_pubkey.txt...") with open(f"{name}_pubkey.txt", "w") as fo: fo.write(f"{public_key[0]},{public_key[1]},{public_key[2]},{public_key[3]}") print(f"Writing private key to file {name}_privkey.txt...") with open(f"{name}_privkey.txt", "w") as fo: fo.write(f"{private_key[0]},{private_key[1]}") def main() -> None: print("Making key files...") make_key_files("elgamal", 2048) print("Key files generation successful") if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
def multiplicative_persistence(num: int) -> int: """ Return the persistence of a given number. https://en.wikipedia.org/wiki/Persistence_of_a_number >>> multiplicative_persistence(217) 2 >>> multiplicative_persistence(-1) Traceback (most recent call last): ... ValueError: multiplicative_persistence() does not accept negative values >>> multiplicative_persistence("long number") Traceback (most recent call last): ... ValueError: multiplicative_persistence() only accepts integral values """ if not isinstance(num, int): raise ValueError("multiplicative_persistence() only accepts integral values") if num < 0: raise ValueError("multiplicative_persistence() does not accept negative values") steps = 0 num_string = str(num) while len(num_string) != 1: numbers = [int(i) for i in num_string] total = 1 for i in range(0, len(numbers)): total *= numbers[i] num_string = str(total) steps += 1 return steps def additive_persistence(num: int) -> int: """ Return the persistence of a given number. https://en.wikipedia.org/wiki/Persistence_of_a_number >>> additive_persistence(199) 3 >>> additive_persistence(-1) Traceback (most recent call last): ... ValueError: additive_persistence() does not accept negative values >>> additive_persistence("long number") Traceback (most recent call last): ... ValueError: additive_persistence() only accepts integral values """ if not isinstance(num, int): raise ValueError("additive_persistence() only accepts integral values") if num < 0: raise ValueError("additive_persistence() does not accept negative values") steps = 0 num_string = str(num) while len(num_string) != 1: numbers = [int(i) for i in num_string] total = 0 for i in range(0, len(numbers)): total += numbers[i] num_string = str(total) steps += 1 return steps if __name__ == "__main__": import doctest doctest.testmod()
def multiplicative_persistence(num: int) -> int: """ Return the persistence of a given number. https://en.wikipedia.org/wiki/Persistence_of_a_number >>> multiplicative_persistence(217) 2 >>> multiplicative_persistence(-1) Traceback (most recent call last): ... ValueError: multiplicative_persistence() does not accept negative values >>> multiplicative_persistence("long number") Traceback (most recent call last): ... ValueError: multiplicative_persistence() only accepts integral values """ if not isinstance(num, int): raise ValueError("multiplicative_persistence() only accepts integral values") if num < 0: raise ValueError("multiplicative_persistence() does not accept negative values") steps = 0 num_string = str(num) while len(num_string) != 1: numbers = [int(i) for i in num_string] total = 1 for i in range(0, len(numbers)): total *= numbers[i] num_string = str(total) steps += 1 return steps def additive_persistence(num: int) -> int: """ Return the persistence of a given number. https://en.wikipedia.org/wiki/Persistence_of_a_number >>> additive_persistence(199) 3 >>> additive_persistence(-1) Traceback (most recent call last): ... ValueError: additive_persistence() does not accept negative values >>> additive_persistence("long number") Traceback (most recent call last): ... ValueError: additive_persistence() only accepts integral values """ if not isinstance(num, int): raise ValueError("additive_persistence() only accepts integral values") if num < 0: raise ValueError("additive_persistence() does not accept negative values") steps = 0 num_string = str(num) while len(num_string) != 1: numbers = [int(i) for i in num_string] total = 0 for i in range(0, len(numbers)): total += numbers[i] num_string = str(total) steps += 1 return steps if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
#!/usr/bin/env python3 """ Implementation of entropy of information https://en.wikipedia.org/wiki/Entropy_(information_theory) """ from __future__ import annotations import math from collections import Counter from string import ascii_lowercase def calculate_prob(text: str) -> None: """ This method takes path and two dict as argument and than calculates entropy of them. :param dict: :param dict: :return: Prints 1) Entropy of information based on 1 alphabet 2) Entropy of information based on couples of 2 alphabet 3) print Entropy of H(X n∣Xn−1) Text from random books. Also, random quotes. >>> text = ("Behind Winston’s back the voice " ... "from the telescreen was still " ... "babbling and the overfulfilment") >>> calculate_prob(text) 4.0 6.0 2.0 >>> text = ("The Ministry of Truth—Minitrue, in Newspeak [Newspeak was the official" ... "face in elegant lettering, the three") >>> calculate_prob(text) 4.0 5.0 1.0 >>> text = ("Had repulsive dashwoods suspicion sincerity but advantage now him. " ... "Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. " ... "You greatest jointure saw horrible. He private he on be imagine " ... "suppose. Fertile beloved evident through no service elderly is. Blind " ... "there if every no so at. Own neglected you preferred way sincerity " ... "delivered his attempted. To of message cottage windows do besides " ... "against uncivil. Delightful unreserved impossible few estimating " ... "men favourable see entreaties. She propriety immediate was improving. " ... "He or entrance humoured likewise moderate. Much nor game son say " ... "feel. Fat make met can must form into gate. Me we offending prevailed " ... "discovery.") >>> calculate_prob(text) 4.0 7.0 3.0 """ single_char_strings, two_char_strings = analyze_text(text) my_alphas = list(" " + ascii_lowercase) # what is our total sum of probabilities. all_sum = sum(single_char_strings.values()) # one length string my_fir_sum = 0 # for each alpha we go in our dict and if it is in it we calculate entropy for ch in my_alphas: if ch in single_char_strings: my_str = single_char_strings[ch] prob = my_str / all_sum my_fir_sum += prob * math.log2(prob) # entropy formula. # print entropy print(f"{round(-1 * my_fir_sum):.1f}") # two len string all_sum = sum(two_char_strings.values()) my_sec_sum = 0 # for each alpha (two in size) calculate entropy. for ch0 in my_alphas: for ch1 in my_alphas: sequence = ch0 + ch1 if sequence in two_char_strings: my_str = two_char_strings[sequence] prob = int(my_str) / all_sum my_sec_sum += prob * math.log2(prob) # print second entropy print(f"{round(-1 * my_sec_sum):.1f}") # print the difference between them print(f"{round((-1 * my_sec_sum) - (-1 * my_fir_sum)):.1f}") def analyze_text(text: str) -> tuple[dict, dict]: """ Convert text input into two dicts of counts. The first dictionary stores the frequency of single character strings. The second dictionary stores the frequency of two character strings. """ single_char_strings = Counter() # type: ignore two_char_strings = Counter() # type: ignore single_char_strings[text[-1]] += 1 # first case when we have space at start. two_char_strings[" " + text[0]] += 1 for i in range(0, len(text) - 1): single_char_strings[text[i]] += 1 two_char_strings[text[i : i + 2]] += 1 return single_char_strings, two_char_strings def main(): import doctest doctest.testmod() # text = ( # "Had repulsive dashwoods suspicion sincerity but advantage now him. Remark " # "easily garret nor nay. Civil those mrs enjoy shy fat merry. You greatest " # "jointure saw horrible. He private he on be imagine suppose. Fertile " # "beloved evident through no service elderly is. Blind there if every no so " # "at. Own neglected you preferred way sincerity delivered his attempted. To " # "of message cottage windows do besides against uncivil. Delightful " # "unreserved impossible few estimating men favourable see entreaties. She " # "propriety immediate was improving. He or entrance humoured likewise " # "moderate. Much nor game son say feel. Fat make met can must form into " # "gate. Me we offending prevailed discovery. " # ) # calculate_prob(text) if __name__ == "__main__": main()
#!/usr/bin/env python3 """ Implementation of entropy of information https://en.wikipedia.org/wiki/Entropy_(information_theory) """ from __future__ import annotations import math from collections import Counter from string import ascii_lowercase def calculate_prob(text: str) -> None: """ This method takes path and two dict as argument and than calculates entropy of them. :param dict: :param dict: :return: Prints 1) Entropy of information based on 1 alphabet 2) Entropy of information based on couples of 2 alphabet 3) print Entropy of H(X n∣Xn−1) Text from random books. Also, random quotes. >>> text = ("Behind Winston’s back the voice " ... "from the telescreen was still " ... "babbling and the overfulfilment") >>> calculate_prob(text) 4.0 6.0 2.0 >>> text = ("The Ministry of Truth—Minitrue, in Newspeak [Newspeak was the official" ... "face in elegant lettering, the three") >>> calculate_prob(text) 4.0 5.0 1.0 >>> text = ("Had repulsive dashwoods suspicion sincerity but advantage now him. " ... "Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. " ... "You greatest jointure saw horrible. He private he on be imagine " ... "suppose. Fertile beloved evident through no service elderly is. Blind " ... "there if every no so at. Own neglected you preferred way sincerity " ... "delivered his attempted. To of message cottage windows do besides " ... "against uncivil. Delightful unreserved impossible few estimating " ... "men favourable see entreaties. She propriety immediate was improving. " ... "He or entrance humoured likewise moderate. Much nor game son say " ... "feel. Fat make met can must form into gate. Me we offending prevailed " ... "discovery.") >>> calculate_prob(text) 4.0 7.0 3.0 """ single_char_strings, two_char_strings = analyze_text(text) my_alphas = list(" " + ascii_lowercase) # what is our total sum of probabilities. all_sum = sum(single_char_strings.values()) # one length string my_fir_sum = 0 # for each alpha we go in our dict and if it is in it we calculate entropy for ch in my_alphas: if ch in single_char_strings: my_str = single_char_strings[ch] prob = my_str / all_sum my_fir_sum += prob * math.log2(prob) # entropy formula. # print entropy print(f"{round(-1 * my_fir_sum):.1f}") # two len string all_sum = sum(two_char_strings.values()) my_sec_sum = 0 # for each alpha (two in size) calculate entropy. for ch0 in my_alphas: for ch1 in my_alphas: sequence = ch0 + ch1 if sequence in two_char_strings: my_str = two_char_strings[sequence] prob = int(my_str) / all_sum my_sec_sum += prob * math.log2(prob) # print second entropy print(f"{round(-1 * my_sec_sum):.1f}") # print the difference between them print(f"{round((-1 * my_sec_sum) - (-1 * my_fir_sum)):.1f}") def analyze_text(text: str) -> tuple[dict, dict]: """ Convert text input into two dicts of counts. The first dictionary stores the frequency of single character strings. The second dictionary stores the frequency of two character strings. """ single_char_strings = Counter() # type: ignore two_char_strings = Counter() # type: ignore single_char_strings[text[-1]] += 1 # first case when we have space at start. two_char_strings[" " + text[0]] += 1 for i in range(0, len(text) - 1): single_char_strings[text[i]] += 1 two_char_strings[text[i : i + 2]] += 1 return single_char_strings, two_char_strings def main(): import doctest doctest.testmod() # text = ( # "Had repulsive dashwoods suspicion sincerity but advantage now him. Remark " # "easily garret nor nay. Civil those mrs enjoy shy fat merry. You greatest " # "jointure saw horrible. He private he on be imagine suppose. Fertile " # "beloved evident through no service elderly is. Blind there if every no so " # "at. Own neglected you preferred way sincerity delivered his attempted. To " # "of message cottage windows do besides against uncivil. Delightful " # "unreserved impossible few estimating men favourable see entreaties. She " # "propriety immediate was improving. He or entrance humoured likewise " # "moderate. Much nor game son say feel. Fat make met can must form into " # "gate. Me we offending prevailed discovery. " # ) # calculate_prob(text) if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,913
Ruff fixes
### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02
"2023-07-31T19:13:04Z"
"2023-07-31T20:53:26Z"
90a8e6e0d210a5c526c8f485fa825e1649d217e2
5cf34d901e32b65425103309bbad0068b1851238
Ruff fixes. ### Describe your change: Fix graphs/eulerian_path_and_circuit_for_undirected_graph.py and physics/newtons_second_law_of_motion.py, which are causing ruff to fail * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
# Implementation of First Come First Served scheduling algorithm # In this Algorithm we just care about the order that the processes arrived # without carring about their duration time # https://en.wikipedia.org/wiki/Scheduling_(computing)#First_come,_first_served from __future__ import annotations def calculate_waiting_times(duration_times: list[int]) -> list[int]: """ This function calculates the waiting time of some processes that have a specified duration time. Return: The waiting time for each process. >>> calculate_waiting_times([5, 10, 15]) [0, 5, 15] >>> calculate_waiting_times([1, 2, 3, 4, 5]) [0, 1, 3, 6, 10] >>> calculate_waiting_times([10, 3]) [0, 10] """ waiting_times = [0] * len(duration_times) for i in range(1, len(duration_times)): waiting_times[i] = duration_times[i - 1] + waiting_times[i - 1] return waiting_times def calculate_turnaround_times( duration_times: list[int], waiting_times: list[int] ) -> list[int]: """ This function calculates the turnaround time of some processes. Return: The time difference between the completion time and the arrival time. Practically waiting_time + duration_time >>> calculate_turnaround_times([5, 10, 15], [0, 5, 15]) [5, 15, 30] >>> calculate_turnaround_times([1, 2, 3, 4, 5], [0, 1, 3, 6, 10]) [1, 3, 6, 10, 15] >>> calculate_turnaround_times([10, 3], [0, 10]) [10, 13] """ return [ duration_time + waiting_times[i] for i, duration_time in enumerate(duration_times) ] def calculate_average_turnaround_time(turnaround_times: list[int]) -> float: """ This function calculates the average of the turnaround times Return: The average of the turnaround times. >>> calculate_average_turnaround_time([0, 5, 16]) 7.0 >>> calculate_average_turnaround_time([1, 5, 8, 12]) 6.5 >>> calculate_average_turnaround_time([10, 24]) 17.0 """ return sum(turnaround_times) / len(turnaround_times) def calculate_average_waiting_time(waiting_times: list[int]) -> float: """ This function calculates the average of the waiting times Return: The average of the waiting times. >>> calculate_average_waiting_time([0, 5, 16]) 7.0 >>> calculate_average_waiting_time([1, 5, 8, 12]) 6.5 >>> calculate_average_waiting_time([10, 24]) 17.0 """ return sum(waiting_times) / len(waiting_times) if __name__ == "__main__": # process id's processes = [1, 2, 3] # ensure that we actually have processes if len(processes) == 0: print("Zero amount of processes") raise SystemExit(0) # duration time of all processes duration_times = [19, 8, 9] # ensure we can match each id to a duration time if len(duration_times) != len(processes): print("Unable to match all id's with their duration time") raise SystemExit(0) # get the waiting times and the turnaround times waiting_times = calculate_waiting_times(duration_times) turnaround_times = calculate_turnaround_times(duration_times, waiting_times) # get the average times average_waiting_time = calculate_average_waiting_time(waiting_times) average_turnaround_time = calculate_average_turnaround_time(turnaround_times) # print all the results print("Process ID\tDuration Time\tWaiting Time\tTurnaround Time") for i, process in enumerate(processes): print( f"{process}\t\t{duration_times[i]}\t\t{waiting_times[i]}\t\t" f"{turnaround_times[i]}" ) print(f"Average waiting time = {average_waiting_time}") print(f"Average turn around time = {average_turnaround_time}")
# Implementation of First Come First Served scheduling algorithm # In this Algorithm we just care about the order that the processes arrived # without carring about their duration time # https://en.wikipedia.org/wiki/Scheduling_(computing)#First_come,_first_served from __future__ import annotations def calculate_waiting_times(duration_times: list[int]) -> list[int]: """ This function calculates the waiting time of some processes that have a specified duration time. Return: The waiting time for each process. >>> calculate_waiting_times([5, 10, 15]) [0, 5, 15] >>> calculate_waiting_times([1, 2, 3, 4, 5]) [0, 1, 3, 6, 10] >>> calculate_waiting_times([10, 3]) [0, 10] """ waiting_times = [0] * len(duration_times) for i in range(1, len(duration_times)): waiting_times[i] = duration_times[i - 1] + waiting_times[i - 1] return waiting_times def calculate_turnaround_times( duration_times: list[int], waiting_times: list[int] ) -> list[int]: """ This function calculates the turnaround time of some processes. Return: The time difference between the completion time and the arrival time. Practically waiting_time + duration_time >>> calculate_turnaround_times([5, 10, 15], [0, 5, 15]) [5, 15, 30] >>> calculate_turnaround_times([1, 2, 3, 4, 5], [0, 1, 3, 6, 10]) [1, 3, 6, 10, 15] >>> calculate_turnaround_times([10, 3], [0, 10]) [10, 13] """ return [ duration_time + waiting_times[i] for i, duration_time in enumerate(duration_times) ] def calculate_average_turnaround_time(turnaround_times: list[int]) -> float: """ This function calculates the average of the turnaround times Return: The average of the turnaround times. >>> calculate_average_turnaround_time([0, 5, 16]) 7.0 >>> calculate_average_turnaround_time([1, 5, 8, 12]) 6.5 >>> calculate_average_turnaround_time([10, 24]) 17.0 """ return sum(turnaround_times) / len(turnaround_times) def calculate_average_waiting_time(waiting_times: list[int]) -> float: """ This function calculates the average of the waiting times Return: The average of the waiting times. >>> calculate_average_waiting_time([0, 5, 16]) 7.0 >>> calculate_average_waiting_time([1, 5, 8, 12]) 6.5 >>> calculate_average_waiting_time([10, 24]) 17.0 """ return sum(waiting_times) / len(waiting_times) if __name__ == "__main__": # process id's processes = [1, 2, 3] # ensure that we actually have processes if len(processes) == 0: print("Zero amount of processes") raise SystemExit(0) # duration time of all processes duration_times = [19, 8, 9] # ensure we can match each id to a duration time if len(duration_times) != len(processes): print("Unable to match all id's with their duration time") raise SystemExit(0) # get the waiting times and the turnaround times waiting_times = calculate_waiting_times(duration_times) turnaround_times = calculate_turnaround_times(duration_times, waiting_times) # get the average times average_waiting_time = calculate_average_waiting_time(waiting_times) average_turnaround_time = calculate_average_turnaround_time(turnaround_times) # print all the results print("Process ID\tDuration Time\tWaiting Time\tTurnaround Time") for i, process in enumerate(processes): print( f"{process}\t\t{duration_times[i]}\t\t{waiting_times[i]}\t\t" f"{turnaround_times[i]}" ) print(f"Average waiting time = {average_waiting_time}") print(f"Average turn around time = {average_turnaround_time}")
-1