repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import base64
def base85_encode(string: str) -> bytes:
"""
>>> base85_encode("")
b''
>>> base85_encode("12345")
b'0etOA2#'
>>> base85_encode("base 85")
b'@UX=h+?24'
"""
# encoded the input to a bytes-like object and then a85encode that
return base64.a85encode(string.encode("utf-8"))
def base85_decode(a85encoded: bytes) -> str:
"""
>>> base85_decode(b"")
''
>>> base85_decode(b"0etOA2#")
'12345'
>>> base85_decode(b"@UX=h+?24")
'base 85'
"""
# a85decode the input into bytes and decode that into a human readable string
return base64.a85decode(a85encoded).decode("utf-8")
if __name__ == "__main__":
import doctest
doctest.testmod()
| import base64
def base85_encode(string: str) -> bytes:
"""
>>> base85_encode("")
b''
>>> base85_encode("12345")
b'0etOA2#'
>>> base85_encode("base 85")
b'@UX=h+?24'
"""
# encoded the input to a bytes-like object and then a85encode that
return base64.a85encode(string.encode("utf-8"))
def base85_decode(a85encoded: bytes) -> str:
"""
>>> base85_decode(b"")
''
>>> base85_decode(b"0etOA2#")
'12345'
>>> base85_decode(b"@UX=h+?24")
'base 85'
"""
# a85decode the input into bytes and decode that into a human readable string
return base64.a85decode(a85encoded).decode("utf-8")
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Information on binary shifts:
# https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types
# https://www.interviewcake.com/concept/java/bit-shift
def logical_left_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 positive integers.
'number' is the integer to be logically left shifted 'shift_amount' times.
i.e. (number << shift_amount)
Return the shifted binary representation.
>>> logical_left_shift(0, 1)
'0b00'
>>> logical_left_shift(1, 1)
'0b10'
>>> logical_left_shift(1, 5)
'0b100000'
>>> logical_left_shift(17, 2)
'0b1000100'
>>> logical_left_shift(1983, 4)
'0b111101111110000'
>>> logical_left_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))
binary_number += "0" * shift_amount
return binary_number
def logical_right_shift(number: int, shift_amount: int) -> str:
"""
Take in positive 2 integers.
'number' is the integer to be logically right shifted 'shift_amount' times.
i.e. (number >>> shift_amount)
Return the shifted binary representation.
>>> logical_right_shift(0, 1)
'0b0'
>>> logical_right_shift(1, 1)
'0b0'
>>> logical_right_shift(1, 5)
'0b0'
>>> logical_right_shift(17, 2)
'0b100'
>>> logical_right_shift(1983, 4)
'0b1111011'
>>> logical_right_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))[2:]
if shift_amount >= len(binary_number):
return "0b0"
shifted_binary_number = binary_number[: len(binary_number) - shift_amount]
return "0b" + shifted_binary_number
def arithmetic_right_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 integers.
'number' is the integer to be arithmetically right shifted 'shift_amount' times.
i.e. (number >> shift_amount)
Return the shifted binary representation.
>>> arithmetic_right_shift(0, 1)
'0b00'
>>> arithmetic_right_shift(1, 1)
'0b00'
>>> arithmetic_right_shift(-1, 1)
'0b11'
>>> arithmetic_right_shift(17, 2)
'0b000100'
>>> arithmetic_right_shift(-17, 2)
'0b111011'
>>> arithmetic_right_shift(-1983, 4)
'0b111110000100'
"""
if number >= 0: # Get binary representation of positive number
binary_number = "0" + str(bin(number)).strip("-")[2:]
else: # Get binary (2's complement) representation of negative number
binary_number_length = len(bin(number)[3:]) # Find 2's complement of number
binary_number = bin(abs(number) - (1 << binary_number_length))[3:]
binary_number = (
"1" + "0" * (binary_number_length - len(binary_number)) + binary_number
)
if shift_amount >= len(binary_number):
return "0b" + binary_number[0] * len(binary_number)
return (
"0b"
+ binary_number[0] * shift_amount
+ binary_number[: len(binary_number) - shift_amount]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Information on binary shifts:
# https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types
# https://www.interviewcake.com/concept/java/bit-shift
def logical_left_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 positive integers.
'number' is the integer to be logically left shifted 'shift_amount' times.
i.e. (number << shift_amount)
Return the shifted binary representation.
>>> logical_left_shift(0, 1)
'0b00'
>>> logical_left_shift(1, 1)
'0b10'
>>> logical_left_shift(1, 5)
'0b100000'
>>> logical_left_shift(17, 2)
'0b1000100'
>>> logical_left_shift(1983, 4)
'0b111101111110000'
>>> logical_left_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))
binary_number += "0" * shift_amount
return binary_number
def logical_right_shift(number: int, shift_amount: int) -> str:
"""
Take in positive 2 integers.
'number' is the integer to be logically right shifted 'shift_amount' times.
i.e. (number >>> shift_amount)
Return the shifted binary representation.
>>> logical_right_shift(0, 1)
'0b0'
>>> logical_right_shift(1, 1)
'0b0'
>>> logical_right_shift(1, 5)
'0b0'
>>> logical_right_shift(17, 2)
'0b100'
>>> logical_right_shift(1983, 4)
'0b1111011'
>>> logical_right_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))[2:]
if shift_amount >= len(binary_number):
return "0b0"
shifted_binary_number = binary_number[: len(binary_number) - shift_amount]
return "0b" + shifted_binary_number
def arithmetic_right_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 integers.
'number' is the integer to be arithmetically right shifted 'shift_amount' times.
i.e. (number >> shift_amount)
Return the shifted binary representation.
>>> arithmetic_right_shift(0, 1)
'0b00'
>>> arithmetic_right_shift(1, 1)
'0b00'
>>> arithmetic_right_shift(-1, 1)
'0b11'
>>> arithmetic_right_shift(17, 2)
'0b000100'
>>> arithmetic_right_shift(-17, 2)
'0b111011'
>>> arithmetic_right_shift(-1983, 4)
'0b111110000100'
"""
if number >= 0: # Get binary representation of positive number
binary_number = "0" + str(bin(number)).strip("-")[2:]
else: # Get binary (2's complement) representation of negative number
binary_number_length = len(bin(number)[3:]) # Find 2's complement of number
binary_number = bin(abs(number) - (1 << binary_number_length))[3:]
binary_number = (
"1" + "0" * (binary_number_length - len(binary_number)) + binary_number
)
if shift_amount >= len(binary_number):
return "0b" + binary_number[0] * len(binary_number)
return (
"0b"
+ binary_number[0] * shift_amount
+ binary_number[: len(binary_number) - shift_amount]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| NUMBERS_PLUS_LETTER = "Input must be a string of 8 numbers plus letter"
LOOKUP_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE"
def is_spain_national_id(spanish_id: str) -> bool:
"""
Spain National Id is a string composed by 8 numbers plus a letter
The letter in fact is not part of the ID, it acts as a validator,
checking you didn't do a mistake when entering it on a system or
are giving a fake one.
https://en.wikipedia.org/wiki/Documento_Nacional_de_Identidad_(Spain)#Number
>>> is_spain_national_id("12345678Z")
True
>>> is_spain_national_id("12345678z") # It is case-insensitive
True
>>> is_spain_national_id("12345678x")
False
>>> is_spain_national_id("12345678I")
False
>>> is_spain_national_id("12345678-Z") # Some systems add a dash
True
>>> is_spain_national_id("12345678")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("123456709")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234567--Z")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234Z")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234ZzZZ")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id(12345678)
Traceback (most recent call last):
...
TypeError: Expected string as input, found int
"""
if not isinstance(spanish_id, str):
msg = f"Expected string as input, found {type(spanish_id).__name__}"
raise TypeError(msg)
spanish_id_clean = spanish_id.replace("-", "").upper()
if len(spanish_id_clean) != 9:
raise ValueError(NUMBERS_PLUS_LETTER)
try:
number = int(spanish_id_clean[0:8])
letter = spanish_id_clean[8]
except ValueError as ex:
raise ValueError(NUMBERS_PLUS_LETTER) from ex
if letter.isdigit():
raise ValueError(NUMBERS_PLUS_LETTER)
return letter == LOOKUP_LETTERS[number % 23]
if __name__ == "__main__":
import doctest
doctest.testmod()
| NUMBERS_PLUS_LETTER = "Input must be a string of 8 numbers plus letter"
LOOKUP_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE"
def is_spain_national_id(spanish_id: str) -> bool:
"""
Spain National Id is a string composed by 8 numbers plus a letter
The letter in fact is not part of the ID, it acts as a validator,
checking you didn't do a mistake when entering it on a system or
are giving a fake one.
https://en.wikipedia.org/wiki/Documento_Nacional_de_Identidad_(Spain)#Number
>>> is_spain_national_id("12345678Z")
True
>>> is_spain_national_id("12345678z") # It is case-insensitive
True
>>> is_spain_national_id("12345678x")
False
>>> is_spain_national_id("12345678I")
False
>>> is_spain_national_id("12345678-Z") # Some systems add a dash
True
>>> is_spain_national_id("12345678")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("123456709")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234567--Z")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234Z")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id("1234ZzZZ")
Traceback (most recent call last):
...
ValueError: Input must be a string of 8 numbers plus letter
>>> is_spain_national_id(12345678)
Traceback (most recent call last):
...
TypeError: Expected string as input, found int
"""
if not isinstance(spanish_id, str):
msg = f"Expected string as input, found {type(spanish_id).__name__}"
raise TypeError(msg)
spanish_id_clean = spanish_id.replace("-", "").upper()
if len(spanish_id_clean) != 9:
raise ValueError(NUMBERS_PLUS_LETTER)
try:
number = int(spanish_id_clean[0:8])
letter = spanish_id_clean[8]
except ValueError as ex:
raise ValueError(NUMBERS_PLUS_LETTER) from ex
if letter.isdigit():
raise ValueError(NUMBERS_PLUS_LETTER)
return letter == LOOKUP_LETTERS[number % 23]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| B64_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
def base64_encode(data: bytes) -> bytes:
"""Encodes data according to RFC4648.
The data is first transformed to binary and appended with binary digits so that its
length becomes a multiple of 6, then each 6 binary digits will match a character in
the B64_CHARSET string. The number of appended binary digits would later determine
how many "=" signs should be added, the padding.
For every 2 binary digits added, a "=" sign is added in the output.
We can add any binary digits to make it a multiple of 6, for instance, consider the
following example:
"AA" -> 0010100100101001 -> 001010 010010 1001
As can be seen above, 2 more binary digits should be added, so there's 4
possibilities here: 00, 01, 10 or 11.
That being said, Base64 encoding can be used in Steganography to hide data in these
appended digits.
>>> from base64 import b64encode
>>> a = b"This pull request is part of Hacktoberfest20!"
>>> b = b"https://tools.ietf.org/html/rfc4648"
>>> c = b"A"
>>> base64_encode(a) == b64encode(a)
True
>>> base64_encode(b) == b64encode(b)
True
>>> base64_encode(c) == b64encode(c)
True
>>> base64_encode("abc")
Traceback (most recent call last):
...
TypeError: a bytes-like object is required, not 'str'
"""
# Make sure the supplied data is a bytes-like object
if not isinstance(data, bytes):
msg = f"a bytes-like object is required, not '{data.__class__.__name__}'"
raise TypeError(msg)
binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data)
padding_needed = len(binary_stream) % 6 != 0
if padding_needed:
# The padding that will be added later
padding = b"=" * ((6 - len(binary_stream) % 6) // 2)
# Append binary_stream with arbitrary binary digits (0's by default) to make its
# length a multiple of 6.
binary_stream += "0" * (6 - len(binary_stream) % 6)
else:
padding = b""
# Encode every 6 binary digits to their corresponding Base64 character
return (
"".join(
B64_CHARSET[int(binary_stream[index : index + 6], 2)]
for index in range(0, len(binary_stream), 6)
).encode()
+ padding
)
def base64_decode(encoded_data: str) -> bytes:
"""Decodes data according to RFC4648.
This does the reverse operation of base64_encode.
We first transform the encoded data back to a binary stream, take off the
previously appended binary digits according to the padding, at this point we
would have a binary stream whose length is multiple of 8, the last step is
to convert every 8 bits to a byte.
>>> from base64 import b64decode
>>> a = "VGhpcyBwdWxsIHJlcXVlc3QgaXMgcGFydCBvZiBIYWNrdG9iZXJmZXN0MjAh"
>>> b = "aHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3JmYzQ2NDg="
>>> c = "QQ=="
>>> base64_decode(a) == b64decode(a)
True
>>> base64_decode(b) == b64decode(b)
True
>>> base64_decode(c) == b64decode(c)
True
>>> base64_decode("abc")
Traceback (most recent call last):
...
AssertionError: Incorrect padding
"""
# Make sure encoded_data is either a string or a bytes-like object
if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str):
msg = (
"argument should be a bytes-like object or ASCII string, "
f"not '{encoded_data.__class__.__name__}'"
)
raise TypeError(msg)
# In case encoded_data is a bytes-like object, make sure it contains only
# ASCII characters so we convert it to a string object
if isinstance(encoded_data, bytes):
try:
encoded_data = encoded_data.decode("utf-8")
except UnicodeDecodeError:
raise ValueError("base64 encoded data should only contain ASCII characters")
padding = encoded_data.count("=")
# Check if the encoded string contains non base64 characters
if padding:
assert all(
char in B64_CHARSET for char in encoded_data[:-padding]
), "Invalid base64 character(s) found."
else:
assert all(
char in B64_CHARSET for char in encoded_data
), "Invalid base64 character(s) found."
# Check the padding
assert len(encoded_data) % 4 == 0 and padding < 3, "Incorrect padding"
if padding:
# Remove padding if there is one
encoded_data = encoded_data[:-padding]
binary_stream = "".join(
bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data
)[: -padding * 2]
else:
binary_stream = "".join(
bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data
)
data = [
int(binary_stream[index : index + 8], 2)
for index in range(0, len(binary_stream), 8)
]
return bytes(data)
if __name__ == "__main__":
import doctest
doctest.testmod()
| B64_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
def base64_encode(data: bytes) -> bytes:
"""Encodes data according to RFC4648.
The data is first transformed to binary and appended with binary digits so that its
length becomes a multiple of 6, then each 6 binary digits will match a character in
the B64_CHARSET string. The number of appended binary digits would later determine
how many "=" signs should be added, the padding.
For every 2 binary digits added, a "=" sign is added in the output.
We can add any binary digits to make it a multiple of 6, for instance, consider the
following example:
"AA" -> 0010100100101001 -> 001010 010010 1001
As can be seen above, 2 more binary digits should be added, so there's 4
possibilities here: 00, 01, 10 or 11.
That being said, Base64 encoding can be used in Steganography to hide data in these
appended digits.
>>> from base64 import b64encode
>>> a = b"This pull request is part of Hacktoberfest20!"
>>> b = b"https://tools.ietf.org/html/rfc4648"
>>> c = b"A"
>>> base64_encode(a) == b64encode(a)
True
>>> base64_encode(b) == b64encode(b)
True
>>> base64_encode(c) == b64encode(c)
True
>>> base64_encode("abc")
Traceback (most recent call last):
...
TypeError: a bytes-like object is required, not 'str'
"""
# Make sure the supplied data is a bytes-like object
if not isinstance(data, bytes):
msg = f"a bytes-like object is required, not '{data.__class__.__name__}'"
raise TypeError(msg)
binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data)
padding_needed = len(binary_stream) % 6 != 0
if padding_needed:
# The padding that will be added later
padding = b"=" * ((6 - len(binary_stream) % 6) // 2)
# Append binary_stream with arbitrary binary digits (0's by default) to make its
# length a multiple of 6.
binary_stream += "0" * (6 - len(binary_stream) % 6)
else:
padding = b""
# Encode every 6 binary digits to their corresponding Base64 character
return (
"".join(
B64_CHARSET[int(binary_stream[index : index + 6], 2)]
for index in range(0, len(binary_stream), 6)
).encode()
+ padding
)
def base64_decode(encoded_data: str) -> bytes:
"""Decodes data according to RFC4648.
This does the reverse operation of base64_encode.
We first transform the encoded data back to a binary stream, take off the
previously appended binary digits according to the padding, at this point we
would have a binary stream whose length is multiple of 8, the last step is
to convert every 8 bits to a byte.
>>> from base64 import b64decode
>>> a = "VGhpcyBwdWxsIHJlcXVlc3QgaXMgcGFydCBvZiBIYWNrdG9iZXJmZXN0MjAh"
>>> b = "aHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3JmYzQ2NDg="
>>> c = "QQ=="
>>> base64_decode(a) == b64decode(a)
True
>>> base64_decode(b) == b64decode(b)
True
>>> base64_decode(c) == b64decode(c)
True
>>> base64_decode("abc")
Traceback (most recent call last):
...
AssertionError: Incorrect padding
"""
# Make sure encoded_data is either a string or a bytes-like object
if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str):
msg = (
"argument should be a bytes-like object or ASCII string, "
f"not '{encoded_data.__class__.__name__}'"
)
raise TypeError(msg)
# In case encoded_data is a bytes-like object, make sure it contains only
# ASCII characters so we convert it to a string object
if isinstance(encoded_data, bytes):
try:
encoded_data = encoded_data.decode("utf-8")
except UnicodeDecodeError:
raise ValueError("base64 encoded data should only contain ASCII characters")
padding = encoded_data.count("=")
# Check if the encoded string contains non base64 characters
if padding:
assert all(
char in B64_CHARSET for char in encoded_data[:-padding]
), "Invalid base64 character(s) found."
else:
assert all(
char in B64_CHARSET for char in encoded_data
), "Invalid base64 character(s) found."
# Check the padding
assert len(encoded_data) % 4 == 0 and padding < 3, "Incorrect padding"
if padding:
# Remove padding if there is one
encoded_data = encoded_data[:-padding]
binary_stream = "".join(
bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data
)[: -padding * 2]
else:
binary_stream = "".join(
bin(B64_CHARSET.index(char))[2:].zfill(6) for char in encoded_data
)
data = [
int(binary_stream[index : index + 8], 2)
for index in range(0, len(binary_stream), 8)
]
return bytes(data)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import math
"""
In cryptography, the TRANSPOSITION cipher is a method of encryption where the
positions of plaintext are shifted a certain number(determined by the key) that
follows a regular system that results in the permuted text, known as the encrypted
text. The type of transposition cipher demonstrated under is the ROUTE cipher.
"""
def main() -> None:
message = input("Enter message: ")
key = int(input(f"Enter key [2-{len(message) - 1}]: "))
mode = input("Encryption/Decryption [e/d]: ")
if mode.lower().startswith("e"):
text = encrypt_message(key, message)
elif mode.lower().startswith("d"):
text = decrypt_message(key, message)
# Append pipe symbol (vertical bar) to identify spaces at the end.
print(f"Output:\n{text + '|'}")
def encrypt_message(key: int, message: str) -> str:
"""
>>> encrypt_message(6, 'Harshil Darji')
'Hlia rDsahrij'
"""
cipher_text = [""] * key
for col in range(key):
pointer = col
while pointer < len(message):
cipher_text[col] += message[pointer]
pointer += key
return "".join(cipher_text)
def decrypt_message(key: int, message: str) -> str:
"""
>>> decrypt_message(6, 'Hlia rDsahrij')
'Harshil Darji'
"""
num_cols = math.ceil(len(message) / key)
num_rows = key
num_shaded_boxes = (num_cols * num_rows) - len(message)
plain_text = [""] * num_cols
col = 0
row = 0
for symbol in message:
plain_text[col] += symbol
col += 1
if (
(col == num_cols)
or (col == num_cols - 1)
and (row >= num_rows - num_shaded_boxes)
):
col = 0
row += 1
return "".join(plain_text)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| import math
"""
In cryptography, the TRANSPOSITION cipher is a method of encryption where the
positions of plaintext are shifted a certain number(determined by the key) that
follows a regular system that results in the permuted text, known as the encrypted
text. The type of transposition cipher demonstrated under is the ROUTE cipher.
"""
def main() -> None:
message = input("Enter message: ")
key = int(input(f"Enter key [2-{len(message) - 1}]: "))
mode = input("Encryption/Decryption [e/d]: ")
if mode.lower().startswith("e"):
text = encrypt_message(key, message)
elif mode.lower().startswith("d"):
text = decrypt_message(key, message)
# Append pipe symbol (vertical bar) to identify spaces at the end.
print(f"Output:\n{text + '|'}")
def encrypt_message(key: int, message: str) -> str:
"""
>>> encrypt_message(6, 'Harshil Darji')
'Hlia rDsahrij'
"""
cipher_text = [""] * key
for col in range(key):
pointer = col
while pointer < len(message):
cipher_text[col] += message[pointer]
pointer += key
return "".join(cipher_text)
def decrypt_message(key: int, message: str) -> str:
"""
>>> decrypt_message(6, 'Hlia rDsahrij')
'Harshil Darji'
"""
num_cols = math.ceil(len(message) / key)
num_rows = key
num_shaded_boxes = (num_cols * num_rows) - len(message)
plain_text = [""] * num_cols
col = 0
row = 0
for symbol in message:
plain_text[col] += symbol
col += 1
if (
(col == num_cols)
or (col == num_cols - 1)
and (row >= num_rows - num_shaded_boxes)
):
col = 0
row += 1
return "".join(plain_text)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is used to convert the currency using the Amdoren Currency API
https://www.amdoren.com
"""
import os
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
# Currency and their description
list_of_currencies = """
AED United Arab Emirates Dirham
AFN Afghan Afghani
ALL Albanian Lek
AMD Armenian Dram
ANG Netherlands Antillean Guilder
AOA Angolan Kwanza
ARS Argentine Peso
AUD Australian Dollar
AWG Aruban Florin
AZN Azerbaijani Manat
BAM Bosnia & Herzegovina Convertible Mark
BBD Barbadian Dollar
BDT Bangladeshi Taka
BGN Bulgarian Lev
BHD Bahraini Dinar
BIF Burundian Franc
BMD Bermudian Dollar
BND Brunei Dollar
BOB Bolivian Boliviano
BRL Brazilian Real
BSD Bahamian Dollar
BTN Bhutanese Ngultrum
BWP Botswana Pula
BYN Belarus Ruble
BZD Belize Dollar
CAD Canadian Dollar
CDF Congolese Franc
CHF Swiss Franc
CLP Chilean Peso
CNY Chinese Yuan
COP Colombian Peso
CRC Costa Rican Colon
CUC Cuban Convertible Peso
CVE Cape Verdean Escudo
CZK Czech Republic Koruna
DJF Djiboutian Franc
DKK Danish Krone
DOP Dominican Peso
DZD Algerian Dinar
EGP Egyptian Pound
ERN Eritrean Nakfa
ETB Ethiopian Birr
EUR Euro
FJD Fiji Dollar
GBP British Pound Sterling
GEL Georgian Lari
GHS Ghanaian Cedi
GIP Gibraltar Pound
GMD Gambian Dalasi
GNF Guinea Franc
GTQ Guatemalan Quetzal
GYD Guyanaese Dollar
HKD Hong Kong Dollar
HNL Honduran Lempira
HRK Croatian Kuna
HTG Haiti Gourde
HUF Hungarian Forint
IDR Indonesian Rupiah
ILS Israeli Shekel
INR Indian Rupee
IQD Iraqi Dinar
IRR Iranian Rial
ISK Icelandic Krona
JMD Jamaican Dollar
JOD Jordanian Dinar
JPY Japanese Yen
KES Kenyan Shilling
KGS Kyrgystani Som
KHR Cambodian Riel
KMF Comorian Franc
KPW North Korean Won
KRW South Korean Won
KWD Kuwaiti Dinar
KYD Cayman Islands Dollar
KZT Kazakhstan Tenge
LAK Laotian Kip
LBP Lebanese Pound
LKR Sri Lankan Rupee
LRD Liberian Dollar
LSL Lesotho Loti
LYD Libyan Dinar
MAD Moroccan Dirham
MDL Moldovan Leu
MGA Malagasy Ariary
MKD Macedonian Denar
MMK Myanma Kyat
MNT Mongolian Tugrik
MOP Macau Pataca
MRO Mauritanian Ouguiya
MUR Mauritian Rupee
MVR Maldivian Rufiyaa
MWK Malawi Kwacha
MXN Mexican Peso
MYR Malaysian Ringgit
MZN Mozambican Metical
NAD Namibian Dollar
NGN Nigerian Naira
NIO Nicaragua Cordoba
NOK Norwegian Krone
NPR Nepalese Rupee
NZD New Zealand Dollar
OMR Omani Rial
PAB Panamanian Balboa
PEN Peruvian Nuevo Sol
PGK Papua New Guinean Kina
PHP Philippine Peso
PKR Pakistani Rupee
PLN Polish Zloty
PYG Paraguayan Guarani
QAR Qatari Riyal
RON Romanian Leu
RSD Serbian Dinar
RUB Russian Ruble
RWF Rwanda Franc
SAR Saudi Riyal
SBD Solomon Islands Dollar
SCR Seychellois Rupee
SDG Sudanese Pound
SEK Swedish Krona
SGD Singapore Dollar
SHP Saint Helena Pound
SLL Sierra Leonean Leone
SOS Somali Shilling
SRD Surinamese Dollar
SSP South Sudanese Pound
STD Sao Tome and Principe Dobra
SYP Syrian Pound
SZL Swazi Lilangeni
THB Thai Baht
TJS Tajikistan Somoni
TMT Turkmenistani Manat
TND Tunisian Dinar
TOP Tonga Paanga
TRY Turkish Lira
TTD Trinidad and Tobago Dollar
TWD New Taiwan Dollar
TZS Tanzanian Shilling
UAH Ukrainian Hryvnia
UGX Ugandan Shilling
USD United States Dollar
UYU Uruguayan Peso
UZS Uzbekistan Som
VEF Venezuelan Bolivar
VND Vietnamese Dong
VUV Vanuatu Vatu
WST Samoan Tala
XAF Central African CFA franc
XCD East Caribbean Dollar
XOF West African CFA franc
XPF CFP Franc
YER Yemeni Rial
ZAR South African Rand
ZMW Zambian Kwacha
"""
def convert_currency(
from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = ""
) -> str:
"""https://www.amdoren.com/currency-api/"""
# Instead of manually generating parameters
params = locals()
# from is a reserved keyword
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
TESTING = os.getenv("CI", "")
API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
raise KeyError(
"API key must be provided in the 'AMDOREN_API_KEY' environment variable."
)
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
API_KEY,
)
)
| """
This is used to convert the currency using the Amdoren Currency API
https://www.amdoren.com
"""
import os
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
# Currency and their description
list_of_currencies = """
AED United Arab Emirates Dirham
AFN Afghan Afghani
ALL Albanian Lek
AMD Armenian Dram
ANG Netherlands Antillean Guilder
AOA Angolan Kwanza
ARS Argentine Peso
AUD Australian Dollar
AWG Aruban Florin
AZN Azerbaijani Manat
BAM Bosnia & Herzegovina Convertible Mark
BBD Barbadian Dollar
BDT Bangladeshi Taka
BGN Bulgarian Lev
BHD Bahraini Dinar
BIF Burundian Franc
BMD Bermudian Dollar
BND Brunei Dollar
BOB Bolivian Boliviano
BRL Brazilian Real
BSD Bahamian Dollar
BTN Bhutanese Ngultrum
BWP Botswana Pula
BYN Belarus Ruble
BZD Belize Dollar
CAD Canadian Dollar
CDF Congolese Franc
CHF Swiss Franc
CLP Chilean Peso
CNY Chinese Yuan
COP Colombian Peso
CRC Costa Rican Colon
CUC Cuban Convertible Peso
CVE Cape Verdean Escudo
CZK Czech Republic Koruna
DJF Djiboutian Franc
DKK Danish Krone
DOP Dominican Peso
DZD Algerian Dinar
EGP Egyptian Pound
ERN Eritrean Nakfa
ETB Ethiopian Birr
EUR Euro
FJD Fiji Dollar
GBP British Pound Sterling
GEL Georgian Lari
GHS Ghanaian Cedi
GIP Gibraltar Pound
GMD Gambian Dalasi
GNF Guinea Franc
GTQ Guatemalan Quetzal
GYD Guyanaese Dollar
HKD Hong Kong Dollar
HNL Honduran Lempira
HRK Croatian Kuna
HTG Haiti Gourde
HUF Hungarian Forint
IDR Indonesian Rupiah
ILS Israeli Shekel
INR Indian Rupee
IQD Iraqi Dinar
IRR Iranian Rial
ISK Icelandic Krona
JMD Jamaican Dollar
JOD Jordanian Dinar
JPY Japanese Yen
KES Kenyan Shilling
KGS Kyrgystani Som
KHR Cambodian Riel
KMF Comorian Franc
KPW North Korean Won
KRW South Korean Won
KWD Kuwaiti Dinar
KYD Cayman Islands Dollar
KZT Kazakhstan Tenge
LAK Laotian Kip
LBP Lebanese Pound
LKR Sri Lankan Rupee
LRD Liberian Dollar
LSL Lesotho Loti
LYD Libyan Dinar
MAD Moroccan Dirham
MDL Moldovan Leu
MGA Malagasy Ariary
MKD Macedonian Denar
MMK Myanma Kyat
MNT Mongolian Tugrik
MOP Macau Pataca
MRO Mauritanian Ouguiya
MUR Mauritian Rupee
MVR Maldivian Rufiyaa
MWK Malawi Kwacha
MXN Mexican Peso
MYR Malaysian Ringgit
MZN Mozambican Metical
NAD Namibian Dollar
NGN Nigerian Naira
NIO Nicaragua Cordoba
NOK Norwegian Krone
NPR Nepalese Rupee
NZD New Zealand Dollar
OMR Omani Rial
PAB Panamanian Balboa
PEN Peruvian Nuevo Sol
PGK Papua New Guinean Kina
PHP Philippine Peso
PKR Pakistani Rupee
PLN Polish Zloty
PYG Paraguayan Guarani
QAR Qatari Riyal
RON Romanian Leu
RSD Serbian Dinar
RUB Russian Ruble
RWF Rwanda Franc
SAR Saudi Riyal
SBD Solomon Islands Dollar
SCR Seychellois Rupee
SDG Sudanese Pound
SEK Swedish Krona
SGD Singapore Dollar
SHP Saint Helena Pound
SLL Sierra Leonean Leone
SOS Somali Shilling
SRD Surinamese Dollar
SSP South Sudanese Pound
STD Sao Tome and Principe Dobra
SYP Syrian Pound
SZL Swazi Lilangeni
THB Thai Baht
TJS Tajikistan Somoni
TMT Turkmenistani Manat
TND Tunisian Dinar
TOP Tonga Paanga
TRY Turkish Lira
TTD Trinidad and Tobago Dollar
TWD New Taiwan Dollar
TZS Tanzanian Shilling
UAH Ukrainian Hryvnia
UGX Ugandan Shilling
USD United States Dollar
UYU Uruguayan Peso
UZS Uzbekistan Som
VEF Venezuelan Bolivar
VND Vietnamese Dong
VUV Vanuatu Vatu
WST Samoan Tala
XAF Central African CFA franc
XCD East Caribbean Dollar
XOF West African CFA franc
XPF CFP Franc
YER Yemeni Rial
ZAR South African Rand
ZMW Zambian Kwacha
"""
def convert_currency(
from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = ""
) -> str:
"""https://www.amdoren.com/currency-api/"""
# Instead of manually generating parameters
params = locals()
# from is a reserved keyword
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
TESTING = os.getenv("CI", "")
API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
raise KeyError(
"API key must be provided in the 'AMDOREN_API_KEY' environment variable."
)
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
API_KEY,
)
)
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
See https://en.wikipedia.org/wiki/Bloom_filter
The use of this data structure is to test membership in a set.
Compared to Python's built-in set() it is more space-efficient.
In the following example, only 8 bits of memory will be used:
>>> bloom = Bloom(size=8)
Initially, the filter contains all zeros:
>>> bloom.bitstring
'00000000'
When an element is added, two bits are set to 1
since there are 2 hash functions in this implementation:
>>> "Titanic" in bloom
False
>>> bloom.add("Titanic")
>>> bloom.bitstring
'01100000'
>>> "Titanic" in bloom
True
However, sometimes only one bit is added
because both hash functions return the same value
>>> bloom.add("Avatar")
>>> "Avatar" in bloom
True
>>> bloom.format_hash("Avatar")
'00000100'
>>> bloom.bitstring
'01100100'
Not added elements should return False ...
>>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction")
>>> {
... film: bloom.format_hash(film) for film in not_present_films
... } # doctest: +NORMALIZE_WHITESPACE
{'The Godfather': '00000101',
'Interstellar': '00000011',
'Parasite': '00010010',
'Pulp Fiction': '10000100'}
>>> any(film in bloom for film in not_present_films)
False
but sometimes there are false positives:
>>> "Ratatouille" in bloom
True
>>> bloom.format_hash("Ratatouille")
'01100000'
The probability increases with the number of elements added.
The probability decreases with the number of bits in the bitarray.
>>> bloom.estimated_error_rate
0.140625
>>> bloom.add("The Godfather")
>>> bloom.estimated_error_rate
0.25
>>> bloom.bitstring
'01100101'
"""
from hashlib import md5, sha256
HASH_FUNCTIONS = (sha256, md5)
class Bloom:
def __init__(self, size: int = 8) -> None:
self.bitarray = 0b0
self.size = size
def add(self, value: str) -> None:
h = self.hash_(value)
self.bitarray |= h
def exists(self, value: str) -> bool:
h = self.hash_(value)
return (h & self.bitarray) == h
def __contains__(self, other: str) -> bool:
return self.exists(other)
def format_bin(self, bitarray: int) -> str:
res = bin(bitarray)[2:]
return res.zfill(self.size)
@property
def bitstring(self) -> str:
return self.format_bin(self.bitarray)
def hash_(self, value: str) -> int:
res = 0b0
for func in HASH_FUNCTIONS:
position = (
int.from_bytes(func(value.encode()).digest(), "little") % self.size
)
res |= 2**position
return res
def format_hash(self, value: str) -> str:
return self.format_bin(self.hash_(value))
@property
def estimated_error_rate(self) -> float:
n_ones = bin(self.bitarray).count("1")
return (n_ones / self.size) ** len(HASH_FUNCTIONS)
| """
See https://en.wikipedia.org/wiki/Bloom_filter
The use of this data structure is to test membership in a set.
Compared to Python's built-in set() it is more space-efficient.
In the following example, only 8 bits of memory will be used:
>>> bloom = Bloom(size=8)
Initially, the filter contains all zeros:
>>> bloom.bitstring
'00000000'
When an element is added, two bits are set to 1
since there are 2 hash functions in this implementation:
>>> "Titanic" in bloom
False
>>> bloom.add("Titanic")
>>> bloom.bitstring
'01100000'
>>> "Titanic" in bloom
True
However, sometimes only one bit is added
because both hash functions return the same value
>>> bloom.add("Avatar")
>>> "Avatar" in bloom
True
>>> bloom.format_hash("Avatar")
'00000100'
>>> bloom.bitstring
'01100100'
Not added elements should return False ...
>>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction")
>>> {
... film: bloom.format_hash(film) for film in not_present_films
... } # doctest: +NORMALIZE_WHITESPACE
{'The Godfather': '00000101',
'Interstellar': '00000011',
'Parasite': '00010010',
'Pulp Fiction': '10000100'}
>>> any(film in bloom for film in not_present_films)
False
but sometimes there are false positives:
>>> "Ratatouille" in bloom
True
>>> bloom.format_hash("Ratatouille")
'01100000'
The probability increases with the number of elements added.
The probability decreases with the number of bits in the bitarray.
>>> bloom.estimated_error_rate
0.140625
>>> bloom.add("The Godfather")
>>> bloom.estimated_error_rate
0.25
>>> bloom.bitstring
'01100101'
"""
from hashlib import md5, sha256
HASH_FUNCTIONS = (sha256, md5)
class Bloom:
def __init__(self, size: int = 8) -> None:
self.bitarray = 0b0
self.size = size
def add(self, value: str) -> None:
h = self.hash_(value)
self.bitarray |= h
def exists(self, value: str) -> bool:
h = self.hash_(value)
return (h & self.bitarray) == h
def __contains__(self, other: str) -> bool:
return self.exists(other)
def format_bin(self, bitarray: int) -> str:
res = bin(bitarray)[2:]
return res.zfill(self.size)
@property
def bitstring(self) -> str:
return self.format_bin(self.bitarray)
def hash_(self, value: str) -> int:
res = 0b0
for func in HASH_FUNCTIONS:
position = (
int.from_bytes(func(value.encode()).digest(), "little") % self.size
)
res |= 2**position
return res
def format_hash(self, value: str) -> str:
return self.format_bin(self.hash_(value))
@property
def estimated_error_rate(self) -> float:
n_ones = bin(self.bitarray).count("1")
return (n_ones / self.size) ** len(HASH_FUNCTIONS)
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for the base case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for the base case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # A Python implementation of the Banker's Algorithm in Operating Systems using
# Processes and Resources
# {
# "Author: "Biney Kingsley ([email protected]), [email protected]",
# "Date": 28-10-2018
# }
"""
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm
developed by Edsger Dijkstra that tests for safety by simulating the allocation of
predetermined maximum possible amounts of all resources, and then makes a "s-state"
check to test for possible deadlock conditions for all other pending activities,
before deciding whether allocation should be allowed to continue.
[Source] Wikipedia
[Credit] Rosetta Code C implementation helped very much.
(https://rosettacode.org/wiki/Banker%27s_algorithm)
"""
from __future__ import annotations
import time
import numpy as np
test_claim_vector = [8, 5, 9, 7]
test_allocated_res_table = [
[2, 0, 1, 1],
[0, 1, 2, 1],
[4, 0, 0, 3],
[0, 2, 1, 0],
[1, 0, 3, 0],
]
test_maximum_claim_table = [
[3, 2, 1, 4],
[0, 2, 5, 2],
[5, 1, 0, 5],
[1, 5, 3, 0],
[3, 0, 3, 3],
]
class BankersAlgorithm:
def __init__(
self,
claim_vector: list[int],
allocated_resources_table: list[list[int]],
maximum_claim_table: list[list[int]],
) -> None:
"""
:param claim_vector: A nxn/nxm list depicting the amount of each resources
(eg. memory, interface, semaphores, etc.) available.
:param allocated_resources_table: A nxn/nxm list depicting the amount of each
resource each process is currently holding
:param maximum_claim_table: A nxn/nxm list depicting how much of each resource
the system currently has available
"""
self.__claim_vector = claim_vector
self.__allocated_resources_table = allocated_resources_table
self.__maximum_claim_table = maximum_claim_table
def __processes_resource_summation(self) -> list[int]:
"""
Check for allocated resources in line with each resource in the claim vector
"""
return [
sum(p_item[i] for p_item in self.__allocated_resources_table)
for i in range(len(self.__allocated_resources_table[0]))
]
def __available_resources(self) -> list[int]:
"""
Check for available resources in line with each resource in the claim vector
"""
return np.array(self.__claim_vector) - np.array(
self.__processes_resource_summation()
)
def __need(self) -> list[list[int]]:
"""
Implement safety checker that calculates the needs by ensuring that
max_claim[i][j] - alloc_table[i][j] <= avail[j]
"""
return [
list(np.array(self.__maximum_claim_table[i]) - np.array(allocated_resource))
for i, allocated_resource in enumerate(self.__allocated_resources_table)
]
def __need_index_manager(self) -> dict[int, list[int]]:
"""
This function builds an index control dictionary to track original ids/indices
of processes when altered during execution of method "main"
Return: {0: [a: int, b: int], 1: [c: int, d: int]}
>>> (BankersAlgorithm(test_claim_vector, test_allocated_res_table,
... test_maximum_claim_table)._BankersAlgorithm__need_index_manager()
... ) # doctest: +NORMALIZE_WHITESPACE
{0: [1, 2, 0, 3], 1: [0, 1, 3, 1], 2: [1, 1, 0, 2], 3: [1, 3, 2, 0],
4: [2, 0, 0, 3]}
"""
return {self.__need().index(i): i for i in self.__need()}
def main(self, **kwargs) -> None:
"""
Utilize various methods in this class to simulate the Banker's algorithm
Return: None
>>> BankersAlgorithm(test_claim_vector, test_allocated_res_table,
... test_maximum_claim_table).main(describe=True)
Allocated Resource Table
P1 2 0 1 1
<BLANKLINE>
P2 0 1 2 1
<BLANKLINE>
P3 4 0 0 3
<BLANKLINE>
P4 0 2 1 0
<BLANKLINE>
P5 1 0 3 0
<BLANKLINE>
System Resource Table
P1 3 2 1 4
<BLANKLINE>
P2 0 2 5 2
<BLANKLINE>
P3 5 1 0 5
<BLANKLINE>
P4 1 5 3 0
<BLANKLINE>
P5 3 0 3 3
<BLANKLINE>
Current Usage by Active Processes: 8 5 9 7
Initial Available Resources: 1 2 2 2
__________________________________________________
<BLANKLINE>
Process 3 is executing.
Updated available resource stack for processes: 5 2 2 5
The process is in a safe state.
<BLANKLINE>
Process 1 is executing.
Updated available resource stack for processes: 7 2 3 6
The process is in a safe state.
<BLANKLINE>
Process 2 is executing.
Updated available resource stack for processes: 7 3 5 7
The process is in a safe state.
<BLANKLINE>
Process 4 is executing.
Updated available resource stack for processes: 7 5 6 7
The process is in a safe state.
<BLANKLINE>
Process 5 is executing.
Updated available resource stack for processes: 8 5 9 7
The process is in a safe state.
<BLANKLINE>
"""
need_list = self.__need()
alloc_resources_table = self.__allocated_resources_table
available_resources = self.__available_resources()
need_index_manager = self.__need_index_manager()
for kw, val in kwargs.items():
if kw and val is True:
self.__pretty_data()
print("_" * 50 + "\n")
while need_list:
safe = False
for each_need in need_list:
execution = True
for index, need in enumerate(each_need):
if need > available_resources[index]:
execution = False
break
if execution:
safe = True
# get the original index of the process from ind_ctrl db
for original_need_index, need_clone in need_index_manager.items():
if each_need == need_clone:
process_number = original_need_index
print(f"Process {process_number + 1} is executing.")
# remove the process run from stack
need_list.remove(each_need)
# update available/freed resources stack
available_resources = np.array(available_resources) + np.array(
alloc_resources_table[process_number]
)
print(
"Updated available resource stack for processes: "
+ " ".join([str(x) for x in available_resources])
)
break
if safe:
print("The process is in a safe state.\n")
else:
print("System in unsafe state. Aborting...\n")
break
def __pretty_data(self):
"""
Properly align display of the algorithm's solution
"""
print(" " * 9 + "Allocated Resource Table")
for item in self.__allocated_resources_table:
print(
f"P{self.__allocated_resources_table.index(item) + 1}"
+ " ".join(f"{it:>8}" for it in item)
+ "\n"
)
print(" " * 9 + "System Resource Table")
for item in self.__maximum_claim_table:
print(
f"P{self.__maximum_claim_table.index(item) + 1}"
+ " ".join(f"{it:>8}" for it in item)
+ "\n"
)
print(
"Current Usage by Active Processes: "
+ " ".join(str(x) for x in self.__claim_vector)
)
print(
"Initial Available Resources: "
+ " ".join(str(x) for x in self.__available_resources())
)
time.sleep(1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # A Python implementation of the Banker's Algorithm in Operating Systems using
# Processes and Resources
# {
# "Author: "Biney Kingsley ([email protected]), [email protected]",
# "Date": 28-10-2018
# }
"""
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm
developed by Edsger Dijkstra that tests for safety by simulating the allocation of
predetermined maximum possible amounts of all resources, and then makes a "s-state"
check to test for possible deadlock conditions for all other pending activities,
before deciding whether allocation should be allowed to continue.
[Source] Wikipedia
[Credit] Rosetta Code C implementation helped very much.
(https://rosettacode.org/wiki/Banker%27s_algorithm)
"""
from __future__ import annotations
import time
import numpy as np
test_claim_vector = [8, 5, 9, 7]
test_allocated_res_table = [
[2, 0, 1, 1],
[0, 1, 2, 1],
[4, 0, 0, 3],
[0, 2, 1, 0],
[1, 0, 3, 0],
]
test_maximum_claim_table = [
[3, 2, 1, 4],
[0, 2, 5, 2],
[5, 1, 0, 5],
[1, 5, 3, 0],
[3, 0, 3, 3],
]
class BankersAlgorithm:
def __init__(
self,
claim_vector: list[int],
allocated_resources_table: list[list[int]],
maximum_claim_table: list[list[int]],
) -> None:
"""
:param claim_vector: A nxn/nxm list depicting the amount of each resources
(eg. memory, interface, semaphores, etc.) available.
:param allocated_resources_table: A nxn/nxm list depicting the amount of each
resource each process is currently holding
:param maximum_claim_table: A nxn/nxm list depicting how much of each resource
the system currently has available
"""
self.__claim_vector = claim_vector
self.__allocated_resources_table = allocated_resources_table
self.__maximum_claim_table = maximum_claim_table
def __processes_resource_summation(self) -> list[int]:
"""
Check for allocated resources in line with each resource in the claim vector
"""
return [
sum(p_item[i] for p_item in self.__allocated_resources_table)
for i in range(len(self.__allocated_resources_table[0]))
]
def __available_resources(self) -> list[int]:
"""
Check for available resources in line with each resource in the claim vector
"""
return np.array(self.__claim_vector) - np.array(
self.__processes_resource_summation()
)
def __need(self) -> list[list[int]]:
"""
Implement safety checker that calculates the needs by ensuring that
max_claim[i][j] - alloc_table[i][j] <= avail[j]
"""
return [
list(np.array(self.__maximum_claim_table[i]) - np.array(allocated_resource))
for i, allocated_resource in enumerate(self.__allocated_resources_table)
]
def __need_index_manager(self) -> dict[int, list[int]]:
"""
This function builds an index control dictionary to track original ids/indices
of processes when altered during execution of method "main"
Return: {0: [a: int, b: int], 1: [c: int, d: int]}
>>> (BankersAlgorithm(test_claim_vector, test_allocated_res_table,
... test_maximum_claim_table)._BankersAlgorithm__need_index_manager()
... ) # doctest: +NORMALIZE_WHITESPACE
{0: [1, 2, 0, 3], 1: [0, 1, 3, 1], 2: [1, 1, 0, 2], 3: [1, 3, 2, 0],
4: [2, 0, 0, 3]}
"""
return {self.__need().index(i): i for i in self.__need()}
def main(self, **kwargs) -> None:
"""
Utilize various methods in this class to simulate the Banker's algorithm
Return: None
>>> BankersAlgorithm(test_claim_vector, test_allocated_res_table,
... test_maximum_claim_table).main(describe=True)
Allocated Resource Table
P1 2 0 1 1
<BLANKLINE>
P2 0 1 2 1
<BLANKLINE>
P3 4 0 0 3
<BLANKLINE>
P4 0 2 1 0
<BLANKLINE>
P5 1 0 3 0
<BLANKLINE>
System Resource Table
P1 3 2 1 4
<BLANKLINE>
P2 0 2 5 2
<BLANKLINE>
P3 5 1 0 5
<BLANKLINE>
P4 1 5 3 0
<BLANKLINE>
P5 3 0 3 3
<BLANKLINE>
Current Usage by Active Processes: 8 5 9 7
Initial Available Resources: 1 2 2 2
__________________________________________________
<BLANKLINE>
Process 3 is executing.
Updated available resource stack for processes: 5 2 2 5
The process is in a safe state.
<BLANKLINE>
Process 1 is executing.
Updated available resource stack for processes: 7 2 3 6
The process is in a safe state.
<BLANKLINE>
Process 2 is executing.
Updated available resource stack for processes: 7 3 5 7
The process is in a safe state.
<BLANKLINE>
Process 4 is executing.
Updated available resource stack for processes: 7 5 6 7
The process is in a safe state.
<BLANKLINE>
Process 5 is executing.
Updated available resource stack for processes: 8 5 9 7
The process is in a safe state.
<BLANKLINE>
"""
need_list = self.__need()
alloc_resources_table = self.__allocated_resources_table
available_resources = self.__available_resources()
need_index_manager = self.__need_index_manager()
for kw, val in kwargs.items():
if kw and val is True:
self.__pretty_data()
print("_" * 50 + "\n")
while need_list:
safe = False
for each_need in need_list:
execution = True
for index, need in enumerate(each_need):
if need > available_resources[index]:
execution = False
break
if execution:
safe = True
# get the original index of the process from ind_ctrl db
for original_need_index, need_clone in need_index_manager.items():
if each_need == need_clone:
process_number = original_need_index
print(f"Process {process_number + 1} is executing.")
# remove the process run from stack
need_list.remove(each_need)
# update available/freed resources stack
available_resources = np.array(available_resources) + np.array(
alloc_resources_table[process_number]
)
print(
"Updated available resource stack for processes: "
+ " ".join([str(x) for x in available_resources])
)
break
if safe:
print("The process is in a safe state.\n")
else:
print("System in unsafe state. Aborting...\n")
break
def __pretty_data(self):
"""
Properly align display of the algorithm's solution
"""
print(" " * 9 + "Allocated Resource Table")
for item in self.__allocated_resources_table:
print(
f"P{self.__allocated_resources_table.index(item) + 1}"
+ " ".join(f"{it:>8}" for it in item)
+ "\n"
)
print(" " * 9 + "System Resource Table")
for item in self.__maximum_claim_table:
print(
f"P{self.__maximum_claim_table.index(item) + 1}"
+ " ".join(f"{it:>8}" for it in item)
+ "\n"
)
print(
"Current Usage by Active Processes: "
+ " ".join(str(x) for x in self.__claim_vector)
)
print(
"Initial Available Resources: "
+ " ".join(str(x) for x in self.__available_resources())
)
time.sleep(1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
import math
def default_matrix_multiplication(a: list, b: list) -> list:
"""
Multiplication only for 2x2 matrices
"""
if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2:
raise Exception("Matrices are not 2x2")
new_matrix = [
[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
[a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
]
return new_matrix
def matrix_addition(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def matrix_subtraction(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def split_matrix(a: list) -> tuple[list, list, list, list]:
"""
Given an even length matrix, returns the top_left, top_right, bot_left, bot_right
quadrant.
>>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]])
([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]])
>>> split_matrix([
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6],
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6]
... ]) # doctest: +NORMALIZE_WHITESPACE
([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4],
[2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1],
[6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3],
[8, 4, 1, 6]])
"""
if len(a) % 2 != 0 or len(a[0]) % 2 != 0:
raise Exception("Odd matrices are not supported!")
matrix_length = len(a)
mid = matrix_length // 2
top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)]
bot_right = [
[a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length)
]
top_left = [[a[i][j] for j in range(mid)] for i in range(mid)]
bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)]
return top_left, top_right, bot_left, bot_right
def matrix_dimensions(matrix: list) -> tuple[int, int]:
return len(matrix), len(matrix[0])
def print_matrix(matrix: list) -> None:
print("\n".join(str(line) for line in matrix))
def actual_strassen(matrix_a: list, matrix_b: list) -> list:
"""
Recursive function to calculate the product of two matrices, using the Strassen
Algorithm. It only supports even length matrices.
"""
if matrix_dimensions(matrix_a) == (2, 2):
return default_matrix_multiplication(matrix_a, matrix_b)
a, b, c, d = split_matrix(matrix_a)
e, f, g, h = split_matrix(matrix_b)
t1 = actual_strassen(a, matrix_subtraction(f, h))
t2 = actual_strassen(matrix_addition(a, b), h)
t3 = actual_strassen(matrix_addition(c, d), e)
t4 = actual_strassen(d, matrix_subtraction(g, e))
t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h))
t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h))
t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f))
top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6)
top_right = matrix_addition(t1, t2)
bot_left = matrix_addition(t3, t4)
bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7)
# construct the new matrix from our 4 quadrants
new_matrix = []
for i in range(len(top_right)):
new_matrix.append(top_left[i] + top_right[i])
for i in range(len(bot_right)):
new_matrix.append(bot_left[i] + bot_right[i])
return new_matrix
def strassen(matrix1: list, matrix2: list) -> list:
"""
>>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]])
[[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]]
>>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]])
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
msg = (
"Unable to multiply these matrices, please check the dimensions.\n"
f"Matrix A: {matrix1}\n"
f"Matrix B: {matrix2}"
)
raise Exception(msg)
dimension1 = matrix_dimensions(matrix1)
dimension2 = matrix_dimensions(matrix2)
if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]:
return [matrix1, matrix2]
maximum = max(*dimension1, *dimension2)
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
# Adding zeros to the matrices so that the arrays dimensions are the same and also
# power of 2
for i in range(0, maxim):
if i < dimension1[0]:
for _ in range(dimension1[1], maxim):
new_matrix1[i].append(0)
else:
new_matrix1.append([0] * maxim)
if i < dimension2[0]:
for _ in range(dimension2[1], maxim):
new_matrix2[i].append(0)
else:
new_matrix2.append([0] * maxim)
final_matrix = actual_strassen(new_matrix1, new_matrix2)
# Removing the additional zeros
for i in range(0, maxim):
if i < dimension1[0]:
for _ in range(dimension2[1], maxim):
final_matrix[i].pop()
else:
final_matrix.pop()
return final_matrix
if __name__ == "__main__":
matrix1 = [
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 2, 3, 1],
]
matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]]
print(strassen(matrix1, matrix2))
| from __future__ import annotations
import math
def default_matrix_multiplication(a: list, b: list) -> list:
"""
Multiplication only for 2x2 matrices
"""
if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2:
raise Exception("Matrices are not 2x2")
new_matrix = [
[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
[a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
]
return new_matrix
def matrix_addition(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def matrix_subtraction(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def split_matrix(a: list) -> tuple[list, list, list, list]:
"""
Given an even length matrix, returns the top_left, top_right, bot_left, bot_right
quadrant.
>>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]])
([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]])
>>> split_matrix([
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6],
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6]
... ]) # doctest: +NORMALIZE_WHITESPACE
([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4],
[2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1],
[6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3],
[8, 4, 1, 6]])
"""
if len(a) % 2 != 0 or len(a[0]) % 2 != 0:
raise Exception("Odd matrices are not supported!")
matrix_length = len(a)
mid = matrix_length // 2
top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)]
bot_right = [
[a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length)
]
top_left = [[a[i][j] for j in range(mid)] for i in range(mid)]
bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)]
return top_left, top_right, bot_left, bot_right
def matrix_dimensions(matrix: list) -> tuple[int, int]:
return len(matrix), len(matrix[0])
def print_matrix(matrix: list) -> None:
print("\n".join(str(line) for line in matrix))
def actual_strassen(matrix_a: list, matrix_b: list) -> list:
"""
Recursive function to calculate the product of two matrices, using the Strassen
Algorithm. It only supports even length matrices.
"""
if matrix_dimensions(matrix_a) == (2, 2):
return default_matrix_multiplication(matrix_a, matrix_b)
a, b, c, d = split_matrix(matrix_a)
e, f, g, h = split_matrix(matrix_b)
t1 = actual_strassen(a, matrix_subtraction(f, h))
t2 = actual_strassen(matrix_addition(a, b), h)
t3 = actual_strassen(matrix_addition(c, d), e)
t4 = actual_strassen(d, matrix_subtraction(g, e))
t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h))
t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h))
t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f))
top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6)
top_right = matrix_addition(t1, t2)
bot_left = matrix_addition(t3, t4)
bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7)
# construct the new matrix from our 4 quadrants
new_matrix = []
for i in range(len(top_right)):
new_matrix.append(top_left[i] + top_right[i])
for i in range(len(bot_right)):
new_matrix.append(bot_left[i] + bot_right[i])
return new_matrix
def strassen(matrix1: list, matrix2: list) -> list:
"""
>>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]])
[[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]]
>>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]])
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
msg = (
"Unable to multiply these matrices, please check the dimensions.\n"
f"Matrix A: {matrix1}\n"
f"Matrix B: {matrix2}"
)
raise Exception(msg)
dimension1 = matrix_dimensions(matrix1)
dimension2 = matrix_dimensions(matrix2)
if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]:
return [matrix1, matrix2]
maximum = max(*dimension1, *dimension2)
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
# Adding zeros to the matrices so that the arrays dimensions are the same and also
# power of 2
for i in range(0, maxim):
if i < dimension1[0]:
for _ in range(dimension1[1], maxim):
new_matrix1[i].append(0)
else:
new_matrix1.append([0] * maxim)
if i < dimension2[0]:
for _ in range(dimension2[1], maxim):
new_matrix2[i].append(0)
else:
new_matrix2.append([0] * maxim)
final_matrix = actual_strassen(new_matrix1, new_matrix2)
# Removing the additional zeros
for i in range(0, maxim):
if i < dimension1[0]:
for _ in range(dimension2[1], maxim):
final_matrix[i].pop()
else:
final_matrix.pop()
return final_matrix
if __name__ == "__main__":
matrix1 = [
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 2, 3, 1],
]
matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]]
print(strassen(matrix1, matrix2))
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Python program to show the usage of Fermat's little theorem in a division
# According to Fermat's little theorem, (a / b) mod p always equals
# a * (b ^ (p - 2)) mod p
# Here we assume that p is a prime number, b divides a, and p doesn't divide b
# Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
def binary_exponentiation(a, n, mod):
if n == 0:
return 1
elif n % 2 == 1:
return (binary_exponentiation(a, n - 1, mod) * a) % mod
else:
b = binary_exponentiation(a, n / 2, mod)
return (b * b) % mod
# a prime number
p = 701
a = 1000000000
b = 10
# using binary exponentiation function, O(log(p)):
print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p)
# using Python operators:
print((a / b) % p == (a * b ** (p - 2)) % p)
| # Python program to show the usage of Fermat's little theorem in a division
# According to Fermat's little theorem, (a / b) mod p always equals
# a * (b ^ (p - 2)) mod p
# Here we assume that p is a prime number, b divides a, and p doesn't divide b
# Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
def binary_exponentiation(a, n, mod):
if n == 0:
return 1
elif n % 2 == 1:
return (binary_exponentiation(a, n - 1, mod) * a) % mod
else:
b = binary_exponentiation(a, n / 2, mod)
return (b * b) % mod
# a prime number
p = 701
a = 1000000000
b = 10
# using binary exponentiation function, O(log(p)):
print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p)
# using Python operators:
print((a / b) % p == (a * b ** (p - 2)) % p)
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Welcome to Quantum Algorithms
Started at https://github.com/TheAlgorithms/Python/issues/1831
* D-Wave: https://www.dwavesys.com and https://github.com/dwavesystems
* Google: https://research.google/teams/applied-science/quantum
* IBM: https://qiskit.org and https://github.com/Qiskit
* Rigetti: https://rigetti.com and https://github.com/rigetti
* Zapata: https://www.zapatacomputing.com and https://github.com/zapatacomputing
## IBM Qiskit
- Start using by installing `pip install qiskit`, refer the [docs](https://qiskit.org/documentation/install.html) for more info.
- Tutorials & References
- https://github.com/Qiskit/qiskit-tutorials
- https://quantum-computing.ibm.com/docs/iql/first-circuit
- https://medium.com/qiskit/how-to-program-a-quantum-computer-982a9329ed02
## Google Cirq
- Start using by installing `python -m pip install cirq`, refer the [docs](https://quantumai.google/cirq/start/install) for more info.
- Tutorials & references
- https://github.com/quantumlib/cirq
- https://quantumai.google/cirq/experiments
- https://tanishabassan.medium.com/quantum-programming-with-google-cirq-3209805279bc
| # Welcome to Quantum Algorithms
Started at https://github.com/TheAlgorithms/Python/issues/1831
* D-Wave: https://www.dwavesys.com and https://github.com/dwavesystems
* Google: https://research.google/teams/applied-science/quantum
* IBM: https://qiskit.org and https://github.com/Qiskit
* Rigetti: https://rigetti.com and https://github.com/rigetti
* Zapata: https://www.zapatacomputing.com and https://github.com/zapatacomputing
## IBM Qiskit
- Start using by installing `pip install qiskit`, refer the [docs](https://qiskit.org/documentation/install.html) for more info.
- Tutorials & References
- https://github.com/Qiskit/qiskit-tutorials
- https://quantum-computing.ibm.com/docs/iql/first-circuit
- https://medium.com/qiskit/how-to-program-a-quantum-computer-982a9329ed02
## Google Cirq
- Start using by installing `python -m pip install cirq`, refer the [docs](https://quantumai.google/cirq/start/install) for more info.
- Tutorials & references
- https://github.com/quantumlib/cirq
- https://quantumai.google/cirq/experiments
- https://tanishabassan.medium.com/quantum-programming-with-google-cirq-3209805279bc
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def reverse_long_words(sentence: str) -> str:
"""
Reverse all words that are longer than 4 characters in a sentence.
>>> reverse_long_words("Hey wollef sroirraw")
'Hey fellow warriors'
>>> reverse_long_words("nohtyP is nohtyP")
'Python is Python'
>>> reverse_long_words("1 12 123 1234 54321 654321")
'1 12 123 1234 12345 123456'
"""
return " ".join(
"".join(word[::-1]) if len(word) > 4 else word for word in sentence.split()
)
if __name__ == "__main__":
import doctest
doctest.testmod()
print(reverse_long_words("Hey wollef sroirraw"))
| def reverse_long_words(sentence: str) -> str:
"""
Reverse all words that are longer than 4 characters in a sentence.
>>> reverse_long_words("Hey wollef sroirraw")
'Hey fellow warriors'
>>> reverse_long_words("nohtyP is nohtyP")
'Python is Python'
>>> reverse_long_words("1 12 123 1234 54321 654321")
'1 12 123 1234 12345 123456'
"""
return " ".join(
"".join(word[::-1]) if len(word) > 4 else word for word in sentence.split()
)
if __name__ == "__main__":
import doctest
doctest.testmod()
print(reverse_long_words("Hey wollef sroirraw"))
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = (
"73167176531330624919225119674426574742355349194934"
"96983520312774506326239578318016984801869478851843"
"85861560789112949495459501737958331952853208805511"
"12540698747158523863050715693290963295227443043557"
"66896648950445244523161731856403098711121722383113"
"62229893423380308135336276614282806444486645238749"
"30358907296290491560440772390713810515859307960866"
"70172427121883998797908792274921901699720888093776"
"65727333001053367881220235421809751254540594752243"
"52584907711670556013604839586446706324415722155397"
"53697817977846174064955149290862569321978468622482"
"83972241375657056057490261407972968652414535100474"
"82166370484403199890008895243450658541227588666881"
"16427171479924442928230863465674813919123162824586"
"17866458359124566529476545682848912883142607690042"
"24219022671055626321111109370544217506941658960408"
"07198403850962455444362981230987879927244284909188"
"84580156166097919133875499200524063689912560717606"
"05886116467109405077541002256983155200055935729725"
"71636269561882670428252483600823257530420752963450"
)
def str_eval(s: str) -> int:
"""
Returns product of digits in given string n
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
def solution(n: str = N) -> int:
"""
Find the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns it.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = (
"73167176531330624919225119674426574742355349194934"
"96983520312774506326239578318016984801869478851843"
"85861560789112949495459501737958331952853208805511"
"12540698747158523863050715693290963295227443043557"
"66896648950445244523161731856403098711121722383113"
"62229893423380308135336276614282806444486645238749"
"30358907296290491560440772390713810515859307960866"
"70172427121883998797908792274921901699720888093776"
"65727333001053367881220235421809751254540594752243"
"52584907711670556013604839586446706324415722155397"
"53697817977846174064955149290862569321978468622482"
"83972241375657056057490261407972968652414535100474"
"82166370484403199890008895243450658541227588666881"
"16427171479924442928230863465674813919123162824586"
"17866458359124566529476545682848912883142607690042"
"24219022671055626321111109370544217506941658960408"
"07198403850962455444362981230987879927244284909188"
"84580156166097919133875499200524063689912560717606"
"05886116467109405077541002256983155200055935729725"
"71636269561882670428252483600823257530420752963450"
)
def str_eval(s: str) -> int:
"""
Returns product of digits in given string n
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
def solution(n: str = N) -> int:
"""
Find the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns it.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
== Hexagonal Number ==
The nth hexagonal number hn is the number of distinct dots
in a pattern of dots consisting of the outlines of regular
hexagons with sides up to n dots, when the hexagons are
overlaid so that they share one vertex.
https://en.wikipedia.org/wiki/Hexagonal_number
"""
# Author : Akshay Dubey (https://github.com/itsAkshayDubey)
def hexagonal(number: int) -> int:
"""
:param number: nth hexagonal number to calculate
:return: the nth hexagonal number
Note: A hexagonal number is only defined for positive integers
>>> hexagonal(4)
28
>>> hexagonal(11)
231
>>> hexagonal(22)
946
>>> hexagonal(0)
Traceback (most recent call last):
...
ValueError: Input must be a positive integer
>>> hexagonal(-1)
Traceback (most recent call last):
...
ValueError: Input must be a positive integer
>>> hexagonal(11.0)
Traceback (most recent call last):
...
TypeError: Input value of [number=11.0] must be an integer
"""
if not isinstance(number, int):
msg = f"Input value of [number={number}] must be an integer"
raise TypeError(msg)
if number < 1:
raise ValueError("Input must be a positive integer")
return number * (2 * number - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
== Hexagonal Number ==
The nth hexagonal number hn is the number of distinct dots
in a pattern of dots consisting of the outlines of regular
hexagons with sides up to n dots, when the hexagons are
overlaid so that they share one vertex.
https://en.wikipedia.org/wiki/Hexagonal_number
"""
# Author : Akshay Dubey (https://github.com/itsAkshayDubey)
def hexagonal(number: int) -> int:
"""
:param number: nth hexagonal number to calculate
:return: the nth hexagonal number
Note: A hexagonal number is only defined for positive integers
>>> hexagonal(4)
28
>>> hexagonal(11)
231
>>> hexagonal(22)
946
>>> hexagonal(0)
Traceback (most recent call last):
...
ValueError: Input must be a positive integer
>>> hexagonal(-1)
Traceback (most recent call last):
...
ValueError: Input must be a positive integer
>>> hexagonal(11.0)
Traceback (most recent call last):
...
TypeError: Input value of [number=11.0] must be an integer
"""
if not isinstance(number, int):
msg = f"Input value of [number={number}] must be an integer"
raise TypeError(msg)
if number < 1:
raise ValueError("Input must be a positive integer")
return number * (2 * number - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import random
import sys
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def main() -> None:
message = input("Enter message: ")
key = "LFWOAYUISVKMNXPBDCRJTQEGHZ"
resp = input("Encrypt/Decrypt [e/d]: ")
check_valid_key(key)
if resp.lower().startswith("e"):
mode = "encrypt"
translated = encrypt_message(key, message)
elif resp.lower().startswith("d"):
mode = "decrypt"
translated = decrypt_message(key, message)
print(f"\n{mode.title()}ion: \n{translated}")
def check_valid_key(key: str) -> None:
key_list = list(key)
letters_list = list(LETTERS)
key_list.sort()
letters_list.sort()
if key_list != letters_list:
sys.exit("Error in the key or symbol set.")
def encrypt_message(key: str, message: str) -> str:
"""
>>> encrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji')
'Ilcrism Olcvs'
"""
return translate_message(key, message, "encrypt")
def decrypt_message(key: str, message: str) -> str:
"""
>>> decrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs')
'Harshil Darji'
"""
return translate_message(key, message, "decrypt")
def translate_message(key: str, message: str, mode: str) -> str:
translated = ""
chars_a = LETTERS
chars_b = key
if mode == "decrypt":
chars_a, chars_b = chars_b, chars_a
for symbol in message:
if symbol.upper() in chars_a:
sym_index = chars_a.find(symbol.upper())
if symbol.isupper():
translated += chars_b[sym_index].upper()
else:
translated += chars_b[sym_index].lower()
else:
translated += symbol
return translated
def get_random_key() -> str:
key = list(LETTERS)
random.shuffle(key)
return "".join(key)
if __name__ == "__main__":
main()
| import random
import sys
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def main() -> None:
message = input("Enter message: ")
key = "LFWOAYUISVKMNXPBDCRJTQEGHZ"
resp = input("Encrypt/Decrypt [e/d]: ")
check_valid_key(key)
if resp.lower().startswith("e"):
mode = "encrypt"
translated = encrypt_message(key, message)
elif resp.lower().startswith("d"):
mode = "decrypt"
translated = decrypt_message(key, message)
print(f"\n{mode.title()}ion: \n{translated}")
def check_valid_key(key: str) -> None:
key_list = list(key)
letters_list = list(LETTERS)
key_list.sort()
letters_list.sort()
if key_list != letters_list:
sys.exit("Error in the key or symbol set.")
def encrypt_message(key: str, message: str) -> str:
"""
>>> encrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji')
'Ilcrism Olcvs'
"""
return translate_message(key, message, "encrypt")
def decrypt_message(key: str, message: str) -> str:
"""
>>> decrypt_message('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs')
'Harshil Darji'
"""
return translate_message(key, message, "decrypt")
def translate_message(key: str, message: str, mode: str) -> str:
translated = ""
chars_a = LETTERS
chars_b = key
if mode == "decrypt":
chars_a, chars_b = chars_b, chars_a
for symbol in message:
if symbol.upper() in chars_a:
sym_index = chars_a.find(symbol.upper())
if symbol.isupper():
translated += chars_b[sym_index].upper()
else:
translated += chars_b[sym_index].lower()
else:
translated += symbol
return translated
def get_random_key() -> str:
key = list(LETTERS)
random.shuffle(key)
return "".join(key)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import secrets
from random import shuffle
from string import ascii_letters, ascii_lowercase, ascii_uppercase, digits, punctuation
def password_generator(length: int = 8) -> str:
"""
Password Generator allows you to generate a random password of length N.
>>> len(password_generator())
8
>>> len(password_generator(length=16))
16
>>> len(password_generator(257))
257
>>> len(password_generator(length=0))
0
>>> len(password_generator(-1))
0
"""
chars = ascii_letters + digits + punctuation
return "".join(secrets.choice(chars) for _ in range(length))
# ALTERNATIVE METHODS
# chars_incl= characters that must be in password
# i= how many letters or characters the password length will be
def alternative_password_generator(chars_incl: str, i: int) -> str:
# Password Generator = full boot with random_number, random_letters, and
# random_character FUNCTIONS
# Put your code here...
i -= len(chars_incl)
quotient = i // 3
remainder = i % 3
# chars = chars_incl + random_letters(ascii_letters, i / 3 + remainder) +
# random_number(digits, i / 3) + random_characters(punctuation, i / 3)
chars = (
chars_incl
+ random(ascii_letters, quotient + remainder)
+ random(digits, quotient)
+ random(punctuation, quotient)
)
list_of_chars = list(chars)
shuffle(list_of_chars)
return "".join(list_of_chars)
# random is a generalised function for letters, characters and numbers
def random(chars_incl: str, i: int) -> str:
return "".join(secrets.choice(chars_incl) for _ in range(i))
def random_number(chars_incl, i):
pass # Put your code here...
def random_letters(chars_incl, i):
pass # Put your code here...
def random_characters(chars_incl, i):
pass # Put your code here...
# This Will Check Whether A Given Password Is Strong Or Not
# It Follows The Rule that Length Of Password Should Be At Least 8 Characters
# And At Least 1 Lower, 1 Upper, 1 Number And 1 Special Character
def is_strong_password(password: str, min_length: int = 8) -> bool:
"""
>>> is_strong_password('Hwea7$2!')
True
>>> is_strong_password('Sh0r1')
False
>>> is_strong_password('Hello123')
False
>>> is_strong_password('Hello1238udfhiaf038fajdvjjf!jaiuFhkqi1')
True
>>> is_strong_password('0')
False
"""
if len(password) < min_length:
# Your Password must be at least 8 characters long
return False
upper = any(char in ascii_uppercase for char in password)
lower = any(char in ascii_lowercase for char in password)
num = any(char in digits for char in password)
spec_char = any(char in punctuation for char in password)
return upper and lower and num and spec_char
# Passwords should contain UPPERCASE, lowerase
# numbers, and special characters
def main():
length = int(input("Please indicate the max length of your password: ").strip())
chars_incl = input(
"Please indicate the characters that must be in your password: "
).strip()
print("Password generated:", password_generator(length))
print(
"Alternative Password generated:",
alternative_password_generator(chars_incl, length),
)
print("[If you are thinking of using this passsword, You better save it.]")
if __name__ == "__main__":
main()
| import secrets
from random import shuffle
from string import ascii_letters, ascii_lowercase, ascii_uppercase, digits, punctuation
def password_generator(length: int = 8) -> str:
"""
Password Generator allows you to generate a random password of length N.
>>> len(password_generator())
8
>>> len(password_generator(length=16))
16
>>> len(password_generator(257))
257
>>> len(password_generator(length=0))
0
>>> len(password_generator(-1))
0
"""
chars = ascii_letters + digits + punctuation
return "".join(secrets.choice(chars) for _ in range(length))
# ALTERNATIVE METHODS
# chars_incl= characters that must be in password
# i= how many letters or characters the password length will be
def alternative_password_generator(chars_incl: str, i: int) -> str:
# Password Generator = full boot with random_number, random_letters, and
# random_character FUNCTIONS
# Put your code here...
i -= len(chars_incl)
quotient = i // 3
remainder = i % 3
# chars = chars_incl + random_letters(ascii_letters, i / 3 + remainder) +
# random_number(digits, i / 3) + random_characters(punctuation, i / 3)
chars = (
chars_incl
+ random(ascii_letters, quotient + remainder)
+ random(digits, quotient)
+ random(punctuation, quotient)
)
list_of_chars = list(chars)
shuffle(list_of_chars)
return "".join(list_of_chars)
# random is a generalised function for letters, characters and numbers
def random(chars_incl: str, i: int) -> str:
return "".join(secrets.choice(chars_incl) for _ in range(i))
def random_number(chars_incl, i):
pass # Put your code here...
def random_letters(chars_incl, i):
pass # Put your code here...
def random_characters(chars_incl, i):
pass # Put your code here...
# This Will Check Whether A Given Password Is Strong Or Not
# It Follows The Rule that Length Of Password Should Be At Least 8 Characters
# And At Least 1 Lower, 1 Upper, 1 Number And 1 Special Character
def is_strong_password(password: str, min_length: int = 8) -> bool:
"""
>>> is_strong_password('Hwea7$2!')
True
>>> is_strong_password('Sh0r1')
False
>>> is_strong_password('Hello123')
False
>>> is_strong_password('Hello1238udfhiaf038fajdvjjf!jaiuFhkqi1')
True
>>> is_strong_password('0')
False
"""
if len(password) < min_length:
# Your Password must be at least 8 characters long
return False
upper = any(char in ascii_uppercase for char in password)
lower = any(char in ascii_lowercase for char in password)
num = any(char in digits for char in password)
spec_char = any(char in punctuation for char in password)
return upper and lower and num and spec_char
# Passwords should contain UPPERCASE, lowerase
# numbers, and special characters
def main():
length = int(input("Please indicate the max length of your password: ").strip())
chars_incl = input(
"Please indicate the characters that must be in your password: "
).strip()
print("Password generated:", password_generator(length))
print(
"Alternative Password generated:",
alternative_password_generator(chars_incl, length),
)
print("[If you are thinking of using this passsword, You better save it.]")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| name: Bug report
description: Create a bug report to help us address errors in the repository
labels: [bug]
body:
- type: markdown
attributes:
value: >
Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/bug).
Usage questions such as "How do I...?" belong on the
[Discord](https://discord.gg/c7MnfGFGa6) and will be closed.
- type: input
attributes:
label: "Repository commit"
description: >
The commit hash for `TheAlgorithms/Python` repository. You can get this
by running the command `git rev-parse HEAD` locally.
placeholder: "a0b0f414ae134aa1772d33bb930e5a960f9979e8"
validations:
required: true
- type: input
attributes:
label: "Python version (python --version)"
placeholder: "Python 3.10.7"
validations:
required: true
- type: textarea
attributes:
label: "Dependencies version (pip freeze)"
description: >
This is the output of the command `pip freeze --all`. Note that the
actual output might be different as compared to the placeholder text.
placeholder: |
appnope==0.1.3
asttokens==2.0.8
backcall==0.2.0
...
validations:
required: true
- type: textarea
attributes:
label: "Expected behavior"
description: "Describe the behavior you expect. May include images or videos."
validations:
required: true
- type: textarea
attributes:
label: "Actual behavior"
validations:
required: true
| name: Bug report
description: Create a bug report to help us address errors in the repository
labels: [bug]
body:
- type: markdown
attributes:
value: >
Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/bug).
Usage questions such as "How do I...?" belong on the
[Discord](https://discord.gg/c7MnfGFGa6) and will be closed.
- type: input
attributes:
label: "Repository commit"
description: >
The commit hash for `TheAlgorithms/Python` repository. You can get this
by running the command `git rev-parse HEAD` locally.
placeholder: "a0b0f414ae134aa1772d33bb930e5a960f9979e8"
validations:
required: true
- type: input
attributes:
label: "Python version (python --version)"
placeholder: "Python 3.10.7"
validations:
required: true
- type: textarea
attributes:
label: "Dependencies version (pip freeze)"
description: >
This is the output of the command `pip freeze --all`. Note that the
actual output might be different as compared to the placeholder text.
placeholder: |
appnope==0.1.3
asttokens==2.0.8
backcall==0.2.0
...
validations:
required: true
- type: textarea
attributes:
label: "Expected behavior"
description: "Describe the behavior you expect. May include images or videos."
validations:
required: true
- type: textarea
attributes:
label: "Actual behavior"
validations:
required: true
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 205: https://projecteuler.net/problem=205
Peter has nine four-sided (pyramidal) dice, each with faces numbered 1, 2, 3, 4.
Colin has six six-sided (cubic) dice, each with faces numbered 1, 2, 3, 4, 5, 6.
Peter and Colin roll their dice and compare totals: the highest total wins.
The result is a draw if the totals are equal.
What is the probability that Pyramidal Peter beats Cubic Colin?
Give your answer rounded to seven decimal places in the form 0.abcdefg
"""
from itertools import product
def total_frequency_distribution(sides_number: int, dice_number: int) -> list[int]:
"""
Returns frequency distribution of total
>>> total_frequency_distribution(sides_number=6, dice_number=1)
[0, 1, 1, 1, 1, 1, 1]
>>> total_frequency_distribution(sides_number=4, dice_number=2)
[0, 0, 1, 2, 3, 4, 3, 2, 1]
"""
max_face_number = sides_number
max_total = max_face_number * dice_number
totals_frequencies = [0] * (max_total + 1)
min_face_number = 1
faces_numbers = range(min_face_number, max_face_number + 1)
for dice_numbers in product(faces_numbers, repeat=dice_number):
total = sum(dice_numbers)
totals_frequencies[total] += 1
return totals_frequencies
def solution() -> float:
"""
Returns probability that Pyramidal Peter beats Cubic Colin
rounded to seven decimal places in the form 0.abcdefg
>>> solution()
0.5731441
"""
peter_totals_frequencies = total_frequency_distribution(
sides_number=4, dice_number=9
)
colin_totals_frequencies = total_frequency_distribution(
sides_number=6, dice_number=6
)
peter_wins_count = 0
min_peter_total = 9
max_peter_total = 4 * 9
min_colin_total = 6
for peter_total in range(min_peter_total, max_peter_total + 1):
peter_wins_count += peter_totals_frequencies[peter_total] * sum(
colin_totals_frequencies[min_colin_total:peter_total]
)
total_games_number = (4**9) * (6**6)
peter_win_probability = peter_wins_count / total_games_number
rounded_peter_win_probability = round(peter_win_probability, ndigits=7)
return rounded_peter_win_probability
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 205: https://projecteuler.net/problem=205
Peter has nine four-sided (pyramidal) dice, each with faces numbered 1, 2, 3, 4.
Colin has six six-sided (cubic) dice, each with faces numbered 1, 2, 3, 4, 5, 6.
Peter and Colin roll their dice and compare totals: the highest total wins.
The result is a draw if the totals are equal.
What is the probability that Pyramidal Peter beats Cubic Colin?
Give your answer rounded to seven decimal places in the form 0.abcdefg
"""
from itertools import product
def total_frequency_distribution(sides_number: int, dice_number: int) -> list[int]:
"""
Returns frequency distribution of total
>>> total_frequency_distribution(sides_number=6, dice_number=1)
[0, 1, 1, 1, 1, 1, 1]
>>> total_frequency_distribution(sides_number=4, dice_number=2)
[0, 0, 1, 2, 3, 4, 3, 2, 1]
"""
max_face_number = sides_number
max_total = max_face_number * dice_number
totals_frequencies = [0] * (max_total + 1)
min_face_number = 1
faces_numbers = range(min_face_number, max_face_number + 1)
for dice_numbers in product(faces_numbers, repeat=dice_number):
total = sum(dice_numbers)
totals_frequencies[total] += 1
return totals_frequencies
def solution() -> float:
"""
Returns probability that Pyramidal Peter beats Cubic Colin
rounded to seven decimal places in the form 0.abcdefg
>>> solution()
0.5731441
"""
peter_totals_frequencies = total_frequency_distribution(
sides_number=4, dice_number=9
)
colin_totals_frequencies = total_frequency_distribution(
sides_number=6, dice_number=6
)
peter_wins_count = 0
min_peter_total = 9
max_peter_total = 4 * 9
min_colin_total = 6
for peter_total in range(min_peter_total, max_peter_total + 1):
peter_wins_count += peter_totals_frequencies[peter_total] * sum(
colin_totals_frequencies[min_colin_total:peter_total]
)
total_games_number = (4**9) * (6**6)
peter_win_probability = peter_wins_count / total_games_number
rounded_peter_win_probability = round(peter_win_probability, ndigits=7)
return rounded_peter_win_probability
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Linear algebra library for Python
This module contains classes and functions for doing linear algebra.
---
## Overview
### class Vector
-
- This class represents a vector of arbitrary size and related operations.
**Overview of the methods:**
- constructor(components) : init the vector
- set(components) : changes the vector components.
- \_\_str\_\_() : toString method
- component(i): gets the i-th component (0-indexed)
- \_\_len\_\_() : gets the size / length of the vector (number of components)
- euclidean_length() : returns the eulidean length of the vector
- operator + : vector addition
- operator - : vector subtraction
- operator * : scalar multiplication and dot product
- copy() : copies this vector and returns it
- change_component(pos,value) : changes the specified component
- function zero_vector(dimension)
- returns a zero vector of 'dimension'
- function unit_basis_vector(dimension, pos)
- returns a unit basis vector with a one at index 'pos' (0-indexed)
- function axpy(scalar, vector1, vector2)
- computes the axpy operation
- function random_vector(N, a, b)
- returns a random vector of size N, with random integer components between 'a' and 'b' inclusive
### class Matrix
-
- This class represents a matrix of arbitrary size and operations on it.
**Overview of the methods:**
- \_\_str\_\_() : returns a string representation
- operator * : implements the matrix vector multiplication
implements the matrix-scalar multiplication.
- change_component(x, y, value) : changes the specified component.
- component(x, y) : returns the specified component.
- width() : returns the width of the matrix
- height() : returns the height of the matrix
- determinant() : returns the determinant of the matrix if it is square
- operator + : implements the matrix-addition.
- operator - : implements the matrix-subtraction
- function square_zero_matrix(N)
- returns a square zero-matrix of dimension NxN
- function random_matrix(W, H, a, b)
- returns a random matrix WxH with integer components between 'a' and 'b' inclusive
---
## Documentation
This module uses docstrings to enable the use of Python's in-built `help(...)` function.
For instance, try `help(Vector)`, `help(unit_basis_vector)`, and `help(CLASSNAME.METHODNAME)`.
---
## Usage
Import the module `lib.py` from the **src** directory into your project.
Alternatively, you can directly use the Python bytecode file `lib.pyc`.
---
## Tests
`src/tests.py` contains Python unit tests which can be run with `python3 -m unittest -v`.
| # Linear algebra library for Python
This module contains classes and functions for doing linear algebra.
---
## Overview
### class Vector
-
- This class represents a vector of arbitrary size and related operations.
**Overview of the methods:**
- constructor(components) : init the vector
- set(components) : changes the vector components.
- \_\_str\_\_() : toString method
- component(i): gets the i-th component (0-indexed)
- \_\_len\_\_() : gets the size / length of the vector (number of components)
- euclidean_length() : returns the eulidean length of the vector
- operator + : vector addition
- operator - : vector subtraction
- operator * : scalar multiplication and dot product
- copy() : copies this vector and returns it
- change_component(pos,value) : changes the specified component
- function zero_vector(dimension)
- returns a zero vector of 'dimension'
- function unit_basis_vector(dimension, pos)
- returns a unit basis vector with a one at index 'pos' (0-indexed)
- function axpy(scalar, vector1, vector2)
- computes the axpy operation
- function random_vector(N, a, b)
- returns a random vector of size N, with random integer components between 'a' and 'b' inclusive
### class Matrix
-
- This class represents a matrix of arbitrary size and operations on it.
**Overview of the methods:**
- \_\_str\_\_() : returns a string representation
- operator * : implements the matrix vector multiplication
implements the matrix-scalar multiplication.
- change_component(x, y, value) : changes the specified component.
- component(x, y) : returns the specified component.
- width() : returns the width of the matrix
- height() : returns the height of the matrix
- determinant() : returns the determinant of the matrix if it is square
- operator + : implements the matrix-addition.
- operator - : implements the matrix-subtraction
- function square_zero_matrix(N)
- returns a square zero-matrix of dimension NxN
- function random_matrix(W, H, a, b)
- returns a random matrix WxH with integer components between 'a' and 'b' inclusive
---
## Documentation
This module uses docstrings to enable the use of Python's in-built `help(...)` function.
For instance, try `help(Vector)`, `help(unit_basis_vector)`, and `help(CLASSNAME.METHODNAME)`.
---
## Usage
Import the module `lib.py` from the **src** directory into your project.
Alternatively, you can directly use the Python bytecode file `lib.pyc`.
---
## Tests
`src/tests.py` contains Python unit tests which can be run with `python3 -m unittest -v`.
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is a pure Python implementation of the greedy-merge-sort algorithm
reference: https://www.geeksforgeeks.org/optimal-file-merge-patterns/
For doctests run following command:
python3 -m doctest -v greedy_merge_sort.py
Objective
Merge a set of sorted files of different length into a single sorted file.
We need to find an optimal solution, where the resultant file
will be generated in minimum time.
Approach
If the number of sorted files are given, there are many ways
to merge them into a single sorted file.
This merge can be performed pair wise.
To merge a m-record file and a n-record file requires possibly m+n record moves
the optimal choice being,
merge the two smallest files together at each step (greedy approach).
"""
def optimal_merge_pattern(files: list) -> float:
"""Function to merge all the files with optimum cost
Args:
files [list]: A list of sizes of different files to be merged
Returns:
optimal_merge_cost [int]: Optimal cost to merge all those files
Examples:
>>> optimal_merge_pattern([2, 3, 4])
14
>>> optimal_merge_pattern([5, 10, 20, 30, 30])
205
>>> optimal_merge_pattern([8, 8, 8, 8, 8])
96
"""
optimal_merge_cost = 0
while len(files) > 1:
temp = 0
# Consider two files with minimum cost to be merged
for _ in range(2):
min_index = files.index(min(files))
temp += files[min_index]
files.pop(min_index)
files.append(temp)
optimal_merge_cost += temp
return optimal_merge_cost
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
This is a pure Python implementation of the greedy-merge-sort algorithm
reference: https://www.geeksforgeeks.org/optimal-file-merge-patterns/
For doctests run following command:
python3 -m doctest -v greedy_merge_sort.py
Objective
Merge a set of sorted files of different length into a single sorted file.
We need to find an optimal solution, where the resultant file
will be generated in minimum time.
Approach
If the number of sorted files are given, there are many ways
to merge them into a single sorted file.
This merge can be performed pair wise.
To merge a m-record file and a n-record file requires possibly m+n record moves
the optimal choice being,
merge the two smallest files together at each step (greedy approach).
"""
def optimal_merge_pattern(files: list) -> float:
"""Function to merge all the files with optimum cost
Args:
files [list]: A list of sizes of different files to be merged
Returns:
optimal_merge_cost [int]: Optimal cost to merge all those files
Examples:
>>> optimal_merge_pattern([2, 3, 4])
14
>>> optimal_merge_pattern([5, 10, 20, 30, 30])
205
>>> optimal_merge_pattern([8, 8, 8, 8, 8])
96
"""
optimal_merge_cost = 0
while len(files) > 1:
temp = 0
# Consider two files with minimum cost to be merged
for _ in range(2):
min_index = files.index(min(files))
temp += files[min_index]
files.pop(min_index)
files.append(temp)
optimal_merge_cost += temp
return optimal_merge_cost
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Approximization algorithm for minimum vertex cover problem.
Matching Approach. Uses graphs represented with an adjacency list
URL: https://mathworld.wolfram.com/MinimumVertexCover.html
URL: https://www.princeton.edu/~aaa/Public/Teaching/ORF523/ORF523_Lec6.pdf
"""
def matching_min_vertex_cover(graph: dict) -> set:
"""
APX Algorithm for min Vertex Cover using Matching Approach
@input: graph (graph stored in an adjacency list where each vertex
is represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
>>> matching_min_vertex_cover(graph)
{0, 1, 2, 4}
"""
# chosen_vertices = set of chosen vertices
chosen_vertices = set()
# edges = list of graph's edges
edges = get_edges(graph)
# While there are still elements in edges list, take an arbitrary edge
# (from_node, to_node) and add his extremity to chosen_vertices and then
# remove all arcs adjacent to the from_node and to_node
while edges:
from_node, to_node = edges.pop()
chosen_vertices.add(from_node)
chosen_vertices.add(to_node)
for edge in edges.copy():
if from_node in edge or to_node in edge:
edges.discard(edge)
return chosen_vertices
def get_edges(graph: dict) -> set:
"""
Return a set of couples that represents all of the edges.
@input: graph (graph stored in an adjacency list where each vertex is
represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]}
>>> get_edges(graph)
{(0, 1), (3, 1), (0, 3), (2, 0), (3, 0), (2, 3), (1, 0), (3, 2), (1, 3)}
"""
edges = set()
for from_node, to_nodes in graph.items():
for to_node in to_nodes:
edges.add((from_node, to_node))
return edges
if __name__ == "__main__":
import doctest
doctest.testmod()
# graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
# print(f"Matching vertex cover:\n{matching_min_vertex_cover(graph)}")
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Approximization algorithm for minimum vertex cover problem.
Matching Approach. Uses graphs represented with an adjacency list
URL: https://mathworld.wolfram.com/MinimumVertexCover.html
URL: https://www.princeton.edu/~aaa/Public/Teaching/ORF523/ORF523_Lec6.pdf
"""
def matching_min_vertex_cover(graph: dict) -> set:
"""
APX Algorithm for min Vertex Cover using Matching Approach
@input: graph (graph stored in an adjacency list where each vertex
is represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
>>> matching_min_vertex_cover(graph)
{0, 1, 2, 4}
"""
# chosen_vertices = set of chosen vertices
chosen_vertices = set()
# edges = list of graph's edges
edges = get_edges(graph)
# While there are still elements in edges list, take an arbitrary edge
# (from_node, to_node) and add his extremity to chosen_vertices and then
# remove all arcs adjacent to the from_node and to_node
while edges:
from_node, to_node = edges.pop()
chosen_vertices.add(from_node)
chosen_vertices.add(to_node)
for edge in edges.copy():
if from_node in edge or to_node in edge:
edges.discard(edge)
return chosen_vertices
def get_edges(graph: dict) -> set:
"""
Return a set of couples that represents all of the edges.
@input: graph (graph stored in an adjacency list where each vertex is
represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]}
>>> get_edges(graph)
{(0, 1), (3, 1), (0, 3), (2, 0), (3, 0), (2, 3), (1, 0), (3, 2), (1, 3)}
"""
edges = set()
for from_node, to_nodes in graph.items():
for to_node in to_nodes:
edges.add((from_node, to_node))
return edges
if __name__ == "__main__":
import doctest
doctest.testmod()
# graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
# print(f"Matching vertex cover:\n{matching_min_vertex_cover(graph)}")
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU)
or swish function.
* https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
* https://en.wikipedia.org/wiki/Swish_function
The function takes a vector x of K real numbers as input and returns x * sigmoid(x).
Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x).
Extensive experiments shows that Swish consistently matches or outperforms ReLU
on deep networks applied to a variety of challenging domains such as
image classification and machine translation.
This script is inspired by a corresponding research paper.
* https://arxiv.org/abs/1710.05941
"""
import numpy as np
def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Mathematical function sigmoid takes a vector x of K real numbers as input and
returns 1/ (1 + e^-x).
https://en.wikipedia.org/wiki/Sigmoid_function
>>> sigmoid(np.array([-1.0, 1.0, 2.0]))
array([0.26894142, 0.73105858, 0.88079708])
"""
return 1 / (1 + np.exp(-vector))
def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
Implements the Sigmoid Linear Unit (SiLU) or swish function
Parameters:
vector (np.ndarray): A numpy array consisting of real values
Returns:
swish_vec (np.ndarray): The input numpy array, after applying swish
Examples:
>>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0]))
array([-0.26894142, 0.73105858, 1.76159416])
>>> sigmoid_linear_unit(np.array([-2]))
array([-0.23840584])
"""
return vector * sigmoid(vector)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU)
or swish function.
* https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
* https://en.wikipedia.org/wiki/Swish_function
The function takes a vector x of K real numbers as input and returns x * sigmoid(x).
Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x).
Extensive experiments shows that Swish consistently matches or outperforms ReLU
on deep networks applied to a variety of challenging domains such as
image classification and machine translation.
This script is inspired by a corresponding research paper.
* https://arxiv.org/abs/1710.05941
"""
import numpy as np
def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Mathematical function sigmoid takes a vector x of K real numbers as input and
returns 1/ (1 + e^-x).
https://en.wikipedia.org/wiki/Sigmoid_function
>>> sigmoid(np.array([-1.0, 1.0, 2.0]))
array([0.26894142, 0.73105858, 0.88079708])
"""
return 1 / (1 + np.exp(-vector))
def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
Implements the Sigmoid Linear Unit (SiLU) or swish function
Parameters:
vector (np.ndarray): A numpy array consisting of real values
Returns:
swish_vec (np.ndarray): The input numpy array, after applying swish
Examples:
>>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0]))
array([-0.26894142, 0.73105858, 1.76159416])
>>> sigmoid_linear_unit(np.array([-2]))
array([-0.23840584])
"""
return vector * sigmoid(vector)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from math import pi
def radians(degree: float) -> float:
"""
Coverts the given angle from degrees to radians
https://en.wikipedia.org/wiki/Radian
>>> radians(180)
3.141592653589793
>>> radians(92)
1.6057029118347832
>>> radians(274)
4.782202150464463
>>> radians(109.82)
1.9167205845401725
>>> from math import radians as math_radians
>>> all(abs(radians(i)-math_radians(i)) <= 0.00000001 for i in range(-2, 361))
True
"""
return degree / (180 / pi)
if __name__ == "__main__":
from doctest import testmod
testmod()
| from math import pi
def radians(degree: float) -> float:
"""
Coverts the given angle from degrees to radians
https://en.wikipedia.org/wiki/Radian
>>> radians(180)
3.141592653589793
>>> radians(92)
1.6057029118347832
>>> radians(274)
4.782202150464463
>>> radians(109.82)
1.9167205845401725
>>> from math import radians as math_radians
>>> all(abs(radians(i)-math_radians(i)) <= 0.00000001 for i in range(-2, 361))
True
"""
return degree / (180 / pi)
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,936 | Fix ruff errors | ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-08-09T07:13:45Z" | "2023-08-09T07:55:31Z" | 842d03fb2ab7d83e4d4081c248d71e89bb520809 | ae0fc85401efd9816193a06e554a66600cc09a97 | Fix ruff errors. ### Describe your change:
Fixes #8935
Fixing ruff errors again due to the recent version update
Notably, I didn't fix the ruff error in [neural_network/input_data.py](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/input_data.py) because it appears that the file was taken directly from TensorFlow's codebase, so I don't want to modify it _just_ yet. Instead, I renamed it to neural_network/input_data.py_tf because it should be left out of the directory for the following reasons:
1. Its sole purpose is to be used by [neural_network/gan.py_tf](https://github.com/TheAlgorithms/Python/blob/842d03fb2ab7d83e4d4081c248d71e89bb520809/neural_network/gan.py_tf), which is itself left out of the directory because of issues with TensorFlow.
2. All of it's actually deprecated—TensorFlow explicitly says so in the code and recommends getting the necessary input data using a different function. If/when neural_network/gan.py_tf is eventually added back to the directory, its implementation should be changed to not use neural_network/input_data.py anyway.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| DIRC
eq$%eq$% \L 4|fvJU2?[ .devcontainer/Dockerfile eq$$eq$$ \O CŨUU$c&EzI .devcontainer/devcontainer.json eZ3deZ3d \G jERrgkk# .gitattributes eq$$eq$$ \I 'nImZ`UU .github/CODEOWNERS ene;åene;å \\ 5L͵,$4O7zj %.github/ISSUE_TEMPLATE/bug_report.yml ene;åene;å \c b8|[;{]
!.github/ISSUE_TEMPLATE/config.yml ep<ep< |