repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Algorithm for calculating the most cost-efficient sequence for converting one string
into another.
The only allowed operations are
--- Cost to copy a character is copy_cost
--- Cost to replace a character is replace_cost
--- Cost to delete a character is delete_cost
--- Cost to insert a character is insert_cost
"""
def compute_transform_tables(
source_string: str,
destination_string: str,
copy_cost: int,
replace_cost: int,
delete_cost: int,
insert_cost: int,
) -> tuple[list[list[int]], list[list[str]]]:
source_seq = list(source_string)
destination_seq = list(destination_string)
len_source_seq = len(source_seq)
len_destination_seq = len(destination_seq)
costs = [
[0 for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
ops = [
["0" for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
for i in range(1, len_source_seq + 1):
costs[i][0] = i * delete_cost
ops[i][0] = f"D{source_seq[i - 1]:c}"
for i in range(1, len_destination_seq + 1):
costs[0][i] = i * insert_cost
ops[0][i] = f"I{destination_seq[i - 1]:c}"
for i in range(1, len_source_seq + 1):
for j in range(1, len_destination_seq + 1):
if source_seq[i - 1] == destination_seq[j - 1]:
costs[i][j] = costs[i - 1][j - 1] + copy_cost
ops[i][j] = f"C{source_seq[i - 1]:c}"
else:
costs[i][j] = costs[i - 1][j - 1] + replace_cost
ops[i][j] = f"R{source_seq[i - 1]:c}" + str(destination_seq[j - 1])
if costs[i - 1][j] + delete_cost < costs[i][j]:
costs[i][j] = costs[i - 1][j] + delete_cost
ops[i][j] = f"D{source_seq[i - 1]:c}"
if costs[i][j - 1] + insert_cost < costs[i][j]:
costs[i][j] = costs[i][j - 1] + insert_cost
ops[i][j] = f"I{destination_seq[j - 1]:c}"
return costs, ops
def assemble_transformation(ops: list[list[str]], i: int, j: int) -> list[str]:
if i == 0 and j == 0:
return []
else:
if ops[i][j][0] == "C" or ops[i][j][0] == "R":
seq = assemble_transformation(ops, i - 1, j - 1)
seq.append(ops[i][j])
return seq
elif ops[i][j][0] == "D":
seq = assemble_transformation(ops, i - 1, j)
seq.append(ops[i][j])
return seq
else:
seq = assemble_transformation(ops, i, j - 1)
seq.append(ops[i][j])
return seq
if __name__ == "__main__":
_, operations = compute_transform_tables("Python", "Algorithms", -1, 1, 2, 2)
m = len(operations)
n = len(operations[0])
sequence = assemble_transformation(operations, m - 1, n - 1)
string = list("Python")
i = 0
cost = 0
with open("min_cost.txt", "w") as file:
for op in sequence:
print("".join(string))
if op[0] == "C":
file.write("%-16s" % "Copy %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost -= 1
elif op[0] == "R":
string[i] = op[2]
file.write("%-16s" % ("Replace %c" % op[1] + " with " + str(op[2])))
file.write("\t\t" + "".join(string))
file.write("\r\n")
cost += 1
elif op[0] == "D":
string.pop(i)
file.write("%-16s" % "Delete %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
else:
string.insert(i, op[1])
file.write("%-16s" % "Insert %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
i += 1
print("".join(string))
print("Cost: ", cost)
file.write("\r\nMinimum cost: " + str(cost))
| """
Algorithm for calculating the most cost-efficient sequence for converting one string
into another.
The only allowed operations are
--- Cost to copy a character is copy_cost
--- Cost to replace a character is replace_cost
--- Cost to delete a character is delete_cost
--- Cost to insert a character is insert_cost
"""
def compute_transform_tables(
source_string: str,
destination_string: str,
copy_cost: int,
replace_cost: int,
delete_cost: int,
insert_cost: int,
) -> tuple[list[list[int]], list[list[str]]]:
source_seq = list(source_string)
destination_seq = list(destination_string)
len_source_seq = len(source_seq)
len_destination_seq = len(destination_seq)
costs = [
[0 for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
ops = [
["0" for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
for i in range(1, len_source_seq + 1):
costs[i][0] = i * delete_cost
ops[i][0] = f"D{source_seq[i - 1]:c}"
for i in range(1, len_destination_seq + 1):
costs[0][i] = i * insert_cost
ops[0][i] = f"I{destination_seq[i - 1]:c}"
for i in range(1, len_source_seq + 1):
for j in range(1, len_destination_seq + 1):
if source_seq[i - 1] == destination_seq[j - 1]:
costs[i][j] = costs[i - 1][j - 1] + copy_cost
ops[i][j] = f"C{source_seq[i - 1]:c}"
else:
costs[i][j] = costs[i - 1][j - 1] + replace_cost
ops[i][j] = f"R{source_seq[i - 1]:c}" + str(destination_seq[j - 1])
if costs[i - 1][j] + delete_cost < costs[i][j]:
costs[i][j] = costs[i - 1][j] + delete_cost
ops[i][j] = f"D{source_seq[i - 1]:c}"
if costs[i][j - 1] + insert_cost < costs[i][j]:
costs[i][j] = costs[i][j - 1] + insert_cost
ops[i][j] = f"I{destination_seq[j - 1]:c}"
return costs, ops
def assemble_transformation(ops: list[list[str]], i: int, j: int) -> list[str]:
if i == 0 and j == 0:
return []
else:
if ops[i][j][0] in {"C", "R"}:
seq = assemble_transformation(ops, i - 1, j - 1)
seq.append(ops[i][j])
return seq
elif ops[i][j][0] == "D":
seq = assemble_transformation(ops, i - 1, j)
seq.append(ops[i][j])
return seq
else:
seq = assemble_transformation(ops, i, j - 1)
seq.append(ops[i][j])
return seq
if __name__ == "__main__":
_, operations = compute_transform_tables("Python", "Algorithms", -1, 1, 2, 2)
m = len(operations)
n = len(operations[0])
sequence = assemble_transformation(operations, m - 1, n - 1)
string = list("Python")
i = 0
cost = 0
with open("min_cost.txt", "w") as file:
for op in sequence:
print("".join(string))
if op[0] == "C":
file.write("%-16s" % "Copy %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost -= 1
elif op[0] == "R":
string[i] = op[2]
file.write("%-16s" % ("Replace %c" % op[1] + " with " + str(op[2])))
file.write("\t\t" + "".join(string))
file.write("\r\n")
cost += 1
elif op[0] == "D":
string.pop(i)
file.write("%-16s" % "Delete %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
else:
string.insert(i, op[1])
file.write("%-16s" % "Insert %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
i += 1
print("".join(string))
print("Cost: ", cost)
file.write("\r\nMinimum cost: " + str(cost))
| 1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import math
def convert(number: int) -> str:
"""
Given a number return the number in words.
>>> convert(123)
'OneHundred,TwentyThree'
"""
if number == 0:
words = "Zero"
return words
else:
digits = math.log10(number)
digits = digits + 1
singles = {}
singles[0] = ""
singles[1] = "One"
singles[2] = "Two"
singles[3] = "Three"
singles[4] = "Four"
singles[5] = "Five"
singles[6] = "Six"
singles[7] = "Seven"
singles[8] = "Eight"
singles[9] = "Nine"
doubles = {}
doubles[0] = ""
doubles[2] = "Twenty"
doubles[3] = "Thirty"
doubles[4] = "Forty"
doubles[5] = "Fifty"
doubles[6] = "Sixty"
doubles[7] = "Seventy"
doubles[8] = "Eighty"
doubles[9] = "Ninety"
teens = {}
teens[0] = "Ten"
teens[1] = "Eleven"
teens[2] = "Twelve"
teens[3] = "Thirteen"
teens[4] = "Fourteen"
teens[5] = "Fifteen"
teens[6] = "Sixteen"
teens[7] = "Seventeen"
teens[8] = "Eighteen"
teens[9] = "Nineteen"
placevalue = {}
placevalue[2] = "Hundred,"
placevalue[3] = "Thousand,"
placevalue[5] = "Lakh,"
placevalue[7] = "Crore,"
temp_num = number
words = ""
counter = 0
digits = int(digits)
while counter < digits:
current = temp_num % 10
if counter % 2 == 0:
addition = ""
if counter in placevalue and current != 0:
addition = placevalue[counter]
if counter == 2:
words = singles[current] + addition + words
elif counter == 0:
if ((temp_num % 100) // 10) == 1:
words = teens[current] + addition + words
temp_num = temp_num // 10
counter += 1
else:
words = singles[current] + addition + words
else:
words = doubles[current] + addition + words
else:
if counter == 1:
if current == 1:
words = teens[number % 10] + words
else:
addition = ""
if counter in placevalue:
addition = placevalue[counter]
words = doubles[current] + addition + words
else:
addition = ""
if counter in placevalue:
if current == 0 and ((temp_num % 100) // 10) == 0:
addition = ""
else:
addition = placevalue[counter]
if ((temp_num % 100) // 10) == 1:
words = teens[current] + addition + words
temp_num = temp_num // 10
counter += 1
else:
words = singles[current] + addition + words
counter += 1
temp_num = temp_num // 10
return words
if __name__ == "__main__":
import doctest
doctest.testmod()
| import math
def convert(number: int) -> str:
"""
Given a number return the number in words.
>>> convert(123)
'OneHundred,TwentyThree'
"""
if number == 0:
words = "Zero"
return words
else:
digits = math.log10(number)
digits = digits + 1
singles = {}
singles[0] = ""
singles[1] = "One"
singles[2] = "Two"
singles[3] = "Three"
singles[4] = "Four"
singles[5] = "Five"
singles[6] = "Six"
singles[7] = "Seven"
singles[8] = "Eight"
singles[9] = "Nine"
doubles = {}
doubles[0] = ""
doubles[2] = "Twenty"
doubles[3] = "Thirty"
doubles[4] = "Forty"
doubles[5] = "Fifty"
doubles[6] = "Sixty"
doubles[7] = "Seventy"
doubles[8] = "Eighty"
doubles[9] = "Ninety"
teens = {}
teens[0] = "Ten"
teens[1] = "Eleven"
teens[2] = "Twelve"
teens[3] = "Thirteen"
teens[4] = "Fourteen"
teens[5] = "Fifteen"
teens[6] = "Sixteen"
teens[7] = "Seventeen"
teens[8] = "Eighteen"
teens[9] = "Nineteen"
placevalue = {}
placevalue[2] = "Hundred,"
placevalue[3] = "Thousand,"
placevalue[5] = "Lakh,"
placevalue[7] = "Crore,"
temp_num = number
words = ""
counter = 0
digits = int(digits)
while counter < digits:
current = temp_num % 10
if counter % 2 == 0:
addition = ""
if counter in placevalue and current != 0:
addition = placevalue[counter]
if counter == 2:
words = singles[current] + addition + words
elif counter == 0:
if ((temp_num % 100) // 10) == 1:
words = teens[current] + addition + words
temp_num = temp_num // 10
counter += 1
else:
words = singles[current] + addition + words
else:
words = doubles[current] + addition + words
else:
if counter == 1:
if current == 1:
words = teens[number % 10] + words
else:
addition = ""
if counter in placevalue:
addition = placevalue[counter]
words = doubles[current] + addition + words
else:
addition = ""
if counter in placevalue:
if current != 0 and ((temp_num % 100) // 10) != 0:
addition = placevalue[counter]
if ((temp_num % 100) // 10) == 1:
words = teens[current] + addition + words
temp_num = temp_num // 10
counter += 1
else:
words = singles[current] + addition + words
counter += 1
temp_num = temp_num // 10
return words
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Implementation of double ended queue.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from typing import Any
class Deque:
"""
Deque data structure.
Operations
----------
append(val: Any) -> None
appendleft(val: Any) -> None
extend(iterable: Iterable) -> None
extendleft(iterable: Iterable) -> None
pop() -> Any
popleft() -> Any
Observers
---------
is_empty() -> bool
Attributes
----------
_front: _Node
front of the deque a.k.a. the first element
_back: _Node
back of the element a.k.a. the last element
_len: int
the number of nodes
"""
__slots__ = ("_front", "_back", "_len")
@dataclass
class _Node:
"""
Representation of a node.
Contains a value and a pointer to the next node as well as to the previous one.
"""
val: Any = None
next_node: Deque._Node | None = None
prev_node: Deque._Node | None = None
class _Iterator:
"""
Helper class for iteration. Will be used to implement iteration.
Attributes
----------
_cur: _Node
the current node of the iteration.
"""
__slots__ = ("_cur",)
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
def __iter__(self) -> Deque._Iterator:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
"""
return self
def __next__(self) -> Any:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
>>> next(iterator)
1
>>> next(iterator)
2
>>> next(iterator)
3
"""
if self._cur is None:
# finished iterating
raise StopIteration
val = self._cur.val
self._cur = self._cur.next_node
return val
def __init__(self, iterable: Iterable[Any] | None = None) -> None:
self._front: Any = None
self._back: Any = None
self._len: int = 0
if iterable is not None:
# append every value to the deque
for val in iterable:
self.append(val)
def append(self, val: Any) -> None:
"""
Adds val to the end of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.append(4)
>>> our_deque_1
[1, 2, 3, 4]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.append('c')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.append(4)
>>> deque_collections_1
deque([1, 2, 3, 4])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.append('c')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
self._back.next_node = node
node.prev_node = self._back
self._back = node # assign new back to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def appendleft(self, val: Any) -> None:
"""
Adds val to the beginning of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([2, 3])
>>> our_deque_1.appendleft(1)
>>> our_deque_1
[1, 2, 3]
>>> our_deque_2 = Deque('bc')
>>> our_deque_2.appendleft('a')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([2, 3])
>>> deque_collections_1.appendleft(1)
>>> deque_collections_1
deque([1, 2, 3])
>>> deque_collections_2 = deque('bc')
>>> deque_collections_2.appendleft('a')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
node.next_node = self._front
self._front.prev_node = node
self._front = node # assign new front to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def extend(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the end of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extend([4, 5])
>>> our_deque_1
[1, 2, 3, 4, 5]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.extend('cd')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extend([4, 5])
>>> deque_collections_1
deque([1, 2, 3, 4, 5])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.extend('cd')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.append(val)
def extendleft(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the beginning of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extendleft([0, -1])
>>> our_deque_1
[-1, 0, 1, 2, 3]
>>> our_deque_2 = Deque('cd')
>>> our_deque_2.extendleft('ba')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extendleft([0, -1])
>>> deque_collections_1
deque([-1, 0, 1, 2, 3])
>>> deque_collections_2 = deque('cd')
>>> deque_collections_2.extendleft('ba')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.appendleft(val)
def pop(self) -> Any:
"""
Removes the last element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([1, 2, 3, 15182])
>>> our_popped = our_deque.pop()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3, 15182])
>>> collections_popped = deque_collections.pop()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._back
self._back = self._back.prev_node # set new back
# drop the last node - python will deallocate memory automatically
self._back.next_node = None
self._len -= 1
return topop.val
def popleft(self) -> Any:
"""
Removes the first element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([15182, 1, 2, 3])
>>> our_popped = our_deque.popleft()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([15182, 1, 2, 3])
>>> collections_popped = deque_collections.popleft()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._front
self._front = self._front.next_node # set new front and drop the first node
self._front.prev_node = None
self._len -= 1
return topop.val
def is_empty(self) -> bool:
"""
Checks if the deque is empty.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque.is_empty()
False
>>> our_empty_deque = Deque()
>>> our_empty_deque.is_empty()
True
>>> from collections import deque
>>> empty_deque_collections = deque()
>>> list(our_empty_deque) == list(empty_deque_collections)
True
"""
return self._front is None
def __len__(self) -> int:
"""
Implements len() function. Returns the length of the deque.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> len(our_deque)
3
>>> our_empty_deque = Deque()
>>> len(our_empty_deque)
0
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> len(deque_collections)
3
>>> empty_deque_collections = deque()
>>> len(empty_deque_collections)
0
>>> len(our_empty_deque) == len(empty_deque_collections)
True
"""
return self._len
def __eq__(self, other: object) -> bool:
"""
Implements "==" operator. Returns if *self* is equal to *other*.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_2 = Deque([1, 2, 3])
>>> our_deque_1 == our_deque_2
True
>>> our_deque_3 = Deque([1, 2])
>>> our_deque_1 == our_deque_3
False
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_2 = deque([1, 2, 3])
>>> deque_collections_1 == deque_collections_2
True
>>> deque_collections_3 = deque([1, 2])
>>> deque_collections_1 == deque_collections_3
False
>>> (our_deque_1 == our_deque_2) == (deque_collections_1 == deque_collections_2)
True
>>> (our_deque_1 == our_deque_3) == (deque_collections_1 == deque_collections_3)
True
"""
if not isinstance(other, Deque):
return NotImplemented
me = self._front
oth = other._front
# if the length of the dequeues are not the same, they are not equal
if len(self) != len(other):
return False
while me is not None and oth is not None:
# compare every value
if me.val != oth.val:
return False
me = me.next_node
oth = oth.next_node
return True
def __iter__(self) -> Deque._Iterator:
"""
Implements iteration.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> for v in our_deque:
... print(v)
1
2
3
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> for v in deque_collections:
... print(v)
1
2
3
"""
return Deque._Iterator(self._front)
def __repr__(self) -> str:
"""
Implements representation of the deque.
Represents it as a list, with its values between '[' and ']'.
Time complexity: O(n)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque
[1, 2, 3]
"""
values_list = []
aux = self._front
while aux is not None:
# append the values in a list to display
values_list.append(aux.val)
aux = aux.next_node
return f"[{', '.join(repr(val) for val in values_list)}]"
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Implementation of double ended queue.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from typing import Any
class Deque:
"""
Deque data structure.
Operations
----------
append(val: Any) -> None
appendleft(val: Any) -> None
extend(iterable: Iterable) -> None
extendleft(iterable: Iterable) -> None
pop() -> Any
popleft() -> Any
Observers
---------
is_empty() -> bool
Attributes
----------
_front: _Node
front of the deque a.k.a. the first element
_back: _Node
back of the element a.k.a. the last element
_len: int
the number of nodes
"""
__slots__ = ("_front", "_back", "_len")
@dataclass
class _Node:
"""
Representation of a node.
Contains a value and a pointer to the next node as well as to the previous one.
"""
val: Any = None
next_node: Deque._Node | None = None
prev_node: Deque._Node | None = None
class _Iterator:
"""
Helper class for iteration. Will be used to implement iteration.
Attributes
----------
_cur: _Node
the current node of the iteration.
"""
__slots__ = ("_cur",)
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
def __iter__(self) -> Deque._Iterator:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
"""
return self
def __next__(self) -> Any:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
>>> next(iterator)
1
>>> next(iterator)
2
>>> next(iterator)
3
"""
if self._cur is None:
# finished iterating
raise StopIteration
val = self._cur.val
self._cur = self._cur.next_node
return val
def __init__(self, iterable: Iterable[Any] | None = None) -> None:
self._front: Any = None
self._back: Any = None
self._len: int = 0
if iterable is not None:
# append every value to the deque
for val in iterable:
self.append(val)
def append(self, val: Any) -> None:
"""
Adds val to the end of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.append(4)
>>> our_deque_1
[1, 2, 3, 4]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.append('c')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.append(4)
>>> deque_collections_1
deque([1, 2, 3, 4])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.append('c')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
self._back.next_node = node
node.prev_node = self._back
self._back = node # assign new back to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def appendleft(self, val: Any) -> None:
"""
Adds val to the beginning of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([2, 3])
>>> our_deque_1.appendleft(1)
>>> our_deque_1
[1, 2, 3]
>>> our_deque_2 = Deque('bc')
>>> our_deque_2.appendleft('a')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([2, 3])
>>> deque_collections_1.appendleft(1)
>>> deque_collections_1
deque([1, 2, 3])
>>> deque_collections_2 = deque('bc')
>>> deque_collections_2.appendleft('a')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
node.next_node = self._front
self._front.prev_node = node
self._front = node # assign new front to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def extend(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the end of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extend([4, 5])
>>> our_deque_1
[1, 2, 3, 4, 5]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.extend('cd')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extend([4, 5])
>>> deque_collections_1
deque([1, 2, 3, 4, 5])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.extend('cd')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.append(val)
def extendleft(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the beginning of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extendleft([0, -1])
>>> our_deque_1
[-1, 0, 1, 2, 3]
>>> our_deque_2 = Deque('cd')
>>> our_deque_2.extendleft('ba')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extendleft([0, -1])
>>> deque_collections_1
deque([-1, 0, 1, 2, 3])
>>> deque_collections_2 = deque('cd')
>>> deque_collections_2.extendleft('ba')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.appendleft(val)
def pop(self) -> Any:
"""
Removes the last element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([1, 2, 3, 15182])
>>> our_popped = our_deque.pop()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3, 15182])
>>> collections_popped = deque_collections.pop()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._back
self._back = self._back.prev_node # set new back
# drop the last node - python will deallocate memory automatically
self._back.next_node = None
self._len -= 1
return topop.val
def popleft(self) -> Any:
"""
Removes the first element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([15182, 1, 2, 3])
>>> our_popped = our_deque.popleft()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([15182, 1, 2, 3])
>>> collections_popped = deque_collections.popleft()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._front
self._front = self._front.next_node # set new front and drop the first node
self._front.prev_node = None
self._len -= 1
return topop.val
def is_empty(self) -> bool:
"""
Checks if the deque is empty.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque.is_empty()
False
>>> our_empty_deque = Deque()
>>> our_empty_deque.is_empty()
True
>>> from collections import deque
>>> empty_deque_collections = deque()
>>> list(our_empty_deque) == list(empty_deque_collections)
True
"""
return self._front is None
def __len__(self) -> int:
"""
Implements len() function. Returns the length of the deque.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> len(our_deque)
3
>>> our_empty_deque = Deque()
>>> len(our_empty_deque)
0
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> len(deque_collections)
3
>>> empty_deque_collections = deque()
>>> len(empty_deque_collections)
0
>>> len(our_empty_deque) == len(empty_deque_collections)
True
"""
return self._len
def __eq__(self, other: object) -> bool:
"""
Implements "==" operator. Returns if *self* is equal to *other*.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_2 = Deque([1, 2, 3])
>>> our_deque_1 == our_deque_2
True
>>> our_deque_3 = Deque([1, 2])
>>> our_deque_1 == our_deque_3
False
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_2 = deque([1, 2, 3])
>>> deque_collections_1 == deque_collections_2
True
>>> deque_collections_3 = deque([1, 2])
>>> deque_collections_1 == deque_collections_3
False
>>> (our_deque_1 == our_deque_2) == (deque_collections_1 == deque_collections_2)
True
>>> (our_deque_1 == our_deque_3) == (deque_collections_1 == deque_collections_3)
True
"""
if not isinstance(other, Deque):
return NotImplemented
me = self._front
oth = other._front
# if the length of the dequeues are not the same, they are not equal
if len(self) != len(other):
return False
while me is not None and oth is not None:
# compare every value
if me.val != oth.val:
return False
me = me.next_node
oth = oth.next_node
return True
def __iter__(self) -> Deque._Iterator:
"""
Implements iteration.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> for v in our_deque:
... print(v)
1
2
3
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> for v in deque_collections:
... print(v)
1
2
3
"""
return Deque._Iterator(self._front)
def __repr__(self) -> str:
"""
Implements representation of the deque.
Represents it as a list, with its values between '[' and ']'.
Time complexity: O(n)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque
[1, 2, 3]
"""
values_list = []
aux = self._front
while aux is not None:
# append the values in a list to display
values_list.append(aux.val)
aux = aux.next_node
return f"[{', '.join(repr(val) for val in values_list)}]"
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
A binary search Tree
"""
from collections.abc import Iterable
from typing import Any
class Node:
def __init__(self, value: int | None = None):
self.value = value
self.parent: Node | None = None # Added in order to delete a node easier
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root: Node | None = None):
self.root = root
def __str__(self) -> str:
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = new_children
def is_right(self, node: Node) -> bool:
if node.parent and node.parent.right:
return node == node.parent.right
return False
def empty(self) -> bool:
return self.root is None
def __insert(self, value) -> None:
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
if parent_node is None:
return
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values) -> None:
for value in values:
self.__insert(value)
def search(self, value) -> Node | None:
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node: Node | None = None) -> Node | None:
"""
We go deep on the right branch
"""
if node is None:
if self.root is None:
return None
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node: Node | None = None) -> Node | None:
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if self.root is None:
return None
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value: int) -> None:
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value) # type: ignore
node.value = (
tmp_node.value # type: ignore
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node: Node | None) -> Iterable:
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None) -> Any:
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node | None) -> None:
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree"""
arr: list[int] = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node: Node | None) -> list[Node]:
"""
postOrder (left, right, self)
"""
node_list = []
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree() -> None:
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree()
>>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value) # type: ignore
print("Min Value: ", t.get_min().value) # type: ignore
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| """
A binary search Tree
"""
from collections.abc import Iterable
from typing import Any
class Node:
def __init__(self, value: int | None = None):
self.value = value
self.parent: Node | None = None # Added in order to delete a node easier
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root: Node | None = None):
self.root = root
def __str__(self) -> str:
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = new_children
def is_right(self, node: Node) -> bool:
if node.parent and node.parent.right:
return node == node.parent.right
return False
def empty(self) -> bool:
return self.root is None
def __insert(self, value) -> None:
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
if parent_node is None:
return
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values) -> None:
for value in values:
self.__insert(value)
def search(self, value) -> Node | None:
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node: Node | None = None) -> Node | None:
"""
We go deep on the right branch
"""
if node is None:
if self.root is None:
return None
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node: Node | None = None) -> Node | None:
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if self.root is None:
return None
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value: int) -> None:
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value) # type: ignore
node.value = (
tmp_node.value # type: ignore
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node: Node | None) -> Iterable:
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None) -> Any:
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node | None) -> None:
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree"""
arr: list[int] = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node: Node | None) -> list[Node]:
"""
postOrder (left, right, self)
"""
node_list = []
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree() -> None:
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree()
>>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value) # type: ignore
print("Min Value: ", t.get_min().value) # type: ignore
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Title : Calculating the speed of sound
Description :
The speed of sound (c) is the speed that a sound wave travels
per unit time (m/s). During propagation, the sound wave propagates
through an elastic medium. Its SI unit is meter per second (m/s).
Only longitudinal waves can propagate in liquids and gas other then
solid where they also travel in transverse wave. The following Algo-
rithem calculates the speed of sound in fluid depanding on the bulk
module and the density of the fluid.
Equation for calculating speed od sound in fluid:
c_fluid = (K_s*p)**0.5
c_fluid: speed of sound in fluid
K_s: isentropic bulk modulus
p: density of fluid
Source : https://en.wikipedia.org/wiki/Speed_of_sound
"""
def speed_of_sound_in_a_fluid(density: float, bulk_modulus: float) -> float:
"""
This method calculates the speed of sound in fluid -
This is calculated from the other two provided values
Examples:
Example 1 --> Water 20°C: bulk_moduls= 2.15MPa, density=998kg/m³
Example 2 --> Murcery 20°: bulk_moduls= 28.5MPa, density=13600kg/m³
>>> speed_of_sound_in_a_fluid(bulk_modulus=2.15*10**9, density=998)
1467.7563207952705
>>> speed_of_sound_in_a_fluid(bulk_modulus=28.5*10**9, density=13600)
1447.614670861731
"""
if density <= 0:
raise ValueError("Impossible fluid density")
if bulk_modulus <= 0:
raise ValueError("Impossible bulk modulus")
return (bulk_modulus / density) ** 0.5
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Title : Calculating the speed of sound
Description :
The speed of sound (c) is the speed that a sound wave travels
per unit time (m/s). During propagation, the sound wave propagates
through an elastic medium. Its SI unit is meter per second (m/s).
Only longitudinal waves can propagate in liquids and gas other then
solid where they also travel in transverse wave. The following Algo-
rithem calculates the speed of sound in fluid depanding on the bulk
module and the density of the fluid.
Equation for calculating speed od sound in fluid:
c_fluid = (K_s*p)**0.5
c_fluid: speed of sound in fluid
K_s: isentropic bulk modulus
p: density of fluid
Source : https://en.wikipedia.org/wiki/Speed_of_sound
"""
def speed_of_sound_in_a_fluid(density: float, bulk_modulus: float) -> float:
"""
This method calculates the speed of sound in fluid -
This is calculated from the other two provided values
Examples:
Example 1 --> Water 20°C: bulk_moduls= 2.15MPa, density=998kg/m³
Example 2 --> Murcery 20°: bulk_moduls= 28.5MPa, density=13600kg/m³
>>> speed_of_sound_in_a_fluid(bulk_modulus=2.15*10**9, density=998)
1467.7563207952705
>>> speed_of_sound_in_a_fluid(bulk_modulus=28.5*10**9, density=13600)
1447.614670861731
"""
if density <= 0:
raise ValueError("Impossible fluid density")
if bulk_modulus <= 0:
raise ValueError("Impossible bulk modulus")
return (bulk_modulus / density) ** 0.5
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/env python3
import requests
giphy_api_key = "YOUR API KEY"
# Can be fetched from https://developers.giphy.com/dashboard/
def get_gifs(query: str, api_key: str = giphy_api_key) -> list:
"""
Get a list of URLs of GIFs based on a given query..
"""
formatted_query = "+".join(query.split())
url = f"https://api.giphy.com/v1/gifs/search?q={formatted_query}&api_key={api_key}"
gifs = requests.get(url).json()["data"]
return [gif["url"] for gif in gifs]
if __name__ == "__main__":
print("\n".join(get_gifs("space ship")))
| #!/usr/bin/env python3
import requests
giphy_api_key = "YOUR API KEY"
# Can be fetched from https://developers.giphy.com/dashboard/
def get_gifs(query: str, api_key: str = giphy_api_key) -> list:
"""
Get a list of URLs of GIFs based on a given query..
"""
formatted_query = "+".join(query.split())
url = f"https://api.giphy.com/v1/gifs/search?q={formatted_query}&api_key={api_key}"
gifs = requests.get(url).json()["data"]
return [gif["url"] for gif in gifs]
if __name__ == "__main__":
print("\n".join(get_gifs("space ship")))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 115: https://projecteuler.net/problem=115
NOTE: This is a more difficult version of Problem 114
(https://projecteuler.net/problem=114).
A row measuring n units in length has red blocks
with a minimum length of m units placed on it, such that any two red blocks
(which are allowed to be different lengths) are separated by at least one black square.
Let the fill-count function, F(m, n),
represent the number of ways that a row can be filled.
For example, F(3, 29) = 673135 and F(3, 30) = 1089155.
That is, for m = 3, it can be seen that n = 30 is the smallest value
for which the fill-count function first exceeds one million.
In the same way, for m = 10, it can be verified that
F(10, 56) = 880711 and F(10, 57) = 1148904, so n = 57 is the least value
for which the fill-count function first exceeds one million.
For m = 50, find the least value of n
for which the fill-count function first exceeds one million.
"""
from itertools import count
def solution(min_block_length: int = 50) -> int:
"""
Returns for given minimum block length the least value of n
for which the fill-count function first exceeds one million
>>> solution(3)
30
>>> solution(10)
57
"""
fill_count_functions = [1] * min_block_length
for n in count(min_block_length):
fill_count_functions.append(1)
for block_length in range(min_block_length, n + 1):
for block_start in range(n - block_length):
fill_count_functions[n] += fill_count_functions[
n - block_start - block_length - 1
]
fill_count_functions[n] += 1
if fill_count_functions[n] > 1_000_000:
break
return n
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 115: https://projecteuler.net/problem=115
NOTE: This is a more difficult version of Problem 114
(https://projecteuler.net/problem=114).
A row measuring n units in length has red blocks
with a minimum length of m units placed on it, such that any two red blocks
(which are allowed to be different lengths) are separated by at least one black square.
Let the fill-count function, F(m, n),
represent the number of ways that a row can be filled.
For example, F(3, 29) = 673135 and F(3, 30) = 1089155.
That is, for m = 3, it can be seen that n = 30 is the smallest value
for which the fill-count function first exceeds one million.
In the same way, for m = 10, it can be verified that
F(10, 56) = 880711 and F(10, 57) = 1148904, so n = 57 is the least value
for which the fill-count function first exceeds one million.
For m = 50, find the least value of n
for which the fill-count function first exceeds one million.
"""
from itertools import count
def solution(min_block_length: int = 50) -> int:
"""
Returns for given minimum block length the least value of n
for which the fill-count function first exceeds one million
>>> solution(3)
30
>>> solution(10)
57
"""
fill_count_functions = [1] * min_block_length
for n in count(min_block_length):
fill_count_functions.append(1)
for block_length in range(min_block_length, n + 1):
for block_start in range(n - block_length):
fill_count_functions[n] += fill_count_functions[
n - block_start - block_length - 1
]
fill_count_functions[n] += 1
if fill_count_functions[n] > 1_000_000:
break
return n
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 6: https://projecteuler.net/problem=6
Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten
natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one
hundred natural numbers and the square of the sum.
"""
import math
def solution(n: int = 100) -> int:
"""
Returns the difference between the sum of the squares of the first n
natural numbers and the square of the sum.
>>> solution(10)
2640
>>> solution(15)
13160
>>> solution(20)
41230
>>> solution(50)
1582700
"""
sum_of_squares = sum(i * i for i in range(1, n + 1))
square_of_sum = int(math.pow(sum(range(1, n + 1)), 2))
return square_of_sum - sum_of_squares
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 6: https://projecteuler.net/problem=6
Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten
natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one
hundred natural numbers and the square of the sum.
"""
import math
def solution(n: int = 100) -> int:
"""
Returns the difference between the sum of the squares of the first n
natural numbers and the square of the sum.
>>> solution(10)
2640
>>> solution(15)
13160
>>> solution(20)
41230
>>> solution(50)
1582700
"""
sum_of_squares = sum(i * i for i in range(1, n + 1))
square_of_sum = int(math.pow(sum(range(1, n + 1)), 2))
return square_of_sum - sum_of_squares
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from collections import defaultdict
from graphs.minimum_spanning_tree_prims import prisms_algorithm as mst
def test_prim_successful_result():
num_nodes, num_edges = 9, 14 # noqa: F841
edges = [
[0, 1, 4],
[0, 7, 8],
[1, 2, 8],
[7, 8, 7],
[7, 6, 1],
[2, 8, 2],
[8, 6, 6],
[2, 3, 7],
[2, 5, 4],
[6, 5, 2],
[3, 5, 14],
[3, 4, 9],
[5, 4, 10],
[1, 7, 11],
]
adjancency = defaultdict(list)
for node1, node2, cost in edges:
adjancency[node1].append([node2, cost])
adjancency[node2].append([node1, cost])
result = mst(adjancency)
expected = [
[7, 6, 1],
[2, 8, 2],
[6, 5, 2],
[0, 1, 4],
[2, 5, 4],
[2, 3, 7],
[0, 7, 8],
[3, 4, 9],
]
for answer in expected:
edge = tuple(answer[:2])
reverse = tuple(edge[::-1])
assert edge in result or reverse in result
| from collections import defaultdict
from graphs.minimum_spanning_tree_prims import prisms_algorithm as mst
def test_prim_successful_result():
num_nodes, num_edges = 9, 14 # noqa: F841
edges = [
[0, 1, 4],
[0, 7, 8],
[1, 2, 8],
[7, 8, 7],
[7, 6, 1],
[2, 8, 2],
[8, 6, 6],
[2, 3, 7],
[2, 5, 4],
[6, 5, 2],
[3, 5, 14],
[3, 4, 9],
[5, 4, 10],
[1, 7, 11],
]
adjancency = defaultdict(list)
for node1, node2, cost in edges:
adjancency[node1].append([node2, cost])
adjancency[node2].append([node1, cost])
result = mst(adjancency)
expected = [
[7, 6, 1],
[2, 8, 2],
[6, 5, 2],
[0, 1, 4],
[2, 5, 4],
[2, 3, 7],
[0, 7, 8],
[3, 4, 9],
]
for answer in expected:
edge = tuple(answer[:2])
reverse = tuple(edge[::-1])
assert edge in result or reverse in result
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 36
https://projecteuler.net/problem=36
Problem Statement:
Double-base palindromes
Problem 36
The decimal number, 585 = 10010010012 (binary), is palindromic in both bases.
Find the sum of all numbers, less than one million, which are palindromic in
base 10 and base 2.
(Please note that the palindromic number, in either base, may not include
leading zeros.)
"""
from __future__ import annotations
def is_palindrome(n: int | str) -> bool:
"""
Return true if the input n is a palindrome.
Otherwise return false. n can be an integer or a string.
>>> is_palindrome(909)
True
>>> is_palindrome(908)
False
>>> is_palindrome('10101')
True
>>> is_palindrome('10111')
False
"""
n = str(n)
return n == n[::-1]
def solution(n: int = 1000000):
"""Return the sum of all numbers, less than n , which are palindromic in
base 10 and base 2.
>>> solution(1000000)
872187
>>> solution(500000)
286602
>>> solution(100000)
286602
>>> solution(1000)
1772
>>> solution(100)
157
>>> solution(10)
25
>>> solution(2)
1
>>> solution(1)
0
"""
total = 0
for i in range(1, n):
if is_palindrome(i) and is_palindrome(bin(i).split("b")[1]):
total += i
return total
if __name__ == "__main__":
print(solution(int(str(input().strip()))))
| """
Project Euler Problem 36
https://projecteuler.net/problem=36
Problem Statement:
Double-base palindromes
Problem 36
The decimal number, 585 = 10010010012 (binary), is palindromic in both bases.
Find the sum of all numbers, less than one million, which are palindromic in
base 10 and base 2.
(Please note that the palindromic number, in either base, may not include
leading zeros.)
"""
from __future__ import annotations
def is_palindrome(n: int | str) -> bool:
"""
Return true if the input n is a palindrome.
Otherwise return false. n can be an integer or a string.
>>> is_palindrome(909)
True
>>> is_palindrome(908)
False
>>> is_palindrome('10101')
True
>>> is_palindrome('10111')
False
"""
n = str(n)
return n == n[::-1]
def solution(n: int = 1000000):
"""Return the sum of all numbers, less than n , which are palindromic in
base 10 and base 2.
>>> solution(1000000)
872187
>>> solution(500000)
286602
>>> solution(100000)
286602
>>> solution(1000)
1772
>>> solution(100)
157
>>> solution(10)
25
>>> solution(2)
1
>>> solution(1)
0
"""
total = 0
for i in range(1, n):
if is_palindrome(i) and is_palindrome(bin(i).split("b")[1]):
total += i
return total
if __name__ == "__main__":
print(solution(int(str(input().strip()))))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
"""
Flip image and bounding box for computer vision task
https://paperswithcode.com/method/randomhorizontalflip
"""
# Params
LABEL_DIR = ""
IMAGE_DIR = ""
OUTPUT_DIR = ""
FLIP_TYPE = 1 # (0 is vertical, 1 is horizontal)
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMAGE_DIR)
print("Processing...")
new_images, new_annos, paths = update_image_and_anno(img_paths, annos, FLIP_TYPE)
for index, image in enumerate(new_images):
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = paths[index].split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_FLIP_{letter_code}"
cv2.imwrite(f"/{file_root}.jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Success {index+1}/{len(new_images)} with {file_name}")
annos_list = []
for anno in new_annos[index]:
obj = f"{anno[0]} {anno[1]} {anno[2]} {anno[3]} {anno[4]}"
annos_list.append(obj)
with open(f"/{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to label include annotation of images
- img_dir <type: str>: Path to folder contain images
Return <type: list>: List of images path and labels
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
boxes.append(
[
int(obj[0]),
float(obj[1]),
float(obj[2]),
float(obj[3]),
float(obj[4]),
]
)
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
img_list: list, anno_list: list, flip_type: int = 1
) -> tuple[list, list, list]:
"""
- img_list <type: list>: list of all images
- anno_list <type: list>: list of all annotations of specific image
- flip_type <type: int>: 0 is vertical, 1 is horizontal
Return:
- new_imgs_list <type: narray>: image after resize
- new_annos_lists <type: list>: list of new annotation after scale
- path_list <type: list>: list the name of image file
"""
new_annos_lists = []
path_list = []
new_imgs_list = []
for idx in range(len(img_list)):
new_annos = []
path = img_list[idx]
path_list.append(path)
img_annos = anno_list[idx]
img = cv2.imread(path)
if flip_type == 1:
new_img = cv2.flip(img, flip_type)
for bbox in img_annos:
x_center_new = 1 - bbox[1]
new_annos.append([bbox[0], x_center_new, bbox[2], bbox[3], bbox[4]])
elif flip_type == 0:
new_img = cv2.flip(img, flip_type)
for bbox in img_annos:
y_center_new = 1 - bbox[2]
new_annos.append([bbox[0], bbox[1], y_center_new, bbox[3], bbox[4]])
new_annos_lists.append(new_annos)
new_imgs_list.append(new_img)
return new_imgs_list, new_annos_lists, path_list
def random_chars(number_char: int = 32) -> str:
"""
Automatic generate random 32 characters.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of character should greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
"""
Flip image and bounding box for computer vision task
https://paperswithcode.com/method/randomhorizontalflip
"""
# Params
LABEL_DIR = ""
IMAGE_DIR = ""
OUTPUT_DIR = ""
FLIP_TYPE = 1 # (0 is vertical, 1 is horizontal)
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMAGE_DIR)
print("Processing...")
new_images, new_annos, paths = update_image_and_anno(img_paths, annos, FLIP_TYPE)
for index, image in enumerate(new_images):
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = paths[index].split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_FLIP_{letter_code}"
cv2.imwrite(f"/{file_root}.jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Success {index+1}/{len(new_images)} with {file_name}")
annos_list = []
for anno in new_annos[index]:
obj = f"{anno[0]} {anno[1]} {anno[2]} {anno[3]} {anno[4]}"
annos_list.append(obj)
with open(f"/{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to label include annotation of images
- img_dir <type: str>: Path to folder contain images
Return <type: list>: List of images path and labels
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
boxes.append(
[
int(obj[0]),
float(obj[1]),
float(obj[2]),
float(obj[3]),
float(obj[4]),
]
)
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
img_list: list, anno_list: list, flip_type: int = 1
) -> tuple[list, list, list]:
"""
- img_list <type: list>: list of all images
- anno_list <type: list>: list of all annotations of specific image
- flip_type <type: int>: 0 is vertical, 1 is horizontal
Return:
- new_imgs_list <type: narray>: image after resize
- new_annos_lists <type: list>: list of new annotation after scale
- path_list <type: list>: list the name of image file
"""
new_annos_lists = []
path_list = []
new_imgs_list = []
for idx in range(len(img_list)):
new_annos = []
path = img_list[idx]
path_list.append(path)
img_annos = anno_list[idx]
img = cv2.imread(path)
if flip_type == 1:
new_img = cv2.flip(img, flip_type)
for bbox in img_annos:
x_center_new = 1 - bbox[1]
new_annos.append([bbox[0], x_center_new, bbox[2], bbox[3], bbox[4]])
elif flip_type == 0:
new_img = cv2.flip(img, flip_type)
for bbox in img_annos:
y_center_new = 1 - bbox[2]
new_annos.append([bbox[0], bbox[1], y_center_new, bbox[3], bbox[4]])
new_annos_lists.append(new_annos)
new_imgs_list.append(new_img)
return new_imgs_list, new_annos_lists, path_list
def random_chars(number_char: int = 32) -> str:
"""
Automatic generate random 32 characters.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of character should greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #
| #
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Youtube Explanation: https://www.youtube.com/watch?v=lBRtnuxg-gU
from __future__ import annotations
def minimum_cost_path(matrix: list[list[int]]) -> int:
"""
Find the minimum cost traced by all possible paths from top left to bottom right in
a given matrix
>>> minimum_cost_path([[2, 1], [3, 1], [4, 2]])
6
>>> minimum_cost_path([[2, 1, 4], [2, 1, 3], [3, 2, 1]])
7
"""
# preprocessing the first row
for i in range(1, len(matrix[0])):
matrix[0][i] += matrix[0][i - 1]
# preprocessing the first column
for i in range(1, len(matrix)):
matrix[i][0] += matrix[i - 1][0]
# updating the path cost for current position
for i in range(1, len(matrix)):
for j in range(1, len(matrix[0])):
matrix[i][j] += min(matrix[i - 1][j], matrix[i][j - 1])
return matrix[-1][-1]
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Youtube Explanation: https://www.youtube.com/watch?v=lBRtnuxg-gU
from __future__ import annotations
def minimum_cost_path(matrix: list[list[int]]) -> int:
"""
Find the minimum cost traced by all possible paths from top left to bottom right in
a given matrix
>>> minimum_cost_path([[2, 1], [3, 1], [4, 2]])
6
>>> minimum_cost_path([[2, 1, 4], [2, 1, 3], [3, 2, 1]])
7
"""
# preprocessing the first row
for i in range(1, len(matrix[0])):
matrix[0][i] += matrix[0][i - 1]
# preprocessing the first column
for i in range(1, len(matrix)):
matrix[i][0] += matrix[i - 1][0]
# updating the path cost for current position
for i in range(1, len(matrix)):
for j in range(1, len(matrix[0])):
matrix[i][j] += min(matrix[i - 1][j], matrix[i][j - 1])
return matrix[-1][-1]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
The maximum subarray sum problem is the task of finding the maximum sum that can be
obtained from a contiguous subarray within a given array of numbers. For example, given
the array [-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum
is [4, -1, 2, 1], so the maximum subarray sum is 6.
Kadane's algorithm is a simple dynamic programming algorithm that solves the maximum
subarray sum problem in O(n) time and O(1) space.
Reference: https://en.wikipedia.org/wiki/Maximum_subarray_problem
"""
from collections.abc import Sequence
def max_subarray_sum(
arr: Sequence[float], allow_empty_subarrays: bool = False
) -> float:
"""
Solves the maximum subarray sum problem using Kadane's algorithm.
:param arr: the given array of numbers
:param allow_empty_subarrays: if True, then the algorithm considers empty subarrays
>>> max_subarray_sum([2, 8, 9])
19
>>> max_subarray_sum([0, 0])
0
>>> max_subarray_sum([-1.0, 0.0, 1.0])
1.0
>>> max_subarray_sum([1, 2, 3, 4, -2])
10
>>> max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])
6
>>> max_subarray_sum([2, 3, -9, 8, -2])
8
>>> max_subarray_sum([-2, -3, -1, -4, -6])
-1
>>> max_subarray_sum([-2, -3, -1, -4, -6], allow_empty_subarrays=True)
0
>>> max_subarray_sum([])
0
"""
if not arr:
return 0
max_sum = 0 if allow_empty_subarrays else float("-inf")
curr_sum = 0.0
for num in arr:
curr_sum = max(0 if allow_empty_subarrays else num, curr_sum + num)
max_sum = max(max_sum, curr_sum)
return max_sum
if __name__ == "__main__":
from doctest import testmod
testmod()
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(f"{max_subarray_sum(nums) = }")
| """
The maximum subarray sum problem is the task of finding the maximum sum that can be
obtained from a contiguous subarray within a given array of numbers. For example, given
the array [-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum
is [4, -1, 2, 1], so the maximum subarray sum is 6.
Kadane's algorithm is a simple dynamic programming algorithm that solves the maximum
subarray sum problem in O(n) time and O(1) space.
Reference: https://en.wikipedia.org/wiki/Maximum_subarray_problem
"""
from collections.abc import Sequence
def max_subarray_sum(
arr: Sequence[float], allow_empty_subarrays: bool = False
) -> float:
"""
Solves the maximum subarray sum problem using Kadane's algorithm.
:param arr: the given array of numbers
:param allow_empty_subarrays: if True, then the algorithm considers empty subarrays
>>> max_subarray_sum([2, 8, 9])
19
>>> max_subarray_sum([0, 0])
0
>>> max_subarray_sum([-1.0, 0.0, 1.0])
1.0
>>> max_subarray_sum([1, 2, 3, 4, -2])
10
>>> max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])
6
>>> max_subarray_sum([2, 3, -9, 8, -2])
8
>>> max_subarray_sum([-2, -3, -1, -4, -6])
-1
>>> max_subarray_sum([-2, -3, -1, -4, -6], allow_empty_subarrays=True)
0
>>> max_subarray_sum([])
0
"""
if not arr:
return 0
max_sum = 0 if allow_empty_subarrays else float("-inf")
curr_sum = 0.0
for num in arr:
curr_sum = max(0 if allow_empty_subarrays else num, curr_sum + num)
max_sum = max(max_sum, curr_sum)
return max_sum
if __name__ == "__main__":
from doctest import testmod
testmod()
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(f"{max_subarray_sum(nums) = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x: float) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x: float) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
An edge is a bridge if, after removing it count of connected components in graph will
be increased by one. Bridges represent vulnerabilities in a connected network and are
useful for designing reliable networks. For example, in a wired computer network, an
articulation point indicates the critical computers and a bridge indicates the critical
wires or connections.
For more details, refer this article:
https://www.geeksforgeeks.org/bridge-in-a-graph/
"""
def __get_demo_graph(index):
return [
{
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
},
{
0: [6],
1: [9],
2: [4, 5],
3: [4],
4: [2, 3],
5: [2],
6: [0, 7],
7: [6],
8: [],
9: [1],
},
{
0: [4],
1: [6],
2: [],
3: [5, 6, 7],
4: [0, 6],
5: [3, 8, 9],
6: [1, 3, 4, 7],
7: [3, 6, 8, 9],
8: [5, 7],
9: [5, 7],
},
{
0: [1, 3],
1: [0, 2, 4],
2: [1, 3, 4],
3: [0, 2, 4],
4: [1, 2, 3],
},
][index]
def compute_bridges(graph: dict[int, list[int]]) -> list[tuple[int, int]]:
"""
Return the list of undirected graph bridges [(a1, b1), ..., (ak, bk)]; ai <= bi
>>> compute_bridges(__get_demo_graph(0))
[(3, 4), (2, 3), (2, 5)]
>>> compute_bridges(__get_demo_graph(1))
[(6, 7), (0, 6), (1, 9), (3, 4), (2, 4), (2, 5)]
>>> compute_bridges(__get_demo_graph(2))
[(1, 6), (4, 6), (0, 4)]
>>> compute_bridges(__get_demo_graph(3))
[]
>>> compute_bridges({})
[]
"""
id_ = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id_):
visited[at] = True
low[at] = id_
id_ += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id_)
low[at] = min(low[at], low[to])
if id_ <= low[to]:
bridges.append((at, to) if at < to else (to, at))
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], low[to])
bridges: list[tuple[int, int]] = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id_)
return bridges
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
An edge is a bridge if, after removing it count of connected components in graph will
be increased by one. Bridges represent vulnerabilities in a connected network and are
useful for designing reliable networks. For example, in a wired computer network, an
articulation point indicates the critical computers and a bridge indicates the critical
wires or connections.
For more details, refer this article:
https://www.geeksforgeeks.org/bridge-in-a-graph/
"""
def __get_demo_graph(index):
return [
{
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
},
{
0: [6],
1: [9],
2: [4, 5],
3: [4],
4: [2, 3],
5: [2],
6: [0, 7],
7: [6],
8: [],
9: [1],
},
{
0: [4],
1: [6],
2: [],
3: [5, 6, 7],
4: [0, 6],
5: [3, 8, 9],
6: [1, 3, 4, 7],
7: [3, 6, 8, 9],
8: [5, 7],
9: [5, 7],
},
{
0: [1, 3],
1: [0, 2, 4],
2: [1, 3, 4],
3: [0, 2, 4],
4: [1, 2, 3],
},
][index]
def compute_bridges(graph: dict[int, list[int]]) -> list[tuple[int, int]]:
"""
Return the list of undirected graph bridges [(a1, b1), ..., (ak, bk)]; ai <= bi
>>> compute_bridges(__get_demo_graph(0))
[(3, 4), (2, 3), (2, 5)]
>>> compute_bridges(__get_demo_graph(1))
[(6, 7), (0, 6), (1, 9), (3, 4), (2, 4), (2, 5)]
>>> compute_bridges(__get_demo_graph(2))
[(1, 6), (4, 6), (0, 4)]
>>> compute_bridges(__get_demo_graph(3))
[]
>>> compute_bridges({})
[]
"""
id_ = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id_):
visited[at] = True
low[at] = id_
id_ += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id_)
low[at] = min(low[at], low[to])
if id_ <= low[to]:
bridges.append((at, to) if at < to else (to, at))
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], low[to])
bridges: list[tuple[int, int]] = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id_)
return bridges
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is a pure Python implementation of the Harmonic Series algorithm
https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
For doctests run following command:
python -m doctest -v harmonic_series.py
or
python3 -m doctest -v harmonic_series.py
For manual testing run:
python3 harmonic_series.py
"""
def harmonic_series(n_term: str) -> list:
"""Pure Python implementation of Harmonic Series algorithm
:param n_term: The last (nth) term of Harmonic Series
:return: The Harmonic Series starting from 1 to last (nth) term
Examples:
>>> harmonic_series(5)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.0)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.1)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(-5)
[]
>>> harmonic_series(0)
[]
>>> harmonic_series(1)
['1']
"""
if n_term == "":
return []
series: list = []
for temp in range(int(n_term)):
series.append(f"1/{temp + 1}" if series else "1")
return series
if __name__ == "__main__":
nth_term = input("Enter the last number (nth term) of the Harmonic Series")
print("Formula of Harmonic Series => 1+1/2+1/3 ..... 1/n")
print(harmonic_series(nth_term))
| """
This is a pure Python implementation of the Harmonic Series algorithm
https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
For doctests run following command:
python -m doctest -v harmonic_series.py
or
python3 -m doctest -v harmonic_series.py
For manual testing run:
python3 harmonic_series.py
"""
def harmonic_series(n_term: str) -> list:
"""Pure Python implementation of Harmonic Series algorithm
:param n_term: The last (nth) term of Harmonic Series
:return: The Harmonic Series starting from 1 to last (nth) term
Examples:
>>> harmonic_series(5)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.0)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.1)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(-5)
[]
>>> harmonic_series(0)
[]
>>> harmonic_series(1)
['1']
"""
if n_term == "":
return []
series: list = []
for temp in range(int(n_term)):
series.append(f"1/{temp + 1}" if series else "1")
return series
if __name__ == "__main__":
nth_term = input("Enter the last number (nth term) of the Harmonic Series")
print("Formula of Harmonic Series => 1+1/2+1/3 ..... 1/n")
print(harmonic_series(nth_term))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Implementation of sequential minimal optimization (SMO) for support vector machines
(SVM).
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic
programming (QP) problem that arises during the training of support vector
machines.
It was invented by John Platt in 1998.
Input:
0: type: numpy.ndarray.
1: first column of ndarray must be tags of samples, must be 1 or -1.
2: rows of ndarray represent samples.
Usage:
Command:
python3 sequential_minimum_optimization.py
Code:
from sequential_minimum_optimization import SmoSVM, Kernel
kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5)
init_alphas = np.zeros(train.shape[0])
SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4,
b=0.0, tolerance=0.001)
SVM.fit()
predict = SVM.predict(test_samples)
Reference:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
"""
import os
import sys
import urllib.request
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs, make_circles
from sklearn.preprocessing import StandardScaler
CANCER_DATASET_URL = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/"
"breast-cancer-wisconsin/wdbc.data"
)
class SmoSVM:
def __init__(
self,
train,
kernel_func,
alpha_list=None,
cost=0.4,
b=0.0,
tolerance=0.001,
auto_norm=True,
):
self._init = True
self._auto_norm = auto_norm
self._c = np.float64(cost)
self._b = np.float64(b)
self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001)
self.tags = train[:, 0]
self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:]
self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0])
self.Kernel = kernel_func
self._eps = 0.001
self._all_samples = list(range(self.length))
self._K_matrix = self._calculate_k_matrix()
self._error = np.zeros(self.length)
self._unbound = []
self.choose_alpha = self._choose_alphas()
# Calculate alphas using SMO algorithm
def fit(self):
k = self._k
state = None
while True:
# 1: Find alpha1, alpha2
try:
i1, i2 = self.choose_alpha.send(state)
state = None
except StopIteration:
print("Optimization done!\nEvery sample satisfy the KKT condition!")
break
# 2: calculate new alpha2 and new alpha1
y1, y2 = self.tags[i1], self.tags[i2]
a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy()
e1, e2 = self._e(i1), self._e(i2)
args = (i1, i2, a1, a2, e1, e2, y1, y2)
a1_new, a2_new = self._get_new_alpha(*args)
if not a1_new and not a2_new:
state = False
continue
self.alphas[i1], self.alphas[i2] = a1_new, a2_new
# 3: update threshold(b)
b1_new = np.float64(
-e1
- y1 * k(i1, i1) * (a1_new - a1)
- y2 * k(i2, i1) * (a2_new - a2)
+ self._b
)
b2_new = np.float64(
-e2
- y2 * k(i2, i2) * (a2_new - a2)
- y1 * k(i1, i2) * (a1_new - a1)
+ self._b
)
if 0.0 < a1_new < self._c:
b = b1_new
if 0.0 < a2_new < self._c:
b = b2_new
if not (np.float64(0) < a2_new < self._c) and not (
np.float64(0) < a1_new < self._c
):
b = (b1_new + b2_new) / 2.0
b_old = self._b
self._b = b
# 4: update error value,here we only calculate those non-bound samples'
# error
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
if s in (i1, i2):
continue
self._error[s] += (
y1 * (a1_new - a1) * k(i1, s)
+ y2 * (a2_new - a2) * k(i2, s)
+ (self._b - b_old)
)
# if i1 or i2 is non-bound,update there error value to zero
if self._is_unbound(i1):
self._error[i1] = 0
if self._is_unbound(i2):
self._error[i2] = 0
# Predict test samples
def predict(self, test_samples, classify=True):
if test_samples.shape[1] > self.samples.shape[1]:
raise ValueError(
"Test samples' feature length does not equal to that of train samples"
)
if self._auto_norm:
test_samples = self._norm(test_samples)
results = []
for test_sample in test_samples:
result = self._predict(test_sample)
if classify:
results.append(1 if result > 0 else -1)
else:
results.append(result)
return np.array(results)
# Check if alpha violate KKT condition
def _check_obey_kkt(self, index):
alphas = self.alphas
tol = self._tol
r = self._e(index) * self.tags[index]
c = self._c
return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0)
# Get value calculated from kernel function
def _k(self, i1, i2):
# for test samples,use Kernel function
if isinstance(i2, np.ndarray):
return self.Kernel(self.samples[i1], i2)
# for train samples,Kernel values have been saved in matrix
else:
return self._K_matrix[i1, i2]
# Get sample's error
def _e(self, index):
"""
Two cases:
1:Sample[index] is non-bound,Fetch error from list: _error
2:sample[index] is bound,Use predicted value deduct true value: g(xi) - yi
"""
# get from error data
if self._is_unbound(index):
return self._error[index]
# get by g(xi) - yi
else:
gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b
yi = self.tags[index]
return gx - yi
# Calculate Kernel matrix of all possible i1,i2 ,saving time
def _calculate_k_matrix(self):
k_matrix = np.zeros([self.length, self.length])
for i in self._all_samples:
for j in self._all_samples:
k_matrix[i, j] = np.float64(
self.Kernel(self.samples[i, :], self.samples[j, :])
)
return k_matrix
# Predict test sample's tag
def _predict(self, sample):
k = self._k
predicted_value = (
np.sum(
[
self.alphas[i1] * self.tags[i1] * k(i1, sample)
for i1 in self._all_samples
]
)
+ self._b
)
return predicted_value
# Choose alpha1 and alpha2
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
return None
return locis
def _choose_a1(self):
"""
Choose first alpha ;steps:
1:First loop over all sample
2:Second loop over all non-bound samples till all non-bound samples does not
voilate kkt condition.
3:Repeat this two process endlessly,till all samples does not voilate kkt
condition samples after first loop.
"""
while True:
all_not_obey = True
# all sample
print("scanning all sample!")
for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]:
all_not_obey = False
yield from self._choose_a2(i1)
# non-bound sample
print("scanning non-bound sample!")
while True:
not_obey = True
for i1 in [
i
for i in self._all_samples
if self._check_obey_kkt(i) and self._is_unbound(i)
]:
not_obey = False
yield from self._choose_a2(i1)
if not_obey:
print("all non-bound samples fit the KKT condition!")
break
if all_not_obey:
print("all samples fit the KKT condition! Optimization done!")
break
return False
def _choose_a2(self, i1):
"""
Choose the second alpha by using heuristic algorithm ;steps:
1: Choose alpha2 which gets the maximum step size (|E1 - E2|).
2: Start in a random point,loop over all non-bound samples till alpha1 and
alpha2 are optimized.
3: Start in a random point,loop over all samples till alpha1 and alpha2 are
optimized.
"""
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
if len(self.unbound) > 0:
tmp_error = self._error.copy().tolist()
tmp_error_dict = {
index: value
for index, value in enumerate(tmp_error)
if self._is_unbound(index)
}
if self._e(i1) >= 0:
i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index])
else:
i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index])
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self.unbound, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self._all_samples, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
# Get the new alpha2 and new alpha1
def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2):
k = self._k
if i1 == i2:
return None, None
# calculate L and H which bound the new alpha2
s = y1 * y2
if s == -1:
l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1)
else:
l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1)
if l == h:
return None, None
# calculate eta
k11 = k(i1, i1)
k22 = k(i2, i2)
k12 = k(i1, i2)
# select the new alpha2 which could get the minimal objectives
if (eta := k11 + k22 - 2.0 * k12) > 0.0:
a2_new_unc = a2 + (y2 * (e1 - e2)) / eta
# a2_new has a boundary
if a2_new_unc >= h:
a2_new = h
elif a2_new_unc <= l:
a2_new = l
else:
a2_new = a2_new_unc
else:
b = self._b
l1 = a1 + s * (a2 - l)
h1 = a1 + s * (a2 - h)
# way 1
f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2)
f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2)
ol = (
l1 * f1
+ l * f2
+ 1 / 2 * l1**2 * k(i1, i1)
+ 1 / 2 * l**2 * k(i2, i2)
+ s * l * l1 * k(i1, i2)
)
oh = (
h1 * f1
+ h * f2
+ 1 / 2 * h1**2 * k(i1, i1)
+ 1 / 2 * h**2 * k(i2, i2)
+ s * h * h1 * k(i1, i2)
)
"""
# way 2
Use objective function check which alpha2 new could get the minimal
objectives
"""
if ol < (oh - self._eps):
a2_new = l
elif ol > oh + self._eps:
a2_new = h
else:
a2_new = a2
# a1_new has a boundary too
a1_new = a1 + s * (a2 - a2_new)
if a1_new < 0:
a2_new += s * a1_new
a1_new = 0
if a1_new > self._c:
a2_new += s * (a1_new - self._c)
a1_new = self._c
return a1_new, a2_new
# Normalise data using min_max way
def _norm(self, data):
if self._init:
self._min = np.min(data, axis=0)
self._max = np.max(data, axis=0)
self._init = False
return (data - self._min) / (self._max - self._min)
else:
return (data - self._min) / (self._max - self._min)
def _is_unbound(self, index):
return bool(0.0 < self.alphas[index] < self._c)
def _is_support(self, index):
return bool(self.alphas[index] > 0)
@property
def unbound(self):
return self._unbound
@property
def support(self):
return [i for i in range(self.length) if self._is_support(i)]
@property
def length(self):
return self.samples.shape[0]
class Kernel:
def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0):
self.degree = np.float64(degree)
self.coef0 = np.float64(coef0)
self.gamma = np.float64(gamma)
self._kernel_name = kernel
self._kernel = self._get_kernel(kernel_name=kernel)
self._check()
def _polynomial(self, v1, v2):
return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree
def _linear(self, v1, v2):
return np.inner(v1, v2) + self.coef0
def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
if self._kernel == self._rbf and self.gamma < 0:
raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
return maps[kernel_name]
def __call__(self, v1, v2):
return self._kernel(v1, v2)
def __repr__(self):
return self._kernel_name
def count_time(func):
def call_func(*args, **kwargs):
import time
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print(f"smo algorithm cost {end_time - start_time} seconds")
return call_func
@count_time
def test_cancel_data():
print("Hello!\nStart test svm by smo algorithm!")
# 0: download dataset and load into pandas' dataframe
if not os.path.exists(r"cancel_data.csv"):
request = urllib.request.Request(
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
response = urllib.request.urlopen(request) # noqa: S310
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
data = pd.read_csv(r"cancel_data.csv", header=None)
# 1: pre-processing data
del data[data.columns.tolist()[0]]
data = data.dropna(axis=0)
data = data.replace({"M": np.float64(1), "B": np.float64(-1)})
samples = np.array(data)[:, :]
# 2: dividing data into train_data data and test_data data
train_data, test_data = samples[:328, :], samples[328:, :]
test_tags, test_samples = test_data[:, 0], test_data[:, 1:]
# 3: choose kernel function,and set initial alphas to zero(optional)
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
al = np.zeros(train_data.shape[0])
# 4: calculating best alphas using SMO algorithm and predict test_data samples
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
alpha_list=al,
cost=0.4,
b=0.0,
tolerance=0.001,
)
mysvm.fit()
predict = mysvm.predict(test_samples)
# 5: check accuracy
score = 0
test_num = test_tags.shape[0]
for i in range(test_tags.shape[0]):
if test_tags[i] == predict[i]:
score += 1
print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}")
print(f"Rough Accuracy: {score / test_tags.shape[0]}")
def test_demonstration():
# change stdout
print("\nStart plot,please wait!!!")
sys.stdout = open(os.devnull, "w")
ax1 = plt.subplot2grid((2, 2), (0, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1))
ax3 = plt.subplot2grid((2, 2), (1, 0))
ax4 = plt.subplot2grid((2, 2), (1, 1))
ax1.set_title("linear svm,cost:0.1")
test_linear_kernel(ax1, cost=0.1)
ax2.set_title("linear svm,cost:500")
test_linear_kernel(ax2, cost=500)
ax3.set_title("rbf kernel svm,cost:0.1")
test_rbf_kernel(ax3, cost=0.1)
ax4.set_title("rbf kernel svm,cost:500")
test_rbf_kernel(ax4, cost=500)
sys.stdout = sys.__stdout__
print("Plot done!!!")
def test_linear_kernel(ax, cost):
train_x, train_y = make_blobs(
n_samples=500, centers=2, n_features=2, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def test_rbf_kernel(ax, cost):
train_x, train_y = make_circles(
n_samples=500, noise=0.1, factor=0.1, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def plot_partition_boundary(
model, train_data, ax, resolution=100, colors=("b", "k", "r")
):
"""
We can not get the optimum w of our kernel svm model which is different from linear
svm. For this reason, we generate randomly distributed points with high desity and
prediced values of these points are calculated by using our trained model. Then we
could use this prediced values to draw contour map.
And this contour map can represent svm's partition boundary.
"""
train_data_x = train_data[:, 1]
train_data_y = train_data[:, 2]
train_data_tags = train_data[:, 0]
xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution)
yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution)
test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape(
resolution * resolution, 2
)
test_tags = model.predict(test_samples, classify=False)
grid = test_tags.reshape((len(xrange), len(yrange)))
# Plot contour map which represents the partition boundary
ax.contour(
xrange,
yrange,
np.mat(grid).T,
levels=(-1, 0, 1),
linestyles=("--", "-", "--"),
linewidths=(1, 1, 1),
colors=colors,
)
# Plot all train samples
ax.scatter(
train_data_x,
train_data_y,
c=train_data_tags,
cmap=plt.cm.Dark2,
lw=0,
alpha=0.5,
)
# Plot support vectors
support = model.support
ax.scatter(
train_data_x[support],
train_data_y[support],
c=train_data_tags[support],
cmap=plt.cm.Dark2,
)
if __name__ == "__main__":
test_cancel_data()
test_demonstration()
plt.show()
| """
Implementation of sequential minimal optimization (SMO) for support vector machines
(SVM).
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic
programming (QP) problem that arises during the training of support vector
machines.
It was invented by John Platt in 1998.
Input:
0: type: numpy.ndarray.
1: first column of ndarray must be tags of samples, must be 1 or -1.
2: rows of ndarray represent samples.
Usage:
Command:
python3 sequential_minimum_optimization.py
Code:
from sequential_minimum_optimization import SmoSVM, Kernel
kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5)
init_alphas = np.zeros(train.shape[0])
SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4,
b=0.0, tolerance=0.001)
SVM.fit()
predict = SVM.predict(test_samples)
Reference:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
"""
import os
import sys
import urllib.request
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs, make_circles
from sklearn.preprocessing import StandardScaler
CANCER_DATASET_URL = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/"
"breast-cancer-wisconsin/wdbc.data"
)
class SmoSVM:
def __init__(
self,
train,
kernel_func,
alpha_list=None,
cost=0.4,
b=0.0,
tolerance=0.001,
auto_norm=True,
):
self._init = True
self._auto_norm = auto_norm
self._c = np.float64(cost)
self._b = np.float64(b)
self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001)
self.tags = train[:, 0]
self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:]
self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0])
self.Kernel = kernel_func
self._eps = 0.001
self._all_samples = list(range(self.length))
self._K_matrix = self._calculate_k_matrix()
self._error = np.zeros(self.length)
self._unbound = []
self.choose_alpha = self._choose_alphas()
# Calculate alphas using SMO algorithm
def fit(self):
k = self._k
state = None
while True:
# 1: Find alpha1, alpha2
try:
i1, i2 = self.choose_alpha.send(state)
state = None
except StopIteration:
print("Optimization done!\nEvery sample satisfy the KKT condition!")
break
# 2: calculate new alpha2 and new alpha1
y1, y2 = self.tags[i1], self.tags[i2]
a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy()
e1, e2 = self._e(i1), self._e(i2)
args = (i1, i2, a1, a2, e1, e2, y1, y2)
a1_new, a2_new = self._get_new_alpha(*args)
if not a1_new and not a2_new:
state = False
continue
self.alphas[i1], self.alphas[i2] = a1_new, a2_new
# 3: update threshold(b)
b1_new = np.float64(
-e1
- y1 * k(i1, i1) * (a1_new - a1)
- y2 * k(i2, i1) * (a2_new - a2)
+ self._b
)
b2_new = np.float64(
-e2
- y2 * k(i2, i2) * (a2_new - a2)
- y1 * k(i1, i2) * (a1_new - a1)
+ self._b
)
if 0.0 < a1_new < self._c:
b = b1_new
if 0.0 < a2_new < self._c:
b = b2_new
if not (np.float64(0) < a2_new < self._c) and not (
np.float64(0) < a1_new < self._c
):
b = (b1_new + b2_new) / 2.0
b_old = self._b
self._b = b
# 4: update error value,here we only calculate those non-bound samples'
# error
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
if s in (i1, i2):
continue
self._error[s] += (
y1 * (a1_new - a1) * k(i1, s)
+ y2 * (a2_new - a2) * k(i2, s)
+ (self._b - b_old)
)
# if i1 or i2 is non-bound,update there error value to zero
if self._is_unbound(i1):
self._error[i1] = 0
if self._is_unbound(i2):
self._error[i2] = 0
# Predict test samples
def predict(self, test_samples, classify=True):
if test_samples.shape[1] > self.samples.shape[1]:
raise ValueError(
"Test samples' feature length does not equal to that of train samples"
)
if self._auto_norm:
test_samples = self._norm(test_samples)
results = []
for test_sample in test_samples:
result = self._predict(test_sample)
if classify:
results.append(1 if result > 0 else -1)
else:
results.append(result)
return np.array(results)
# Check if alpha violate KKT condition
def _check_obey_kkt(self, index):
alphas = self.alphas
tol = self._tol
r = self._e(index) * self.tags[index]
c = self._c
return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0)
# Get value calculated from kernel function
def _k(self, i1, i2):
# for test samples,use Kernel function
if isinstance(i2, np.ndarray):
return self.Kernel(self.samples[i1], i2)
# for train samples,Kernel values have been saved in matrix
else:
return self._K_matrix[i1, i2]
# Get sample's error
def _e(self, index):
"""
Two cases:
1:Sample[index] is non-bound,Fetch error from list: _error
2:sample[index] is bound,Use predicted value deduct true value: g(xi) - yi
"""
# get from error data
if self._is_unbound(index):
return self._error[index]
# get by g(xi) - yi
else:
gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b
yi = self.tags[index]
return gx - yi
# Calculate Kernel matrix of all possible i1,i2 ,saving time
def _calculate_k_matrix(self):
k_matrix = np.zeros([self.length, self.length])
for i in self._all_samples:
for j in self._all_samples:
k_matrix[i, j] = np.float64(
self.Kernel(self.samples[i, :], self.samples[j, :])
)
return k_matrix
# Predict test sample's tag
def _predict(self, sample):
k = self._k
predicted_value = (
np.sum(
[
self.alphas[i1] * self.tags[i1] * k(i1, sample)
for i1 in self._all_samples
]
)
+ self._b
)
return predicted_value
# Choose alpha1 and alpha2
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
return None
return locis
def _choose_a1(self):
"""
Choose first alpha ;steps:
1:First loop over all sample
2:Second loop over all non-bound samples till all non-bound samples does not
voilate kkt condition.
3:Repeat this two process endlessly,till all samples does not voilate kkt
condition samples after first loop.
"""
while True:
all_not_obey = True
# all sample
print("scanning all sample!")
for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]:
all_not_obey = False
yield from self._choose_a2(i1)
# non-bound sample
print("scanning non-bound sample!")
while True:
not_obey = True
for i1 in [
i
for i in self._all_samples
if self._check_obey_kkt(i) and self._is_unbound(i)
]:
not_obey = False
yield from self._choose_a2(i1)
if not_obey:
print("all non-bound samples fit the KKT condition!")
break
if all_not_obey:
print("all samples fit the KKT condition! Optimization done!")
break
return False
def _choose_a2(self, i1):
"""
Choose the second alpha by using heuristic algorithm ;steps:
1: Choose alpha2 which gets the maximum step size (|E1 - E2|).
2: Start in a random point,loop over all non-bound samples till alpha1 and
alpha2 are optimized.
3: Start in a random point,loop over all samples till alpha1 and alpha2 are
optimized.
"""
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
if len(self.unbound) > 0:
tmp_error = self._error.copy().tolist()
tmp_error_dict = {
index: value
for index, value in enumerate(tmp_error)
if self._is_unbound(index)
}
if self._e(i1) >= 0:
i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index])
else:
i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index])
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self.unbound, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self._all_samples, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
# Get the new alpha2 and new alpha1
def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2):
k = self._k
if i1 == i2:
return None, None
# calculate L and H which bound the new alpha2
s = y1 * y2
if s == -1:
l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1)
else:
l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1)
if l == h:
return None, None
# calculate eta
k11 = k(i1, i1)
k22 = k(i2, i2)
k12 = k(i1, i2)
# select the new alpha2 which could get the minimal objectives
if (eta := k11 + k22 - 2.0 * k12) > 0.0:
a2_new_unc = a2 + (y2 * (e1 - e2)) / eta
# a2_new has a boundary
if a2_new_unc >= h:
a2_new = h
elif a2_new_unc <= l:
a2_new = l
else:
a2_new = a2_new_unc
else:
b = self._b
l1 = a1 + s * (a2 - l)
h1 = a1 + s * (a2 - h)
# way 1
f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2)
f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2)
ol = (
l1 * f1
+ l * f2
+ 1 / 2 * l1**2 * k(i1, i1)
+ 1 / 2 * l**2 * k(i2, i2)
+ s * l * l1 * k(i1, i2)
)
oh = (
h1 * f1
+ h * f2
+ 1 / 2 * h1**2 * k(i1, i1)
+ 1 / 2 * h**2 * k(i2, i2)
+ s * h * h1 * k(i1, i2)
)
"""
# way 2
Use objective function check which alpha2 new could get the minimal
objectives
"""
if ol < (oh - self._eps):
a2_new = l
elif ol > oh + self._eps:
a2_new = h
else:
a2_new = a2
# a1_new has a boundary too
a1_new = a1 + s * (a2 - a2_new)
if a1_new < 0:
a2_new += s * a1_new
a1_new = 0
if a1_new > self._c:
a2_new += s * (a1_new - self._c)
a1_new = self._c
return a1_new, a2_new
# Normalise data using min_max way
def _norm(self, data):
if self._init:
self._min = np.min(data, axis=0)
self._max = np.max(data, axis=0)
self._init = False
return (data - self._min) / (self._max - self._min)
else:
return (data - self._min) / (self._max - self._min)
def _is_unbound(self, index):
return bool(0.0 < self.alphas[index] < self._c)
def _is_support(self, index):
return bool(self.alphas[index] > 0)
@property
def unbound(self):
return self._unbound
@property
def support(self):
return [i for i in range(self.length) if self._is_support(i)]
@property
def length(self):
return self.samples.shape[0]
class Kernel:
def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0):
self.degree = np.float64(degree)
self.coef0 = np.float64(coef0)
self.gamma = np.float64(gamma)
self._kernel_name = kernel
self._kernel = self._get_kernel(kernel_name=kernel)
self._check()
def _polynomial(self, v1, v2):
return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree
def _linear(self, v1, v2):
return np.inner(v1, v2) + self.coef0
def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
if self._kernel == self._rbf and self.gamma < 0:
raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
return maps[kernel_name]
def __call__(self, v1, v2):
return self._kernel(v1, v2)
def __repr__(self):
return self._kernel_name
def count_time(func):
def call_func(*args, **kwargs):
import time
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print(f"smo algorithm cost {end_time - start_time} seconds")
return call_func
@count_time
def test_cancel_data():
print("Hello!\nStart test svm by smo algorithm!")
# 0: download dataset and load into pandas' dataframe
if not os.path.exists(r"cancel_data.csv"):
request = urllib.request.Request(
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
response = urllib.request.urlopen(request) # noqa: S310
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
data = pd.read_csv(r"cancel_data.csv", header=None)
# 1: pre-processing data
del data[data.columns.tolist()[0]]
data = data.dropna(axis=0)
data = data.replace({"M": np.float64(1), "B": np.float64(-1)})
samples = np.array(data)[:, :]
# 2: dividing data into train_data data and test_data data
train_data, test_data = samples[:328, :], samples[328:, :]
test_tags, test_samples = test_data[:, 0], test_data[:, 1:]
# 3: choose kernel function,and set initial alphas to zero(optional)
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
al = np.zeros(train_data.shape[0])
# 4: calculating best alphas using SMO algorithm and predict test_data samples
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
alpha_list=al,
cost=0.4,
b=0.0,
tolerance=0.001,
)
mysvm.fit()
predict = mysvm.predict(test_samples)
# 5: check accuracy
score = 0
test_num = test_tags.shape[0]
for i in range(test_tags.shape[0]):
if test_tags[i] == predict[i]:
score += 1
print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}")
print(f"Rough Accuracy: {score / test_tags.shape[0]}")
def test_demonstration():
# change stdout
print("\nStart plot,please wait!!!")
sys.stdout = open(os.devnull, "w")
ax1 = plt.subplot2grid((2, 2), (0, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1))
ax3 = plt.subplot2grid((2, 2), (1, 0))
ax4 = plt.subplot2grid((2, 2), (1, 1))
ax1.set_title("linear svm,cost:0.1")
test_linear_kernel(ax1, cost=0.1)
ax2.set_title("linear svm,cost:500")
test_linear_kernel(ax2, cost=500)
ax3.set_title("rbf kernel svm,cost:0.1")
test_rbf_kernel(ax3, cost=0.1)
ax4.set_title("rbf kernel svm,cost:500")
test_rbf_kernel(ax4, cost=500)
sys.stdout = sys.__stdout__
print("Plot done!!!")
def test_linear_kernel(ax, cost):
train_x, train_y = make_blobs(
n_samples=500, centers=2, n_features=2, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def test_rbf_kernel(ax, cost):
train_x, train_y = make_circles(
n_samples=500, noise=0.1, factor=0.1, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def plot_partition_boundary(
model, train_data, ax, resolution=100, colors=("b", "k", "r")
):
"""
We can not get the optimum w of our kernel svm model which is different from linear
svm. For this reason, we generate randomly distributed points with high desity and
prediced values of these points are calculated by using our trained model. Then we
could use this prediced values to draw contour map.
And this contour map can represent svm's partition boundary.
"""
train_data_x = train_data[:, 1]
train_data_y = train_data[:, 2]
train_data_tags = train_data[:, 0]
xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution)
yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution)
test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape(
resolution * resolution, 2
)
test_tags = model.predict(test_samples, classify=False)
grid = test_tags.reshape((len(xrange), len(yrange)))
# Plot contour map which represents the partition boundary
ax.contour(
xrange,
yrange,
np.mat(grid).T,
levels=(-1, 0, 1),
linestyles=("--", "-", "--"),
linewidths=(1, 1, 1),
colors=colors,
)
# Plot all train samples
ax.scatter(
train_data_x,
train_data_y,
c=train_data_tags,
cmap=plt.cm.Dark2,
lw=0,
alpha=0.5,
)
# Plot support vectors
support = model.support
ax.scatter(
train_data_x[support],
train_data_y[support],
c=train_data_tags[support],
cmap=plt.cm.Dark2,
)
if __name__ == "__main__":
test_cancel_data()
test_demonstration()
plt.show()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import numpy as np
from PIL import Image
def rgb2gray(rgb: np.array) -> np.array:
"""
Return gray image from rgb image
>>> rgb2gray(np.array([[[127, 255, 0]]]))
array([[187.6453]])
>>> rgb2gray(np.array([[[0, 0, 0]]]))
array([[0.]])
>>> rgb2gray(np.array([[[2, 4, 1]]]))
array([[3.0598]])
>>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
array([[159.0524, 90.0635, 117.6989]])
"""
r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
return 0.2989 * r + 0.5870 * g + 0.1140 * b
def gray2binary(gray: np.array) -> np.array:
"""
Return binary image from gray image
>>> gray2binary(np.array([[127, 255, 0]]))
array([[False, True, False]])
>>> gray2binary(np.array([[0]]))
array([[False]])
>>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]]))
array([[False, False, False]])
>>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
array([[False, True, False],
[False, True, False],
[False, True, False]])
"""
return (gray > 127) & (gray <= 255)
def erosion(image: np.array, kernel: np.array) -> np.array:
"""
Return eroded image
>>> erosion(np.array([[True, True, False]]), np.array([[0, 1, 0]]))
array([[False, False, False]])
>>> erosion(np.array([[True, False, False]]), np.array([[1, 1, 0]]))
array([[False, False, False]])
"""
output = np.zeros_like(image)
image_padded = np.zeros(
(image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1)
)
# Copy image to padded image
image_padded[kernel.shape[0] - 2 : -1 :, kernel.shape[1] - 2 : -1 :] = image
# Iterate over image & apply kernel
for x in range(image.shape[1]):
for y in range(image.shape[0]):
summation = (
kernel * image_padded[y : y + kernel.shape[0], x : x + kernel.shape[1]]
).sum()
output[y, x] = int(summation == 5)
return output
# kernel to be applied
structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
if __name__ == "__main__":
# read original image
image = np.array(Image.open(r"..\image_data\lena.jpg"))
# Apply erosion operation to a binary image
output = erosion(gray2binary(rgb2gray(image)), structuring_element)
# Save the output image
pil_img = Image.fromarray(output).convert("RGB")
pil_img.save("result_erosion.png")
| import numpy as np
from PIL import Image
def rgb2gray(rgb: np.array) -> np.array:
"""
Return gray image from rgb image
>>> rgb2gray(np.array([[[127, 255, 0]]]))
array([[187.6453]])
>>> rgb2gray(np.array([[[0, 0, 0]]]))
array([[0.]])
>>> rgb2gray(np.array([[[2, 4, 1]]]))
array([[3.0598]])
>>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
array([[159.0524, 90.0635, 117.6989]])
"""
r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
return 0.2989 * r + 0.5870 * g + 0.1140 * b
def gray2binary(gray: np.array) -> np.array:
"""
Return binary image from gray image
>>> gray2binary(np.array([[127, 255, 0]]))
array([[False, True, False]])
>>> gray2binary(np.array([[0]]))
array([[False]])
>>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]]))
array([[False, False, False]])
>>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
array([[False, True, False],
[False, True, False],
[False, True, False]])
"""
return (gray > 127) & (gray <= 255)
def erosion(image: np.array, kernel: np.array) -> np.array:
"""
Return eroded image
>>> erosion(np.array([[True, True, False]]), np.array([[0, 1, 0]]))
array([[False, False, False]])
>>> erosion(np.array([[True, False, False]]), np.array([[1, 1, 0]]))
array([[False, False, False]])
"""
output = np.zeros_like(image)
image_padded = np.zeros(
(image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1)
)
# Copy image to padded image
image_padded[kernel.shape[0] - 2 : -1 :, kernel.shape[1] - 2 : -1 :] = image
# Iterate over image & apply kernel
for x in range(image.shape[1]):
for y in range(image.shape[0]):
summation = (
kernel * image_padded[y : y + kernel.shape[0], x : x + kernel.shape[1]]
).sum()
output[y, x] = int(summation == 5)
return output
# kernel to be applied
structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
if __name__ == "__main__":
# read original image
image = np.array(Image.open(r"..\image_data\lena.jpg"))
# Apply erosion operation to a binary image
output = erosion(gray2binary(rgb2gray(image)), structuring_element)
# Save the output image
pil_img = Image.fromarray(output).convert("RGB")
pil_img.save("result_erosion.png")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 203: https://projecteuler.net/problem=203
The binomial coefficients (n k) can be arranged in triangular form, Pascal's
triangle, like this:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
.........
It can be seen that the first eight rows of Pascal's triangle contain twelve
distinct numbers: 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, 21 and 35.
A positive integer n is called squarefree if no square of a prime divides n.
Of the twelve distinct numbers in the first eight rows of Pascal's triangle,
all except 4 and 20 are squarefree. The sum of the distinct squarefree numbers
in the first eight rows is 105.
Find the sum of the distinct squarefree numbers in the first 51 rows of
Pascal's triangle.
References:
- https://en.wikipedia.org/wiki/Pascal%27s_triangle
"""
from __future__ import annotations
def get_pascal_triangle_unique_coefficients(depth: int) -> set[int]:
"""
Returns the unique coefficients of a Pascal's triangle of depth "depth".
The coefficients of this triangle are symmetric. A further improvement to this
method could be to calculate the coefficients once per level. Nonetheless,
the current implementation is fast enough for the original problem.
>>> get_pascal_triangle_unique_coefficients(1)
{1}
>>> get_pascal_triangle_unique_coefficients(2)
{1}
>>> get_pascal_triangle_unique_coefficients(3)
{1, 2}
>>> get_pascal_triangle_unique_coefficients(8)
{1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}
"""
coefficients = {1}
previous_coefficients = [1]
for _ in range(2, depth + 1):
coefficients_begins_one = [*previous_coefficients, 0]
coefficients_ends_one = [0, *previous_coefficients]
previous_coefficients = []
for x, y in zip(coefficients_begins_one, coefficients_ends_one):
coefficients.add(x + y)
previous_coefficients.append(x + y)
return coefficients
def get_squarefrees(unique_coefficients: set[int]) -> set[int]:
"""
Calculates the squarefree numbers inside unique_coefficients.
Based on the definition of a non-squarefree number, then any non-squarefree
n can be decomposed as n = p*p*r, where p is positive prime number and r
is a positive integer.
Under the previous formula, any coefficient that is lower than p*p is
squarefree as r cannot be negative. On the contrary, if any r exists such
that n = p*p*r, then the number is non-squarefree.
>>> get_squarefrees({1})
{1}
>>> get_squarefrees({1, 2})
{1, 2}
>>> get_squarefrees({1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21})
{1, 2, 3, 5, 6, 7, 35, 10, 15, 21}
"""
non_squarefrees = set()
for number in unique_coefficients:
divisor = 2
copy_number = number
while divisor**2 <= copy_number:
multiplicity = 0
while copy_number % divisor == 0:
copy_number //= divisor
multiplicity += 1
if multiplicity >= 2:
non_squarefrees.add(number)
break
divisor += 1
return unique_coefficients.difference(non_squarefrees)
def solution(n: int = 51) -> int:
"""
Returns the sum of squarefrees for a given Pascal's Triangle of depth n.
>>> solution(1)
1
>>> solution(8)
105
>>> solution(9)
175
"""
unique_coefficients = get_pascal_triangle_unique_coefficients(n)
squarefrees = get_squarefrees(unique_coefficients)
return sum(squarefrees)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 203: https://projecteuler.net/problem=203
The binomial coefficients (n k) can be arranged in triangular form, Pascal's
triangle, like this:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
.........
It can be seen that the first eight rows of Pascal's triangle contain twelve
distinct numbers: 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, 21 and 35.
A positive integer n is called squarefree if no square of a prime divides n.
Of the twelve distinct numbers in the first eight rows of Pascal's triangle,
all except 4 and 20 are squarefree. The sum of the distinct squarefree numbers
in the first eight rows is 105.
Find the sum of the distinct squarefree numbers in the first 51 rows of
Pascal's triangle.
References:
- https://en.wikipedia.org/wiki/Pascal%27s_triangle
"""
from __future__ import annotations
def get_pascal_triangle_unique_coefficients(depth: int) -> set[int]:
"""
Returns the unique coefficients of a Pascal's triangle of depth "depth".
The coefficients of this triangle are symmetric. A further improvement to this
method could be to calculate the coefficients once per level. Nonetheless,
the current implementation is fast enough for the original problem.
>>> get_pascal_triangle_unique_coefficients(1)
{1}
>>> get_pascal_triangle_unique_coefficients(2)
{1}
>>> get_pascal_triangle_unique_coefficients(3)
{1, 2}
>>> get_pascal_triangle_unique_coefficients(8)
{1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}
"""
coefficients = {1}
previous_coefficients = [1]
for _ in range(2, depth + 1):
coefficients_begins_one = [*previous_coefficients, 0]
coefficients_ends_one = [0, *previous_coefficients]
previous_coefficients = []
for x, y in zip(coefficients_begins_one, coefficients_ends_one):
coefficients.add(x + y)
previous_coefficients.append(x + y)
return coefficients
def get_squarefrees(unique_coefficients: set[int]) -> set[int]:
"""
Calculates the squarefree numbers inside unique_coefficients.
Based on the definition of a non-squarefree number, then any non-squarefree
n can be decomposed as n = p*p*r, where p is positive prime number and r
is a positive integer.
Under the previous formula, any coefficient that is lower than p*p is
squarefree as r cannot be negative. On the contrary, if any r exists such
that n = p*p*r, then the number is non-squarefree.
>>> get_squarefrees({1})
{1}
>>> get_squarefrees({1, 2})
{1, 2}
>>> get_squarefrees({1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21})
{1, 2, 3, 5, 6, 7, 35, 10, 15, 21}
"""
non_squarefrees = set()
for number in unique_coefficients:
divisor = 2
copy_number = number
while divisor**2 <= copy_number:
multiplicity = 0
while copy_number % divisor == 0:
copy_number //= divisor
multiplicity += 1
if multiplicity >= 2:
non_squarefrees.add(number)
break
divisor += 1
return unique_coefficients.difference(non_squarefrees)
def solution(n: int = 51) -> int:
"""
Returns the sum of squarefrees for a given Pascal's Triangle of depth n.
>>> solution(1)
1
>>> solution(8)
105
>>> solution(9)
175
"""
unique_coefficients = get_pascal_triangle_unique_coefficients(n)
squarefrees = get_squarefrees(unique_coefficients)
return sum(squarefrees)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
developed by: markmelnic
original repo: https://github.com/markmelnic/Scoring-Algorithm
Analyse data using a range based percentual proximity algorithm
and calculate the linear maximum likelihood estimation.
The basic principle is that all values supplied will be broken
down to a range from 0 to 1 and each column's score will be added
up to get the total score.
==========
Example for data of vehicles
price|mileage|registration_year
20k |60k |2012
22k |50k |2011
23k |90k |2015
16k |210k |2010
We want the vehicle with the lowest price,
lowest mileage but newest registration year.
Thus the weights for each column are as follows:
[0, 0, 1]
"""
def get_data(source_data: list[list[float]]) -> list[list[float]]:
"""
>>> get_data([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]])
[[20.0, 23.0, 22.0], [60.0, 90.0, 50.0], [2012.0, 2015.0, 2011.0]]
"""
data_lists: list[list[float]] = []
for data in source_data:
for i, el in enumerate(data):
if len(data_lists) < i + 1:
data_lists.append([])
data_lists[i].append(float(el))
return data_lists
def calculate_each_score(
data_lists: list[list[float]], weights: list[int]
) -> list[list[float]]:
"""
>>> calculate_each_score([[20, 23, 22], [60, 90, 50], [2012, 2015, 2011]],
... [0, 0, 1])
[[1.0, 0.0, 0.33333333333333337], [0.75, 0.0, 1.0], [0.25, 1.0, 0.0]]
"""
score_lists: list[list[float]] = []
for dlist, weight in zip(data_lists, weights):
mind = min(dlist)
maxd = max(dlist)
score: list[float] = []
# for weight 0 score is 1 - actual score
if weight == 0:
for item in dlist:
try:
score.append(1 - ((item - mind) / (maxd - mind)))
except ZeroDivisionError:
score.append(1)
elif weight == 1:
for item in dlist:
try:
score.append((item - mind) / (maxd - mind))
except ZeroDivisionError:
score.append(0)
# weight not 0 or 1
else:
msg = f"Invalid weight of {weight:f} provided"
raise ValueError(msg)
score_lists.append(score)
return score_lists
def generate_final_scores(score_lists: list[list[float]]) -> list[float]:
"""
>>> generate_final_scores([[1.0, 0.0, 0.33333333333333337],
... [0.75, 0.0, 1.0],
... [0.25, 1.0, 0.0]])
[2.0, 1.0, 1.3333333333333335]
"""
# initialize final scores
final_scores: list[float] = [0 for i in range(len(score_lists[0]))]
for slist in score_lists:
for j, ele in enumerate(slist):
final_scores[j] = final_scores[j] + ele
return final_scores
def procentual_proximity(
source_data: list[list[float]], weights: list[int]
) -> list[list[float]]:
"""
weights - int list
possible values - 0 / 1
0 if lower values have higher weight in the data set
1 if higher values have higher weight in the data set
>>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
[[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
"""
data_lists = get_data(source_data)
score_lists = calculate_each_score(data_lists, weights)
final_scores = generate_final_scores(score_lists)
# append scores to source data
for i, ele in enumerate(final_scores):
source_data[i].append(ele)
return source_data
| """
developed by: markmelnic
original repo: https://github.com/markmelnic/Scoring-Algorithm
Analyse data using a range based percentual proximity algorithm
and calculate the linear maximum likelihood estimation.
The basic principle is that all values supplied will be broken
down to a range from 0 to 1 and each column's score will be added
up to get the total score.
==========
Example for data of vehicles
price|mileage|registration_year
20k |60k |2012
22k |50k |2011
23k |90k |2015
16k |210k |2010
We want the vehicle with the lowest price,
lowest mileage but newest registration year.
Thus the weights for each column are as follows:
[0, 0, 1]
"""
def get_data(source_data: list[list[float]]) -> list[list[float]]:
"""
>>> get_data([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]])
[[20.0, 23.0, 22.0], [60.0, 90.0, 50.0], [2012.0, 2015.0, 2011.0]]
"""
data_lists: list[list[float]] = []
for data in source_data:
for i, el in enumerate(data):
if len(data_lists) < i + 1:
data_lists.append([])
data_lists[i].append(float(el))
return data_lists
def calculate_each_score(
data_lists: list[list[float]], weights: list[int]
) -> list[list[float]]:
"""
>>> calculate_each_score([[20, 23, 22], [60, 90, 50], [2012, 2015, 2011]],
... [0, 0, 1])
[[1.0, 0.0, 0.33333333333333337], [0.75, 0.0, 1.0], [0.25, 1.0, 0.0]]
"""
score_lists: list[list[float]] = []
for dlist, weight in zip(data_lists, weights):
mind = min(dlist)
maxd = max(dlist)
score: list[float] = []
# for weight 0 score is 1 - actual score
if weight == 0:
for item in dlist:
try:
score.append(1 - ((item - mind) / (maxd - mind)))
except ZeroDivisionError:
score.append(1)
elif weight == 1:
for item in dlist:
try:
score.append((item - mind) / (maxd - mind))
except ZeroDivisionError:
score.append(0)
# weight not 0 or 1
else:
msg = f"Invalid weight of {weight:f} provided"
raise ValueError(msg)
score_lists.append(score)
return score_lists
def generate_final_scores(score_lists: list[list[float]]) -> list[float]:
"""
>>> generate_final_scores([[1.0, 0.0, 0.33333333333333337],
... [0.75, 0.0, 1.0],
... [0.25, 1.0, 0.0]])
[2.0, 1.0, 1.3333333333333335]
"""
# initialize final scores
final_scores: list[float] = [0 for i in range(len(score_lists[0]))]
for slist in score_lists:
for j, ele in enumerate(slist):
final_scores[j] = final_scores[j] + ele
return final_scores
def procentual_proximity(
source_data: list[list[float]], weights: list[int]
) -> list[list[float]]:
"""
weights - int list
possible values - 0 / 1
0 if lower values have higher weight in the data set
1 if higher values have higher weight in the data set
>>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
[[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
"""
data_lists = get_data(source_data)
score_lists = calculate_each_score(data_lists, weights)
final_scores = generate_final_scores(score_lists)
# append scores to source data
for i, ele in enumerate(final_scores):
source_data[i].append(ele)
return source_data
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Get book and author data from https://openlibrary.org
ISBN: https://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
from json import JSONDecodeError # Workaround for requests.exceptions.JSONDecodeError
import requests
def get_openlibrary_data(olid: str = "isbn/0140328726") -> dict:
"""
Given an 'isbn/0140328726', return book data from Open Library as a Python dict.
Given an '/authors/OL34184A', return authors data as a Python dict.
This code must work for olids with or without a leading slash ('/').
# Comment out doctests if they take too long or have results that may change
# >>> get_openlibrary_data(olid='isbn/0140328726') # doctest: +ELLIPSIS
{'publishers': ['Puffin'], 'number_of_pages': 96, 'isbn_10': ['0140328726'], ...
# >>> get_openlibrary_data(olid='/authors/OL7353617A') # doctest: +ELLIPSIS
{'name': 'Adrian Brisku', 'created': {'type': '/type/datetime', ...
"""
new_olid = olid.strip().strip("/") # Remove leading/trailing whitespace & slashes
if new_olid.count("/") != 1:
msg = f"{olid} is not a valid Open Library olid"
raise ValueError(msg)
return requests.get(f"https://openlibrary.org/{new_olid}.json").json()
def summarize_book(ol_book_data: dict) -> dict:
"""
Given Open Library book data, return a summary as a Python dict.
"""
desired_keys = {
"title": "Title",
"publish_date": "Publish date",
"authors": "Authors",
"number_of_pages": "Number of pages:",
"first_sentence": "First sentence",
"isbn_10": "ISBN (10)",
"isbn_13": "ISBN (13)",
}
data = {better_key: ol_book_data[key] for key, better_key in desired_keys.items()}
data["Authors"] = [
get_openlibrary_data(author["key"])["name"] for author in data["Authors"]
]
data["First sentence"] = data["First sentence"]["value"]
for key, value in data.items():
if isinstance(value, list):
data[key] = ", ".join(value)
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
while True:
isbn = input("\nEnter the ISBN code to search (or 'quit' to stop): ").strip()
if isbn.lower() in ("", "q", "quit", "exit", "stop"):
break
if len(isbn) not in (10, 13) or not isbn.isdigit():
print(f"Sorry, {isbn} is not a valid ISBN. Please, input a valid ISBN.")
continue
print(f"\nSearching Open Library for ISBN: {isbn}...\n")
try:
book_summary = summarize_book(get_openlibrary_data(f"isbn/{isbn}"))
print("\n".join(f"{key}: {value}" for key, value in book_summary.items()))
except JSONDecodeError: # Workaround for requests.exceptions.RequestException:
print(f"Sorry, there are no results for ISBN: {isbn}.")
| """
Get book and author data from https://openlibrary.org
ISBN: https://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
from json import JSONDecodeError # Workaround for requests.exceptions.JSONDecodeError
import requests
def get_openlibrary_data(olid: str = "isbn/0140328726") -> dict:
"""
Given an 'isbn/0140328726', return book data from Open Library as a Python dict.
Given an '/authors/OL34184A', return authors data as a Python dict.
This code must work for olids with or without a leading slash ('/').
# Comment out doctests if they take too long or have results that may change
# >>> get_openlibrary_data(olid='isbn/0140328726') # doctest: +ELLIPSIS
{'publishers': ['Puffin'], 'number_of_pages': 96, 'isbn_10': ['0140328726'], ...
# >>> get_openlibrary_data(olid='/authors/OL7353617A') # doctest: +ELLIPSIS
{'name': 'Adrian Brisku', 'created': {'type': '/type/datetime', ...
"""
new_olid = olid.strip().strip("/") # Remove leading/trailing whitespace & slashes
if new_olid.count("/") != 1:
msg = f"{olid} is not a valid Open Library olid"
raise ValueError(msg)
return requests.get(f"https://openlibrary.org/{new_olid}.json").json()
def summarize_book(ol_book_data: dict) -> dict:
"""
Given Open Library book data, return a summary as a Python dict.
"""
desired_keys = {
"title": "Title",
"publish_date": "Publish date",
"authors": "Authors",
"number_of_pages": "Number of pages:",
"first_sentence": "First sentence",
"isbn_10": "ISBN (10)",
"isbn_13": "ISBN (13)",
}
data = {better_key: ol_book_data[key] for key, better_key in desired_keys.items()}
data["Authors"] = [
get_openlibrary_data(author["key"])["name"] for author in data["Authors"]
]
data["First sentence"] = data["First sentence"]["value"]
for key, value in data.items():
if isinstance(value, list):
data[key] = ", ".join(value)
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
while True:
isbn = input("\nEnter the ISBN code to search (or 'quit' to stop): ").strip()
if isbn.lower() in ("", "q", "quit", "exit", "stop"):
break
if len(isbn) not in (10, 13) or not isbn.isdigit():
print(f"Sorry, {isbn} is not a valid ISBN. Please, input a valid ISBN.")
continue
print(f"\nSearching Open Library for ISBN: {isbn}...\n")
try:
book_summary = summarize_book(get_openlibrary_data(f"isbn/{isbn}"))
print("\n".join(f"{key}: {value}" for key, value in book_summary.items()))
except JSONDecodeError: # Workaround for requests.exceptions.RequestException:
print(f"Sorry, there are no results for ISBN: {isbn}.")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Title: Graham's Law of Effusion
Description: Graham's law of effusion states that the rate of effusion of a gas is
inversely proportional to the square root of the molar mass of its particles:
r1/r2 = sqrt(m2/m1)
r1 = Rate of effusion for the first gas.
r2 = Rate of effusion for the second gas.
m1 = Molar mass of the first gas.
m2 = Molar mass of the second gas.
(Description adapted from https://en.wikipedia.org/wiki/Graham%27s_law)
"""
from math import pow, sqrt
def validate(*values: float) -> bool:
"""
Input Parameters:
-----------------
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> validate(2.016, 4.002)
True
>>> validate(-2.016, 4.002)
False
>>> validate()
False
"""
result = len(values) > 0 and all(value > 0.0 for value in values)
return result
def effusion_ratio(molar_mass_1: float, molar_mass_2: float) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> effusion_ratio(2.016, 4.002)
1.408943
>>> effusion_ratio(-2.016, 4.002)
ValueError('Input Error: Molar mass values must greater than 0.')
>>> effusion_ratio(2.016)
Traceback (most recent call last):
...
TypeError: effusion_ratio() missing 1 required positional argument: 'molar_mass_2'
"""
return (
round(sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(molar_mass_1, molar_mass_2)
else ValueError("Input Error: Molar mass values must greater than 0.")
)
def first_effusion_rate(
effusion_rate: float, molar_mass_1: float, molar_mass_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
effusion_rate: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> first_effusion_rate(1, 2.016, 4.002)
1.408943
>>> first_effusion_rate(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> first_effusion_rate(1)
Traceback (most recent call last):
...
TypeError: first_effusion_rate() missing 2 required positional arguments: \
'molar_mass_1' and 'molar_mass_2'
>>> first_effusion_rate(1, 2.016)
Traceback (most recent call last):
...
TypeError: first_effusion_rate() missing 1 required positional argument: \
'molar_mass_2'
"""
return (
round(effusion_rate * sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(effusion_rate, molar_mass_1, molar_mass_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def second_effusion_rate(
effusion_rate: float, molar_mass_1: float, molar_mass_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
effusion_rate: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> second_effusion_rate(1, 2.016, 4.002)
0.709752
>>> second_effusion_rate(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> second_effusion_rate(1)
Traceback (most recent call last):
...
TypeError: second_effusion_rate() missing 2 required positional arguments: \
'molar_mass_1' and 'molar_mass_2'
>>> second_effusion_rate(1, 2.016)
Traceback (most recent call last):
...
TypeError: second_effusion_rate() missing 1 required positional argument: \
'molar_mass_2'
"""
return (
round(effusion_rate / sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(effusion_rate, molar_mass_1, molar_mass_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def first_molar_mass(
molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass: Molar mass of the first gas (g/mol, kg/kmol, etc.)
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
Returns:
--------
>>> first_molar_mass(2, 1.408943, 0.709752)
0.507524
>>> first_molar_mass(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> first_molar_mass(1)
Traceback (most recent call last):
...
TypeError: first_molar_mass() missing 2 required positional arguments: \
'effusion_rate_1' and 'effusion_rate_2'
>>> first_molar_mass(1, 2.016)
Traceback (most recent call last):
...
TypeError: first_molar_mass() missing 1 required positional argument: \
'effusion_rate_2'
"""
return (
round(molar_mass / pow(effusion_rate_1 / effusion_rate_2, 2), 6)
if validate(molar_mass, effusion_rate_1, effusion_rate_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def second_molar_mass(
molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass: Molar mass of the first gas (g/mol, kg/kmol, etc.)
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
Returns:
--------
>>> second_molar_mass(2, 1.408943, 0.709752)
1.970351
>>> second_molar_mass(-2, 1.408943, 0.709752)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> second_molar_mass(1)
Traceback (most recent call last):
...
TypeError: second_molar_mass() missing 2 required positional arguments: \
'effusion_rate_1' and 'effusion_rate_2'
>>> second_molar_mass(1, 2.016)
Traceback (most recent call last):
...
TypeError: second_molar_mass() missing 1 required positional argument: \
'effusion_rate_2'
"""
return (
round(pow(effusion_rate_1 / effusion_rate_2, 2) / molar_mass, 6)
if validate(molar_mass, effusion_rate_1, effusion_rate_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
| """
Title: Graham's Law of Effusion
Description: Graham's law of effusion states that the rate of effusion of a gas is
inversely proportional to the square root of the molar mass of its particles:
r1/r2 = sqrt(m2/m1)
r1 = Rate of effusion for the first gas.
r2 = Rate of effusion for the second gas.
m1 = Molar mass of the first gas.
m2 = Molar mass of the second gas.
(Description adapted from https://en.wikipedia.org/wiki/Graham%27s_law)
"""
from math import pow, sqrt
def validate(*values: float) -> bool:
"""
Input Parameters:
-----------------
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> validate(2.016, 4.002)
True
>>> validate(-2.016, 4.002)
False
>>> validate()
False
"""
result = len(values) > 0 and all(value > 0.0 for value in values)
return result
def effusion_ratio(molar_mass_1: float, molar_mass_2: float) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> effusion_ratio(2.016, 4.002)
1.408943
>>> effusion_ratio(-2.016, 4.002)
ValueError('Input Error: Molar mass values must greater than 0.')
>>> effusion_ratio(2.016)
Traceback (most recent call last):
...
TypeError: effusion_ratio() missing 1 required positional argument: 'molar_mass_2'
"""
return (
round(sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(molar_mass_1, molar_mass_2)
else ValueError("Input Error: Molar mass values must greater than 0.")
)
def first_effusion_rate(
effusion_rate: float, molar_mass_1: float, molar_mass_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
effusion_rate: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> first_effusion_rate(1, 2.016, 4.002)
1.408943
>>> first_effusion_rate(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> first_effusion_rate(1)
Traceback (most recent call last):
...
TypeError: first_effusion_rate() missing 2 required positional arguments: \
'molar_mass_1' and 'molar_mass_2'
>>> first_effusion_rate(1, 2.016)
Traceback (most recent call last):
...
TypeError: first_effusion_rate() missing 1 required positional argument: \
'molar_mass_2'
"""
return (
round(effusion_rate * sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(effusion_rate, molar_mass_1, molar_mass_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def second_effusion_rate(
effusion_rate: float, molar_mass_1: float, molar_mass_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
effusion_rate: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
Returns:
--------
>>> second_effusion_rate(1, 2.016, 4.002)
0.709752
>>> second_effusion_rate(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> second_effusion_rate(1)
Traceback (most recent call last):
...
TypeError: second_effusion_rate() missing 2 required positional arguments: \
'molar_mass_1' and 'molar_mass_2'
>>> second_effusion_rate(1, 2.016)
Traceback (most recent call last):
...
TypeError: second_effusion_rate() missing 1 required positional argument: \
'molar_mass_2'
"""
return (
round(effusion_rate / sqrt(molar_mass_2 / molar_mass_1), 6)
if validate(effusion_rate, molar_mass_1, molar_mass_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def first_molar_mass(
molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass: Molar mass of the first gas (g/mol, kg/kmol, etc.)
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
Returns:
--------
>>> first_molar_mass(2, 1.408943, 0.709752)
0.507524
>>> first_molar_mass(-1, 2.016, 4.002)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> first_molar_mass(1)
Traceback (most recent call last):
...
TypeError: first_molar_mass() missing 2 required positional arguments: \
'effusion_rate_1' and 'effusion_rate_2'
>>> first_molar_mass(1, 2.016)
Traceback (most recent call last):
...
TypeError: first_molar_mass() missing 1 required positional argument: \
'effusion_rate_2'
"""
return (
round(molar_mass / pow(effusion_rate_1 / effusion_rate_2, 2), 6)
if validate(molar_mass, effusion_rate_1, effusion_rate_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
def second_molar_mass(
molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
) -> float | ValueError:
"""
Input Parameters:
-----------------
molar_mass: Molar mass of the first gas (g/mol, kg/kmol, etc.)
effusion_rate_1: Effustion rate of first gas (m^2/s, mm^2/s, etc.)
effusion_rate_2: Effustion rate of second gas (m^2/s, mm^2/s, etc.)
Returns:
--------
>>> second_molar_mass(2, 1.408943, 0.709752)
1.970351
>>> second_molar_mass(-2, 1.408943, 0.709752)
ValueError('Input Error: Molar mass and effusion rate values must greater than 0.')
>>> second_molar_mass(1)
Traceback (most recent call last):
...
TypeError: second_molar_mass() missing 2 required positional arguments: \
'effusion_rate_1' and 'effusion_rate_2'
>>> second_molar_mass(1, 2.016)
Traceback (most recent call last):
...
TypeError: second_molar_mass() missing 1 required positional argument: \
'effusion_rate_2'
"""
return (
round(pow(effusion_rate_1 / effusion_rate_2, 2) / molar_mass, 6)
if validate(molar_mass, effusion_rate_1, effusion_rate_2)
else ValueError(
"Input Error: Molar mass and effusion rate values must greater than 0."
)
)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
from math import pi
from typing import Protocol
import matplotlib.pyplot as plt
import numpy as np
class FilterType(Protocol):
def process(self, sample: float) -> float:
"""
Calculate y[n]
>>> issubclass(FilterType, Protocol)
True
"""
return 0.0
def get_bounds(
fft_results: np.ndarray, samplerate: int
) -> tuple[int | float, int | float]:
"""
Get bounds for printing fft results
>>> import numpy
>>> array = numpy.linspace(-20.0, 20.0, 1000)
>>> get_bounds(array, 1000)
(-20, 20)
"""
lowest = min([-20, np.min(fft_results[1 : samplerate // 2 - 1])])
highest = max([20, np.max(fft_results[1 : samplerate // 2 - 1])])
return lowest, highest
def show_frequency_response(filter_type: FilterType, samplerate: int) -> None:
"""
Show frequency response of a filter
>>> from audio_filters.iir_filter import IIRFilter
>>> filt = IIRFilter(4)
>>> show_frequency_response(filt, 48000)
"""
size = 512
inputs = [1] + [0] * (size - 1)
outputs = [filter_type.process(item) for item in inputs]
filler = [0] * (samplerate - size) # zero-padding
outputs += filler
fft_out = np.abs(np.fft.fft(outputs))
fft_db = 20 * np.log10(fft_out)
# Frequencies on log scale from 24 to nyquist frequency
plt.xlim(24, samplerate / 2 - 1)
plt.xlabel("Frequency (Hz)")
plt.xscale("log")
# Display within reasonable bounds
bounds = get_bounds(fft_db, samplerate)
plt.ylim(max([-80, bounds[0]]), min([80, bounds[1]]))
plt.ylabel("Gain (dB)")
plt.plot(fft_db)
plt.show()
def show_phase_response(filter_type: FilterType, samplerate: int) -> None:
"""
Show phase response of a filter
>>> from audio_filters.iir_filter import IIRFilter
>>> filt = IIRFilter(4)
>>> show_phase_response(filt, 48000)
"""
size = 512
inputs = [1] + [0] * (size - 1)
outputs = [filter_type.process(item) for item in inputs]
filler = [0] * (samplerate - size) # zero-padding
outputs += filler
fft_out = np.angle(np.fft.fft(outputs))
# Frequencies on log scale from 24 to nyquist frequency
plt.xlim(24, samplerate / 2 - 1)
plt.xlabel("Frequency (Hz)")
plt.xscale("log")
plt.ylim(-2 * pi, 2 * pi)
plt.ylabel("Phase shift (Radians)")
plt.plot(np.unwrap(fft_out, -2 * pi))
plt.show()
| from __future__ import annotations
from math import pi
from typing import Protocol
import matplotlib.pyplot as plt
import numpy as np
class FilterType(Protocol):
def process(self, sample: float) -> float:
"""
Calculate y[n]
>>> issubclass(FilterType, Protocol)
True
"""
return 0.0
def get_bounds(
fft_results: np.ndarray, samplerate: int
) -> tuple[int | float, int | float]:
"""
Get bounds for printing fft results
>>> import numpy
>>> array = numpy.linspace(-20.0, 20.0, 1000)
>>> get_bounds(array, 1000)
(-20, 20)
"""
lowest = min([-20, np.min(fft_results[1 : samplerate // 2 - 1])])
highest = max([20, np.max(fft_results[1 : samplerate // 2 - 1])])
return lowest, highest
def show_frequency_response(filter_type: FilterType, samplerate: int) -> None:
"""
Show frequency response of a filter
>>> from audio_filters.iir_filter import IIRFilter
>>> filt = IIRFilter(4)
>>> show_frequency_response(filt, 48000)
"""
size = 512
inputs = [1] + [0] * (size - 1)
outputs = [filter_type.process(item) for item in inputs]
filler = [0] * (samplerate - size) # zero-padding
outputs += filler
fft_out = np.abs(np.fft.fft(outputs))
fft_db = 20 * np.log10(fft_out)
# Frequencies on log scale from 24 to nyquist frequency
plt.xlim(24, samplerate / 2 - 1)
plt.xlabel("Frequency (Hz)")
plt.xscale("log")
# Display within reasonable bounds
bounds = get_bounds(fft_db, samplerate)
plt.ylim(max([-80, bounds[0]]), min([80, bounds[1]]))
plt.ylabel("Gain (dB)")
plt.plot(fft_db)
plt.show()
def show_phase_response(filter_type: FilterType, samplerate: int) -> None:
"""
Show phase response of a filter
>>> from audio_filters.iir_filter import IIRFilter
>>> filt = IIRFilter(4)
>>> show_phase_response(filt, 48000)
"""
size = 512
inputs = [1] + [0] * (size - 1)
outputs = [filter_type.process(item) for item in inputs]
filler = [0] * (samplerate - size) # zero-padding
outputs += filler
fft_out = np.angle(np.fft.fft(outputs))
# Frequencies on log scale from 24 to nyquist frequency
plt.xlim(24, samplerate / 2 - 1)
plt.xlabel("Frequency (Hz)")
plt.xscale("log")
plt.ylim(-2 * pi, 2 * pi)
plt.ylabel("Phase shift (Radians)")
plt.plot(np.unwrap(fft_out, -2 * pi))
plt.show()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/env python3
# This Python program implements an optimal binary search tree (abbreviated BST)
# building dynamic programming algorithm that delivers O(n^2) performance.
#
# The goal of the optimal BST problem is to build a low-cost BST for a
# given set of nodes, each with its own key and frequency. The frequency
# of the node is defined as how many time the node is being searched.
# The search cost of binary search tree is given by this formula:
#
# cost(1, n) = sum{i = 1 to n}((depth(node_i) + 1) * node_i_freq)
#
# where n is number of nodes in the BST. The characteristic of low-cost
# BSTs is having a faster overall search time than other implementations.
# The reason for their fast search time is that the nodes with high
# frequencies will be placed near the root of the tree while the nodes
# with low frequencies will be placed near the leaves of the tree thus
# reducing search time in the most frequent instances.
import sys
from random import randint
class Node:
"""Binary Search Tree Node"""
def __init__(self, key, freq):
self.key = key
self.freq = freq
def __str__(self):
"""
>>> str(Node(1, 2))
'Node(key=1, freq=2)'
"""
return f"Node(key={self.key}, freq={self.freq})"
def print_binary_search_tree(root, key, i, j, parent, is_left):
"""
Recursive function to print a BST from a root table.
>>> key = [3, 8, 9, 10, 17, 21]
>>> root = [[0, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 3], [0, 0, 2, 3, 3, 3], \
[0, 0, 0, 3, 3, 3], [0, 0, 0, 0, 4, 5], [0, 0, 0, 0, 0, 5]]
>>> print_binary_search_tree(root, key, 0, 5, -1, False)
8 is the root of the binary search tree.
3 is the left child of key 8.
10 is the right child of key 8.
9 is the left child of key 10.
21 is the right child of key 10.
17 is the left child of key 21.
"""
if i > j or i < 0 or j > len(root) - 1:
return
node = root[i][j]
if parent == -1: # root does not have a parent
print(f"{key[node]} is the root of the binary search tree.")
elif is_left:
print(f"{key[node]} is the left child of key {parent}.")
else:
print(f"{key[node]} is the right child of key {parent}.")
print_binary_search_tree(root, key, i, node - 1, key[node], True)
print_binary_search_tree(root, key, node + 1, j, key[node], False)
def find_optimal_binary_search_tree(nodes):
"""
This function calculates and prints the optimal binary search tree.
The dynamic programming algorithm below runs in O(n^2) time.
Implemented from CLRS (Introduction to Algorithms) book.
https://en.wikipedia.org/wiki/Introduction_to_Algorithms
>>> find_optimal_binary_search_tree([Node(12, 8), Node(10, 34), Node(20, 50), \
Node(42, 3), Node(25, 40), Node(37, 30)])
Binary search tree nodes:
Node(key=10, freq=34)
Node(key=12, freq=8)
Node(key=20, freq=50)
Node(key=25, freq=40)
Node(key=37, freq=30)
Node(key=42, freq=3)
<BLANKLINE>
The cost of optimal BST for given tree nodes is 324.
20 is the root of the binary search tree.
10 is the left child of key 20.
12 is the right child of key 10.
25 is the right child of key 20.
37 is the right child of key 25.
42 is the right child of key 37.
"""
# Tree nodes must be sorted first, the code below sorts the keys in
# increasing order and rearrange its frequencies accordingly.
nodes.sort(key=lambda node: node.key)
n = len(nodes)
keys = [nodes[i].key for i in range(n)]
freqs = [nodes[i].freq for i in range(n)]
# This 2D array stores the overall tree cost (which's as minimized as possible);
# for a single key, cost is equal to frequency of the key.
dp = [[freqs[i] if i == j else 0 for j in range(n)] for i in range(n)]
# sum[i][j] stores the sum of key frequencies between i and j inclusive in nodes
# array
total = [[freqs[i] if i == j else 0 for j in range(n)] for i in range(n)]
# stores tree roots that will be used later for constructing binary search tree
root = [[i if i == j else 0 for j in range(n)] for i in range(n)]
for interval_length in range(2, n + 1):
for i in range(n - interval_length + 1):
j = i + interval_length - 1
dp[i][j] = sys.maxsize # set the value to "infinity"
total[i][j] = total[i][j - 1] + freqs[j]
# Apply Knuth's optimization
# Loop without optimization: for r in range(i, j + 1):
for r in range(root[i][j - 1], root[i + 1][j] + 1): # r is a temporal root
left = dp[i][r - 1] if r != i else 0 # optimal cost for left subtree
right = dp[r + 1][j] if r != j else 0 # optimal cost for right subtree
cost = left + total[i][j] + right
if dp[i][j] > cost:
dp[i][j] = cost
root[i][j] = r
print("Binary search tree nodes:")
for node in nodes:
print(node)
print(f"\nThe cost of optimal BST for given tree nodes is {dp[0][n - 1]}.")
print_binary_search_tree(root, keys, 0, n - 1, -1, False)
def main():
# A sample binary search tree
nodes = [Node(i, randint(1, 50)) for i in range(10, 0, -1)]
find_optimal_binary_search_tree(nodes)
if __name__ == "__main__":
main()
| #!/usr/bin/env python3
# This Python program implements an optimal binary search tree (abbreviated BST)
# building dynamic programming algorithm that delivers O(n^2) performance.
#
# The goal of the optimal BST problem is to build a low-cost BST for a
# given set of nodes, each with its own key and frequency. The frequency
# of the node is defined as how many time the node is being searched.
# The search cost of binary search tree is given by this formula:
#
# cost(1, n) = sum{i = 1 to n}((depth(node_i) + 1) * node_i_freq)
#
# where n is number of nodes in the BST. The characteristic of low-cost
# BSTs is having a faster overall search time than other implementations.
# The reason for their fast search time is that the nodes with high
# frequencies will be placed near the root of the tree while the nodes
# with low frequencies will be placed near the leaves of the tree thus
# reducing search time in the most frequent instances.
import sys
from random import randint
class Node:
"""Binary Search Tree Node"""
def __init__(self, key, freq):
self.key = key
self.freq = freq
def __str__(self):
"""
>>> str(Node(1, 2))
'Node(key=1, freq=2)'
"""
return f"Node(key={self.key}, freq={self.freq})"
def print_binary_search_tree(root, key, i, j, parent, is_left):
"""
Recursive function to print a BST from a root table.
>>> key = [3, 8, 9, 10, 17, 21]
>>> root = [[0, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 3], [0, 0, 2, 3, 3, 3], \
[0, 0, 0, 3, 3, 3], [0, 0, 0, 0, 4, 5], [0, 0, 0, 0, 0, 5]]
>>> print_binary_search_tree(root, key, 0, 5, -1, False)
8 is the root of the binary search tree.
3 is the left child of key 8.
10 is the right child of key 8.
9 is the left child of key 10.
21 is the right child of key 10.
17 is the left child of key 21.
"""
if i > j or i < 0 or j > len(root) - 1:
return
node = root[i][j]
if parent == -1: # root does not have a parent
print(f"{key[node]} is the root of the binary search tree.")
elif is_left:
print(f"{key[node]} is the left child of key {parent}.")
else:
print(f"{key[node]} is the right child of key {parent}.")
print_binary_search_tree(root, key, i, node - 1, key[node], True)
print_binary_search_tree(root, key, node + 1, j, key[node], False)
def find_optimal_binary_search_tree(nodes):
"""
This function calculates and prints the optimal binary search tree.
The dynamic programming algorithm below runs in O(n^2) time.
Implemented from CLRS (Introduction to Algorithms) book.
https://en.wikipedia.org/wiki/Introduction_to_Algorithms
>>> find_optimal_binary_search_tree([Node(12, 8), Node(10, 34), Node(20, 50), \
Node(42, 3), Node(25, 40), Node(37, 30)])
Binary search tree nodes:
Node(key=10, freq=34)
Node(key=12, freq=8)
Node(key=20, freq=50)
Node(key=25, freq=40)
Node(key=37, freq=30)
Node(key=42, freq=3)
<BLANKLINE>
The cost of optimal BST for given tree nodes is 324.
20 is the root of the binary search tree.
10 is the left child of key 20.
12 is the right child of key 10.
25 is the right child of key 20.
37 is the right child of key 25.
42 is the right child of key 37.
"""
# Tree nodes must be sorted first, the code below sorts the keys in
# increasing order and rearrange its frequencies accordingly.
nodes.sort(key=lambda node: node.key)
n = len(nodes)
keys = [nodes[i].key for i in range(n)]
freqs = [nodes[i].freq for i in range(n)]
# This 2D array stores the overall tree cost (which's as minimized as possible);
# for a single key, cost is equal to frequency of the key.
dp = [[freqs[i] if i == j else 0 for j in range(n)] for i in range(n)]
# sum[i][j] stores the sum of key frequencies between i and j inclusive in nodes
# array
total = [[freqs[i] if i == j else 0 for j in range(n)] for i in range(n)]
# stores tree roots that will be used later for constructing binary search tree
root = [[i if i == j else 0 for j in range(n)] for i in range(n)]
for interval_length in range(2, n + 1):
for i in range(n - interval_length + 1):
j = i + interval_length - 1
dp[i][j] = sys.maxsize # set the value to "infinity"
total[i][j] = total[i][j - 1] + freqs[j]
# Apply Knuth's optimization
# Loop without optimization: for r in range(i, j + 1):
for r in range(root[i][j - 1], root[i + 1][j] + 1): # r is a temporal root
left = dp[i][r - 1] if r != i else 0 # optimal cost for left subtree
right = dp[r + 1][j] if r != j else 0 # optimal cost for right subtree
cost = left + total[i][j] + right
if dp[i][j] > cost:
dp[i][j] = cost
root[i][j] = r
print("Binary search tree nodes:")
for node in nodes:
print(node)
print(f"\nThe cost of optimal BST for given tree nodes is {dp[0][n - 1]}.")
print_binary_search_tree(root, keys, 0, n - 1, -1, False)
def main():
# A sample binary search tree
nodes = [Node(i, randint(1, 50)) for i in range(10, 0, -1)]
find_optimal_binary_search_tree(nodes)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
DIRECTIONS = [
[-1, 0], # left
[0, -1], # down
[1, 0], # right
[0, 1], # up
]
# function to search the path
def search(
grid: list[list[int]],
init: list[int],
goal: list[int],
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
closed[init[0]][init[1]] = 1
action = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the action grid
x = init[0]
y = init[1]
g = 0
f = g + heuristic[x][y] # cost from starting cell to destination cell
cell = [[f, g, x, y]]
found = False # flag that is set when search is complete
resign = False # flag set if we can't find expand
while not found and not resign:
if len(cell) == 0:
raise ValueError("Algorithm is unable to find solution")
else: # to choose the least costliest action so as to move closer to the goal
cell.sort()
cell.reverse()
next_cell = cell.pop()
x = next_cell[2]
y = next_cell[3]
g = next_cell[1]
if x == goal[0] and y == goal[1]:
found = True
else:
for i in range(len(DIRECTIONS)): # to try out different valid actions
x2 = x + DIRECTIONS[i][0]
y2 = y + DIRECTIONS[i][1]
if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):
if closed[x2][y2] == 0 and grid[x2][y2] == 0:
g2 = g + cost
f2 = g2 + heuristic[x2][y2]
cell.append([f2, g2, x2, y2])
closed[x2][y2] = 1
action[x2][y2] = i
invpath = []
x = goal[0]
y = goal[1]
invpath.append([x, y]) # we get the reverse path from here
while x != init[0] or y != init[1]:
x2 = x - DIRECTIONS[action[x][y]][0]
y2 = y - DIRECTIONS[action[x][y]][1]
x = x2
y = y2
invpath.append([x, y])
path = []
for i in range(len(invpath)):
path.append(invpath[len(invpath) - 1 - i])
return path, action
if __name__ == "__main__":
grid = [
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
]
init = [0, 0]
# all coordinates are given in format [y,x]
goal = [len(grid) - 1, len(grid[0]) - 1]
cost = 1
# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
for j in range(len(grid[0])):
heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
if grid[i][j] == 1:
# added extra penalty in the heuristic map
heuristic[i][j] = 99
path, action = search(grid, init, goal, cost, heuristic)
print("ACTION MAP")
for i in range(len(action)):
print(action[i])
for i in range(len(path)):
print(path[i])
| from __future__ import annotations
DIRECTIONS = [
[-1, 0], # left
[0, -1], # down
[1, 0], # right
[0, 1], # up
]
# function to search the path
def search(
grid: list[list[int]],
init: list[int],
goal: list[int],
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
closed[init[0]][init[1]] = 1
action = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the action grid
x = init[0]
y = init[1]
g = 0
f = g + heuristic[x][y] # cost from starting cell to destination cell
cell = [[f, g, x, y]]
found = False # flag that is set when search is complete
resign = False # flag set if we can't find expand
while not found and not resign:
if len(cell) == 0:
raise ValueError("Algorithm is unable to find solution")
else: # to choose the least costliest action so as to move closer to the goal
cell.sort()
cell.reverse()
next_cell = cell.pop()
x = next_cell[2]
y = next_cell[3]
g = next_cell[1]
if x == goal[0] and y == goal[1]:
found = True
else:
for i in range(len(DIRECTIONS)): # to try out different valid actions
x2 = x + DIRECTIONS[i][0]
y2 = y + DIRECTIONS[i][1]
if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):
if closed[x2][y2] == 0 and grid[x2][y2] == 0:
g2 = g + cost
f2 = g2 + heuristic[x2][y2]
cell.append([f2, g2, x2, y2])
closed[x2][y2] = 1
action[x2][y2] = i
invpath = []
x = goal[0]
y = goal[1]
invpath.append([x, y]) # we get the reverse path from here
while x != init[0] or y != init[1]:
x2 = x - DIRECTIONS[action[x][y]][0]
y2 = y - DIRECTIONS[action[x][y]][1]
x = x2
y = y2
invpath.append([x, y])
path = []
for i in range(len(invpath)):
path.append(invpath[len(invpath) - 1 - i])
return path, action
if __name__ == "__main__":
grid = [
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
]
init = [0, 0]
# all coordinates are given in format [y,x]
goal = [len(grid) - 1, len(grid[0]) - 1]
cost = 1
# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
for j in range(len(grid[0])):
heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
if grid[i][j] == 1:
# added extra penalty in the heuristic map
heuristic[i][j] = 99
path, action = search(grid, init, goal, cost, heuristic)
print("ACTION MAP")
for i in range(len(action)):
print(action[i])
for i in range(len(path)):
print(path[i])
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from collections import deque
class Process:
def __init__(self, process_name: str, arrival_time: int, burst_time: int) -> None:
self.process_name = process_name # process name
self.arrival_time = arrival_time # arrival time of the process
# completion time of finished process or last interrupted time
self.stop_time = arrival_time
self.burst_time = burst_time # remaining burst time
self.waiting_time = 0 # total time of the process wait in ready queue
self.turnaround_time = 0 # time from arrival time to completion time
class MLFQ:
"""
MLFQ(Multi Level Feedback Queue)
https://en.wikipedia.org/wiki/Multilevel_feedback_queue
MLFQ has a lot of queues that have different priority
In this MLFQ,
The first Queue(0) to last second Queue(N-2) of MLFQ have Round Robin Algorithm
The last Queue(N-1) has First Come, First Served Algorithm
"""
def __init__(
self,
number_of_queues: int,
time_slices: list[int],
queue: deque[Process],
current_time: int,
) -> None:
# total number of mlfq's queues
self.number_of_queues = number_of_queues
# time slice of queues that round robin algorithm applied
self.time_slices = time_slices
# unfinished process is in this ready_queue
self.ready_queue = queue
# current time
self.current_time = current_time
# finished process is in this sequence queue
self.finish_queue: deque[Process] = deque()
def calculate_sequence_of_finish_queue(self) -> list[str]:
"""
This method returns the sequence of finished processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_sequence_of_finish_queue()
['P2', 'P4', 'P1', 'P3']
"""
sequence = []
for i in range(len(self.finish_queue)):
sequence.append(self.finish_queue[i].process_name)
return sequence
def calculate_waiting_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates waiting time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_waiting_time([P1, P2, P3, P4])
[83, 17, 94, 101]
"""
waiting_times = []
for i in range(len(queue)):
waiting_times.append(queue[i].waiting_time)
return waiting_times
def calculate_turnaround_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates turnaround time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_turnaround_time([P1, P2, P3, P4])
[136, 34, 162, 125]
"""
turnaround_times = []
for i in range(len(queue)):
turnaround_times.append(queue[i].turnaround_time)
return turnaround_times
def calculate_completion_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates completion time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_turnaround_time([P1, P2, P3, P4])
[136, 34, 162, 125]
"""
completion_times = []
for i in range(len(queue)):
completion_times.append(queue[i].stop_time)
return completion_times
def calculate_remaining_burst_time_of_processes(
self, queue: deque[Process]
) -> list[int]:
"""
This method calculate remaining burst time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue, ready_queue = mlfq.round_robin(deque([P1, P2, P3, P4]), 17)
>>> mlfq.calculate_remaining_burst_time_of_processes(mlfq.finish_queue)
[0]
>>> mlfq.calculate_remaining_burst_time_of_processes(ready_queue)
[36, 51, 7]
>>> finish_queue, ready_queue = mlfq.round_robin(ready_queue, 25)
>>> mlfq.calculate_remaining_burst_time_of_processes(mlfq.finish_queue)
[0, 0]
>>> mlfq.calculate_remaining_burst_time_of_processes(ready_queue)
[11, 26]
"""
return [q.burst_time for q in queue]
def update_waiting_time(self, process: Process) -> int:
"""
This method updates waiting times of unfinished processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> mlfq.current_time = 10
>>> P1.stop_time = 5
>>> mlfq.update_waiting_time(P1)
5
"""
process.waiting_time += self.current_time - process.stop_time
return process.waiting_time
def first_come_first_served(self, ready_queue: deque[Process]) -> deque[Process]:
"""
FCFS(First Come, First Served)
FCFS will be applied to MLFQ's last queue
A first came process will be finished at first
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.first_come_first_served(mlfq.ready_queue)
>>> mlfq.calculate_sequence_of_finish_queue()
['P1', 'P2', 'P3', 'P4']
"""
finished: deque[Process] = deque() # sequence deque of finished process
while len(ready_queue) != 0:
cp = ready_queue.popleft() # current process
# if process's arrival time is later than current time, update current time
if self.current_time < cp.arrival_time:
self.current_time += cp.arrival_time
# update waiting time of current process
self.update_waiting_time(cp)
# update current time
self.current_time += cp.burst_time
# finish the process and set the process's burst-time 0
cp.burst_time = 0
# set the process's turnaround time because it is finished
cp.turnaround_time = self.current_time - cp.arrival_time
# set the completion time
cp.stop_time = self.current_time
# add the process to queue that has finished queue
finished.append(cp)
self.finish_queue.extend(finished) # add finished process to finish queue
# FCFS will finish all remaining processes
return finished
def round_robin(
self, ready_queue: deque[Process], time_slice: int
) -> tuple[deque[Process], deque[Process]]:
"""
RR(Round Robin)
RR will be applied to MLFQ's all queues except last queue
All processes can't use CPU for time more than time_slice
If the process consume CPU up to time_slice, it will go back to ready queue
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue, ready_queue = mlfq.round_robin(mlfq.ready_queue, 17)
>>> mlfq.calculate_sequence_of_finish_queue()
['P2']
"""
finished: deque[Process] = deque() # sequence deque of terminated process
# just for 1 cycle and unfinished processes will go back to queue
for _ in range(len(ready_queue)):
cp = ready_queue.popleft() # current process
# if process's arrival time is later than current time, update current time
if self.current_time < cp.arrival_time:
self.current_time += cp.arrival_time
# update waiting time of unfinished processes
self.update_waiting_time(cp)
# if the burst time of process is bigger than time-slice
if cp.burst_time > time_slice:
# use CPU for only time-slice
self.current_time += time_slice
# update remaining burst time
cp.burst_time -= time_slice
# update end point time
cp.stop_time = self.current_time
# locate the process behind the queue because it is not finished
ready_queue.append(cp)
else:
# use CPU for remaining burst time
self.current_time += cp.burst_time
# set burst time 0 because the process is finished
cp.burst_time = 0
# set the finish time
cp.stop_time = self.current_time
# update the process' turnaround time because it is finished
cp.turnaround_time = self.current_time - cp.arrival_time
# add the process to queue that has finished queue
finished.append(cp)
self.finish_queue.extend(finished) # add finished process to finish queue
# return finished processes queue and remaining processes queue
return finished, ready_queue
def multi_level_feedback_queue(self) -> deque[Process]:
"""
MLFQ(Multi Level Feedback Queue)
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_sequence_of_finish_queue()
['P2', 'P4', 'P1', 'P3']
"""
# all queues except last one have round_robin algorithm
for i in range(self.number_of_queues - 1):
finished, self.ready_queue = self.round_robin(
self.ready_queue, self.time_slices[i]
)
# the last queue has first_come_first_served algorithm
self.first_come_first_served(self.ready_queue)
return self.finish_queue
if __name__ == "__main__":
import doctest
P1 = Process("P1", 0, 53)
P2 = Process("P2", 0, 17)
P3 = Process("P3", 0, 68)
P4 = Process("P4", 0, 24)
number_of_queues = 3
time_slices = [17, 25]
queue = deque([P1, P2, P3, P4])
if len(time_slices) != number_of_queues - 1:
raise SystemExit(0)
doctest.testmod(extraglobs={"queue": deque([P1, P2, P3, P4])})
P1 = Process("P1", 0, 53)
P2 = Process("P2", 0, 17)
P3 = Process("P3", 0, 68)
P4 = Process("P4", 0, 24)
number_of_queues = 3
time_slices = [17, 25]
queue = deque([P1, P2, P3, P4])
mlfq = MLFQ(number_of_queues, time_slices, queue, 0)
finish_queue = mlfq.multi_level_feedback_queue()
# print total waiting times of processes(P1, P2, P3, P4)
print(
f"waiting time:\
\t\t\t{MLFQ.calculate_waiting_time(mlfq, [P1, P2, P3, P4])}"
)
# print completion times of processes(P1, P2, P3, P4)
print(
f"completion time:\
\t\t{MLFQ.calculate_completion_time(mlfq, [P1, P2, P3, P4])}"
)
# print total turnaround times of processes(P1, P2, P3, P4)
print(
f"turnaround time:\
\t\t{MLFQ.calculate_turnaround_time(mlfq, [P1, P2, P3, P4])}"
)
# print sequence of finished processes
print(
f"sequence of finished processes:\
{mlfq.calculate_sequence_of_finish_queue()}"
)
| from collections import deque
class Process:
def __init__(self, process_name: str, arrival_time: int, burst_time: int) -> None:
self.process_name = process_name # process name
self.arrival_time = arrival_time # arrival time of the process
# completion time of finished process or last interrupted time
self.stop_time = arrival_time
self.burst_time = burst_time # remaining burst time
self.waiting_time = 0 # total time of the process wait in ready queue
self.turnaround_time = 0 # time from arrival time to completion time
class MLFQ:
"""
MLFQ(Multi Level Feedback Queue)
https://en.wikipedia.org/wiki/Multilevel_feedback_queue
MLFQ has a lot of queues that have different priority
In this MLFQ,
The first Queue(0) to last second Queue(N-2) of MLFQ have Round Robin Algorithm
The last Queue(N-1) has First Come, First Served Algorithm
"""
def __init__(
self,
number_of_queues: int,
time_slices: list[int],
queue: deque[Process],
current_time: int,
) -> None:
# total number of mlfq's queues
self.number_of_queues = number_of_queues
# time slice of queues that round robin algorithm applied
self.time_slices = time_slices
# unfinished process is in this ready_queue
self.ready_queue = queue
# current time
self.current_time = current_time
# finished process is in this sequence queue
self.finish_queue: deque[Process] = deque()
def calculate_sequence_of_finish_queue(self) -> list[str]:
"""
This method returns the sequence of finished processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_sequence_of_finish_queue()
['P2', 'P4', 'P1', 'P3']
"""
sequence = []
for i in range(len(self.finish_queue)):
sequence.append(self.finish_queue[i].process_name)
return sequence
def calculate_waiting_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates waiting time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_waiting_time([P1, P2, P3, P4])
[83, 17, 94, 101]
"""
waiting_times = []
for i in range(len(queue)):
waiting_times.append(queue[i].waiting_time)
return waiting_times
def calculate_turnaround_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates turnaround time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_turnaround_time([P1, P2, P3, P4])
[136, 34, 162, 125]
"""
turnaround_times = []
for i in range(len(queue)):
turnaround_times.append(queue[i].turnaround_time)
return turnaround_times
def calculate_completion_time(self, queue: list[Process]) -> list[int]:
"""
This method calculates completion time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_turnaround_time([P1, P2, P3, P4])
[136, 34, 162, 125]
"""
completion_times = []
for i in range(len(queue)):
completion_times.append(queue[i].stop_time)
return completion_times
def calculate_remaining_burst_time_of_processes(
self, queue: deque[Process]
) -> list[int]:
"""
This method calculate remaining burst time of processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue, ready_queue = mlfq.round_robin(deque([P1, P2, P3, P4]), 17)
>>> mlfq.calculate_remaining_burst_time_of_processes(mlfq.finish_queue)
[0]
>>> mlfq.calculate_remaining_burst_time_of_processes(ready_queue)
[36, 51, 7]
>>> finish_queue, ready_queue = mlfq.round_robin(ready_queue, 25)
>>> mlfq.calculate_remaining_burst_time_of_processes(mlfq.finish_queue)
[0, 0]
>>> mlfq.calculate_remaining_burst_time_of_processes(ready_queue)
[11, 26]
"""
return [q.burst_time for q in queue]
def update_waiting_time(self, process: Process) -> int:
"""
This method updates waiting times of unfinished processes
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> mlfq.current_time = 10
>>> P1.stop_time = 5
>>> mlfq.update_waiting_time(P1)
5
"""
process.waiting_time += self.current_time - process.stop_time
return process.waiting_time
def first_come_first_served(self, ready_queue: deque[Process]) -> deque[Process]:
"""
FCFS(First Come, First Served)
FCFS will be applied to MLFQ's last queue
A first came process will be finished at first
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> _ = mlfq.first_come_first_served(mlfq.ready_queue)
>>> mlfq.calculate_sequence_of_finish_queue()
['P1', 'P2', 'P3', 'P4']
"""
finished: deque[Process] = deque() # sequence deque of finished process
while len(ready_queue) != 0:
cp = ready_queue.popleft() # current process
# if process's arrival time is later than current time, update current time
if self.current_time < cp.arrival_time:
self.current_time += cp.arrival_time
# update waiting time of current process
self.update_waiting_time(cp)
# update current time
self.current_time += cp.burst_time
# finish the process and set the process's burst-time 0
cp.burst_time = 0
# set the process's turnaround time because it is finished
cp.turnaround_time = self.current_time - cp.arrival_time
# set the completion time
cp.stop_time = self.current_time
# add the process to queue that has finished queue
finished.append(cp)
self.finish_queue.extend(finished) # add finished process to finish queue
# FCFS will finish all remaining processes
return finished
def round_robin(
self, ready_queue: deque[Process], time_slice: int
) -> tuple[deque[Process], deque[Process]]:
"""
RR(Round Robin)
RR will be applied to MLFQ's all queues except last queue
All processes can't use CPU for time more than time_slice
If the process consume CPU up to time_slice, it will go back to ready queue
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue, ready_queue = mlfq.round_robin(mlfq.ready_queue, 17)
>>> mlfq.calculate_sequence_of_finish_queue()
['P2']
"""
finished: deque[Process] = deque() # sequence deque of terminated process
# just for 1 cycle and unfinished processes will go back to queue
for _ in range(len(ready_queue)):
cp = ready_queue.popleft() # current process
# if process's arrival time is later than current time, update current time
if self.current_time < cp.arrival_time:
self.current_time += cp.arrival_time
# update waiting time of unfinished processes
self.update_waiting_time(cp)
# if the burst time of process is bigger than time-slice
if cp.burst_time > time_slice:
# use CPU for only time-slice
self.current_time += time_slice
# update remaining burst time
cp.burst_time -= time_slice
# update end point time
cp.stop_time = self.current_time
# locate the process behind the queue because it is not finished
ready_queue.append(cp)
else:
# use CPU for remaining burst time
self.current_time += cp.burst_time
# set burst time 0 because the process is finished
cp.burst_time = 0
# set the finish time
cp.stop_time = self.current_time
# update the process' turnaround time because it is finished
cp.turnaround_time = self.current_time - cp.arrival_time
# add the process to queue that has finished queue
finished.append(cp)
self.finish_queue.extend(finished) # add finished process to finish queue
# return finished processes queue and remaining processes queue
return finished, ready_queue
def multi_level_feedback_queue(self) -> deque[Process]:
"""
MLFQ(Multi Level Feedback Queue)
>>> P1 = Process("P1", 0, 53)
>>> P2 = Process("P2", 0, 17)
>>> P3 = Process("P3", 0, 68)
>>> P4 = Process("P4", 0, 24)
>>> mlfq = MLFQ(3, [17, 25], deque([P1, P2, P3, P4]), 0)
>>> finish_queue = mlfq.multi_level_feedback_queue()
>>> mlfq.calculate_sequence_of_finish_queue()
['P2', 'P4', 'P1', 'P3']
"""
# all queues except last one have round_robin algorithm
for i in range(self.number_of_queues - 1):
finished, self.ready_queue = self.round_robin(
self.ready_queue, self.time_slices[i]
)
# the last queue has first_come_first_served algorithm
self.first_come_first_served(self.ready_queue)
return self.finish_queue
if __name__ == "__main__":
import doctest
P1 = Process("P1", 0, 53)
P2 = Process("P2", 0, 17)
P3 = Process("P3", 0, 68)
P4 = Process("P4", 0, 24)
number_of_queues = 3
time_slices = [17, 25]
queue = deque([P1, P2, P3, P4])
if len(time_slices) != number_of_queues - 1:
raise SystemExit(0)
doctest.testmod(extraglobs={"queue": deque([P1, P2, P3, P4])})
P1 = Process("P1", 0, 53)
P2 = Process("P2", 0, 17)
P3 = Process("P3", 0, 68)
P4 = Process("P4", 0, 24)
number_of_queues = 3
time_slices = [17, 25]
queue = deque([P1, P2, P3, P4])
mlfq = MLFQ(number_of_queues, time_slices, queue, 0)
finish_queue = mlfq.multi_level_feedback_queue()
# print total waiting times of processes(P1, P2, P3, P4)
print(
f"waiting time:\
\t\t\t{MLFQ.calculate_waiting_time(mlfq, [P1, P2, P3, P4])}"
)
# print completion times of processes(P1, P2, P3, P4)
print(
f"completion time:\
\t\t{MLFQ.calculate_completion_time(mlfq, [P1, P2, P3, P4])}"
)
# print total turnaround times of processes(P1, P2, P3, P4)
print(
f"turnaround time:\
\t\t{MLFQ.calculate_turnaround_time(mlfq, [P1, P2, P3, P4])}"
)
# print sequence of finished processes
print(
f"sequence of finished processes:\
{mlfq.calculate_sequence_of_finish_queue()}"
)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Author: Abhijeeth S
import math
def res(x, y):
if 0 not in (x, y):
# We use the relation x^y = y*log10(x), where 10 is the base.
return y * math.log10(x)
else:
if x == 0: # 0 raised to any number is 0
return 0
elif y == 0:
return 1 # any number raised to 0 is 1
raise AssertionError("This should never happen")
if __name__ == "__main__": # Main function
# Read two numbers from input and typecast them to int using map function.
# Here x is the base and y is the power.
prompt = "Enter the base and the power separated by a comma: "
x1, y1 = map(int, input(prompt).split(","))
x2, y2 = map(int, input(prompt).split(","))
# We find the log of each number, using the function res(), which takes two
# arguments.
res1 = res(x1, y1)
res2 = res(x2, y2)
# We check for the largest number
if res1 > res2:
print("Largest number is", x1, "^", y1)
elif res2 > res1:
print("Largest number is", x2, "^", y2)
else:
print("Both are equal")
| # Author: Abhijeeth S
import math
def res(x, y):
if 0 not in (x, y):
# We use the relation x^y = y*log10(x), where 10 is the base.
return y * math.log10(x)
else:
if x == 0: # 0 raised to any number is 0
return 0
elif y == 0:
return 1 # any number raised to 0 is 1
raise AssertionError("This should never happen")
if __name__ == "__main__": # Main function
# Read two numbers from input and typecast them to int using map function.
# Here x is the base and y is the power.
prompt = "Enter the base and the power separated by a comma: "
x1, y1 = map(int, input(prompt).split(","))
x2, y2 = map(int, input(prompt).split(","))
# We find the log of each number, using the function res(), which takes two
# arguments.
res1 = res(x1, y1)
res2 = res(x2, y2)
# We check for the largest number
if res1 > res2:
print("Largest number is", x1, "^", y1)
elif res2 > res1:
print("Largest number is", x2, "^", y2)
else:
print("Both are equal")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def is_even(number: int) -> bool:
"""
return true if the input integer is even
Explanation: Lets take a look at the following deicmal to binary conversions
2 => 10
14 => 1110
100 => 1100100
3 => 11
13 => 1101
101 => 1100101
from the above examples we can observe that
for all the odd integers there is always 1 set bit at the end
also, 1 in binary can be represented as 001, 00001, or 0000001
so for any odd integer n => n&1 is always equals 1 else the integer is even
>>> is_even(1)
False
>>> is_even(4)
True
>>> is_even(9)
False
>>> is_even(15)
False
>>> is_even(40)
True
>>> is_even(100)
True
>>> is_even(101)
False
"""
return number & 1 == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
| def is_even(number: int) -> bool:
"""
return true if the input integer is even
Explanation: Lets take a look at the following deicmal to binary conversions
2 => 10
14 => 1110
100 => 1100100
3 => 11
13 => 1101
101 => 1100101
from the above examples we can observe that
for all the odd integers there is always 1 set bit at the end
also, 1 in binary can be represented as 001, 00001, or 0000001
so for any odd integer n => n&1 is always equals 1 else the integer is even
>>> is_even(1)
False
>>> is_even(4)
True
>>> is_even(9)
False
>>> is_even(15)
False
>>> is_even(40)
True
>>> is_even(100)
True
>>> is_even(101)
False
"""
return number & 1 == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class DisjointSetTreeNode(Generic[T]):
# Disjoint Set Node to store the parent and rank
def __init__(self, data: T) -> None:
self.data = data
self.parent = self
self.rank = 0
class DisjointSetTree(Generic[T]):
# Disjoint Set DataStructure
def __init__(self) -> None:
# map from node name to the node object
self.map: dict[T, DisjointSetTreeNode[T]] = {}
def make_set(self, data: T) -> None:
# create a new set with x as its member
self.map[data] = DisjointSetTreeNode(data)
def find_set(self, data: T) -> DisjointSetTreeNode[T]:
# find the set x belongs to (with path-compression)
elem_ref = self.map[data]
if elem_ref != elem_ref.parent:
elem_ref.parent = self.find_set(elem_ref.parent.data)
return elem_ref.parent
def link(
self, node1: DisjointSetTreeNode[T], node2: DisjointSetTreeNode[T]
) -> None:
# helper function for union operation
if node1.rank > node2.rank:
node2.parent = node1
else:
node1.parent = node2
if node1.rank == node2.rank:
node2.rank += 1
def union(self, data1: T, data2: T) -> None:
# merge 2 disjoint sets
self.link(self.find_set(data1), self.find_set(data2))
class GraphUndirectedWeighted(Generic[T]):
def __init__(self) -> None:
# connections: map from the node to the neighbouring nodes (with weights)
self.connections: dict[T, dict[T, int]] = {}
def add_node(self, node: T) -> None:
# add a node ONLY if its not present in the graph
if node not in self.connections:
self.connections[node] = {}
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# add an edge with the given weight
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def kruskal(self) -> GraphUndirectedWeighted[T]:
# Kruskal's Algorithm to generate a Minimum Spanning Tree (MST) of a graph
"""
Details: https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Example:
>>> g1 = GraphUndirectedWeighted[int]()
>>> g1.add_edge(1, 2, 1)
>>> g1.add_edge(2, 3, 2)
>>> g1.add_edge(3, 4, 1)
>>> g1.add_edge(3, 5, 100) # Removed in MST
>>> g1.add_edge(4, 5, 5)
>>> assert 5 in g1.connections[3]
>>> mst = g1.kruskal()
>>> assert 5 not in mst.connections[3]
>>> g2 = GraphUndirectedWeighted[str]()
>>> g2.add_edge('A', 'B', 1)
>>> g2.add_edge('B', 'C', 2)
>>> g2.add_edge('C', 'D', 1)
>>> g2.add_edge('C', 'E', 100) # Removed in MST
>>> g2.add_edge('D', 'E', 5)
>>> assert 'E' in g2.connections["C"]
>>> mst = g2.kruskal()
>>> assert 'E' not in mst.connections['C']
"""
# getting the edges in ascending order of weights
edges = []
seen = set()
for start in self.connections:
for end in self.connections[start]:
if (start, end) not in seen:
seen.add((end, start))
edges.append((start, end, self.connections[start][end]))
edges.sort(key=lambda x: x[2])
# creating the disjoint set
disjoint_set = DisjointSetTree[T]()
for node in self.connections:
disjoint_set.make_set(node)
# MST generation
num_edges = 0
index = 0
graph = GraphUndirectedWeighted[T]()
while num_edges < len(self.connections) - 1:
u, v, w = edges[index]
index += 1
parent_u = disjoint_set.find_set(u)
parent_v = disjoint_set.find_set(v)
if parent_u != parent_v:
num_edges += 1
graph.add_edge(u, v, w)
disjoint_set.union(u, v)
return graph
| from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class DisjointSetTreeNode(Generic[T]):
# Disjoint Set Node to store the parent and rank
def __init__(self, data: T) -> None:
self.data = data
self.parent = self
self.rank = 0
class DisjointSetTree(Generic[T]):
# Disjoint Set DataStructure
def __init__(self) -> None:
# map from node name to the node object
self.map: dict[T, DisjointSetTreeNode[T]] = {}
def make_set(self, data: T) -> None:
# create a new set with x as its member
self.map[data] = DisjointSetTreeNode(data)
def find_set(self, data: T) -> DisjointSetTreeNode[T]:
# find the set x belongs to (with path-compression)
elem_ref = self.map[data]
if elem_ref != elem_ref.parent:
elem_ref.parent = self.find_set(elem_ref.parent.data)
return elem_ref.parent
def link(
self, node1: DisjointSetTreeNode[T], node2: DisjointSetTreeNode[T]
) -> None:
# helper function for union operation
if node1.rank > node2.rank:
node2.parent = node1
else:
node1.parent = node2
if node1.rank == node2.rank:
node2.rank += 1
def union(self, data1: T, data2: T) -> None:
# merge 2 disjoint sets
self.link(self.find_set(data1), self.find_set(data2))
class GraphUndirectedWeighted(Generic[T]):
def __init__(self) -> None:
# connections: map from the node to the neighbouring nodes (with weights)
self.connections: dict[T, dict[T, int]] = {}
def add_node(self, node: T) -> None:
# add a node ONLY if its not present in the graph
if node not in self.connections:
self.connections[node] = {}
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# add an edge with the given weight
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def kruskal(self) -> GraphUndirectedWeighted[T]:
# Kruskal's Algorithm to generate a Minimum Spanning Tree (MST) of a graph
"""
Details: https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Example:
>>> g1 = GraphUndirectedWeighted[int]()
>>> g1.add_edge(1, 2, 1)
>>> g1.add_edge(2, 3, 2)
>>> g1.add_edge(3, 4, 1)
>>> g1.add_edge(3, 5, 100) # Removed in MST
>>> g1.add_edge(4, 5, 5)
>>> assert 5 in g1.connections[3]
>>> mst = g1.kruskal()
>>> assert 5 not in mst.connections[3]
>>> g2 = GraphUndirectedWeighted[str]()
>>> g2.add_edge('A', 'B', 1)
>>> g2.add_edge('B', 'C', 2)
>>> g2.add_edge('C', 'D', 1)
>>> g2.add_edge('C', 'E', 100) # Removed in MST
>>> g2.add_edge('D', 'E', 5)
>>> assert 'E' in g2.connections["C"]
>>> mst = g2.kruskal()
>>> assert 'E' not in mst.connections['C']
"""
# getting the edges in ascending order of weights
edges = []
seen = set()
for start in self.connections:
for end in self.connections[start]:
if (start, end) not in seen:
seen.add((end, start))
edges.append((start, end, self.connections[start][end]))
edges.sort(key=lambda x: x[2])
# creating the disjoint set
disjoint_set = DisjointSetTree[T]()
for node in self.connections:
disjoint_set.make_set(node)
# MST generation
num_edges = 0
index = 0
graph = GraphUndirectedWeighted[T]()
while num_edges < len(self.connections) - 1:
u, v, w = edges[index]
index += 1
parent_u = disjoint_set.find_set(u)
parent_v = disjoint_set.find_set(v)
if parent_u != parent_v:
num_edges += 1
graph.add_edge(u, v, w)
disjoint_set.union(u, v)
return graph
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return sum(1 for _ in self)
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
return None
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for _ in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
test_input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in test_input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return sum(1 for _ in self)
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
return None
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for _ in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
test_input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in test_input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Sum of all nodes in a binary tree.
Python implementation:
O(n) time complexity - Recurses through :meth:`depth_first_search`
with each element.
O(n) space complexity - At any point in time maximum number of stack
frames that could be in memory is `n`
"""
from __future__ import annotations
from collections.abc import Iterator
class Node:
"""
A Node has a value variable and pointers to Nodes to its left and right.
"""
def __init__(self, value: int) -> None:
self.value = value
self.left: Node | None = None
self.right: Node | None = None
class BinaryTreeNodeSum:
r"""
The below tree looks like this
10
/ \
5 -3
/ / \
12 8 0
>>> tree = Node(10)
>>> sum(BinaryTreeNodeSum(tree))
10
>>> tree.left = Node(5)
>>> sum(BinaryTreeNodeSum(tree))
15
>>> tree.right = Node(-3)
>>> sum(BinaryTreeNodeSum(tree))
12
>>> tree.left.left = Node(12)
>>> sum(BinaryTreeNodeSum(tree))
24
>>> tree.right.left = Node(8)
>>> tree.right.right = Node(0)
>>> sum(BinaryTreeNodeSum(tree))
32
"""
def __init__(self, tree: Node) -> None:
self.tree = tree
def depth_first_search(self, node: Node | None) -> int:
if node is None:
return 0
return node.value + (
self.depth_first_search(node.left) + self.depth_first_search(node.right)
)
def __iter__(self) -> Iterator[int]:
yield self.depth_first_search(self.tree)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Sum of all nodes in a binary tree.
Python implementation:
O(n) time complexity - Recurses through :meth:`depth_first_search`
with each element.
O(n) space complexity - At any point in time maximum number of stack
frames that could be in memory is `n`
"""
from __future__ import annotations
from collections.abc import Iterator
class Node:
"""
A Node has a value variable and pointers to Nodes to its left and right.
"""
def __init__(self, value: int) -> None:
self.value = value
self.left: Node | None = None
self.right: Node | None = None
class BinaryTreeNodeSum:
r"""
The below tree looks like this
10
/ \
5 -3
/ / \
12 8 0
>>> tree = Node(10)
>>> sum(BinaryTreeNodeSum(tree))
10
>>> tree.left = Node(5)
>>> sum(BinaryTreeNodeSum(tree))
15
>>> tree.right = Node(-3)
>>> sum(BinaryTreeNodeSum(tree))
12
>>> tree.left.left = Node(12)
>>> sum(BinaryTreeNodeSum(tree))
24
>>> tree.right.left = Node(8)
>>> tree.right.right = Node(0)
>>> sum(BinaryTreeNodeSum(tree))
32
"""
def __init__(self, tree: Node) -> None:
self.tree = tree
def depth_first_search(self, node: Node | None) -> int:
if node is None:
return 0
return node.value + (
self.depth_first_search(node.left) + self.depth_first_search(node.right)
)
def __iter__(self) -> Iterator[int]:
yield self.depth_first_search(self.tree)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Given an two dimensional binary matrix grid. An island is a group of 1's (representing
land) connected 4-directionally (horizontal or vertical.) You may assume all four edges
of the grid are surrounded by water. The area of an island is the number of cells with
a value 1 in the island. Return the maximum area of an island in a grid. If there is no
island, return 0.
"""
matrix = [
[0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
]
def is_safe(row: int, col: int, rows: int, cols: int) -> bool:
"""
Checking whether coordinate (row, col) is valid or not.
>>> is_safe(0, 0, 5, 5)
True
>>> is_safe(-1,-1, 5, 5)
False
"""
return 0 <= row < rows and 0 <= col < cols
def depth_first_search(row: int, col: int, seen: set, mat: list[list[int]]) -> int:
"""
Returns the current area of the island
>>> depth_first_search(0, 0, set(), matrix)
0
"""
rows = len(mat)
cols = len(mat[0])
if is_safe(row, col, rows, cols) and (row, col) not in seen and mat[row][col] == 1:
seen.add((row, col))
return (
1
+ depth_first_search(row + 1, col, seen, mat)
+ depth_first_search(row - 1, col, seen, mat)
+ depth_first_search(row, col + 1, seen, mat)
+ depth_first_search(row, col - 1, seen, mat)
)
else:
return 0
def find_max_area(mat: list[list[int]]) -> int:
"""
Finds the area of all islands and returns the maximum area.
>>> find_max_area(matrix)
6
"""
seen: set = set()
max_area = 0
for row, line in enumerate(mat):
for col, item in enumerate(line):
if item == 1 and (row, col) not in seen:
# Maximizing the area
max_area = max(max_area, depth_first_search(row, col, seen, mat))
return max_area
if __name__ == "__main__":
import doctest
print(find_max_area(matrix)) # Output -> 6
"""
Explanation:
We are allowed to move in four directions (horizontal or vertical) so the possible
in a matrix if we are at x and y position the possible moving are
Directions are [(x, y+1), (x, y-1), (x+1, y), (x-1, y)] but we need to take care of
boundary cases as well which are x and y can not be smaller than 0 and greater than
the number of rows and columns respectively.
Visualization
mat = [
[0,0,A,0,0,0,0,B,0,0,0,0,0],
[0,0,0,0,0,0,0,B,B,B,0,0,0],
[0,C,C,0,D,0,0,0,0,0,0,0,0],
[0,C,0,0,D,D,0,0,E,0,E,0,0],
[0,C,0,0,D,D,0,0,E,E,E,0,0],
[0,0,0,0,0,0,0,0,0,0,E,0,0],
[0,0,0,0,0,0,0,F,F,F,0,0,0],
[0,0,0,0,0,0,0,F,F,0,0,0,0]
]
For visualization, I have defined the connected island with letters
by observation, we can see that
A island is of area 1
B island is of area 4
C island is of area 4
D island is of area 5
E island is of area 6 and
F island is of area 5
it has 6 unique islands of mentioned areas
and the maximum of all of them is 6 so we return 6.
"""
doctest.testmod()
| """
Given an two dimensional binary matrix grid. An island is a group of 1's (representing
land) connected 4-directionally (horizontal or vertical.) You may assume all four edges
of the grid are surrounded by water. The area of an island is the number of cells with
a value 1 in the island. Return the maximum area of an island in a grid. If there is no
island, return 0.
"""
matrix = [
[0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
]
def is_safe(row: int, col: int, rows: int, cols: int) -> bool:
"""
Checking whether coordinate (row, col) is valid or not.
>>> is_safe(0, 0, 5, 5)
True
>>> is_safe(-1,-1, 5, 5)
False
"""
return 0 <= row < rows and 0 <= col < cols
def depth_first_search(row: int, col: int, seen: set, mat: list[list[int]]) -> int:
"""
Returns the current area of the island
>>> depth_first_search(0, 0, set(), matrix)
0
"""
rows = len(mat)
cols = len(mat[0])
if is_safe(row, col, rows, cols) and (row, col) not in seen and mat[row][col] == 1:
seen.add((row, col))
return (
1
+ depth_first_search(row + 1, col, seen, mat)
+ depth_first_search(row - 1, col, seen, mat)
+ depth_first_search(row, col + 1, seen, mat)
+ depth_first_search(row, col - 1, seen, mat)
)
else:
return 0
def find_max_area(mat: list[list[int]]) -> int:
"""
Finds the area of all islands and returns the maximum area.
>>> find_max_area(matrix)
6
"""
seen: set = set()
max_area = 0
for row, line in enumerate(mat):
for col, item in enumerate(line):
if item == 1 and (row, col) not in seen:
# Maximizing the area
max_area = max(max_area, depth_first_search(row, col, seen, mat))
return max_area
if __name__ == "__main__":
import doctest
print(find_max_area(matrix)) # Output -> 6
"""
Explanation:
We are allowed to move in four directions (horizontal or vertical) so the possible
in a matrix if we are at x and y position the possible moving are
Directions are [(x, y+1), (x, y-1), (x+1, y), (x-1, y)] but we need to take care of
boundary cases as well which are x and y can not be smaller than 0 and greater than
the number of rows and columns respectively.
Visualization
mat = [
[0,0,A,0,0,0,0,B,0,0,0,0,0],
[0,0,0,0,0,0,0,B,B,B,0,0,0],
[0,C,C,0,D,0,0,0,0,0,0,0,0],
[0,C,0,0,D,D,0,0,E,0,E,0,0],
[0,C,0,0,D,D,0,0,E,E,E,0,0],
[0,0,0,0,0,0,0,0,0,0,E,0,0],
[0,0,0,0,0,0,0,F,F,F,0,0,0],
[0,0,0,0,0,0,0,F,F,0,0,0,0]
]
For visualization, I have defined the connected island with letters
by observation, we can see that
A island is of area 1
B island is of area 4
C island is of area 4
D island is of area 5
E island is of area 6 and
F island is of area 5
it has 6 unique islands of mentioned areas
and the maximum of all of them is 6 so we return 6.
"""
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Author : Alexander Pantyukhin
Date : December 12, 2022
Task:
Given a string and a list of words, return true if the string can be
segmented into a space-separated sequence of one or more words.
Note that the same word may be reused
multiple times in the segmentation.
Implementation notes: Trie + Dynamic programming up -> down.
The Trie will be used to store the words. It will be useful for scanning
available words for the current position in the string.
Leetcode:
https://leetcode.com/problems/word-break/description/
Runtime: O(n * n)
Space: O(n)
"""
import functools
from typing import Any
def word_break(string: str, words: list[str]) -> bool:
"""
Return True if numbers have opposite signs False otherwise.
>>> word_break("applepenapple", ["apple","pen"])
True
>>> word_break("catsandog", ["cats","dog","sand","and","cat"])
False
>>> word_break("cars", ["car","ca","rs"])
True
>>> word_break('abc', [])
False
>>> word_break(123, ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('', ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('abc', [123])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
>>> word_break('abc', [''])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
"""
# Validation
if not isinstance(string, str) or len(string) == 0:
raise ValueError("the string should be not empty string")
if not isinstance(words, list) or not all(
isinstance(item, str) and len(item) > 0 for item in words
):
raise ValueError("the words should be a list of non-empty strings")
# Build trie
trie: dict[str, Any] = {}
word_keeper_key = "WORD_KEEPER"
for word in words:
trie_node = trie
for c in word:
if c not in trie_node:
trie_node[c] = {}
trie_node = trie_node[c]
trie_node[word_keeper_key] = True
len_string = len(string)
# Dynamic programming method
@functools.cache
def is_breakable(index: int) -> bool:
"""
>>> string = 'a'
>>> is_breakable(1)
True
"""
if index == len_string:
return True
trie_node = trie
for i in range(index, len_string):
trie_node = trie_node.get(string[i], None)
if trie_node is None:
return False
if trie_node.get(word_keeper_key, False) and is_breakable(i + 1):
return True
return False
return is_breakable(0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : December 12, 2022
Task:
Given a string and a list of words, return true if the string can be
segmented into a space-separated sequence of one or more words.
Note that the same word may be reused
multiple times in the segmentation.
Implementation notes: Trie + Dynamic programming up -> down.
The Trie will be used to store the words. It will be useful for scanning
available words for the current position in the string.
Leetcode:
https://leetcode.com/problems/word-break/description/
Runtime: O(n * n)
Space: O(n)
"""
import functools
from typing import Any
def word_break(string: str, words: list[str]) -> bool:
"""
Return True if numbers have opposite signs False otherwise.
>>> word_break("applepenapple", ["apple","pen"])
True
>>> word_break("catsandog", ["cats","dog","sand","and","cat"])
False
>>> word_break("cars", ["car","ca","rs"])
True
>>> word_break('abc', [])
False
>>> word_break(123, ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('', ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('abc', [123])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
>>> word_break('abc', [''])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
"""
# Validation
if not isinstance(string, str) or len(string) == 0:
raise ValueError("the string should be not empty string")
if not isinstance(words, list) or not all(
isinstance(item, str) and len(item) > 0 for item in words
):
raise ValueError("the words should be a list of non-empty strings")
# Build trie
trie: dict[str, Any] = {}
word_keeper_key = "WORD_KEEPER"
for word in words:
trie_node = trie
for c in word:
if c not in trie_node:
trie_node[c] = {}
trie_node = trie_node[c]
trie_node[word_keeper_key] = True
len_string = len(string)
# Dynamic programming method
@functools.cache
def is_breakable(index: int) -> bool:
"""
>>> string = 'a'
>>> is_breakable(1)
True
"""
if index == len_string:
return True
trie_node = trie
for i in range(index, len_string):
trie_node = trie_node.get(string[i], None)
if trie_node is None:
return False
if trie_node.get(word_keeper_key, False) and is_breakable(i + 1):
return True
return False
return is_breakable(0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Implementation of an auto-balanced binary tree!
For doctests run following command:
python3 -m doctest -v avl_tree.py
For testing run:
python avl_tree.py
"""
from __future__ import annotations
import math
import random
from typing import Any
class MyQueue:
def __init__(self) -> None:
self.data: list[Any] = []
self.head: int = 0
self.tail: int = 0
def is_empty(self) -> bool:
return self.head == self.tail
def push(self, data: Any) -> None:
self.data.append(data)
self.tail = self.tail + 1
def pop(self) -> Any:
ret = self.data[self.head]
self.head = self.head + 1
return ret
def count(self) -> int:
return self.tail - self.head
def print_queue(self) -> None:
print(self.data)
print("**************")
print(self.data[self.head : self.tail])
class MyNode:
def __init__(self, data: Any) -> None:
self.data = data
self.left: MyNode | None = None
self.right: MyNode | None = None
self.height: int = 1
def get_data(self) -> Any:
return self.data
def get_left(self) -> MyNode | None:
return self.left
def get_right(self) -> MyNode | None:
return self.right
def get_height(self) -> int:
return self.height
def set_data(self, data: Any) -> None:
self.data = data
def set_left(self, node: MyNode | None) -> None:
self.left = node
def set_right(self, node: MyNode | None) -> None:
self.right = node
def set_height(self, height: int) -> None:
self.height = height
def get_height(node: MyNode | None) -> int:
if node is None:
return 0
return node.get_height()
def my_max(a: int, b: int) -> int:
if a > b:
return a
return b
def right_rotation(node: MyNode) -> MyNode:
r"""
A B
/ \ / \
B C Bl A
/ \ --> / / \
Bl Br UB Br C
/
UB
UB = unbalanced node
"""
print("left rotation node:", node.get_data())
ret = node.get_left()
assert ret is not None
node.set_left(ret.get_right())
ret.set_right(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def left_rotation(node: MyNode) -> MyNode:
"""
a mirror symmetry rotation of the left_rotation
"""
print("right rotation node:", node.get_data())
ret = node.get_right()
assert ret is not None
node.set_right(ret.get_left())
ret.set_left(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def lr_rotation(node: MyNode) -> MyNode:
r"""
A A Br
/ \ / \ / \
B C LR Br C RR B A
/ \ --> / \ --> / / \
Bl Br B UB Bl UB C
\ /
UB Bl
RR = right_rotation LR = left_rotation
"""
left_child = node.get_left()
assert left_child is not None
node.set_left(left_rotation(left_child))
return right_rotation(node)
def rl_rotation(node: MyNode) -> MyNode:
right_child = node.get_right()
assert right_child is not None
node.set_right(right_rotation(right_child))
return left_rotation(node)
def insert_node(node: MyNode | None, data: Any) -> MyNode | None:
if node is None:
return MyNode(data)
if data < node.get_data():
node.set_left(insert_node(node.get_left(), data))
if (
get_height(node.get_left()) - get_height(node.get_right()) == 2
): # an unbalance detected
left_child = node.get_left()
assert left_child is not None
if (
data < left_child.get_data()
): # new node is the left child of the left child
node = right_rotation(node)
else:
node = lr_rotation(node)
else:
node.set_right(insert_node(node.get_right(), data))
if get_height(node.get_right()) - get_height(node.get_left()) == 2:
right_child = node.get_right()
assert right_child is not None
if data < right_child.get_data():
node = rl_rotation(node)
else:
node = left_rotation(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
return node
def get_right_most(root: MyNode) -> Any:
while True:
right_child = root.get_right()
if right_child is None:
break
root = right_child
return root.get_data()
def get_left_most(root: MyNode) -> Any:
while True:
left_child = root.get_left()
if left_child is None:
break
root = left_child
return root.get_data()
def del_node(root: MyNode, data: Any) -> MyNode | None:
left_child = root.get_left()
right_child = root.get_right()
if root.get_data() == data:
if left_child is not None and right_child is not None:
temp_data = get_left_most(right_child)
root.set_data(temp_data)
root.set_right(del_node(right_child, temp_data))
elif left_child is not None:
root = left_child
elif right_child is not None:
root = right_child
else:
return None
elif root.get_data() > data:
if left_child is None:
print("No such data")
return root
else:
root.set_left(del_node(left_child, data))
else: # root.get_data() < data
if right_child is None:
return root
else:
root.set_right(del_node(right_child, data))
if get_height(right_child) - get_height(left_child) == 2:
assert right_child is not None
if get_height(right_child.get_right()) > get_height(right_child.get_left()):
root = left_rotation(root)
else:
root = rl_rotation(root)
elif get_height(right_child) - get_height(left_child) == -2:
assert left_child is not None
if get_height(left_child.get_left()) > get_height(left_child.get_right()):
root = right_rotation(root)
else:
root = lr_rotation(root)
height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1
root.set_height(height)
return root
class AVLtree:
"""
An AVL tree doctest
Examples:
>>> t = AVLtree()
>>> t.insert(4)
insert:4
>>> print(str(t).replace(" \\n","\\n"))
4
*************************************
>>> t.insert(2)
insert:2
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
>>> t.insert(3)
insert:3
right rotation node: 2
left rotation node: 4
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
3
2 4
*************************************
>>> t.get_height()
2
>>> t.del_node(3)
delete:3
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
"""
def __init__(self) -> None:
self.root: MyNode | None = None
def get_height(self) -> int:
return get_height(self.root)
def insert(self, data: Any) -> None:
print("insert:" + str(data))
self.root = insert_node(self.root, data)
def del_node(self, data: Any) -> None:
print("delete:" + str(data))
if self.root is None:
print("Tree is empty!")
return
self.root = del_node(self.root, data)
def __str__(
self,
) -> str: # a level traversale, gives a more intuitive look on the tree
output = ""
q = MyQueue()
q.push(self.root)
layer = self.get_height()
if layer == 0:
return output
cnt = 0
while not q.is_empty():
node = q.pop()
space = " " * int(math.pow(2, layer - 1))
output += space
if node is None:
output += "*"
q.push(None)
q.push(None)
else:
output += str(node.get_data())
q.push(node.get_left())
q.push(node.get_right())
output += space
cnt = cnt + 1
for i in range(100):
if cnt == math.pow(2, i) - 1:
layer = layer - 1
if layer == 0:
output += "\n*************************************"
return output
output += "\n"
break
output += "\n*************************************"
return output
def _test() -> None:
import doctest
doctest.testmod()
if __name__ == "__main__":
_test()
t = AVLtree()
lst = list(range(10))
random.shuffle(lst)
for i in lst:
t.insert(i)
print(str(t))
random.shuffle(lst)
for i in lst:
t.del_node(i)
print(str(t))
| """
Implementation of an auto-balanced binary tree!
For doctests run following command:
python3 -m doctest -v avl_tree.py
For testing run:
python avl_tree.py
"""
from __future__ import annotations
import math
import random
from typing import Any
class MyQueue:
def __init__(self) -> None:
self.data: list[Any] = []
self.head: int = 0
self.tail: int = 0
def is_empty(self) -> bool:
return self.head == self.tail
def push(self, data: Any) -> None:
self.data.append(data)
self.tail = self.tail + 1
def pop(self) -> Any:
ret = self.data[self.head]
self.head = self.head + 1
return ret
def count(self) -> int:
return self.tail - self.head
def print_queue(self) -> None:
print(self.data)
print("**************")
print(self.data[self.head : self.tail])
class MyNode:
def __init__(self, data: Any) -> None:
self.data = data
self.left: MyNode | None = None
self.right: MyNode | None = None
self.height: int = 1
def get_data(self) -> Any:
return self.data
def get_left(self) -> MyNode | None:
return self.left
def get_right(self) -> MyNode | None:
return self.right
def get_height(self) -> int:
return self.height
def set_data(self, data: Any) -> None:
self.data = data
def set_left(self, node: MyNode | None) -> None:
self.left = node
def set_right(self, node: MyNode | None) -> None:
self.right = node
def set_height(self, height: int) -> None:
self.height = height
def get_height(node: MyNode | None) -> int:
if node is None:
return 0
return node.get_height()
def my_max(a: int, b: int) -> int:
if a > b:
return a
return b
def right_rotation(node: MyNode) -> MyNode:
r"""
A B
/ \ / \
B C Bl A
/ \ --> / / \
Bl Br UB Br C
/
UB
UB = unbalanced node
"""
print("left rotation node:", node.get_data())
ret = node.get_left()
assert ret is not None
node.set_left(ret.get_right())
ret.set_right(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def left_rotation(node: MyNode) -> MyNode:
"""
a mirror symmetry rotation of the left_rotation
"""
print("right rotation node:", node.get_data())
ret = node.get_right()
assert ret is not None
node.set_right(ret.get_left())
ret.set_left(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def lr_rotation(node: MyNode) -> MyNode:
r"""
A A Br
/ \ / \ / \
B C LR Br C RR B A
/ \ --> / \ --> / / \
Bl Br B UB Bl UB C
\ /
UB Bl
RR = right_rotation LR = left_rotation
"""
left_child = node.get_left()
assert left_child is not None
node.set_left(left_rotation(left_child))
return right_rotation(node)
def rl_rotation(node: MyNode) -> MyNode:
right_child = node.get_right()
assert right_child is not None
node.set_right(right_rotation(right_child))
return left_rotation(node)
def insert_node(node: MyNode | None, data: Any) -> MyNode | None:
if node is None:
return MyNode(data)
if data < node.get_data():
node.set_left(insert_node(node.get_left(), data))
if (
get_height(node.get_left()) - get_height(node.get_right()) == 2
): # an unbalance detected
left_child = node.get_left()
assert left_child is not None
if (
data < left_child.get_data()
): # new node is the left child of the left child
node = right_rotation(node)
else:
node = lr_rotation(node)
else:
node.set_right(insert_node(node.get_right(), data))
if get_height(node.get_right()) - get_height(node.get_left()) == 2:
right_child = node.get_right()
assert right_child is not None
if data < right_child.get_data():
node = rl_rotation(node)
else:
node = left_rotation(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
return node
def get_right_most(root: MyNode) -> Any:
while True:
right_child = root.get_right()
if right_child is None:
break
root = right_child
return root.get_data()
def get_left_most(root: MyNode) -> Any:
while True:
left_child = root.get_left()
if left_child is None:
break
root = left_child
return root.get_data()
def del_node(root: MyNode, data: Any) -> MyNode | None:
left_child = root.get_left()
right_child = root.get_right()
if root.get_data() == data:
if left_child is not None and right_child is not None:
temp_data = get_left_most(right_child)
root.set_data(temp_data)
root.set_right(del_node(right_child, temp_data))
elif left_child is not None:
root = left_child
elif right_child is not None:
root = right_child
else:
return None
elif root.get_data() > data:
if left_child is None:
print("No such data")
return root
else:
root.set_left(del_node(left_child, data))
else: # root.get_data() < data
if right_child is None:
return root
else:
root.set_right(del_node(right_child, data))
if get_height(right_child) - get_height(left_child) == 2:
assert right_child is not None
if get_height(right_child.get_right()) > get_height(right_child.get_left()):
root = left_rotation(root)
else:
root = rl_rotation(root)
elif get_height(right_child) - get_height(left_child) == -2:
assert left_child is not None
if get_height(left_child.get_left()) > get_height(left_child.get_right()):
root = right_rotation(root)
else:
root = lr_rotation(root)
height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1
root.set_height(height)
return root
class AVLtree:
"""
An AVL tree doctest
Examples:
>>> t = AVLtree()
>>> t.insert(4)
insert:4
>>> print(str(t).replace(" \\n","\\n"))
4
*************************************
>>> t.insert(2)
insert:2
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
>>> t.insert(3)
insert:3
right rotation node: 2
left rotation node: 4
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
3
2 4
*************************************
>>> t.get_height()
2
>>> t.del_node(3)
delete:3
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
"""
def __init__(self) -> None:
self.root: MyNode | None = None
def get_height(self) -> int:
return get_height(self.root)
def insert(self, data: Any) -> None:
print("insert:" + str(data))
self.root = insert_node(self.root, data)
def del_node(self, data: Any) -> None:
print("delete:" + str(data))
if self.root is None:
print("Tree is empty!")
return
self.root = del_node(self.root, data)
def __str__(
self,
) -> str: # a level traversale, gives a more intuitive look on the tree
output = ""
q = MyQueue()
q.push(self.root)
layer = self.get_height()
if layer == 0:
return output
cnt = 0
while not q.is_empty():
node = q.pop()
space = " " * int(math.pow(2, layer - 1))
output += space
if node is None:
output += "*"
q.push(None)
q.push(None)
else:
output += str(node.get_data())
q.push(node.get_left())
q.push(node.get_right())
output += space
cnt = cnt + 1
for i in range(100):
if cnt == math.pow(2, i) - 1:
layer = layer - 1
if layer == 0:
output += "\n*************************************"
return output
output += "\n"
break
output += "\n*************************************"
return output
def _test() -> None:
import doctest
doctest.testmod()
if __name__ == "__main__":
_test()
t = AVLtree()
lst = list(range(10))
random.shuffle(lst)
for i in lst:
t.insert(i)
print(str(t))
random.shuffle(lst)
for i in lst:
t.del_node(i)
print(str(t))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Frequency Finder
import string
# frequency taken from https://en.wikipedia.org/wiki/Letter_frequency
english_letter_freq = {
"E": 12.70,
"T": 9.06,
"A": 8.17,
"O": 7.51,
"I": 6.97,
"N": 6.75,
"S": 6.33,
"H": 6.09,
"R": 5.99,
"D": 4.25,
"L": 4.03,
"C": 2.78,
"U": 2.76,
"M": 2.41,
"W": 2.36,
"F": 2.23,
"G": 2.02,
"Y": 1.97,
"P": 1.93,
"B": 1.29,
"V": 0.98,
"K": 0.77,
"J": 0.15,
"X": 0.15,
"Q": 0.10,
"Z": 0.07,
}
ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def get_letter_count(message: str) -> dict[str, int]:
letter_count = {letter: 0 for letter in string.ascii_uppercase}
for letter in message.upper():
if letter in LETTERS:
letter_count[letter] += 1
return letter_count
def get_item_at_index_zero(x: tuple) -> str:
return x[0]
def get_frequency_order(message: str) -> str:
letter_to_freq = get_letter_count(message)
freq_to_letter: dict[int, list[str]] = {
freq: [] for letter, freq in letter_to_freq.items()
}
for letter in LETTERS:
freq_to_letter[letter_to_freq[letter]].append(letter)
freq_to_letter_str: dict[int, str] = {}
for freq in freq_to_letter:
freq_to_letter[freq].sort(key=ETAOIN.find, reverse=True)
freq_to_letter_str[freq] = "".join(freq_to_letter[freq])
freq_pairs = list(freq_to_letter_str.items())
freq_pairs.sort(key=get_item_at_index_zero, reverse=True)
freq_order: list[str] = [freq_pair[1] for freq_pair in freq_pairs]
return "".join(freq_order)
def english_freq_match_score(message: str) -> int:
"""
>>> english_freq_match_score('Hello World')
1
"""
freq_order = get_frequency_order(message)
match_score = 0
for common_letter in ETAOIN[:6]:
if common_letter in freq_order[:6]:
match_score += 1
for uncommon_letter in ETAOIN[-6:]:
if uncommon_letter in freq_order[-6:]:
match_score += 1
return match_score
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Frequency Finder
import string
# frequency taken from https://en.wikipedia.org/wiki/Letter_frequency
english_letter_freq = {
"E": 12.70,
"T": 9.06,
"A": 8.17,
"O": 7.51,
"I": 6.97,
"N": 6.75,
"S": 6.33,
"H": 6.09,
"R": 5.99,
"D": 4.25,
"L": 4.03,
"C": 2.78,
"U": 2.76,
"M": 2.41,
"W": 2.36,
"F": 2.23,
"G": 2.02,
"Y": 1.97,
"P": 1.93,
"B": 1.29,
"V": 0.98,
"K": 0.77,
"J": 0.15,
"X": 0.15,
"Q": 0.10,
"Z": 0.07,
}
ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def get_letter_count(message: str) -> dict[str, int]:
letter_count = {letter: 0 for letter in string.ascii_uppercase}
for letter in message.upper():
if letter in LETTERS:
letter_count[letter] += 1
return letter_count
def get_item_at_index_zero(x: tuple) -> str:
return x[0]
def get_frequency_order(message: str) -> str:
letter_to_freq = get_letter_count(message)
freq_to_letter: dict[int, list[str]] = {
freq: [] for letter, freq in letter_to_freq.items()
}
for letter in LETTERS:
freq_to_letter[letter_to_freq[letter]].append(letter)
freq_to_letter_str: dict[int, str] = {}
for freq in freq_to_letter:
freq_to_letter[freq].sort(key=ETAOIN.find, reverse=True)
freq_to_letter_str[freq] = "".join(freq_to_letter[freq])
freq_pairs = list(freq_to_letter_str.items())
freq_pairs.sort(key=get_item_at_index_zero, reverse=True)
freq_order: list[str] = [freq_pair[1] for freq_pair in freq_pairs]
return "".join(freq_order)
def english_freq_match_score(message: str) -> int:
"""
>>> english_freq_match_score('Hello World')
1
"""
freq_order = get_frequency_order(message)
match_score = 0
for common_letter in ETAOIN[:6]:
if common_letter in freq_order[:6]:
match_score += 1
for uncommon_letter in ETAOIN[-6:]:
if uncommon_letter in freq_order[-6:]:
match_score += 1
return match_score
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| INF = float("inf")
class Dinic:
def __init__(self, n):
self.lvl = [0] * n
self.ptr = [0] * n
self.q = [0] * n
self.adj = [[] for _ in range(n)]
"""
Here we will add our edges containing with the following parameters:
vertex closest to source, vertex closest to sink and flow capacity
through that edge ...
"""
def add_edge(self, a, b, c, rcap=0):
self.adj[a].append([b, len(self.adj[b]), c, 0])
self.adj[b].append([a, len(self.adj[a]) - 1, rcap, 0])
# This is a sample depth first search to be used at max_flow
def depth_first_search(self, vertex, sink, flow):
if vertex == sink or not flow:
return flow
for i in range(self.ptr[vertex], len(self.adj[vertex])):
e = self.adj[vertex][i]
if self.lvl[e[0]] == self.lvl[vertex] + 1:
p = self.depth_first_search(e[0], sink, min(flow, e[2] - e[3]))
if p:
self.adj[vertex][i][3] += p
self.adj[e[0]][e[1]][3] -= p
return p
self.ptr[vertex] = self.ptr[vertex] + 1
return 0
# Here we calculate the flow that reaches the sink
def max_flow(self, source, sink):
flow, self.q[0] = 0, source
for l in range(31): # noqa: E741 l = 30 maybe faster for random data
while True:
self.lvl, self.ptr = [0] * len(self.q), [0] * len(self.q)
qi, qe, self.lvl[source] = 0, 1, 1
while qi < qe and not self.lvl[sink]:
v = self.q[qi]
qi += 1
for e in self.adj[v]:
if not self.lvl[e[0]] and (e[2] - e[3]) >> (30 - l):
self.q[qe] = e[0]
qe += 1
self.lvl[e[0]] = self.lvl[v] + 1
p = self.depth_first_search(source, sink, INF)
while p:
flow += p
p = self.depth_first_search(source, sink, INF)
if not self.lvl[sink]:
break
return flow
# Example to use
"""
Will be a bipartite graph, than it has the vertices near the source(4)
and the vertices near the sink(4)
"""
# Here we make a graphs with 10 vertex(source and sink includes)
graph = Dinic(10)
source = 0
sink = 9
"""
Now we add the vertices next to the font in the font with 1 capacity in this edge
(source -> source vertices)
"""
for vertex in range(1, 5):
graph.add_edge(source, vertex, 1)
"""
We will do the same thing for the vertices near the sink, but from vertex to sink
(sink vertices -> sink)
"""
for vertex in range(5, 9):
graph.add_edge(vertex, sink, 1)
"""
Finally we add the verices near the sink to the vertices near the source.
(source vertices -> sink vertices)
"""
for vertex in range(1, 5):
graph.add_edge(vertex, vertex + 4, 1)
# Now we can know that is the maximum flow(source -> sink)
print(graph.max_flow(source, sink))
| INF = float("inf")
class Dinic:
def __init__(self, n):
self.lvl = [0] * n
self.ptr = [0] * n
self.q = [0] * n
self.adj = [[] for _ in range(n)]
"""
Here we will add our edges containing with the following parameters:
vertex closest to source, vertex closest to sink and flow capacity
through that edge ...
"""
def add_edge(self, a, b, c, rcap=0):
self.adj[a].append([b, len(self.adj[b]), c, 0])
self.adj[b].append([a, len(self.adj[a]) - 1, rcap, 0])
# This is a sample depth first search to be used at max_flow
def depth_first_search(self, vertex, sink, flow):
if vertex == sink or not flow:
return flow
for i in range(self.ptr[vertex], len(self.adj[vertex])):
e = self.adj[vertex][i]
if self.lvl[e[0]] == self.lvl[vertex] + 1:
p = self.depth_first_search(e[0], sink, min(flow, e[2] - e[3]))
if p:
self.adj[vertex][i][3] += p
self.adj[e[0]][e[1]][3] -= p
return p
self.ptr[vertex] = self.ptr[vertex] + 1
return 0
# Here we calculate the flow that reaches the sink
def max_flow(self, source, sink):
flow, self.q[0] = 0, source
for l in range(31): # noqa: E741 l = 30 maybe faster for random data
while True:
self.lvl, self.ptr = [0] * len(self.q), [0] * len(self.q)
qi, qe, self.lvl[source] = 0, 1, 1
while qi < qe and not self.lvl[sink]:
v = self.q[qi]
qi += 1
for e in self.adj[v]:
if not self.lvl[e[0]] and (e[2] - e[3]) >> (30 - l):
self.q[qe] = e[0]
qe += 1
self.lvl[e[0]] = self.lvl[v] + 1
p = self.depth_first_search(source, sink, INF)
while p:
flow += p
p = self.depth_first_search(source, sink, INF)
if not self.lvl[sink]:
break
return flow
# Example to use
"""
Will be a bipartite graph, than it has the vertices near the source(4)
and the vertices near the sink(4)
"""
# Here we make a graphs with 10 vertex(source and sink includes)
graph = Dinic(10)
source = 0
sink = 9
"""
Now we add the vertices next to the font in the font with 1 capacity in this edge
(source -> source vertices)
"""
for vertex in range(1, 5):
graph.add_edge(source, vertex, 1)
"""
We will do the same thing for the vertices near the sink, but from vertex to sink
(sink vertices -> sink)
"""
for vertex in range(5, 9):
graph.add_edge(vertex, sink, 1)
"""
Finally we add the verices near the sink to the vertices near the source.
(source vertices -> sink vertices)
"""
for vertex in range(1, 5):
graph.add_edge(vertex, vertex + 4, 1)
# Now we can know that is the maximum flow(source -> sink)
print(graph.max_flow(source, sink))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, n: int, smoothing=False) -> float:
"""
Return an integer denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if n == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(n / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif n == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(n / df), 3)
def tf_idf(tf: int, idf: int) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, n: int, smoothing=False) -> float:
"""
Return an integer denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if n == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(n / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif n == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(n / df), 3)
def tf_idf(tf: int, idf: int) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 114: https://projecteuler.net/problem=114
A row measuring seven units in length has red blocks with a minimum length
of three units placed on it, such that any two red blocks
(which are allowed to be different lengths) are separated by at least one grey square.
There are exactly seventeen ways of doing this.
|g|g|g|g|g|g|g| |r,r,r|g|g|g|g|
|g|r,r,r|g|g|g| |g|g|r,r,r|g|g|
|g|g|g|r,r,r|g| |g|g|g|g|r,r,r|
|r,r,r|g|r,r,r| |r,r,r,r|g|g|g|
|g|r,r,r,r|g|g| |g|g|r,r,r,r|g|
|g|g|g|r,r,r,r| |r,r,r,r,r|g|g|
|g|r,r,r,r,r|g| |g|g|r,r,r,r,r|
|r,r,r,r,r,r|g| |g|r,r,r,r,r,r|
|r,r,r,r,r,r,r|
How many ways can a row measuring fifty units in length be filled?
NOTE: Although the example above does not lend itself to the possibility,
in general it is permitted to mix block sizes. For example,
on a row measuring eight units in length you could use red (3), grey (1), and red (4).
"""
def solution(length: int = 50) -> int:
"""
Returns the number of ways a row of the given length can be filled
>>> solution(7)
17
"""
ways_number = [1] * (length + 1)
for row_length in range(3, length + 1):
for block_length in range(3, row_length + 1):
for block_start in range(row_length - block_length):
ways_number[row_length] += ways_number[
row_length - block_start - block_length - 1
]
ways_number[row_length] += 1
return ways_number[length]
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 114: https://projecteuler.net/problem=114
A row measuring seven units in length has red blocks with a minimum length
of three units placed on it, such that any two red blocks
(which are allowed to be different lengths) are separated by at least one grey square.
There are exactly seventeen ways of doing this.
|g|g|g|g|g|g|g| |r,r,r|g|g|g|g|
|g|r,r,r|g|g|g| |g|g|r,r,r|g|g|
|g|g|g|r,r,r|g| |g|g|g|g|r,r,r|
|r,r,r|g|r,r,r| |r,r,r,r|g|g|g|
|g|r,r,r,r|g|g| |g|g|r,r,r,r|g|
|g|g|g|r,r,r,r| |r,r,r,r,r|g|g|
|g|r,r,r,r,r|g| |g|g|r,r,r,r,r|
|r,r,r,r,r,r|g| |g|r,r,r,r,r,r|
|r,r,r,r,r,r,r|
How many ways can a row measuring fifty units in length be filled?
NOTE: Although the example above does not lend itself to the possibility,
in general it is permitted to mix block sizes. For example,
on a row measuring eight units in length you could use red (3), grey (1), and red (4).
"""
def solution(length: int = 50) -> int:
"""
Returns the number of ways a row of the given length can be filled
>>> solution(7)
17
"""
ways_number = [1] * (length + 1)
for row_length in range(3, length + 1):
for block_length in range(3, row_length + 1):
for block_start in range(row_length - block_length):
ways_number[row_length] += ways_number[
row_length - block_start - block_length - 1
]
ways_number[row_length] += 1
return ways_number[length]
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
def merge(left_half: list, right_half: list) -> list:
"""Helper function for mergesort.
>>> left_half = [-2]
>>> right_half = [-1]
>>> merge(left_half, right_half)
[-2, -1]
>>> left_half = [1,2,3]
>>> right_half = [4,5,6]
>>> merge(left_half, right_half)
[1, 2, 3, 4, 5, 6]
>>> left_half = [-2]
>>> right_half = [-1]
>>> merge(left_half, right_half)
[-2, -1]
>>> left_half = [12, 15]
>>> right_half = [13, 14]
>>> merge(left_half, right_half)
[12, 13, 14, 15]
>>> left_half = []
>>> right_half = []
>>> merge(left_half, right_half)
[]
"""
sorted_array = [None] * (len(right_half) + len(left_half))
pointer1 = 0 # pointer to current index for left Half
pointer2 = 0 # pointer to current index for the right Half
index = 0 # pointer to current index for the sorted array Half
while pointer1 < len(left_half) and pointer2 < len(right_half):
if left_half[pointer1] < right_half[pointer2]:
sorted_array[index] = left_half[pointer1]
pointer1 += 1
index += 1
else:
sorted_array[index] = right_half[pointer2]
pointer2 += 1
index += 1
while pointer1 < len(left_half):
sorted_array[index] = left_half[pointer1]
pointer1 += 1
index += 1
while pointer2 < len(right_half):
sorted_array[index] = right_half[pointer2]
pointer2 += 1
index += 1
return sorted_array
def merge_sort(array: list) -> list:
"""Returns a list of sorted array elements using merge sort.
>>> from random import shuffle
>>> array = [-2, 3, -10, 11, 99, 100000, 100, -200]
>>> shuffle(array)
>>> merge_sort(array)
[-200, -10, -2, 3, 11, 99, 100, 100000]
>>> shuffle(array)
>>> merge_sort(array)
[-200, -10, -2, 3, 11, 99, 100, 100000]
>>> array = [-200]
>>> merge_sort(array)
[-200]
>>> array = [-2, 3, -10, 11, 99, 100000, 100, -200]
>>> shuffle(array)
>>> sorted(array) == merge_sort(array)
True
>>> array = [-2]
>>> merge_sort(array)
[-2]
>>> array = []
>>> merge_sort(array)
[]
>>> array = [10000000, 1, -1111111111, 101111111112, 9000002]
>>> sorted(array) == merge_sort(array)
True
"""
if len(array) <= 1:
return array
# the actual formula to calculate the middle element = left + (right - left) // 2
# this avoids integer overflow in case of large N
middle = 0 + (len(array) - 0) // 2
# Split the array into halves till the array length becomes equal to One
# merge the arrays of single length returned by mergeSort function and
# pass them into the merge arrays function which merges the array
left_half = array[:middle]
right_half = array[middle:]
return merge(merge_sort(left_half), merge_sort(right_half))
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
def merge(left_half: list, right_half: list) -> list:
"""Helper function for mergesort.
>>> left_half = [-2]
>>> right_half = [-1]
>>> merge(left_half, right_half)
[-2, -1]
>>> left_half = [1,2,3]
>>> right_half = [4,5,6]
>>> merge(left_half, right_half)
[1, 2, 3, 4, 5, 6]
>>> left_half = [-2]
>>> right_half = [-1]
>>> merge(left_half, right_half)
[-2, -1]
>>> left_half = [12, 15]
>>> right_half = [13, 14]
>>> merge(left_half, right_half)
[12, 13, 14, 15]
>>> left_half = []
>>> right_half = []
>>> merge(left_half, right_half)
[]
"""
sorted_array = [None] * (len(right_half) + len(left_half))
pointer1 = 0 # pointer to current index for left Half
pointer2 = 0 # pointer to current index for the right Half
index = 0 # pointer to current index for the sorted array Half
while pointer1 < len(left_half) and pointer2 < len(right_half):
if left_half[pointer1] < right_half[pointer2]:
sorted_array[index] = left_half[pointer1]
pointer1 += 1
index += 1
else:
sorted_array[index] = right_half[pointer2]
pointer2 += 1
index += 1
while pointer1 < len(left_half):
sorted_array[index] = left_half[pointer1]
pointer1 += 1
index += 1
while pointer2 < len(right_half):
sorted_array[index] = right_half[pointer2]
pointer2 += 1
index += 1
return sorted_array
def merge_sort(array: list) -> list:
"""Returns a list of sorted array elements using merge sort.
>>> from random import shuffle
>>> array = [-2, 3, -10, 11, 99, 100000, 100, -200]
>>> shuffle(array)
>>> merge_sort(array)
[-200, -10, -2, 3, 11, 99, 100, 100000]
>>> shuffle(array)
>>> merge_sort(array)
[-200, -10, -2, 3, 11, 99, 100, 100000]
>>> array = [-200]
>>> merge_sort(array)
[-200]
>>> array = [-2, 3, -10, 11, 99, 100000, 100, -200]
>>> shuffle(array)
>>> sorted(array) == merge_sort(array)
True
>>> array = [-2]
>>> merge_sort(array)
[-2]
>>> array = []
>>> merge_sort(array)
[]
>>> array = [10000000, 1, -1111111111, 101111111112, 9000002]
>>> sorted(array) == merge_sort(array)
True
"""
if len(array) <= 1:
return array
# the actual formula to calculate the middle element = left + (right - left) // 2
# this avoids integer overflow in case of large N
middle = 0 + (len(array) - 0) // 2
# Split the array into halves till the array length becomes equal to One
# merge the arrays of single length returned by mergeSort function and
# pass them into the merge arrays function which merges the array
left_half = array[:middle]
right_half = array[middle:]
return merge(merge_sort(left_half), merge_sort(right_half))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Perceptron
w = w + N * (d(k) - y) * x(k)
Using perceptron network for oil analysis, with Measuring of 3 parameters
that represent chemical characteristics we can classify the oil, in p1 or p2
p1 = -1
p2 = 1
"""
import random
class Perceptron:
def __init__(
self,
sample: list[list[float]],
target: list[int],
learning_rate: float = 0.01,
epoch_number: int = 1000,
bias: float = -1,
) -> None:
"""
Initializes a Perceptron network for oil analysis
:param sample: sample dataset of 3 parameters with shape [30,3]
:param target: variable for classification with two possible states -1 or 1
:param learning_rate: learning rate used in optimizing.
:param epoch_number: number of epochs to train network on.
:param bias: bias value for the network.
>>> p = Perceptron([], (0, 1, 2))
Traceback (most recent call last):
...
ValueError: Sample data can not be empty
>>> p = Perceptron(([0], 1, 2), [])
Traceback (most recent call last):
...
ValueError: Target data can not be empty
>>> p = Perceptron(([0], 1, 2), (0, 1))
Traceback (most recent call last):
...
ValueError: Sample data and Target data do not have matching lengths
"""
self.sample = sample
if len(self.sample) == 0:
raise ValueError("Sample data can not be empty")
self.target = target
if len(self.target) == 0:
raise ValueError("Target data can not be empty")
if len(self.sample) != len(self.target):
raise ValueError("Sample data and Target data do not have matching lengths")
self.learning_rate = learning_rate
self.epoch_number = epoch_number
self.bias = bias
self.number_sample = len(sample)
self.col_sample = len(sample[0]) # number of columns in dataset
self.weight: list = []
def training(self) -> None:
"""
Trains perceptron for epochs <= given number of epochs
:return: None
>>> data = [[2.0149, 0.6192, 10.9263]]
>>> targets = [-1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.training() # doctest: +ELLIPSIS
('\\nEpoch:\\n', ...)
...
"""
for sample in self.sample:
sample.insert(0, self.bias)
for _ in range(self.col_sample):
self.weight.append(random.random())
self.weight.insert(0, self.bias)
epoch_count = 0
while True:
has_misclassified = False
for i in range(self.number_sample):
u = 0
for j in range(self.col_sample + 1):
u = u + self.weight[j] * self.sample[i][j]
y = self.sign(u)
if y != self.target[i]:
for j in range(self.col_sample + 1):
self.weight[j] = (
self.weight[j]
+ self.learning_rate
* (self.target[i] - y)
* self.sample[i][j]
)
has_misclassified = True
# print('Epoch: \n',epoch_count)
epoch_count = epoch_count + 1
# if you want control the epoch or just by error
if not has_misclassified:
print(("\nEpoch:\n", epoch_count))
print("------------------------\n")
# if epoch_count > self.epoch_number or not error:
break
def sort(self, sample: list[float]) -> None:
"""
:param sample: example row to classify as P1 or P2
:return: None
>>> data = [[2.0149, 0.6192, 10.9263]]
>>> targets = [-1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.training() # doctest: +ELLIPSIS
('\\nEpoch:\\n', ...)
...
>>> perceptron.sort([-0.6508, 0.1097, 4.0009]) # doctest: +ELLIPSIS
('Sample: ', ...)
classification: P...
"""
if len(self.sample) == 0:
raise ValueError("Sample data can not be empty")
sample.insert(0, self.bias)
u = 0
for i in range(self.col_sample + 1):
u = u + self.weight[i] * sample[i]
y = self.sign(u)
if y == -1:
print(("Sample: ", sample))
print("classification: P1")
else:
print(("Sample: ", sample))
print("classification: P2")
def sign(self, u: float) -> int:
"""
threshold function for classification
:param u: input number
:return: 1 if the input is greater than 0, otherwise -1
>>> data = [[0],[-0.5],[0.5]]
>>> targets = [1,-1,1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.sign(0)
1
>>> perceptron.sign(-0.5)
-1
>>> perceptron.sign(0.5)
1
"""
return 1 if u >= 0 else -1
samples = [
[-0.6508, 0.1097, 4.0009],
[-1.4492, 0.8896, 4.4005],
[2.0850, 0.6876, 12.0710],
[0.2626, 1.1476, 7.7985],
[0.6418, 1.0234, 7.0427],
[0.2569, 0.6730, 8.3265],
[1.1155, 0.6043, 7.4446],
[0.0914, 0.3399, 7.0677],
[0.0121, 0.5256, 4.6316],
[-0.0429, 0.4660, 5.4323],
[0.4340, 0.6870, 8.2287],
[0.2735, 1.0287, 7.1934],
[0.4839, 0.4851, 7.4850],
[0.4089, -0.1267, 5.5019],
[1.4391, 0.1614, 8.5843],
[-0.9115, -0.1973, 2.1962],
[0.3654, 1.0475, 7.4858],
[0.2144, 0.7515, 7.1699],
[0.2013, 1.0014, 6.5489],
[0.6483, 0.2183, 5.8991],
[-0.1147, 0.2242, 7.2435],
[-0.7970, 0.8795, 3.8762],
[-1.0625, 0.6366, 2.4707],
[0.5307, 0.1285, 5.6883],
[-1.2200, 0.7777, 1.7252],
[0.3957, 0.1076, 5.6623],
[-0.1013, 0.5989, 7.1812],
[2.4482, 0.9455, 11.2095],
[2.0149, 0.6192, 10.9263],
[0.2012, 0.2611, 5.4631],
]
target = [
-1,
-1,
-1,
1,
1,
-1,
1,
-1,
1,
1,
-1,
1,
-1,
-1,
-1,
-1,
1,
1,
1,
1,
-1,
1,
1,
1,
1,
-1,
-1,
1,
-1,
1,
]
if __name__ == "__main__":
import doctest
doctest.testmod()
network = Perceptron(
sample=samples, target=target, learning_rate=0.01, epoch_number=1000, bias=-1
)
network.training()
print("Finished training perceptron")
print("Enter values to predict or q to exit")
while True:
sample: list = []
for i in range(len(samples[0])):
user_input = input("value: ").strip()
if user_input == "q":
break
observation = float(user_input)
sample.insert(i, observation)
network.sort(sample)
| """
Perceptron
w = w + N * (d(k) - y) * x(k)
Using perceptron network for oil analysis, with Measuring of 3 parameters
that represent chemical characteristics we can classify the oil, in p1 or p2
p1 = -1
p2 = 1
"""
import random
class Perceptron:
def __init__(
self,
sample: list[list[float]],
target: list[int],
learning_rate: float = 0.01,
epoch_number: int = 1000,
bias: float = -1,
) -> None:
"""
Initializes a Perceptron network for oil analysis
:param sample: sample dataset of 3 parameters with shape [30,3]
:param target: variable for classification with two possible states -1 or 1
:param learning_rate: learning rate used in optimizing.
:param epoch_number: number of epochs to train network on.
:param bias: bias value for the network.
>>> p = Perceptron([], (0, 1, 2))
Traceback (most recent call last):
...
ValueError: Sample data can not be empty
>>> p = Perceptron(([0], 1, 2), [])
Traceback (most recent call last):
...
ValueError: Target data can not be empty
>>> p = Perceptron(([0], 1, 2), (0, 1))
Traceback (most recent call last):
...
ValueError: Sample data and Target data do not have matching lengths
"""
self.sample = sample
if len(self.sample) == 0:
raise ValueError("Sample data can not be empty")
self.target = target
if len(self.target) == 0:
raise ValueError("Target data can not be empty")
if len(self.sample) != len(self.target):
raise ValueError("Sample data and Target data do not have matching lengths")
self.learning_rate = learning_rate
self.epoch_number = epoch_number
self.bias = bias
self.number_sample = len(sample)
self.col_sample = len(sample[0]) # number of columns in dataset
self.weight: list = []
def training(self) -> None:
"""
Trains perceptron for epochs <= given number of epochs
:return: None
>>> data = [[2.0149, 0.6192, 10.9263]]
>>> targets = [-1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.training() # doctest: +ELLIPSIS
('\\nEpoch:\\n', ...)
...
"""
for sample in self.sample:
sample.insert(0, self.bias)
for _ in range(self.col_sample):
self.weight.append(random.random())
self.weight.insert(0, self.bias)
epoch_count = 0
while True:
has_misclassified = False
for i in range(self.number_sample):
u = 0
for j in range(self.col_sample + 1):
u = u + self.weight[j] * self.sample[i][j]
y = self.sign(u)
if y != self.target[i]:
for j in range(self.col_sample + 1):
self.weight[j] = (
self.weight[j]
+ self.learning_rate
* (self.target[i] - y)
* self.sample[i][j]
)
has_misclassified = True
# print('Epoch: \n',epoch_count)
epoch_count = epoch_count + 1
# if you want control the epoch or just by error
if not has_misclassified:
print(("\nEpoch:\n", epoch_count))
print("------------------------\n")
# if epoch_count > self.epoch_number or not error:
break
def sort(self, sample: list[float]) -> None:
"""
:param sample: example row to classify as P1 or P2
:return: None
>>> data = [[2.0149, 0.6192, 10.9263]]
>>> targets = [-1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.training() # doctest: +ELLIPSIS
('\\nEpoch:\\n', ...)
...
>>> perceptron.sort([-0.6508, 0.1097, 4.0009]) # doctest: +ELLIPSIS
('Sample: ', ...)
classification: P...
"""
if len(self.sample) == 0:
raise ValueError("Sample data can not be empty")
sample.insert(0, self.bias)
u = 0
for i in range(self.col_sample + 1):
u = u + self.weight[i] * sample[i]
y = self.sign(u)
if y == -1:
print(("Sample: ", sample))
print("classification: P1")
else:
print(("Sample: ", sample))
print("classification: P2")
def sign(self, u: float) -> int:
"""
threshold function for classification
:param u: input number
:return: 1 if the input is greater than 0, otherwise -1
>>> data = [[0],[-0.5],[0.5]]
>>> targets = [1,-1,1]
>>> perceptron = Perceptron(data,targets)
>>> perceptron.sign(0)
1
>>> perceptron.sign(-0.5)
-1
>>> perceptron.sign(0.5)
1
"""
return 1 if u >= 0 else -1
samples = [
[-0.6508, 0.1097, 4.0009],
[-1.4492, 0.8896, 4.4005],
[2.0850, 0.6876, 12.0710],
[0.2626, 1.1476, 7.7985],
[0.6418, 1.0234, 7.0427],
[0.2569, 0.6730, 8.3265],
[1.1155, 0.6043, 7.4446],
[0.0914, 0.3399, 7.0677],
[0.0121, 0.5256, 4.6316],
[-0.0429, 0.4660, 5.4323],
[0.4340, 0.6870, 8.2287],
[0.2735, 1.0287, 7.1934],
[0.4839, 0.4851, 7.4850],
[0.4089, -0.1267, 5.5019],
[1.4391, 0.1614, 8.5843],
[-0.9115, -0.1973, 2.1962],
[0.3654, 1.0475, 7.4858],
[0.2144, 0.7515, 7.1699],
[0.2013, 1.0014, 6.5489],
[0.6483, 0.2183, 5.8991],
[-0.1147, 0.2242, 7.2435],
[-0.7970, 0.8795, 3.8762],
[-1.0625, 0.6366, 2.4707],
[0.5307, 0.1285, 5.6883],
[-1.2200, 0.7777, 1.7252],
[0.3957, 0.1076, 5.6623],
[-0.1013, 0.5989, 7.1812],
[2.4482, 0.9455, 11.2095],
[2.0149, 0.6192, 10.9263],
[0.2012, 0.2611, 5.4631],
]
target = [
-1,
-1,
-1,
1,
1,
-1,
1,
-1,
1,
1,
-1,
1,
-1,
-1,
-1,
-1,
1,
1,
1,
1,
-1,
1,
1,
1,
1,
-1,
-1,
1,
-1,
1,
]
if __name__ == "__main__":
import doctest
doctest.testmod()
network = Perceptron(
sample=samples, target=target, learning_rate=0.01, epoch_number=1000, bias=-1
)
network.training()
print("Finished training perceptron")
print("Enter values to predict or q to exit")
while True:
sample: list = []
for i in range(len(samples[0])):
user_input = input("value: ").strip()
if user_input == "q":
break
observation = float(user_input)
sample.insert(i, observation)
network.sort(sample)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
The Jaccard similarity coefficient is a commonly used indicator of the
similarity between two sets. Let U be a set and A and B be subsets of U,
then the Jaccard index/similarity is defined to be the ratio of the number
of elements of their intersection and the number of elements of their union.
Inspired from Wikipedia and
the book Mining of Massive Datasets [MMDS 2nd Edition, Chapter 3]
https://en.wikipedia.org/wiki/Jaccard_index
https://mmds.org
Jaccard similarity is widely used with MinHashing.
"""
def jaccard_similarity(set_a, set_b, alternative_union=False):
"""
Finds the jaccard similarity between two sets.
Essentially, its intersection over union.
The alternative way to calculate this is to take union as sum of the
number of items in the two sets. This will lead to jaccard similarity
of a set with itself be 1/2 instead of 1. [MMDS 2nd Edition, Page 77]
Parameters:
:set_a (set,list,tuple): A non-empty set/list
:set_b (set,list,tuple): A non-empty set/list
:alternativeUnion (boolean): If True, use sum of number of
items as union
Output:
(float) The jaccard similarity between the two sets.
Examples:
>>> set_a = {'a', 'b', 'c', 'd', 'e'}
>>> set_b = {'c', 'd', 'e', 'f', 'h', 'i'}
>>> jaccard_similarity(set_a, set_b)
0.375
>>> jaccard_similarity(set_a, set_a)
1.0
>>> jaccard_similarity(set_a, set_a, True)
0.5
>>> set_a = ['a', 'b', 'c', 'd', 'e']
>>> set_b = ('c', 'd', 'e', 'f', 'h', 'i')
>>> jaccard_similarity(set_a, set_b)
0.375
"""
if isinstance(set_a, set) and isinstance(set_b, set):
intersection = len(set_a.intersection(set_b))
if alternative_union:
union = len(set_a) + len(set_b)
else:
union = len(set_a.union(set_b))
return intersection / union
if isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
intersection = [element for element in set_a if element in set_b]
if alternative_union:
union = len(set_a) + len(set_b)
return len(intersection) / union
else:
union = set_a + [element for element in set_b if element not in set_a]
return len(intersection) / len(union)
return len(intersection) / len(union)
return None
if __name__ == "__main__":
set_a = {"a", "b", "c", "d", "e"}
set_b = {"c", "d", "e", "f", "h", "i"}
print(jaccard_similarity(set_a, set_b))
| """
The Jaccard similarity coefficient is a commonly used indicator of the
similarity between two sets. Let U be a set and A and B be subsets of U,
then the Jaccard index/similarity is defined to be the ratio of the number
of elements of their intersection and the number of elements of their union.
Inspired from Wikipedia and
the book Mining of Massive Datasets [MMDS 2nd Edition, Chapter 3]
https://en.wikipedia.org/wiki/Jaccard_index
https://mmds.org
Jaccard similarity is widely used with MinHashing.
"""
def jaccard_similarity(set_a, set_b, alternative_union=False):
"""
Finds the jaccard similarity between two sets.
Essentially, its intersection over union.
The alternative way to calculate this is to take union as sum of the
number of items in the two sets. This will lead to jaccard similarity
of a set with itself be 1/2 instead of 1. [MMDS 2nd Edition, Page 77]
Parameters:
:set_a (set,list,tuple): A non-empty set/list
:set_b (set,list,tuple): A non-empty set/list
:alternativeUnion (boolean): If True, use sum of number of
items as union
Output:
(float) The jaccard similarity between the two sets.
Examples:
>>> set_a = {'a', 'b', 'c', 'd', 'e'}
>>> set_b = {'c', 'd', 'e', 'f', 'h', 'i'}
>>> jaccard_similarity(set_a, set_b)
0.375
>>> jaccard_similarity(set_a, set_a)
1.0
>>> jaccard_similarity(set_a, set_a, True)
0.5
>>> set_a = ['a', 'b', 'c', 'd', 'e']
>>> set_b = ('c', 'd', 'e', 'f', 'h', 'i')
>>> jaccard_similarity(set_a, set_b)
0.375
"""
if isinstance(set_a, set) and isinstance(set_b, set):
intersection = len(set_a.intersection(set_b))
if alternative_union:
union = len(set_a) + len(set_b)
else:
union = len(set_a.union(set_b))
return intersection / union
if isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
intersection = [element for element in set_a if element in set_b]
if alternative_union:
union = len(set_a) + len(set_b)
return len(intersection) / union
else:
union = set_a + [element for element in set_b if element not in set_a]
return len(intersection) / len(union)
return len(intersection) / len(union)
return None
if __name__ == "__main__":
set_a = {"a", "b", "c", "d", "e"}
set_b = {"c", "d", "e", "f", "h", "i"}
print(jaccard_similarity(set_a, set_b))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Author : Alexander Pantyukhin
Date : November 24, 2022
Task:
Given an m x n grid of characters board and a string word,
return true if word exists in the grid.
The word can be constructed from letters of sequentially adjacent cells,
where adjacent cells are horizontally or vertically neighboring.
The same letter cell may not be used more than once.
Example:
Matrix:
---------
|A|B|C|E|
|S|F|C|S|
|A|D|E|E|
---------
Word:
"ABCCED"
Result:
True
Implementation notes: Use backtracking approach.
At each point, check all neighbors to try to find the next letter of the word.
leetcode: https://leetcode.com/problems/word-search/
"""
def get_point_key(len_board: int, len_board_column: int, row: int, column: int) -> int:
"""
Returns the hash key of matrix indexes.
>>> get_point_key(10, 20, 1, 0)
200
"""
return len_board * len_board_column * row + column
def exits_word(
board: list[list[str]],
word: str,
row: int,
column: int,
word_index: int,
visited_points_set: set[int],
) -> bool:
"""
Return True if it's possible to search the word suffix
starting from the word_index.
>>> exits_word([["A"]], "B", 0, 0, 0, set())
False
"""
if board[row][column] != word[word_index]:
return False
if word_index == len(word) - 1:
return True
traverts_directions = [(0, 1), (0, -1), (-1, 0), (1, 0)]
len_board = len(board)
len_board_column = len(board[0])
for direction in traverts_directions:
next_i = row + direction[0]
next_j = column + direction[1]
if not (0 <= next_i < len_board and 0 <= next_j < len_board_column):
continue
key = get_point_key(len_board, len_board_column, next_i, next_j)
if key in visited_points_set:
continue
visited_points_set.add(key)
if exits_word(board, word, next_i, next_j, word_index + 1, visited_points_set):
return True
visited_points_set.remove(key)
return False
def word_exists(board: list[list[str]], word: str) -> bool:
"""
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCCED")
True
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "SEE")
True
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCB")
False
>>> word_exists([["A"]], "A")
True
>>> word_exists([["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","B"],
... ["A","A","A","A","B","A"]],
... "AAAAAAAAAAAAABB")
False
>>> word_exists([["A"]], 123)
Traceback (most recent call last):
...
ValueError: The word parameter should be a string of length greater than 0.
>>> word_exists([["A"]], "")
Traceback (most recent call last):
...
ValueError: The word parameter should be a string of length greater than 0.
>>> word_exists([[]], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
>>> word_exists([], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
>>> word_exists([["A"], [21]], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
"""
# Validate board
board_error_message = (
"The board should be a non empty matrix of single chars strings."
)
len_board = len(board)
if not isinstance(board, list) or len(board) == 0:
raise ValueError(board_error_message)
for row in board:
if not isinstance(row, list) or len(row) == 0:
raise ValueError(board_error_message)
for item in row:
if not isinstance(item, str) or len(item) != 1:
raise ValueError(board_error_message)
# Validate word
if not isinstance(word, str) or len(word) == 0:
raise ValueError(
"The word parameter should be a string of length greater than 0."
)
len_board_column = len(board[0])
for i in range(len_board):
for j in range(len_board_column):
if exits_word(
board, word, i, j, 0, {get_point_key(len_board, len_board_column, i, j)}
):
return True
return False
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : November 24, 2022
Task:
Given an m x n grid of characters board and a string word,
return true if word exists in the grid.
The word can be constructed from letters of sequentially adjacent cells,
where adjacent cells are horizontally or vertically neighboring.
The same letter cell may not be used more than once.
Example:
Matrix:
---------
|A|B|C|E|
|S|F|C|S|
|A|D|E|E|
---------
Word:
"ABCCED"
Result:
True
Implementation notes: Use backtracking approach.
At each point, check all neighbors to try to find the next letter of the word.
leetcode: https://leetcode.com/problems/word-search/
"""
def get_point_key(len_board: int, len_board_column: int, row: int, column: int) -> int:
"""
Returns the hash key of matrix indexes.
>>> get_point_key(10, 20, 1, 0)
200
"""
return len_board * len_board_column * row + column
def exits_word(
board: list[list[str]],
word: str,
row: int,
column: int,
word_index: int,
visited_points_set: set[int],
) -> bool:
"""
Return True if it's possible to search the word suffix
starting from the word_index.
>>> exits_word([["A"]], "B", 0, 0, 0, set())
False
"""
if board[row][column] != word[word_index]:
return False
if word_index == len(word) - 1:
return True
traverts_directions = [(0, 1), (0, -1), (-1, 0), (1, 0)]
len_board = len(board)
len_board_column = len(board[0])
for direction in traverts_directions:
next_i = row + direction[0]
next_j = column + direction[1]
if not (0 <= next_i < len_board and 0 <= next_j < len_board_column):
continue
key = get_point_key(len_board, len_board_column, next_i, next_j)
if key in visited_points_set:
continue
visited_points_set.add(key)
if exits_word(board, word, next_i, next_j, word_index + 1, visited_points_set):
return True
visited_points_set.remove(key)
return False
def word_exists(board: list[list[str]], word: str) -> bool:
"""
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCCED")
True
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "SEE")
True
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCB")
False
>>> word_exists([["A"]], "A")
True
>>> word_exists([["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","A"],
... ["A","A","A","A","A","B"],
... ["A","A","A","A","B","A"]],
... "AAAAAAAAAAAAABB")
False
>>> word_exists([["A"]], 123)
Traceback (most recent call last):
...
ValueError: The word parameter should be a string of length greater than 0.
>>> word_exists([["A"]], "")
Traceback (most recent call last):
...
ValueError: The word parameter should be a string of length greater than 0.
>>> word_exists([[]], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
>>> word_exists([], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
>>> word_exists([["A"], [21]], "AB")
Traceback (most recent call last):
...
ValueError: The board should be a non empty matrix of single chars strings.
"""
# Validate board
board_error_message = (
"The board should be a non empty matrix of single chars strings."
)
len_board = len(board)
if not isinstance(board, list) or len(board) == 0:
raise ValueError(board_error_message)
for row in board:
if not isinstance(row, list) or len(row) == 0:
raise ValueError(board_error_message)
for item in row:
if not isinstance(item, str) or len(item) != 1:
raise ValueError(board_error_message)
# Validate word
if not isinstance(word, str) or len(word) == 0:
raise ValueError(
"The word parameter should be a string of length greater than 0."
)
len_board_column = len(board[0])
for i in range(len_board):
for j in range(len_board_column):
if exits_word(
board, word, i, j, 0, {get_point_key(len_board, len_board_column, i, j)}
):
return True
return False
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
msg = f"Input values must either be float or int: {list(locals().values())}"
raise TypeError(msg)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
msg = (
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
raise TypeError(msg)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
msg = f"Input values must either be float or int: {list(locals().values())}"
raise TypeError(msg)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
msg = (
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
raise TypeError(msg)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 206: https://projecteuler.net/problem=206
Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0,
where each “_” is a single digit.
-----
Instead of computing every single permutation of that number and going
through a 10^9 search space, we can narrow it down considerably.
If the square ends in a 0, then the square root must also end in a 0. Thus,
the last missing digit must be 0 and the square root is a multiple of 10.
We can narrow the search space down to the first 8 digits and multiply the
result of that by 10 at the end.
Now the last digit is a 9, which can only happen if the square root ends
in a 3 or 7. From this point, we can try one of two different methods to find
the answer:
1. Start at the lowest possible base number whose square would be in the
format, and count up. The base we would start at is 101010103, whose square is
the closest number to 10203040506070809. Alternate counting up by 4 and 6 so
the last digit of the base is always a 3 or 7.
2. Start at the highest possible base number whose square would be in the
format, and count down. That base would be 138902663, whose square is the
closest number to 1929394959697989. Alternate counting down by 6 and 4 so the
last digit of the base is always a 3 or 7.
The solution does option 2 because the answer happens to be much closer to the
starting point.
"""
def is_square_form(num: int) -> bool:
"""
Determines if num is in the form 1_2_3_4_5_6_7_8_9
>>> is_square_form(1)
False
>>> is_square_form(112233445566778899)
True
>>> is_square_form(123456789012345678)
False
"""
digit = 9
while num > 0:
if num % 10 != digit:
return False
num //= 100
digit -= 1
return True
def solution() -> int:
"""
Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0
"""
num = 138902663
while not is_square_form(num * num):
if num % 10 == 3:
num -= 6 # (3 - 6) % 10 = 7
else:
num -= 4 # (7 - 4) % 10 = 3
return num * 10
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 206: https://projecteuler.net/problem=206
Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0,
where each “_” is a single digit.
-----
Instead of computing every single permutation of that number and going
through a 10^9 search space, we can narrow it down considerably.
If the square ends in a 0, then the square root must also end in a 0. Thus,
the last missing digit must be 0 and the square root is a multiple of 10.
We can narrow the search space down to the first 8 digits and multiply the
result of that by 10 at the end.
Now the last digit is a 9, which can only happen if the square root ends
in a 3 or 7. From this point, we can try one of two different methods to find
the answer:
1. Start at the lowest possible base number whose square would be in the
format, and count up. The base we would start at is 101010103, whose square is
the closest number to 10203040506070809. Alternate counting up by 4 and 6 so
the last digit of the base is always a 3 or 7.
2. Start at the highest possible base number whose square would be in the
format, and count down. That base would be 138902663, whose square is the
closest number to 1929394959697989. Alternate counting down by 6 and 4 so the
last digit of the base is always a 3 or 7.
The solution does option 2 because the answer happens to be much closer to the
starting point.
"""
def is_square_form(num: int) -> bool:
"""
Determines if num is in the form 1_2_3_4_5_6_7_8_9
>>> is_square_form(1)
False
>>> is_square_form(112233445566778899)
True
>>> is_square_form(123456789012345678)
False
"""
digit = 9
while num > 0:
if num % 10 != digit:
return False
num //= 100
digit -= 1
return True
def solution() -> int:
"""
Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0
"""
num = 138902663
while not is_square_form(num * num):
if num % 10 == 3:
num -= 6 # (3 - 6) % 10 = 7
else:
num -= 4 # (7 - 4) % 10 = 3
return num * 10
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
large number can take minutes to factor, therefore are not included in doctest.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, n: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, n - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g**t) % n
y = math.gcd(x - 1, n)
if x > 1 and y > 1:
p = y
q = n // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
large number can take minutes to factor, therefore are not included in doctest.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, n: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, n - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g**t) % n
y = math.gcd(x - 1, n)
if x > 1 and y > 1:
p = y
q = n // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
from collections.abc import Generator
def fibonacci_generator() -> Generator[int, None, None]:
"""
A generator that produces numbers in the Fibonacci sequence
>>> generator = fibonacci_generator()
>>> next(generator)
1
>>> next(generator)
2
>>> next(generator)
3
>>> next(generator)
5
>>> next(generator)
8
"""
a, b = 0, 1
while True:
a, b = b, a + b
yield b
def solution(n: int = 1000) -> int:
"""Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
answer = 1
gen = fibonacci_generator()
while len(str(next(gen))) < n:
answer += 1
return answer + 1
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
from collections.abc import Generator
def fibonacci_generator() -> Generator[int, None, None]:
"""
A generator that produces numbers in the Fibonacci sequence
>>> generator = fibonacci_generator()
>>> next(generator)
1
>>> next(generator)
2
>>> next(generator)
3
>>> next(generator)
5
>>> next(generator)
8
"""
a, b = 0, 1
while True:
a, b = b, a + b
yield b
def solution(n: int = 1000) -> int:
"""Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
answer = 1
gen = fibonacci_generator()
while len(str(next(gen))) < n:
answer += 1
return answer + 1
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 6: https://projecteuler.net/problem=6
Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten
natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one
hundred natural numbers and the square of the sum.
"""
def solution(n: int = 100) -> int:
"""
Returns the difference between the sum of the squares of the first n
natural numbers and the square of the sum.
>>> solution(10)
2640
>>> solution(15)
13160
>>> solution(20)
41230
>>> solution(50)
1582700
"""
sum_of_squares = 0
sum_of_ints = 0
for i in range(1, n + 1):
sum_of_squares += i**2
sum_of_ints += i
return sum_of_ints**2 - sum_of_squares
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 6: https://projecteuler.net/problem=6
Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten
natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one
hundred natural numbers and the square of the sum.
"""
def solution(n: int = 100) -> int:
"""
Returns the difference between the sum of the squares of the first n
natural numbers and the square of the sum.
>>> solution(10)
2640
>>> solution(15)
13160
>>> solution(20)
41230
>>> solution(50)
1582700
"""
sum_of_squares = 0
sum_of_ints = 0
for i in range(1, n + 1):
sum_of_squares += i**2
sum_of_ints += i
return sum_of_ints**2 - sum_of_squares
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import sys
from collections import defaultdict
class Heap:
def __init__(self):
self.node_position = []
def get_position(self, vertex):
return self.node_position[vertex]
def set_position(self, vertex, pos):
self.node_position[vertex] = pos
def top_to_bottom(self, heap, start, size, positions):
if start > size // 2 - 1:
return
else:
if 2 * start + 2 >= size:
smallest_child = 2 * start + 1
else:
if heap[2 * start + 1] < heap[2 * start + 2]:
smallest_child = 2 * start + 1
else:
smallest_child = 2 * start + 2
if heap[smallest_child] < heap[start]:
temp, temp1 = heap[smallest_child], positions[smallest_child]
heap[smallest_child], positions[smallest_child] = (
heap[start],
positions[start],
)
heap[start], positions[start] = temp, temp1
temp = self.get_position(positions[smallest_child])
self.set_position(
positions[smallest_child], self.get_position(positions[start])
)
self.set_position(positions[start], temp)
self.top_to_bottom(heap, smallest_child, size, positions)
# Update function if value of any node in min-heap decreases
def bottom_to_top(self, val, index, heap, position):
temp = position[index]
while index != 0:
parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2)
if val < heap[parent]:
heap[index] = heap[parent]
position[index] = position[parent]
self.set_position(position[parent], index)
else:
heap[index] = val
position[index] = temp
self.set_position(temp, index)
break
index = parent
else:
heap[0] = val
position[0] = temp
self.set_position(temp, 0)
def heapify(self, heap, positions):
start = len(heap) // 2 - 1
for i in range(start, -1, -1):
self.top_to_bottom(heap, i, len(heap), positions)
def delete_minimum(self, heap, positions):
temp = positions[0]
heap[0] = sys.maxsize
self.top_to_bottom(heap, 0, len(heap), positions)
return temp
def prisms_algorithm(adjacency_list):
"""
>>> adjacency_list = {0: [[1, 1], [3, 3]],
... 1: [[0, 1], [2, 6], [3, 5], [4, 1]],
... 2: [[1, 6], [4, 5], [5, 2]],
... 3: [[0, 3], [1, 5], [4, 1]],
... 4: [[1, 1], [2, 5], [3, 1], [5, 4]],
... 5: [[2, 2], [4, 4]]}
>>> prisms_algorithm(adjacency_list)
[(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)]
"""
heap = Heap()
visited = [0] * len(adjacency_list)
nbr_tv = [-1] * len(adjacency_list) # Neighboring Tree Vertex of selected vertex
# Minimum Distance of explored vertex with neighboring vertex of partial tree
# formed in graph
distance_tv = [] # Heap of Distance of vertices from their neighboring vertex
positions = []
for vertex in range(len(adjacency_list)):
distance_tv.append(sys.maxsize)
positions.append(vertex)
heap.node_position.append(vertex)
tree_edges = []
visited[0] = 1
distance_tv[0] = sys.maxsize
for neighbor, distance in adjacency_list[0]:
nbr_tv[neighbor] = 0
distance_tv[neighbor] = distance
heap.heapify(distance_tv, positions)
for _ in range(1, len(adjacency_list)):
vertex = heap.delete_minimum(distance_tv, positions)
if visited[vertex] == 0:
tree_edges.append((nbr_tv[vertex], vertex))
visited[vertex] = 1
for neighbor, distance in adjacency_list[vertex]:
if (
visited[neighbor] == 0
and distance < distance_tv[heap.get_position(neighbor)]
):
distance_tv[heap.get_position(neighbor)] = distance
heap.bottom_to_top(
distance, heap.get_position(neighbor), distance_tv, positions
)
nbr_tv[neighbor] = vertex
return tree_edges
if __name__ == "__main__": # pragma: no cover
# < --------- Prims Algorithm --------- >
edges_number = int(input("Enter number of edges: ").strip())
adjacency_list = defaultdict(list)
for _ in range(edges_number):
edge = [int(x) for x in input().strip().split()]
adjacency_list[edge[0]].append([edge[1], edge[2]])
adjacency_list[edge[1]].append([edge[0], edge[2]])
print(prisms_algorithm(adjacency_list))
| import sys
from collections import defaultdict
class Heap:
def __init__(self):
self.node_position = []
def get_position(self, vertex):
return self.node_position[vertex]
def set_position(self, vertex, pos):
self.node_position[vertex] = pos
def top_to_bottom(self, heap, start, size, positions):
if start > size // 2 - 1:
return
else:
if 2 * start + 2 >= size:
smallest_child = 2 * start + 1
else:
if heap[2 * start + 1] < heap[2 * start + 2]:
smallest_child = 2 * start + 1
else:
smallest_child = 2 * start + 2
if heap[smallest_child] < heap[start]:
temp, temp1 = heap[smallest_child], positions[smallest_child]
heap[smallest_child], positions[smallest_child] = (
heap[start],
positions[start],
)
heap[start], positions[start] = temp, temp1
temp = self.get_position(positions[smallest_child])
self.set_position(
positions[smallest_child], self.get_position(positions[start])
)
self.set_position(positions[start], temp)
self.top_to_bottom(heap, smallest_child, size, positions)
# Update function if value of any node in min-heap decreases
def bottom_to_top(self, val, index, heap, position):
temp = position[index]
while index != 0:
parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2)
if val < heap[parent]:
heap[index] = heap[parent]
position[index] = position[parent]
self.set_position(position[parent], index)
else:
heap[index] = val
position[index] = temp
self.set_position(temp, index)
break
index = parent
else:
heap[0] = val
position[0] = temp
self.set_position(temp, 0)
def heapify(self, heap, positions):
start = len(heap) // 2 - 1
for i in range(start, -1, -1):
self.top_to_bottom(heap, i, len(heap), positions)
def delete_minimum(self, heap, positions):
temp = positions[0]
heap[0] = sys.maxsize
self.top_to_bottom(heap, 0, len(heap), positions)
return temp
def prisms_algorithm(adjacency_list):
"""
>>> adjacency_list = {0: [[1, 1], [3, 3]],
... 1: [[0, 1], [2, 6], [3, 5], [4, 1]],
... 2: [[1, 6], [4, 5], [5, 2]],
... 3: [[0, 3], [1, 5], [4, 1]],
... 4: [[1, 1], [2, 5], [3, 1], [5, 4]],
... 5: [[2, 2], [4, 4]]}
>>> prisms_algorithm(adjacency_list)
[(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)]
"""
heap = Heap()
visited = [0] * len(adjacency_list)
nbr_tv = [-1] * len(adjacency_list) # Neighboring Tree Vertex of selected vertex
# Minimum Distance of explored vertex with neighboring vertex of partial tree
# formed in graph
distance_tv = [] # Heap of Distance of vertices from their neighboring vertex
positions = []
for vertex in range(len(adjacency_list)):
distance_tv.append(sys.maxsize)
positions.append(vertex)
heap.node_position.append(vertex)
tree_edges = []
visited[0] = 1
distance_tv[0] = sys.maxsize
for neighbor, distance in adjacency_list[0]:
nbr_tv[neighbor] = 0
distance_tv[neighbor] = distance
heap.heapify(distance_tv, positions)
for _ in range(1, len(adjacency_list)):
vertex = heap.delete_minimum(distance_tv, positions)
if visited[vertex] == 0:
tree_edges.append((nbr_tv[vertex], vertex))
visited[vertex] = 1
for neighbor, distance in adjacency_list[vertex]:
if (
visited[neighbor] == 0
and distance < distance_tv[heap.get_position(neighbor)]
):
distance_tv[heap.get_position(neighbor)] = distance
heap.bottom_to_top(
distance, heap.get_position(neighbor), distance_tv, positions
)
nbr_tv[neighbor] = vertex
return tree_edges
if __name__ == "__main__": # pragma: no cover
# < --------- Prims Algorithm --------- >
edges_number = int(input("Enter number of edges: ").strip())
adjacency_list = defaultdict(list)
for _ in range(edges_number):
edge = [int(x) for x in input().strip().split()]
adjacency_list[edge[0]].append([edge[1], edge[2]])
adjacency_list[edge[1]].append([edge[0], edge[2]])
print(prisms_algorithm(adjacency_list))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
wiki: https://en.wikipedia.org/wiki/Pangram
"""
def is_pangram(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
A Pangram String contains all the alphabets at least once.
>>> is_pangram("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram("My name is Unknown")
False
>>> is_pangram("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram()
True
"""
# Declare frequency as a set to have unique occurrences of letters
frequency = set()
# Replace all the whitespace in our sentence
input_str = input_str.replace(" ", "")
for alpha in input_str:
if "a" <= alpha.lower() <= "z":
frequency.add(alpha.lower())
return len(frequency) == 26
def is_pangram_faster(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
>>> is_pangram_faster("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram_faster("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram_faster("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram_faster("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram_faster()
True
"""
flag = [False] * 26
for char in input_str:
if char.islower():
flag[ord(char) - 97] = True
elif char.isupper():
flag[ord(char) - 65] = True
return all(flag)
def is_pangram_fastest(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
>>> is_pangram_fastest("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram_fastest("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram_fastest("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram_fastest("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram_fastest()
True
"""
return len({char for char in input_str.lower() if char.isalpha()}) == 26
def benchmark() -> None:
"""
Benchmark code comparing different version.
"""
from timeit import timeit
setup = "from __main__ import is_pangram, is_pangram_faster, is_pangram_fastest"
print(timeit("is_pangram()", setup=setup))
print(timeit("is_pangram_faster()", setup=setup))
print(timeit("is_pangram_fastest()", setup=setup))
# 5.348480500048026, 2.6477354579837993, 1.8470395830227062
# 5.036091582966037, 2.644472333951853, 1.8869528750656173
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark()
| """
wiki: https://en.wikipedia.org/wiki/Pangram
"""
def is_pangram(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
A Pangram String contains all the alphabets at least once.
>>> is_pangram("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram("My name is Unknown")
False
>>> is_pangram("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram()
True
"""
# Declare frequency as a set to have unique occurrences of letters
frequency = set()
# Replace all the whitespace in our sentence
input_str = input_str.replace(" ", "")
for alpha in input_str:
if "a" <= alpha.lower() <= "z":
frequency.add(alpha.lower())
return len(frequency) == 26
def is_pangram_faster(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
>>> is_pangram_faster("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram_faster("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram_faster("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram_faster("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram_faster()
True
"""
flag = [False] * 26
for char in input_str:
if char.islower():
flag[ord(char) - 97] = True
elif char.isupper():
flag[ord(char) - 65] = True
return all(flag)
def is_pangram_fastest(
input_str: str = "The quick brown fox jumps over the lazy dog",
) -> bool:
"""
>>> is_pangram_fastest("The quick brown fox jumps over the lazy dog")
True
>>> is_pangram_fastest("Waltz, bad nymph, for quick jigs vex.")
True
>>> is_pangram_fastest("Jived fox nymph grabs quick waltz.")
True
>>> is_pangram_fastest("The quick brown fox jumps over the la_y dog")
False
>>> is_pangram_fastest()
True
"""
return len({char for char in input_str.lower() if char.isalpha()}) == 26
def benchmark() -> None:
"""
Benchmark code comparing different version.
"""
from timeit import timeit
setup = "from __main__ import is_pangram, is_pangram_faster, is_pangram_fastest"
print(timeit("is_pangram()", setup=setup))
print(timeit("is_pangram_faster()", setup=setup))
print(timeit("is_pangram_fastest()", setup=setup))
# 5.348480500048026, 2.6477354579837993, 1.8470395830227062
# 5.036091582966037, 2.644472333951853, 1.8869528750656173
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Get the citation from google scholar
using title and year of publication, and volume and pages of journal.
"""
import requests
from bs4 import BeautifulSoup
def get_citation(base_url: str, params: dict) -> str:
"""
Return the citation number.
"""
soup = BeautifulSoup(requests.get(base_url, params=params).content, "html.parser")
div = soup.find("div", attrs={"class": "gs_ri"})
anchors = div.find("div", attrs={"class": "gs_fl"}).find_all("a")
return anchors[2].get_text()
if __name__ == "__main__":
params = {
"title": (
"Precisely geometry controlled microsupercapacitors for ultrahigh areal "
"capacitance, volumetric capacitance, and energy density"
),
"journal": "Chem. Mater.",
"volume": 30,
"pages": "3979-3990",
"year": 2018,
"hl": "en",
}
print(get_citation("https://scholar.google.com/scholar_lookup", params=params))
| """
Get the citation from google scholar
using title and year of publication, and volume and pages of journal.
"""
import requests
from bs4 import BeautifulSoup
def get_citation(base_url: str, params: dict) -> str:
"""
Return the citation number.
"""
soup = BeautifulSoup(requests.get(base_url, params=params).content, "html.parser")
div = soup.find("div", attrs={"class": "gs_ri"})
anchors = div.find("div", attrs={"class": "gs_fl"}).find_all("a")
return anchors[2].get_text()
if __name__ == "__main__":
params = {
"title": (
"Precisely geometry controlled microsupercapacitors for ultrahigh areal "
"capacitance, volumetric capacitance, and energy density"
),
"journal": "Chem. Mater.",
"volume": 30,
"pages": "3979-3990",
"year": 2018,
"hl": "en",
}
print(get_citation("https://scholar.google.com/scholar_lookup", params=params))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
- - - - - -- - - - - - - - - - - - - - - - - - - - - - -
Name - - CNN - Convolution Neural Network For Photo Recognizing
Goal - - Recognize Handing Writing Word Photo
Detail:Total 5 layers neural network
* Convolution layer
* Pooling layer
* Input layer layer of BP
* Hidden layer of BP
* Output layer of BP
Author: Stephen Lee
Github: [email protected]
Date: 2017.9.20
- - - - - -- - - - - - - - - - - - - - - - - - - - - - -
"""
import pickle
import numpy as np
from matplotlib import pyplot as plt
class CNN:
def __init__(
self, conv1_get, size_p1, bp_num1, bp_num2, bp_num3, rate_w=0.2, rate_t=0.2
):
"""
:param conv1_get: [a,c,d],size, number, step of convolution kernel
:param size_p1: pooling size
:param bp_num1: units number of flatten layer
:param bp_num2: units number of hidden layer
:param bp_num3: units number of output layer
:param rate_w: rate of weight learning
:param rate_t: rate of threshold learning
"""
self.num_bp1 = bp_num1
self.num_bp2 = bp_num2
self.num_bp3 = bp_num3
self.conv1 = conv1_get[:2]
self.step_conv1 = conv1_get[2]
self.size_pooling1 = size_p1
self.rate_weight = rate_w
self.rate_thre = rate_t
self.w_conv1 = [
np.mat(-1 * np.random.rand(self.conv1[0], self.conv1[0]) + 0.5)
for i in range(self.conv1[1])
]
self.wkj = np.mat(-1 * np.random.rand(self.num_bp3, self.num_bp2) + 0.5)
self.vji = np.mat(-1 * np.random.rand(self.num_bp2, self.num_bp1) + 0.5)
self.thre_conv1 = -2 * np.random.rand(self.conv1[1]) + 1
self.thre_bp2 = -2 * np.random.rand(self.num_bp2) + 1
self.thre_bp3 = -2 * np.random.rand(self.num_bp3) + 1
def save_model(self, save_path):
# save model dict with pickle
model_dic = {
"num_bp1": self.num_bp1,
"num_bp2": self.num_bp2,
"num_bp3": self.num_bp3,
"conv1": self.conv1,
"step_conv1": self.step_conv1,
"size_pooling1": self.size_pooling1,
"rate_weight": self.rate_weight,
"rate_thre": self.rate_thre,
"w_conv1": self.w_conv1,
"wkj": self.wkj,
"vji": self.vji,
"thre_conv1": self.thre_conv1,
"thre_bp2": self.thre_bp2,
"thre_bp3": self.thre_bp3,
}
with open(save_path, "wb") as f:
pickle.dump(model_dic, f)
print(f"Model saved: {save_path}")
@classmethod
def read_model(cls, model_path):
# read saved model
with open(model_path, "rb") as f:
model_dic = pickle.load(f) # noqa: S301
conv_get = model_dic.get("conv1")
conv_get.append(model_dic.get("step_conv1"))
size_p1 = model_dic.get("size_pooling1")
bp1 = model_dic.get("num_bp1")
bp2 = model_dic.get("num_bp2")
bp3 = model_dic.get("num_bp3")
r_w = model_dic.get("rate_weight")
r_t = model_dic.get("rate_thre")
# create model instance
conv_ins = CNN(conv_get, size_p1, bp1, bp2, bp3, r_w, r_t)
# modify model parameter
conv_ins.w_conv1 = model_dic.get("w_conv1")
conv_ins.wkj = model_dic.get("wkj")
conv_ins.vji = model_dic.get("vji")
conv_ins.thre_conv1 = model_dic.get("thre_conv1")
conv_ins.thre_bp2 = model_dic.get("thre_bp2")
conv_ins.thre_bp3 = model_dic.get("thre_bp3")
return conv_ins
def sig(self, x):
return 1 / (1 + np.exp(-1 * x))
def do_round(self, x):
return round(x, 3)
def convolute(self, data, convs, w_convs, thre_convs, conv_step):
# convolution process
size_conv = convs[0]
num_conv = convs[1]
size_data = np.shape(data)[0]
# get the data slice of original image data, data_focus
data_focus = []
for i_focus in range(0, size_data - size_conv + 1, conv_step):
for j_focus in range(0, size_data - size_conv + 1, conv_step):
focus = data[
i_focus : i_focus + size_conv, j_focus : j_focus + size_conv
]
data_focus.append(focus)
# calculate the feature map of every single kernel, and saved as list of matrix
data_featuremap = []
size_feature_map = int((size_data - size_conv) / conv_step + 1)
for i_map in range(num_conv):
featuremap = []
for i_focus in range(len(data_focus)):
net_focus = (
np.sum(np.multiply(data_focus[i_focus], w_convs[i_map]))
- thre_convs[i_map]
)
featuremap.append(self.sig(net_focus))
featuremap = np.asmatrix(featuremap).reshape(
size_feature_map, size_feature_map
)
data_featuremap.append(featuremap)
# expanding the data slice to One dimenssion
focus1_list = []
for each_focus in data_focus:
focus1_list.extend(self.Expand_Mat(each_focus))
focus_list = np.asarray(focus1_list)
return focus_list, data_featuremap
def pooling(self, featuremaps, size_pooling, pooling_type="average_pool"):
# pooling process
size_map = len(featuremaps[0])
size_pooled = int(size_map / size_pooling)
featuremap_pooled = []
for i_map in range(len(featuremaps)):
feature_map = featuremaps[i_map]
map_pooled = []
for i_focus in range(0, size_map, size_pooling):
for j_focus in range(0, size_map, size_pooling):
focus = feature_map[
i_focus : i_focus + size_pooling,
j_focus : j_focus + size_pooling,
]
if pooling_type == "average_pool":
# average pooling
map_pooled.append(np.average(focus))
elif pooling_type == "max_pooling":
# max pooling
map_pooled.append(np.max(focus))
map_pooled = np.asmatrix(map_pooled).reshape(size_pooled, size_pooled)
featuremap_pooled.append(map_pooled)
return featuremap_pooled
def _expand(self, data):
# expanding three dimension data to one dimension list
data_expanded = []
for i in range(len(data)):
shapes = np.shape(data[i])
data_listed = data[i].reshape(1, shapes[0] * shapes[1])
data_listed = data_listed.getA().tolist()[0]
data_expanded.extend(data_listed)
data_expanded = np.asarray(data_expanded)
return data_expanded
def _expand_mat(self, data_mat):
# expanding matrix to one dimension list
data_mat = np.asarray(data_mat)
shapes = np.shape(data_mat)
data_expanded = data_mat.reshape(1, shapes[0] * shapes[1])
return data_expanded
def _calculate_gradient_from_pool(
self, out_map, pd_pool, num_map, size_map, size_pooling
):
"""
calculate the gradient from the data slice of pool layer
pd_pool: list of matrix
out_map: the shape of data slice(size_map*size_map)
return: pd_all: list of matrix, [num, size_map, size_map]
"""
pd_all = []
i_pool = 0
for i_map in range(num_map):
pd_conv1 = np.ones((size_map, size_map))
for i in range(0, size_map, size_pooling):
for j in range(0, size_map, size_pooling):
pd_conv1[i : i + size_pooling, j : j + size_pooling] = pd_pool[
i_pool
]
i_pool = i_pool + 1
pd_conv2 = np.multiply(
pd_conv1, np.multiply(out_map[i_map], (1 - out_map[i_map]))
)
pd_all.append(pd_conv2)
return pd_all
def train(
self, patterns, datas_train, datas_teach, n_repeat, error_accuracy, draw_e=bool
):
# model traning
print("----------------------Start Training-------------------------")
print((" - - Shape: Train_Data ", np.shape(datas_train)))
print((" - - Shape: Teach_Data ", np.shape(datas_teach)))
rp = 0
all_mse = []
mse = 10000
while rp < n_repeat and mse >= error_accuracy:
error_count = 0
print(f"-------------Learning Time {rp}--------------")
for p in range(len(datas_train)):
# print('------------Learning Image: %d--------------'%p)
data_train = np.asmatrix(datas_train[p])
data_teach = np.asarray(datas_teach[p])
data_focus1, data_conved1 = self.convolute(
data_train,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
shape_featuremap1 = np.shape(data_conved1)
"""
print(' -----original shape ', np.shape(data_train))
print(' ---- after convolution ',np.shape(data_conv1))
print(' -----after pooling ',np.shape(data_pooled1))
"""
data_bp_input = self._expand(data_pooled1)
bp_out1 = data_bp_input
bp_net_j = np.dot(bp_out1, self.vji.T) - self.thre_bp2
bp_out2 = self.sig(bp_net_j)
bp_net_k = np.dot(bp_out2, self.wkj.T) - self.thre_bp3
bp_out3 = self.sig(bp_net_k)
# --------------Model Leaning ------------------------
# calculate error and gradient---------------
pd_k_all = np.multiply(
(data_teach - bp_out3), np.multiply(bp_out3, (1 - bp_out3))
)
pd_j_all = np.multiply(
np.dot(pd_k_all, self.wkj), np.multiply(bp_out2, (1 - bp_out2))
)
pd_i_all = np.dot(pd_j_all, self.vji)
pd_conv1_pooled = pd_i_all / (self.size_pooling1 * self.size_pooling1)
pd_conv1_pooled = pd_conv1_pooled.T.getA().tolist()
pd_conv1_all = self._calculate_gradient_from_pool(
data_conved1,
pd_conv1_pooled,
shape_featuremap1[0],
shape_featuremap1[1],
self.size_pooling1,
)
# weight and threshold learning process---------
# convolution layer
for k_conv in range(self.conv1[1]):
pd_conv_list = self._expand_mat(pd_conv1_all[k_conv])
delta_w = self.rate_weight * np.dot(pd_conv_list, data_focus1)
self.w_conv1[k_conv] = self.w_conv1[k_conv] + delta_w.reshape(
(self.conv1[0], self.conv1[0])
)
self.thre_conv1[k_conv] = (
self.thre_conv1[k_conv]
- np.sum(pd_conv1_all[k_conv]) * self.rate_thre
)
# all connected layer
self.wkj = self.wkj + pd_k_all.T * bp_out2 * self.rate_weight
self.vji = self.vji + pd_j_all.T * bp_out1 * self.rate_weight
self.thre_bp3 = self.thre_bp3 - pd_k_all * self.rate_thre
self.thre_bp2 = self.thre_bp2 - pd_j_all * self.rate_thre
# calculate the sum error of all single image
errors = np.sum(abs(data_teach - bp_out3))
error_count += errors
# print(' ----Teach ',data_teach)
# print(' ----BP_output ',bp_out3)
rp = rp + 1
mse = error_count / patterns
all_mse.append(mse)
def draw_error():
yplot = [error_accuracy for i in range(int(n_repeat * 1.2))]
plt.plot(all_mse, "+-")
plt.plot(yplot, "r--")
plt.xlabel("Learning Times")
plt.ylabel("All_mse")
plt.grid(True, alpha=0.5)
plt.show()
print("------------------Training Complished---------------------")
print((" - - Training epoch: ", rp, f" - - Mse: {mse:.6f}"))
if draw_e:
draw_error()
return mse
def predict(self, datas_test):
# model predict
produce_out = []
print("-------------------Start Testing-------------------------")
print((" - - Shape: Test_Data ", np.shape(datas_test)))
for p in range(len(datas_test)):
data_test = np.asmatrix(datas_test[p])
data_focus1, data_conved1 = self.convolute(
data_test,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
data_bp_input = self._expand(data_pooled1)
bp_out1 = data_bp_input
bp_net_j = bp_out1 * self.vji.T - self.thre_bp2
bp_out2 = self.sig(bp_net_j)
bp_net_k = bp_out2 * self.wkj.T - self.thre_bp3
bp_out3 = self.sig(bp_net_k)
produce_out.extend(bp_out3.getA().tolist())
res = [list(map(self.do_round, each)) for each in produce_out]
return np.asarray(res)
def convolution(self, data):
# return the data of image after convoluting process so we can check it out
data_test = np.asmatrix(data)
data_focus1, data_conved1 = self.convolute(
data_test,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
return data_conved1, data_pooled1
if __name__ == "__main__":
"""
I will put the example on other file
"""
| """
- - - - - -- - - - - - - - - - - - - - - - - - - - - - -
Name - - CNN - Convolution Neural Network For Photo Recognizing
Goal - - Recognize Handing Writing Word Photo
Detail:Total 5 layers neural network
* Convolution layer
* Pooling layer
* Input layer layer of BP
* Hidden layer of BP
* Output layer of BP
Author: Stephen Lee
Github: [email protected]
Date: 2017.9.20
- - - - - -- - - - - - - - - - - - - - - - - - - - - - -
"""
import pickle
import numpy as np
from matplotlib import pyplot as plt
class CNN:
def __init__(
self, conv1_get, size_p1, bp_num1, bp_num2, bp_num3, rate_w=0.2, rate_t=0.2
):
"""
:param conv1_get: [a,c,d],size, number, step of convolution kernel
:param size_p1: pooling size
:param bp_num1: units number of flatten layer
:param bp_num2: units number of hidden layer
:param bp_num3: units number of output layer
:param rate_w: rate of weight learning
:param rate_t: rate of threshold learning
"""
self.num_bp1 = bp_num1
self.num_bp2 = bp_num2
self.num_bp3 = bp_num3
self.conv1 = conv1_get[:2]
self.step_conv1 = conv1_get[2]
self.size_pooling1 = size_p1
self.rate_weight = rate_w
self.rate_thre = rate_t
self.w_conv1 = [
np.mat(-1 * np.random.rand(self.conv1[0], self.conv1[0]) + 0.5)
for i in range(self.conv1[1])
]
self.wkj = np.mat(-1 * np.random.rand(self.num_bp3, self.num_bp2) + 0.5)
self.vji = np.mat(-1 * np.random.rand(self.num_bp2, self.num_bp1) + 0.5)
self.thre_conv1 = -2 * np.random.rand(self.conv1[1]) + 1
self.thre_bp2 = -2 * np.random.rand(self.num_bp2) + 1
self.thre_bp3 = -2 * np.random.rand(self.num_bp3) + 1
def save_model(self, save_path):
# save model dict with pickle
model_dic = {
"num_bp1": self.num_bp1,
"num_bp2": self.num_bp2,
"num_bp3": self.num_bp3,
"conv1": self.conv1,
"step_conv1": self.step_conv1,
"size_pooling1": self.size_pooling1,
"rate_weight": self.rate_weight,
"rate_thre": self.rate_thre,
"w_conv1": self.w_conv1,
"wkj": self.wkj,
"vji": self.vji,
"thre_conv1": self.thre_conv1,
"thre_bp2": self.thre_bp2,
"thre_bp3": self.thre_bp3,
}
with open(save_path, "wb") as f:
pickle.dump(model_dic, f)
print(f"Model saved: {save_path}")
@classmethod
def read_model(cls, model_path):
# read saved model
with open(model_path, "rb") as f:
model_dic = pickle.load(f) # noqa: S301
conv_get = model_dic.get("conv1")
conv_get.append(model_dic.get("step_conv1"))
size_p1 = model_dic.get("size_pooling1")
bp1 = model_dic.get("num_bp1")
bp2 = model_dic.get("num_bp2")
bp3 = model_dic.get("num_bp3")
r_w = model_dic.get("rate_weight")
r_t = model_dic.get("rate_thre")
# create model instance
conv_ins = CNN(conv_get, size_p1, bp1, bp2, bp3, r_w, r_t)
# modify model parameter
conv_ins.w_conv1 = model_dic.get("w_conv1")
conv_ins.wkj = model_dic.get("wkj")
conv_ins.vji = model_dic.get("vji")
conv_ins.thre_conv1 = model_dic.get("thre_conv1")
conv_ins.thre_bp2 = model_dic.get("thre_bp2")
conv_ins.thre_bp3 = model_dic.get("thre_bp3")
return conv_ins
def sig(self, x):
return 1 / (1 + np.exp(-1 * x))
def do_round(self, x):
return round(x, 3)
def convolute(self, data, convs, w_convs, thre_convs, conv_step):
# convolution process
size_conv = convs[0]
num_conv = convs[1]
size_data = np.shape(data)[0]
# get the data slice of original image data, data_focus
data_focus = []
for i_focus in range(0, size_data - size_conv + 1, conv_step):
for j_focus in range(0, size_data - size_conv + 1, conv_step):
focus = data[
i_focus : i_focus + size_conv, j_focus : j_focus + size_conv
]
data_focus.append(focus)
# calculate the feature map of every single kernel, and saved as list of matrix
data_featuremap = []
size_feature_map = int((size_data - size_conv) / conv_step + 1)
for i_map in range(num_conv):
featuremap = []
for i_focus in range(len(data_focus)):
net_focus = (
np.sum(np.multiply(data_focus[i_focus], w_convs[i_map]))
- thre_convs[i_map]
)
featuremap.append(self.sig(net_focus))
featuremap = np.asmatrix(featuremap).reshape(
size_feature_map, size_feature_map
)
data_featuremap.append(featuremap)
# expanding the data slice to One dimenssion
focus1_list = []
for each_focus in data_focus:
focus1_list.extend(self.Expand_Mat(each_focus))
focus_list = np.asarray(focus1_list)
return focus_list, data_featuremap
def pooling(self, featuremaps, size_pooling, pooling_type="average_pool"):
# pooling process
size_map = len(featuremaps[0])
size_pooled = int(size_map / size_pooling)
featuremap_pooled = []
for i_map in range(len(featuremaps)):
feature_map = featuremaps[i_map]
map_pooled = []
for i_focus in range(0, size_map, size_pooling):
for j_focus in range(0, size_map, size_pooling):
focus = feature_map[
i_focus : i_focus + size_pooling,
j_focus : j_focus + size_pooling,
]
if pooling_type == "average_pool":
# average pooling
map_pooled.append(np.average(focus))
elif pooling_type == "max_pooling":
# max pooling
map_pooled.append(np.max(focus))
map_pooled = np.asmatrix(map_pooled).reshape(size_pooled, size_pooled)
featuremap_pooled.append(map_pooled)
return featuremap_pooled
def _expand(self, data):
# expanding three dimension data to one dimension list
data_expanded = []
for i in range(len(data)):
shapes = np.shape(data[i])
data_listed = data[i].reshape(1, shapes[0] * shapes[1])
data_listed = data_listed.getA().tolist()[0]
data_expanded.extend(data_listed)
data_expanded = np.asarray(data_expanded)
return data_expanded
def _expand_mat(self, data_mat):
# expanding matrix to one dimension list
data_mat = np.asarray(data_mat)
shapes = np.shape(data_mat)
data_expanded = data_mat.reshape(1, shapes[0] * shapes[1])
return data_expanded
def _calculate_gradient_from_pool(
self, out_map, pd_pool, num_map, size_map, size_pooling
):
"""
calculate the gradient from the data slice of pool layer
pd_pool: list of matrix
out_map: the shape of data slice(size_map*size_map)
return: pd_all: list of matrix, [num, size_map, size_map]
"""
pd_all = []
i_pool = 0
for i_map in range(num_map):
pd_conv1 = np.ones((size_map, size_map))
for i in range(0, size_map, size_pooling):
for j in range(0, size_map, size_pooling):
pd_conv1[i : i + size_pooling, j : j + size_pooling] = pd_pool[
i_pool
]
i_pool = i_pool + 1
pd_conv2 = np.multiply(
pd_conv1, np.multiply(out_map[i_map], (1 - out_map[i_map]))
)
pd_all.append(pd_conv2)
return pd_all
def train(
self, patterns, datas_train, datas_teach, n_repeat, error_accuracy, draw_e=bool
):
# model traning
print("----------------------Start Training-------------------------")
print((" - - Shape: Train_Data ", np.shape(datas_train)))
print((" - - Shape: Teach_Data ", np.shape(datas_teach)))
rp = 0
all_mse = []
mse = 10000
while rp < n_repeat and mse >= error_accuracy:
error_count = 0
print(f"-------------Learning Time {rp}--------------")
for p in range(len(datas_train)):
# print('------------Learning Image: %d--------------'%p)
data_train = np.asmatrix(datas_train[p])
data_teach = np.asarray(datas_teach[p])
data_focus1, data_conved1 = self.convolute(
data_train,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
shape_featuremap1 = np.shape(data_conved1)
"""
print(' -----original shape ', np.shape(data_train))
print(' ---- after convolution ',np.shape(data_conv1))
print(' -----after pooling ',np.shape(data_pooled1))
"""
data_bp_input = self._expand(data_pooled1)
bp_out1 = data_bp_input
bp_net_j = np.dot(bp_out1, self.vji.T) - self.thre_bp2
bp_out2 = self.sig(bp_net_j)
bp_net_k = np.dot(bp_out2, self.wkj.T) - self.thre_bp3
bp_out3 = self.sig(bp_net_k)
# --------------Model Leaning ------------------------
# calculate error and gradient---------------
pd_k_all = np.multiply(
(data_teach - bp_out3), np.multiply(bp_out3, (1 - bp_out3))
)
pd_j_all = np.multiply(
np.dot(pd_k_all, self.wkj), np.multiply(bp_out2, (1 - bp_out2))
)
pd_i_all = np.dot(pd_j_all, self.vji)
pd_conv1_pooled = pd_i_all / (self.size_pooling1 * self.size_pooling1)
pd_conv1_pooled = pd_conv1_pooled.T.getA().tolist()
pd_conv1_all = self._calculate_gradient_from_pool(
data_conved1,
pd_conv1_pooled,
shape_featuremap1[0],
shape_featuremap1[1],
self.size_pooling1,
)
# weight and threshold learning process---------
# convolution layer
for k_conv in range(self.conv1[1]):
pd_conv_list = self._expand_mat(pd_conv1_all[k_conv])
delta_w = self.rate_weight * np.dot(pd_conv_list, data_focus1)
self.w_conv1[k_conv] = self.w_conv1[k_conv] + delta_w.reshape(
(self.conv1[0], self.conv1[0])
)
self.thre_conv1[k_conv] = (
self.thre_conv1[k_conv]
- np.sum(pd_conv1_all[k_conv]) * self.rate_thre
)
# all connected layer
self.wkj = self.wkj + pd_k_all.T * bp_out2 * self.rate_weight
self.vji = self.vji + pd_j_all.T * bp_out1 * self.rate_weight
self.thre_bp3 = self.thre_bp3 - pd_k_all * self.rate_thre
self.thre_bp2 = self.thre_bp2 - pd_j_all * self.rate_thre
# calculate the sum error of all single image
errors = np.sum(abs(data_teach - bp_out3))
error_count += errors
# print(' ----Teach ',data_teach)
# print(' ----BP_output ',bp_out3)
rp = rp + 1
mse = error_count / patterns
all_mse.append(mse)
def draw_error():
yplot = [error_accuracy for i in range(int(n_repeat * 1.2))]
plt.plot(all_mse, "+-")
plt.plot(yplot, "r--")
plt.xlabel("Learning Times")
plt.ylabel("All_mse")
plt.grid(True, alpha=0.5)
plt.show()
print("------------------Training Complished---------------------")
print((" - - Training epoch: ", rp, f" - - Mse: {mse:.6f}"))
if draw_e:
draw_error()
return mse
def predict(self, datas_test):
# model predict
produce_out = []
print("-------------------Start Testing-------------------------")
print((" - - Shape: Test_Data ", np.shape(datas_test)))
for p in range(len(datas_test)):
data_test = np.asmatrix(datas_test[p])
data_focus1, data_conved1 = self.convolute(
data_test,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
data_bp_input = self._expand(data_pooled1)
bp_out1 = data_bp_input
bp_net_j = bp_out1 * self.vji.T - self.thre_bp2
bp_out2 = self.sig(bp_net_j)
bp_net_k = bp_out2 * self.wkj.T - self.thre_bp3
bp_out3 = self.sig(bp_net_k)
produce_out.extend(bp_out3.getA().tolist())
res = [list(map(self.do_round, each)) for each in produce_out]
return np.asarray(res)
def convolution(self, data):
# return the data of image after convoluting process so we can check it out
data_test = np.asmatrix(data)
data_focus1, data_conved1 = self.convolute(
data_test,
self.conv1,
self.w_conv1,
self.thre_conv1,
conv_step=self.step_conv1,
)
data_pooled1 = self.pooling(data_conved1, self.size_pooling1)
return data_conved1, data_pooled1
if __name__ == "__main__":
"""
I will put the example on other file
"""
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 131: https://projecteuler.net/problem=131
There are some prime values, p, for which there exists a positive integer, n,
such that the expression n^3 + n^2p is a perfect cube.
For example, when p = 19, 8^3 + 8^2 x 19 = 12^3.
What is perhaps most surprising is that for each prime with this property
the value of n is unique, and there are only four such primes below one-hundred.
How many primes below one million have this remarkable property?
"""
from math import isqrt
def is_prime(number: int) -> bool:
"""
Determines whether number is prime
>>> is_prime(3)
True
>>> is_prime(4)
False
"""
return all(number % divisor != 0 for divisor in range(2, isqrt(number) + 1))
def solution(max_prime: int = 10**6) -> int:
"""
Returns number of primes below max_prime with the property
>>> solution(100)
4
"""
primes_count = 0
cube_index = 1
prime_candidate = 7
while prime_candidate < max_prime:
primes_count += is_prime(prime_candidate)
cube_index += 1
prime_candidate += 6 * cube_index
return primes_count
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 131: https://projecteuler.net/problem=131
There are some prime values, p, for which there exists a positive integer, n,
such that the expression n^3 + n^2p is a perfect cube.
For example, when p = 19, 8^3 + 8^2 x 19 = 12^3.
What is perhaps most surprising is that for each prime with this property
the value of n is unique, and there are only four such primes below one-hundred.
How many primes below one million have this remarkable property?
"""
from math import isqrt
def is_prime(number: int) -> bool:
"""
Determines whether number is prime
>>> is_prime(3)
True
>>> is_prime(4)
False
"""
return all(number % divisor != 0 for divisor in range(2, isqrt(number) + 1))
def solution(max_prime: int = 10**6) -> int:
"""
Returns number of primes below max_prime with the property
>>> solution(100)
4
"""
primes_count = 0
cube_index = 1
prime_candidate = 7
while prime_candidate < max_prime:
primes_count += is_prime(prime_candidate)
cube_index += 1
prime_candidate += 6 * cube_index
return primes_count
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """ https://en.wikipedia.org/wiki/Cocktail_shaker_sort """
def cocktail_shaker_sort(unsorted: list) -> list:
"""
Pure implementation of the cocktail shaker sort algorithm in Python.
>>> cocktail_shaker_sort([4, 5, 2, 1, 2])
[1, 2, 2, 4, 5]
>>> cocktail_shaker_sort([-4, 5, 0, 1, 2, 11])
[-4, 0, 1, 2, 5, 11]
>>> cocktail_shaker_sort([0.1, -2.4, 4.4, 2.2])
[-2.4, 0.1, 2.2, 4.4]
>>> cocktail_shaker_sort([1, 2, 3, 4, 5])
[1, 2, 3, 4, 5]
>>> cocktail_shaker_sort([-4, -5, -24, -7, -11])
[-24, -11, -7, -5, -4]
"""
for i in range(len(unsorted) - 1, 0, -1):
swapped = False
for j in range(i, 0, -1):
if unsorted[j] < unsorted[j - 1]:
unsorted[j], unsorted[j - 1] = unsorted[j - 1], unsorted[j]
swapped = True
for j in range(i):
if unsorted[j] > unsorted[j + 1]:
unsorted[j], unsorted[j + 1] = unsorted[j + 1], unsorted[j]
swapped = True
if not swapped:
break
return unsorted
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(f"{cocktail_shaker_sort(unsorted) = }")
| """ https://en.wikipedia.org/wiki/Cocktail_shaker_sort """
def cocktail_shaker_sort(unsorted: list) -> list:
"""
Pure implementation of the cocktail shaker sort algorithm in Python.
>>> cocktail_shaker_sort([4, 5, 2, 1, 2])
[1, 2, 2, 4, 5]
>>> cocktail_shaker_sort([-4, 5, 0, 1, 2, 11])
[-4, 0, 1, 2, 5, 11]
>>> cocktail_shaker_sort([0.1, -2.4, 4.4, 2.2])
[-2.4, 0.1, 2.2, 4.4]
>>> cocktail_shaker_sort([1, 2, 3, 4, 5])
[1, 2, 3, 4, 5]
>>> cocktail_shaker_sort([-4, -5, -24, -7, -11])
[-24, -11, -7, -5, -4]
"""
for i in range(len(unsorted) - 1, 0, -1):
swapped = False
for j in range(i, 0, -1):
if unsorted[j] < unsorted[j - 1]:
unsorted[j], unsorted[j - 1] = unsorted[j - 1], unsorted[j]
swapped = True
for j in range(i):
if unsorted[j] > unsorted[j + 1]:
unsorted[j], unsorted[j + 1] = unsorted[j + 1], unsorted[j]
swapped = True
if not swapped:
break
return unsorted
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(f"{cocktail_shaker_sort(unsorted) = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Self Powers
Problem 48
The series, 1^1 + 2^2 + 3^3 + ... + 10^10 = 10405071317.
Find the last ten digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
"""
def solution():
"""
Returns the last 10 digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
>>> solution()
'9110846700'
"""
total = 0
for i in range(1, 1001):
total += i**i
return str(total)[-10:]
if __name__ == "__main__":
print(solution())
| """
Self Powers
Problem 48
The series, 1^1 + 2^2 + 3^3 + ... + 10^10 = 10405071317.
Find the last ten digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
"""
def solution():
"""
Returns the last 10 digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
>>> solution()
'9110846700'
"""
total = 0
for i in range(1, 1001):
total += i**i
return str(total)[-10:]
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
import math
from itertools import permutations
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
import math
from itertools import permutations
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from math import log2
def binary_count_trailing_zeros(a: int) -> int:
"""
Take in 1 integer, return a number that is
the number of trailing zeros in binary representation of that number.
>>> binary_count_trailing_zeros(25)
0
>>> binary_count_trailing_zeros(36)
2
>>> binary_count_trailing_zeros(16)
4
>>> binary_count_trailing_zeros(58)
1
>>> binary_count_trailing_zeros(4294967296)
32
>>> binary_count_trailing_zeros(0)
0
>>> binary_count_trailing_zeros(-10)
Traceback (most recent call last):
...
ValueError: Input value must be a positive integer
>>> binary_count_trailing_zeros(0.8)
Traceback (most recent call last):
...
TypeError: Input value must be a 'int' type
>>> binary_count_trailing_zeros("0")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0:
raise ValueError("Input value must be a positive integer")
elif isinstance(a, float):
raise TypeError("Input value must be a 'int' type")
return 0 if (a == 0) else int(log2(a & -a))
if __name__ == "__main__":
import doctest
doctest.testmod()
| from math import log2
def binary_count_trailing_zeros(a: int) -> int:
"""
Take in 1 integer, return a number that is
the number of trailing zeros in binary representation of that number.
>>> binary_count_trailing_zeros(25)
0
>>> binary_count_trailing_zeros(36)
2
>>> binary_count_trailing_zeros(16)
4
>>> binary_count_trailing_zeros(58)
1
>>> binary_count_trailing_zeros(4294967296)
32
>>> binary_count_trailing_zeros(0)
0
>>> binary_count_trailing_zeros(-10)
Traceback (most recent call last):
...
ValueError: Input value must be a positive integer
>>> binary_count_trailing_zeros(0.8)
Traceback (most recent call last):
...
TypeError: Input value must be a 'int' type
>>> binary_count_trailing_zeros("0")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0:
raise ValueError("Input value must be a positive integer")
elif isinstance(a, float):
raise TypeError("Input value must be a 'int' type")
return 0 if (a == 0) else int(log2(a & -a))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
ELECTRON_CHARGE = 1.6021e-19 # units = C
def electric_conductivity(
conductivity: float,
electron_conc: float,
mobility: float,
) -> tuple[str, float]:
"""
This function can calculate any one of the three -
1. Conductivity
2. Electron Concentration
3. Electron Mobility
This is calculated from the other two provided values
Examples -
>>> electric_conductivity(conductivity=25, electron_conc=100, mobility=0)
('mobility', 1.5604519068722301e+18)
>>> electric_conductivity(conductivity=0, electron_conc=1600, mobility=200)
('conductivity', 5.12672e-14)
>>> electric_conductivity(conductivity=1000, electron_conc=0, mobility=1200)
('electron_conc', 5.201506356240767e+18)
"""
if (conductivity, electron_conc, mobility).count(0) != 1:
raise ValueError("You cannot supply more or less than 2 values")
elif conductivity < 0:
raise ValueError("Conductivity cannot be negative")
elif electron_conc < 0:
raise ValueError("Electron concentration cannot be negative")
elif mobility < 0:
raise ValueError("mobility cannot be negative")
elif conductivity == 0:
return (
"conductivity",
mobility * electron_conc * ELECTRON_CHARGE,
)
elif electron_conc == 0:
return (
"electron_conc",
conductivity / (mobility * ELECTRON_CHARGE),
)
else:
return (
"mobility",
conductivity / (electron_conc * ELECTRON_CHARGE),
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
ELECTRON_CHARGE = 1.6021e-19 # units = C
def electric_conductivity(
conductivity: float,
electron_conc: float,
mobility: float,
) -> tuple[str, float]:
"""
This function can calculate any one of the three -
1. Conductivity
2. Electron Concentration
3. Electron Mobility
This is calculated from the other two provided values
Examples -
>>> electric_conductivity(conductivity=25, electron_conc=100, mobility=0)
('mobility', 1.5604519068722301e+18)
>>> electric_conductivity(conductivity=0, electron_conc=1600, mobility=200)
('conductivity', 5.12672e-14)
>>> electric_conductivity(conductivity=1000, electron_conc=0, mobility=1200)
('electron_conc', 5.201506356240767e+18)
"""
if (conductivity, electron_conc, mobility).count(0) != 1:
raise ValueError("You cannot supply more or less than 2 values")
elif conductivity < 0:
raise ValueError("Conductivity cannot be negative")
elif electron_conc < 0:
raise ValueError("Electron concentration cannot be negative")
elif mobility < 0:
raise ValueError("mobility cannot be negative")
elif conductivity == 0:
return (
"conductivity",
mobility * electron_conc * ELECTRON_CHARGE,
)
elif electron_conc == 0:
return (
"electron_conc",
conductivity / (mobility * ELECTRON_CHARGE),
)
else:
return (
"mobility",
conductivity / (electron_conc * ELECTRON_CHARGE),
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import math
class SegmentTree:
def __init__(self, a):
self.N = len(a)
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
self.st[idx] = A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
for i in range(1, N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| import math
class SegmentTree:
def __init__(self, a):
self.N = len(a)
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
self.st[idx] = A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
for i in range(1, N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """Convert a positive Decimal Number to Any Other Representation"""
from string import ascii_uppercase
ALPHABET_VALUES = {str(ord(c) - 55): c for c in ascii_uppercase}
def decimal_to_any(num: int, base: int) -> str:
"""
Convert a positive integer to another base as str.
>>> decimal_to_any(0, 2)
'0'
>>> decimal_to_any(5, 4)
'11'
>>> decimal_to_any(20, 3)
'202'
>>> decimal_to_any(58, 16)
'3A'
>>> decimal_to_any(243, 17)
'E5'
>>> decimal_to_any(34923, 36)
'QY3'
>>> decimal_to_any(10, 11)
'A'
>>> decimal_to_any(16, 16)
'10'
>>> decimal_to_any(36, 36)
'10'
>>> # negatives will error
>>> decimal_to_any(-45, 8) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: parameter must be positive int
>>> # floats will error
>>> decimal_to_any(34.4, 6) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: int() can't convert non-string with explicit base
>>> # a float base will error
>>> decimal_to_any(5, 2.5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # a str base will error
>>> decimal_to_any(10, '16') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
>>> # a base less than 2 will error
>>> decimal_to_any(7, 0) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: base must be >= 2
>>> # a base greater than 36 will error
>>> decimal_to_any(34, 37) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: base must be <= 36
"""
if isinstance(num, float):
raise TypeError("int() can't convert non-string with explicit base")
if num < 0:
raise ValueError("parameter must be positive int")
if isinstance(base, str):
raise TypeError("'str' object cannot be interpreted as an integer")
if isinstance(base, float):
raise TypeError("'float' object cannot be interpreted as an integer")
if base in (0, 1):
raise ValueError("base must be >= 2")
if base > 36:
raise ValueError("base must be <= 36")
new_value = ""
mod = 0
div = 0
while div != 1:
div, mod = divmod(num, base)
if base >= 11 and 9 < mod < 36:
actual_value = ALPHABET_VALUES[str(mod)]
else:
actual_value = str(mod)
new_value += actual_value
div = num // base
num = div
if div == 0:
return str(new_value[::-1])
elif div == 1:
new_value += str(div)
return str(new_value[::-1])
return new_value[::-1]
if __name__ == "__main__":
import doctest
doctest.testmod()
for base in range(2, 37):
for num in range(1000):
assert int(decimal_to_any(num, base), base) == num, (
num,
base,
decimal_to_any(num, base),
int(decimal_to_any(num, base), base),
)
| """Convert a positive Decimal Number to Any Other Representation"""
from string import ascii_uppercase
ALPHABET_VALUES = {str(ord(c) - 55): c for c in ascii_uppercase}
def decimal_to_any(num: int, base: int) -> str:
"""
Convert a positive integer to another base as str.
>>> decimal_to_any(0, 2)
'0'
>>> decimal_to_any(5, 4)
'11'
>>> decimal_to_any(20, 3)
'202'
>>> decimal_to_any(58, 16)
'3A'
>>> decimal_to_any(243, 17)
'E5'
>>> decimal_to_any(34923, 36)
'QY3'
>>> decimal_to_any(10, 11)
'A'
>>> decimal_to_any(16, 16)
'10'
>>> decimal_to_any(36, 36)
'10'
>>> # negatives will error
>>> decimal_to_any(-45, 8) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: parameter must be positive int
>>> # floats will error
>>> decimal_to_any(34.4, 6) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: int() can't convert non-string with explicit base
>>> # a float base will error
>>> decimal_to_any(5, 2.5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # a str base will error
>>> decimal_to_any(10, '16') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
>>> # a base less than 2 will error
>>> decimal_to_any(7, 0) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: base must be >= 2
>>> # a base greater than 36 will error
>>> decimal_to_any(34, 37) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: base must be <= 36
"""
if isinstance(num, float):
raise TypeError("int() can't convert non-string with explicit base")
if num < 0:
raise ValueError("parameter must be positive int")
if isinstance(base, str):
raise TypeError("'str' object cannot be interpreted as an integer")
if isinstance(base, float):
raise TypeError("'float' object cannot be interpreted as an integer")
if base in (0, 1):
raise ValueError("base must be >= 2")
if base > 36:
raise ValueError("base must be <= 36")
new_value = ""
mod = 0
div = 0
while div != 1:
div, mod = divmod(num, base)
if base >= 11 and 9 < mod < 36:
actual_value = ALPHABET_VALUES[str(mod)]
else:
actual_value = str(mod)
new_value += actual_value
div = num // base
num = div
if div == 0:
return str(new_value[::-1])
elif div == 1:
new_value += str(div)
return str(new_value[::-1])
return new_value[::-1]
if __name__ == "__main__":
import doctest
doctest.testmod()
for base in range(2, 37):
for num in range(1000):
assert int(decimal_to_any(num, base), base) == num, (
num,
base,
decimal_to_any(num, base),
int(decimal_to_any(num, base), base),
)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from scipy.constants import g
"""
Finding the gravitational potential energy of an object with reference
to the earth,by taking its mass and height above the ground as input
Description : Gravitational energy or gravitational potential energy
is the potential energy a massive object has in relation to another
massive object due to gravity. It is the potential energy associated
with the gravitational field, which is released (converted into
kinetic energy) when the objects fall towards each other.
Gravitational potential energy increases when two objects
are brought further apart.
For two pairwise interacting point particles, the gravitational
potential energy U is given by
U=-GMm/R
where M and m are the masses of the two particles, R is the distance
between them, and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately
constant, and the gravitational potential energy of an object reduces to
U=mgh
where m is the object's mass, g=GM/R² is the gravity of Earth, and h is
the height of the object's center of mass above a chosen reference level.
Reference : "https://en.m.wikipedia.org/wiki/Gravitational_energy"
"""
def potential_energy(mass: float, height: float) -> float:
# function will accept mass and height as parameters and return potential energy
"""
>>> potential_energy(10,10)
980.665
>>> potential_energy(0,5)
0.0
>>> potential_energy(8,0)
0.0
>>> potential_energy(10,5)
490.3325
>>> potential_energy(0,0)
0.0
>>> potential_energy(2,8)
156.9064
>>> potential_energy(20,100)
19613.3
"""
if mass < 0:
# handling of negative values of mass
raise ValueError("The mass of a body cannot be negative")
if height < 0:
# handling of negative values of height
raise ValueError("The height above the ground cannot be negative")
return mass * g * height
if __name__ == "__main__":
from doctest import testmod
testmod(name="potential_energy")
| from scipy.constants import g
"""
Finding the gravitational potential energy of an object with reference
to the earth,by taking its mass and height above the ground as input
Description : Gravitational energy or gravitational potential energy
is the potential energy a massive object has in relation to another
massive object due to gravity. It is the potential energy associated
with the gravitational field, which is released (converted into
kinetic energy) when the objects fall towards each other.
Gravitational potential energy increases when two objects
are brought further apart.
For two pairwise interacting point particles, the gravitational
potential energy U is given by
U=-GMm/R
where M and m are the masses of the two particles, R is the distance
between them, and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately
constant, and the gravitational potential energy of an object reduces to
U=mgh
where m is the object's mass, g=GM/R² is the gravity of Earth, and h is
the height of the object's center of mass above a chosen reference level.
Reference : "https://en.m.wikipedia.org/wiki/Gravitational_energy"
"""
def potential_energy(mass: float, height: float) -> float:
# function will accept mass and height as parameters and return potential energy
"""
>>> potential_energy(10,10)
980.665
>>> potential_energy(0,5)
0.0
>>> potential_energy(8,0)
0.0
>>> potential_energy(10,5)
490.3325
>>> potential_energy(0,0)
0.0
>>> potential_energy(2,8)
156.9064
>>> potential_energy(20,100)
19613.3
"""
if mass < 0:
# handling of negative values of mass
raise ValueError("The mass of a body cannot be negative")
if height < 0:
# handling of negative values of height
raise ValueError("The height above the ground cannot be negative")
return mass * g * height
if __name__ == "__main__":
from doctest import testmod
testmod(name="potential_energy")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Lucas_number
"""
def recursive_lucas_number(n_th_number: int) -> int:
"""
Returns the nth lucas number
>>> recursive_lucas_number(1)
1
>>> recursive_lucas_number(20)
15127
>>> recursive_lucas_number(0)
2
>>> recursive_lucas_number(25)
167761
>>> recursive_lucas_number(-1.5)
Traceback (most recent call last):
...
TypeError: recursive_lucas_number accepts only integer arguments.
"""
if not isinstance(n_th_number, int):
raise TypeError("recursive_lucas_number accepts only integer arguments.")
if n_th_number == 0:
return 2
if n_th_number == 1:
return 1
return recursive_lucas_number(n_th_number - 1) + recursive_lucas_number(
n_th_number - 2
)
def dynamic_lucas_number(n_th_number: int) -> int:
"""
Returns the nth lucas number
>>> dynamic_lucas_number(1)
1
>>> dynamic_lucas_number(20)
15127
>>> dynamic_lucas_number(0)
2
>>> dynamic_lucas_number(25)
167761
>>> dynamic_lucas_number(-1.5)
Traceback (most recent call last):
...
TypeError: dynamic_lucas_number accepts only integer arguments.
"""
if not isinstance(n_th_number, int):
raise TypeError("dynamic_lucas_number accepts only integer arguments.")
a, b = 2, 1
for _ in range(n_th_number):
a, b = b, a + b
return a
if __name__ == "__main__":
from doctest import testmod
testmod()
n = int(input("Enter the number of terms in lucas series:\n").strip())
print("Using recursive function to calculate lucas series:")
print(" ".join(str(recursive_lucas_number(i)) for i in range(n)))
print("\nUsing dynamic function to calculate lucas series:")
print(" ".join(str(dynamic_lucas_number(i)) for i in range(n)))
| """
https://en.wikipedia.org/wiki/Lucas_number
"""
def recursive_lucas_number(n_th_number: int) -> int:
"""
Returns the nth lucas number
>>> recursive_lucas_number(1)
1
>>> recursive_lucas_number(20)
15127
>>> recursive_lucas_number(0)
2
>>> recursive_lucas_number(25)
167761
>>> recursive_lucas_number(-1.5)
Traceback (most recent call last):
...
TypeError: recursive_lucas_number accepts only integer arguments.
"""
if not isinstance(n_th_number, int):
raise TypeError("recursive_lucas_number accepts only integer arguments.")
if n_th_number == 0:
return 2
if n_th_number == 1:
return 1
return recursive_lucas_number(n_th_number - 1) + recursive_lucas_number(
n_th_number - 2
)
def dynamic_lucas_number(n_th_number: int) -> int:
"""
Returns the nth lucas number
>>> dynamic_lucas_number(1)
1
>>> dynamic_lucas_number(20)
15127
>>> dynamic_lucas_number(0)
2
>>> dynamic_lucas_number(25)
167761
>>> dynamic_lucas_number(-1.5)
Traceback (most recent call last):
...
TypeError: dynamic_lucas_number accepts only integer arguments.
"""
if not isinstance(n_th_number, int):
raise TypeError("dynamic_lucas_number accepts only integer arguments.")
a, b = 2, 1
for _ in range(n_th_number):
a, b = b, a + b
return a
if __name__ == "__main__":
from doctest import testmod
testmod()
n = int(input("Enter the number of terms in lucas series:\n").strip())
print("Using recursive function to calculate lucas series:")
print(" ".join(str(recursive_lucas_number(i)) for i in range(n)))
print("\nUsing dynamic function to calculate lucas series:")
print(" ".join(str(dynamic_lucas_number(i)) for i in range(n)))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 70: https://projecteuler.net/problem=70
Euler's Totient function, φ(n) [sometimes called the phi function], is used to
determine the number of positive numbers less than or equal to n which are
relatively prime to n. For example, as 1, 2, 4, 5, 7, and 8, are all less than
nine and relatively prime to nine, φ(9)=6.
The number 1 is considered to be relatively prime to every positive number, so
φ(1)=1.
Interestingly, φ(87109)=79180, and it can be seen that 87109 is a permutation
of 79180.
Find the value of n, 1 < n < 10^7, for which φ(n) is a permutation of n and
the ratio n/φ(n) produces a minimum.
-----
This is essentially brute force. Calculate all totients up to 10^7 and
find the minimum ratio of n/φ(n) that way. To minimize the ratio, we want
to minimize n and maximize φ(n) as much as possible, so we can store the
minimum fraction's numerator and denominator and calculate new fractions
with each totient to compare against. To avoid dividing by zero, I opt to
use cross multiplication.
References:
Finding totients
https://en.wikipedia.org/wiki/Euler's_totient_function#Euler's_product_formula
"""
from __future__ import annotations
def get_totients(max_one: int) -> list[int]:
"""
Calculates a list of totients from 0 to max_one exclusive, using the
definition of Euler's product formula.
>>> get_totients(5)
[0, 1, 1, 2, 2]
>>> get_totients(10)
[0, 1, 1, 2, 2, 4, 2, 6, 4, 6]
"""
totients = [0] * max_one
for i in range(0, max_one):
totients[i] = i
for i in range(2, max_one):
if totients[i] == i:
for j in range(i, max_one, i):
totients[j] -= totients[j] // i
return totients
def has_same_digits(num1: int, num2: int) -> bool:
"""
Return True if num1 and num2 have the same frequency of every digit, False
otherwise.
>>> has_same_digits(123456789, 987654321)
True
>>> has_same_digits(123, 23)
False
>>> has_same_digits(1234566, 123456)
False
"""
return sorted(str(num1)) == sorted(str(num2))
def solution(max_n: int = 10000000) -> int:
"""
Finds the value of n from 1 to max such that n/φ(n) produces a minimum.
>>> solution(100)
21
>>> solution(10000)
4435
"""
min_numerator = 1 # i
min_denominator = 0 # φ(i)
totients = get_totients(max_n + 1)
for i in range(2, max_n + 1):
t = totients[i]
if i * min_denominator < min_numerator * t and has_same_digits(i, t):
min_numerator = i
min_denominator = t
return min_numerator
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 70: https://projecteuler.net/problem=70
Euler's Totient function, φ(n) [sometimes called the phi function], is used to
determine the number of positive numbers less than or equal to n which are
relatively prime to n. For example, as 1, 2, 4, 5, 7, and 8, are all less than
nine and relatively prime to nine, φ(9)=6.
The number 1 is considered to be relatively prime to every positive number, so
φ(1)=1.
Interestingly, φ(87109)=79180, and it can be seen that 87109 is a permutation
of 79180.
Find the value of n, 1 < n < 10^7, for which φ(n) is a permutation of n and
the ratio n/φ(n) produces a minimum.
-----
This is essentially brute force. Calculate all totients up to 10^7 and
find the minimum ratio of n/φ(n) that way. To minimize the ratio, we want
to minimize n and maximize φ(n) as much as possible, so we can store the
minimum fraction's numerator and denominator and calculate new fractions
with each totient to compare against. To avoid dividing by zero, I opt to
use cross multiplication.
References:
Finding totients
https://en.wikipedia.org/wiki/Euler's_totient_function#Euler's_product_formula
"""
from __future__ import annotations
def get_totients(max_one: int) -> list[int]:
"""
Calculates a list of totients from 0 to max_one exclusive, using the
definition of Euler's product formula.
>>> get_totients(5)
[0, 1, 1, 2, 2]
>>> get_totients(10)
[0, 1, 1, 2, 2, 4, 2, 6, 4, 6]
"""
totients = [0] * max_one
for i in range(0, max_one):
totients[i] = i
for i in range(2, max_one):
if totients[i] == i:
for j in range(i, max_one, i):
totients[j] -= totients[j] // i
return totients
def has_same_digits(num1: int, num2: int) -> bool:
"""
Return True if num1 and num2 have the same frequency of every digit, False
otherwise.
>>> has_same_digits(123456789, 987654321)
True
>>> has_same_digits(123, 23)
False
>>> has_same_digits(1234566, 123456)
False
"""
return sorted(str(num1)) == sorted(str(num2))
def solution(max_n: int = 10000000) -> int:
"""
Finds the value of n from 1 to max such that n/φ(n) produces a minimum.
>>> solution(100)
21
>>> solution(10000)
4435
"""
min_numerator = 1 # i
min_denominator = 0 # φ(i)
totients = get_totients(max_n + 1)
for i in range(2, max_n + 1):
t = totients[i]
if i * min_denominator < min_numerator * t and has_same_digits(i, t):
min_numerator = i
min_denominator = t
return min_numerator
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def points_to_polynomial(coordinates: list[list[int]]) -> str:
"""
coordinates is a two dimensional matrix: [[x, y], [x, y], ...]
number of points you want to use
>>> print(points_to_polynomial([]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[]]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*1.0
>>> print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*3.0
>>> print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
f(x)=x^2*0.0+x^1*1.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*2.0
>>> print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
f(x)=x^2*-1.0+x^1*-0.0+x^0*-2.0
>>> print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
f(x)=x^2*5.0+x^1*-18.0+x^0*18.0
"""
if len(coordinates) == 0 or not all(len(pair) == 2 for pair in coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
if len({tuple(pair) for pair in coordinates}) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
set_x = {x for x, _ in coordinates}
if len(set_x) == 1:
return f"x={coordinates[0][0]}"
if len(set_x) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
x = len(coordinates)
count_of_line = 0
matrix: list[list[float]] = []
# put the x and x to the power values in a matrix
while count_of_line < x:
count_in_line = 0
a = coordinates[count_of_line][0]
count_line: list[float] = []
while count_in_line < x:
count_line.append(a ** (x - (count_in_line + 1)))
count_in_line += 1
matrix.append(count_line)
count_of_line += 1
count_of_line = 0
# put the y values into a vector
vector: list[float] = []
while count_of_line < x:
vector.append(coordinates[count_of_line][1])
count_of_line += 1
count = 0
while count < x:
zahlen = 0
while zahlen < x:
if count == zahlen:
zahlen += 1
if zahlen == x:
break
bruch = matrix[zahlen][count] / matrix[count][count]
for counting_columns, item in enumerate(matrix[count]):
# manipulating all the values in the matrix
matrix[zahlen][counting_columns] -= item * bruch
# manipulating the values in the vector
vector[zahlen] -= vector[count] * bruch
zahlen += 1
count += 1
count = 0
# make solutions
solution: list[str] = []
while count < x:
solution.append(str(vector[count] / matrix[count][count]))
count += 1
count = 0
solved = "f(x)="
while count < x:
remove_e: list[str] = solution[count].split("E")
if len(remove_e) > 1:
solution[count] = f"{remove_e[0]}*10^{remove_e[1]}"
solved += f"x^{x - (count + 1)}*{solution[count]}"
if count + 1 != x:
solved += "+"
count += 1
return solved
if __name__ == "__main__":
print(points_to_polynomial([]))
print(points_to_polynomial([[]]))
print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
| def points_to_polynomial(coordinates: list[list[int]]) -> str:
"""
coordinates is a two dimensional matrix: [[x, y], [x, y], ...]
number of points you want to use
>>> print(points_to_polynomial([]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[]]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*1.0
>>> print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*3.0
>>> print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
f(x)=x^2*0.0+x^1*1.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*2.0
>>> print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
f(x)=x^2*-1.0+x^1*-0.0+x^0*-2.0
>>> print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
f(x)=x^2*5.0+x^1*-18.0+x^0*18.0
"""
if len(coordinates) == 0 or not all(len(pair) == 2 for pair in coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
if len({tuple(pair) for pair in coordinates}) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
set_x = {x for x, _ in coordinates}
if len(set_x) == 1:
return f"x={coordinates[0][0]}"
if len(set_x) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
x = len(coordinates)
count_of_line = 0
matrix: list[list[float]] = []
# put the x and x to the power values in a matrix
while count_of_line < x:
count_in_line = 0
a = coordinates[count_of_line][0]
count_line: list[float] = []
while count_in_line < x:
count_line.append(a ** (x - (count_in_line + 1)))
count_in_line += 1
matrix.append(count_line)
count_of_line += 1
count_of_line = 0
# put the y values into a vector
vector: list[float] = []
while count_of_line < x:
vector.append(coordinates[count_of_line][1])
count_of_line += 1
count = 0
while count < x:
zahlen = 0
while zahlen < x:
if count == zahlen:
zahlen += 1
if zahlen == x:
break
bruch = matrix[zahlen][count] / matrix[count][count]
for counting_columns, item in enumerate(matrix[count]):
# manipulating all the values in the matrix
matrix[zahlen][counting_columns] -= item * bruch
# manipulating the values in the vector
vector[zahlen] -= vector[count] * bruch
zahlen += 1
count += 1
count = 0
# make solutions
solution: list[str] = []
while count < x:
solution.append(str(vector[count] / matrix[count][count]))
count += 1
count = 0
solved = "f(x)="
while count < x:
remove_e: list[str] = solution[count].split("E")
if len(remove_e) > 1:
solution[count] = f"{remove_e[0]}*10^{remove_e[1]}"
solved += f"x^{x - (count + 1)}*{solution[count]}"
if count + 1 != x:
solved += "+"
count += 1
return solved
if __name__ == "__main__":
print(points_to_polynomial([]))
print(points_to_polynomial([[]]))
print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
import collections
import pprint
from pathlib import Path
def signature(word: str) -> str:
"""Return a word sorted
>>> signature("test")
'estt'
>>> signature("this is a test")
' aehiisssttt'
>>> signature("finaltest")
'aefilnstt'
"""
return "".join(sorted(word))
def anagram(my_word: str) -> list[str]:
"""Return every anagram of the given word
>>> anagram('test')
['sett', 'stet', 'test']
>>> anagram('this is a test')
[]
>>> anagram('final')
['final']
"""
return word_by_signature[signature(my_word)]
data: str = Path(__file__).parent.joinpath("words.txt").read_text(encoding="utf-8")
word_list = sorted({word.strip().lower() for word in data.splitlines()})
word_by_signature = collections.defaultdict(list)
for word in word_list:
word_by_signature[signature(word)].append(word)
if __name__ == "__main__":
all_anagrams = {word: anagram(word) for word in word_list if len(anagram(word)) > 1}
with open("anagrams.txt", "w") as file:
file.write("all_anagrams = \n ")
file.write(pprint.pformat(all_anagrams))
| from __future__ import annotations
import collections
import pprint
from pathlib import Path
def signature(word: str) -> str:
"""Return a word sorted
>>> signature("test")
'estt'
>>> signature("this is a test")
' aehiisssttt'
>>> signature("finaltest")
'aefilnstt'
"""
return "".join(sorted(word))
def anagram(my_word: str) -> list[str]:
"""Return every anagram of the given word
>>> anagram('test')
['sett', 'stet', 'test']
>>> anagram('this is a test')
[]
>>> anagram('final')
['final']
"""
return word_by_signature[signature(my_word)]
data: str = Path(__file__).parent.joinpath("words.txt").read_text(encoding="utf-8")
word_list = sorted({word.strip().lower() for word in data.splitlines()})
word_by_signature = collections.defaultdict(list)
for word in word_list:
word_by_signature[signature(word)].append(word)
if __name__ == "__main__":
all_anagrams = {word: anagram(word) for word in word_list if len(anagram(word)) > 1}
with open("anagrams.txt", "w") as file:
file.write("all_anagrams = \n ")
file.write(pprint.pformat(all_anagrams))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Breadth-first_search
pseudo-code:
breadth_first_search(graph G, start vertex s):
// all nodes initially unexplored
mark s as explored
let Q = queue data structure, initialized with s
while Q is non-empty:
remove the first node of Q, call it v
for each edge(v, w): // for w in graph[v]
if w unexplored:
mark w as explored
add w to Q (at the end)
"""
from __future__ import annotations
from collections import deque
from queue import Queue
from timeit import timeit
G = {
"A": ["B", "C"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B"],
"E": ["B", "F"],
"F": ["C", "E"],
}
def breadth_first_search(graph: dict, start: str) -> list[str]:
"""
Implementation of breadth first search using queue.Queue.
>>> ''.join(breadth_first_search(G, 'A'))
'ABCDEF'
"""
explored = {start}
result = [start]
queue: Queue = Queue()
queue.put(start)
while not queue.empty():
v = queue.get()
for w in graph[v]:
if w not in explored:
explored.add(w)
result.append(w)
queue.put(w)
return result
def breadth_first_search_with_deque(graph: dict, start: str) -> list[str]:
"""
Implementation of breadth first search using collection.queue.
>>> ''.join(breadth_first_search_with_deque(G, 'A'))
'ABCDEF'
"""
visited = {start}
result = [start]
queue = deque([start])
while queue:
v = queue.popleft()
for child in graph[v]:
if child not in visited:
visited.add(child)
result.append(child)
queue.append(child)
return result
def benchmark_function(name: str) -> None:
setup = f"from __main__ import G, {name}"
number = 10000
res = timeit(f"{name}(G, 'A')", setup=setup, number=number)
print(f"{name:<35} finished {number} runs in {res:.5f} seconds")
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark_function("breadth_first_search")
benchmark_function("breadth_first_search_with_deque")
# breadth_first_search finished 10000 runs in 0.20999 seconds
# breadth_first_search_with_deque finished 10000 runs in 0.01421 seconds
| """
https://en.wikipedia.org/wiki/Breadth-first_search
pseudo-code:
breadth_first_search(graph G, start vertex s):
// all nodes initially unexplored
mark s as explored
let Q = queue data structure, initialized with s
while Q is non-empty:
remove the first node of Q, call it v
for each edge(v, w): // for w in graph[v]
if w unexplored:
mark w as explored
add w to Q (at the end)
"""
from __future__ import annotations
from collections import deque
from queue import Queue
from timeit import timeit
G = {
"A": ["B", "C"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B"],
"E": ["B", "F"],
"F": ["C", "E"],
}
def breadth_first_search(graph: dict, start: str) -> list[str]:
"""
Implementation of breadth first search using queue.Queue.
>>> ''.join(breadth_first_search(G, 'A'))
'ABCDEF'
"""
explored = {start}
result = [start]
queue: Queue = Queue()
queue.put(start)
while not queue.empty():
v = queue.get()
for w in graph[v]:
if w not in explored:
explored.add(w)
result.append(w)
queue.put(w)
return result
def breadth_first_search_with_deque(graph: dict, start: str) -> list[str]:
"""
Implementation of breadth first search using collection.queue.
>>> ''.join(breadth_first_search_with_deque(G, 'A'))
'ABCDEF'
"""
visited = {start}
result = [start]
queue = deque([start])
while queue:
v = queue.popleft()
for child in graph[v]:
if child not in visited:
visited.add(child)
result.append(child)
queue.append(child)
return result
def benchmark_function(name: str) -> None:
setup = f"from __main__ import G, {name}"
number = 10000
res = timeit(f"{name}(G, 'A')", setup=setup, number=number)
print(f"{name:<35} finished {number} runs in {res:.5f} seconds")
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark_function("breadth_first_search")
benchmark_function("breadth_first_search_with_deque")
# breadth_first_search finished 10000 runs in 0.20999 seconds
# breadth_first_search_with_deque finished 10000 runs in 0.01421 seconds
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state,
find all its neighbors. Pick a random neighbor, if that neighbor improves the
solution, we move in that direction, if that neighbor does not improve the solution,
we generate a random real number between 0 and 1, if the number is within a certain
range (calculated using temperature) we move in that direction, else we pick
another neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the minimum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperate of the system when the program starts.
rate_of_decrease: the rate at which the temperate decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # till we do not find a neighbor that we can move to
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state,
find all its neighbors. Pick a random neighbor, if that neighbor improves the
solution, we move in that direction, if that neighbor does not improve the solution,
we generate a random real number between 0 and 1, if the number is within a certain
range (calculated using temperature) we move in that direction, else we pick
another neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the minimum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperate of the system when the program starts.
rate_of_decrease: the rate at which the temperate decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # till we do not find a neighbor that we can move to
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Linear Discriminant Analysis
Assumptions About Data :
1. The input variables has a gaussian distribution.
2. The variance calculated for each input variables by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
2. Probability of an instance belong to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based-on given mean and standard deviation
:param mean: mean value of class
:param std_dev: value of standard deviation entered by usr or default value of it
:param instance_count: instance number of class
:return: a list containing generated values based-on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items that related to specific class(data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance will belong to which class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str | None = None,
) -> num:
"""
Ask for user value and validate that it fulfill a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get number of instances in classes and theirs means to generate
# dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in classes in separated line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out mean values of classes separated line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detecting corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and print
# out them in separated line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and print
# out them in separated line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear") # noqa: S605
if __name__ == "__main__":
main()
| """
Linear Discriminant Analysis
Assumptions About Data :
1. The input variables has a gaussian distribution.
2. The variance calculated for each input variables by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
2. Probability of an instance belong to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based-on given mean and standard deviation
:param mean: mean value of class
:param std_dev: value of standard deviation entered by usr or default value of it
:param instance_count: instance number of class
:return: a list containing generated values based-on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items that related to specific class(data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance will belong to which class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str | None = None,
) -> num:
"""
Ask for user value and validate that it fulfill a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get number of instances in classes and theirs means to generate
# dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in classes in separated line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out mean values of classes separated line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detecting corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and print
# out them in separated line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and print
# out them in separated line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear") # noqa: S605
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def gcd(a: int, b: int) -> int:
while a != 0:
a, b = b % a, a
return b
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
msg = f"mod inverse of {a!r} and {m!r} does not exist"
raise ValueError(msg)
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
q = u3 // v3
v1, v2, v3, u1, u2, u3 = (u1 - q * v1), (u2 - q * v2), (u3 - q * v3), v1, v2, v3
return u1 % m
| def gcd(a: int, b: int) -> int:
while a != 0:
a, b = b % a, a
return b
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
msg = f"mod inverse of {a!r} and {m!r} does not exist"
raise ValueError(msg)
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
q = u3 // v3
v1, v2, v3, u1, u2, u3 = (u1 - q * v1), (u2 - q * v2), (u3 - q * v3), v1, v2, v3
return u1 % m
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is an implementation of odd-even transposition sort.
It works by performing a series of parallel swaps between odd and even pairs of
variables in the list.
This implementation represents each variable in the list with a process and
each process communicates with its neighboring processes in the list to perform
comparisons.
They are synchronized with locks and message passing but other forms of
synchronization could be used.
"""
from multiprocessing import Lock, Pipe, Process
# lock used to ensure that two processes do not access a pipe at the same time
process_lock = Lock()
"""
The function run by the processes that sorts the list
position = the position in the list the process represents, used to know which
neighbor we pass our value to
value = the initial value at list[position]
LSend, RSend = the pipes we use to send to our left and right neighbors
LRcv, RRcv = the pipes we use to receive from our left and right neighbors
resultPipe = the pipe used to send results back to main
"""
def oe_process(position, value, l_send, r_send, lr_cv, rr_cv, result_pipe):
global process_lock
# we perform n swaps since after n swaps we know we are sorted
# we *could* stop early if we are sorted already, but it takes as long to
# find out we are sorted as it does to sort the list with this algorithm
for i in range(0, 10):
if (i + position) % 2 == 0 and r_send is not None:
# send your value to your right neighbor
process_lock.acquire()
r_send[1].send(value)
process_lock.release()
# receive your right neighbor's value
process_lock.acquire()
temp = rr_cv[0].recv()
process_lock.release()
# take the lower value since you are on the left
value = min(value, temp)
elif (i + position) % 2 != 0 and l_send is not None:
# send your value to your left neighbor
process_lock.acquire()
l_send[1].send(value)
process_lock.release()
# receive your left neighbor's value
process_lock.acquire()
temp = lr_cv[0].recv()
process_lock.release()
# take the higher value since you are on the right
value = max(value, temp)
# after all swaps are performed, send the values back to main
result_pipe[1].send(value)
"""
the function which creates the processes that perform the parallel swaps
arr = the list to be sorted
"""
def odd_even_transposition(arr):
process_array_ = []
result_pipe = []
# initialize the list of pipes where the values will be retrieved
for _ in arr:
result_pipe.append(Pipe())
# creates the processes
# the first and last process only have one neighbor so they are made outside
# of the loop
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(0, arr[0], None, temp_rs, None, temp_rr, result_pipe[0]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
for i in range(1, len(arr) - 1):
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(i, arr[i], temp_ls, temp_rs, temp_lr, temp_rr, result_pipe[i]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
process_array_.append(
Process(
target=oe_process,
args=(
len(arr) - 1,
arr[len(arr) - 1],
temp_ls,
None,
temp_lr,
None,
result_pipe[len(arr) - 1],
),
)
)
# start the processes
for p in process_array_:
p.start()
# wait for the processes to end and write their values to the list
for p in range(0, len(result_pipe)):
arr[p] = result_pipe[p][0].recv()
process_array_[p].join()
return arr
# creates a reverse sorted list and sorts it
def main():
arr = list(range(10, 0, -1))
print("Initial List")
print(*arr)
arr = odd_even_transposition(arr)
print("Sorted List\n")
print(*arr)
if __name__ == "__main__":
main()
| """
This is an implementation of odd-even transposition sort.
It works by performing a series of parallel swaps between odd and even pairs of
variables in the list.
This implementation represents each variable in the list with a process and
each process communicates with its neighboring processes in the list to perform
comparisons.
They are synchronized with locks and message passing but other forms of
synchronization could be used.
"""
from multiprocessing import Lock, Pipe, Process
# lock used to ensure that two processes do not access a pipe at the same time
process_lock = Lock()
"""
The function run by the processes that sorts the list
position = the position in the list the process represents, used to know which
neighbor we pass our value to
value = the initial value at list[position]
LSend, RSend = the pipes we use to send to our left and right neighbors
LRcv, RRcv = the pipes we use to receive from our left and right neighbors
resultPipe = the pipe used to send results back to main
"""
def oe_process(position, value, l_send, r_send, lr_cv, rr_cv, result_pipe):
global process_lock
# we perform n swaps since after n swaps we know we are sorted
# we *could* stop early if we are sorted already, but it takes as long to
# find out we are sorted as it does to sort the list with this algorithm
for i in range(0, 10):
if (i + position) % 2 == 0 and r_send is not None:
# send your value to your right neighbor
process_lock.acquire()
r_send[1].send(value)
process_lock.release()
# receive your right neighbor's value
process_lock.acquire()
temp = rr_cv[0].recv()
process_lock.release()
# take the lower value since you are on the left
value = min(value, temp)
elif (i + position) % 2 != 0 and l_send is not None:
# send your value to your left neighbor
process_lock.acquire()
l_send[1].send(value)
process_lock.release()
# receive your left neighbor's value
process_lock.acquire()
temp = lr_cv[0].recv()
process_lock.release()
# take the higher value since you are on the right
value = max(value, temp)
# after all swaps are performed, send the values back to main
result_pipe[1].send(value)
"""
the function which creates the processes that perform the parallel swaps
arr = the list to be sorted
"""
def odd_even_transposition(arr):
process_array_ = []
result_pipe = []
# initialize the list of pipes where the values will be retrieved
for _ in arr:
result_pipe.append(Pipe())
# creates the processes
# the first and last process only have one neighbor so they are made outside
# of the loop
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(0, arr[0], None, temp_rs, None, temp_rr, result_pipe[0]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
for i in range(1, len(arr) - 1):
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(i, arr[i], temp_ls, temp_rs, temp_lr, temp_rr, result_pipe[i]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
process_array_.append(
Process(
target=oe_process,
args=(
len(arr) - 1,
arr[len(arr) - 1],
temp_ls,
None,
temp_lr,
None,
result_pipe[len(arr) - 1],
),
)
)
# start the processes
for p in process_array_:
p.start()
# wait for the processes to end and write their values to the list
for p in range(0, len(result_pipe)):
arr[p] = result_pipe[p][0].recv()
process_array_[p].join()
return arr
# creates a reverse sorted list and sorts it
def main():
arr = list(range(10, 0, -1))
print("Initial List")
print(*arr)
arr = odd_even_transposition(arr)
print("Sorted List\n")
print(*arr)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Conway's Game of Life implemented in Python.
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
"""
from __future__ import annotations
from PIL import Image
# Define glider example
GLIDER = [
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
]
# Define blinker example
BLINKER = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
def new_generation(cells: list[list[int]]) -> list[list[int]]:
"""
Generates the next generation for a given state of Conway's Game of Life.
>>> new_generation(BLINKER)
[[0, 0, 0], [1, 1, 1], [0, 0, 0]]
"""
next_generation = []
for i in range(len(cells)):
next_generation_row = []
for j in range(len(cells[i])):
# Get the number of live neighbours
neighbour_count = 0
if i > 0 and j > 0:
neighbour_count += cells[i - 1][j - 1]
if i > 0:
neighbour_count += cells[i - 1][j]
if i > 0 and j < len(cells[i]) - 1:
neighbour_count += cells[i - 1][j + 1]
if j > 0:
neighbour_count += cells[i][j - 1]
if j < len(cells[i]) - 1:
neighbour_count += cells[i][j + 1]
if i < len(cells) - 1 and j > 0:
neighbour_count += cells[i + 1][j - 1]
if i < len(cells) - 1:
neighbour_count += cells[i + 1][j]
if i < len(cells) - 1 and j < len(cells[i]) - 1:
neighbour_count += cells[i + 1][j + 1]
# Rules of the game of life (excerpt from Wikipedia):
# 1. Any live cell with two or three live neighbours survives.
# 2. Any dead cell with three live neighbours becomes a live cell.
# 3. All other live cells die in the next generation.
# Similarly, all other dead cells stay dead.
alive = cells[i][j] == 1
if (
(alive and 2 <= neighbour_count <= 3)
or not alive
and neighbour_count == 3
):
next_generation_row.append(1)
else:
next_generation_row.append(0)
next_generation.append(next_generation_row)
return next_generation
def generate_images(cells: list[list[int]], frames: int) -> list[Image.Image]:
"""
Generates a list of images of subsequent Game of Life states.
"""
images = []
for _ in range(frames):
# Create output image
img = Image.new("RGB", (len(cells[0]), len(cells)))
pixels = img.load()
# Save cells to image
for x in range(len(cells)):
for y in range(len(cells[0])):
colour = 255 - cells[y][x] * 255
pixels[x, y] = (colour, colour, colour)
# Save image
images.append(img)
cells = new_generation(cells)
return images
if __name__ == "__main__":
images = generate_images(GLIDER, 16)
images[0].save("out.gif", save_all=True, append_images=images[1:])
| """
Conway's Game of Life implemented in Python.
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
"""
from __future__ import annotations
from PIL import Image
# Define glider example
GLIDER = [
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
]
# Define blinker example
BLINKER = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
def new_generation(cells: list[list[int]]) -> list[list[int]]:
"""
Generates the next generation for a given state of Conway's Game of Life.
>>> new_generation(BLINKER)
[[0, 0, 0], [1, 1, 1], [0, 0, 0]]
"""
next_generation = []
for i in range(len(cells)):
next_generation_row = []
for j in range(len(cells[i])):
# Get the number of live neighbours
neighbour_count = 0
if i > 0 and j > 0:
neighbour_count += cells[i - 1][j - 1]
if i > 0:
neighbour_count += cells[i - 1][j]
if i > 0 and j < len(cells[i]) - 1:
neighbour_count += cells[i - 1][j + 1]
if j > 0:
neighbour_count += cells[i][j - 1]
if j < len(cells[i]) - 1:
neighbour_count += cells[i][j + 1]
if i < len(cells) - 1 and j > 0:
neighbour_count += cells[i + 1][j - 1]
if i < len(cells) - 1:
neighbour_count += cells[i + 1][j]
if i < len(cells) - 1 and j < len(cells[i]) - 1:
neighbour_count += cells[i + 1][j + 1]
# Rules of the game of life (excerpt from Wikipedia):
# 1. Any live cell with two or three live neighbours survives.
# 2. Any dead cell with three live neighbours becomes a live cell.
# 3. All other live cells die in the next generation.
# Similarly, all other dead cells stay dead.
alive = cells[i][j] == 1
if (
(alive and 2 <= neighbour_count <= 3)
or not alive
and neighbour_count == 3
):
next_generation_row.append(1)
else:
next_generation_row.append(0)
next_generation.append(next_generation_row)
return next_generation
def generate_images(cells: list[list[int]], frames: int) -> list[Image.Image]:
"""
Generates a list of images of subsequent Game of Life states.
"""
images = []
for _ in range(frames):
# Create output image
img = Image.new("RGB", (len(cells[0]), len(cells)))
pixels = img.load()
# Save cells to image
for x in range(len(cells)):
for y in range(len(cells[0])):
colour = 255 - cells[y][x] * 255
pixels[x, y] = (colour, colour, colour)
# Save image
images.append(img)
cells = new_generation(cells)
return images
if __name__ == "__main__":
images = generate_images(GLIDER, 16)
images[0].save("out.gif", save_all=True, append_images=images[1:])
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
A perfect number is a number for which the sum of its proper divisors is exactly
equal to the number. For example, the sum of the proper divisors of 28 would be
1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n
and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest
number that can be written as the sum of two abundant numbers is 24. By
mathematical analysis, it can be shown that all integers greater than 28123
can be written as the sum of two abundant numbers. However, this upper limit
cannot be reduced any further by analysis even though it is known that the
greatest number that cannot be expressed as the sum of two abundant numbers
is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum
of two abundant numbers.
"""
def solution(limit=28123):
"""
Finds the sum of all the positive integers which cannot be written as
the sum of two abundant numbers
as described by the statement above.
>>> solution()
4179871
"""
sum_divs = [1] * (limit + 1)
for i in range(2, int(limit**0.5) + 1):
sum_divs[i * i] += i
for k in range(i + 1, limit // i + 1):
sum_divs[k * i] += k + i
abundants = set()
res = 0
for n in range(1, limit + 1):
if sum_divs[n] > n:
abundants.add(n)
if not any((n - a in abundants) for a in abundants):
res += n
return res
if __name__ == "__main__":
print(solution())
| """
A perfect number is a number for which the sum of its proper divisors is exactly
equal to the number. For example, the sum of the proper divisors of 28 would be
1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n
and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest
number that can be written as the sum of two abundant numbers is 24. By
mathematical analysis, it can be shown that all integers greater than 28123
can be written as the sum of two abundant numbers. However, this upper limit
cannot be reduced any further by analysis even though it is known that the
greatest number that cannot be expressed as the sum of two abundant numbers
is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum
of two abundant numbers.
"""
def solution(limit=28123):
"""
Finds the sum of all the positive integers which cannot be written as
the sum of two abundant numbers
as described by the statement above.
>>> solution()
4179871
"""
sum_divs = [1] * (limit + 1)
for i in range(2, int(limit**0.5) + 1):
sum_divs[i * i] += i
for k in range(i + 1, limit // i + 1):
sum_divs[k * i] += k + i
abundants = set()
res = 0
for n in range(1, limit + 1):
if sum_divs[n] > n:
abundants.add(n)
if not any((n - a in abundants) for a in abundants):
res += n
return res
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/env python3
"""
Build a half-adder quantum circuit that takes two bits as input,
encodes them into qubits, then runs the half-adder circuit calculating
the sum and carry qubits, observed over 1000 runs of the experiment
.
References:
https://en.wikipedia.org/wiki/Adder_(electronics)
https://qiskit.org/textbook/ch-states/atoms-computation.html#4.2-Remembering-how-to-add-
"""
import qiskit
def half_adder(bit0: int, bit1: int) -> qiskit.result.counts.Counts:
"""
>>> half_adder(0, 0)
{'00': 1000}
>>> half_adder(0, 1)
{'01': 1000}
>>> half_adder(1, 0)
{'01': 1000}
>>> half_adder(1, 1)
{'10': 1000}
"""
# Use Aer's simulator
simulator = qiskit.Aer.get_backend("aer_simulator")
qc_ha = qiskit.QuantumCircuit(4, 2)
# encode inputs in qubits 0 and 1
if bit0 == 1:
qc_ha.x(0)
if bit1 == 1:
qc_ha.x(1)
qc_ha.barrier()
# use cnots to write XOR of the inputs on qubit2
qc_ha.cx(0, 2)
qc_ha.cx(1, 2)
# use ccx / toffoli gate to write AND of the inputs on qubit3
qc_ha.ccx(0, 1, 3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2, 0) # extract XOR value
qc_ha.measure(3, 1) # extract AND value
# Execute the circuit on the qasm simulator
job = qiskit.execute(qc_ha, simulator, shots=1000)
# Return the histogram data of the results of the experiment
return job.result().get_counts(qc_ha)
if __name__ == "__main__":
counts = half_adder(1, 1)
print(f"Half Adder Output Qubit Counts: {counts}")
| #!/usr/bin/env python3
"""
Build a half-adder quantum circuit that takes two bits as input,
encodes them into qubits, then runs the half-adder circuit calculating
the sum and carry qubits, observed over 1000 runs of the experiment
.
References:
https://en.wikipedia.org/wiki/Adder_(electronics)
https://qiskit.org/textbook/ch-states/atoms-computation.html#4.2-Remembering-how-to-add-
"""
import qiskit
def half_adder(bit0: int, bit1: int) -> qiskit.result.counts.Counts:
"""
>>> half_adder(0, 0)
{'00': 1000}
>>> half_adder(0, 1)
{'01': 1000}
>>> half_adder(1, 0)
{'01': 1000}
>>> half_adder(1, 1)
{'10': 1000}
"""
# Use Aer's simulator
simulator = qiskit.Aer.get_backend("aer_simulator")
qc_ha = qiskit.QuantumCircuit(4, 2)
# encode inputs in qubits 0 and 1
if bit0 == 1:
qc_ha.x(0)
if bit1 == 1:
qc_ha.x(1)
qc_ha.barrier()
# use cnots to write XOR of the inputs on qubit2
qc_ha.cx(0, 2)
qc_ha.cx(1, 2)
# use ccx / toffoli gate to write AND of the inputs on qubit3
qc_ha.ccx(0, 1, 3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2, 0) # extract XOR value
qc_ha.measure(3, 1) # extract AND value
# Execute the circuit on the qasm simulator
job = qiskit.execute(qc_ha, simulator, shots=1000)
# Return the histogram data of the results of the experiment
return job.result().get_counts(qc_ha)
if __name__ == "__main__":
counts = half_adder(1, 1)
print(f"Half Adder Output Qubit Counts: {counts}")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """README, Author - Anurag Kumar(mailto:[email protected])
Requirements:
- sklearn
- numpy
- matplotlib
Python:
- 3.5
Inputs:
- X , a 2D numpy array of features.
- k , number of clusters to create.
- initial_centroids , initial centroid values generated by utility function(mentioned
in usage).
- maxiter , maximum number of iterations to process.
- heterogeneity , empty list that will be filled with hetrogeneity values if passed
to kmeans func.
Usage:
1. define 'k' value, 'X' features array and 'hetrogeneity' empty list
2. create initial_centroids,
initial_centroids = get_initial_centroids(
X,
k,
seed=0 # seed value for initial centroid generation,
# None for randomness(default=None)
)
3. find centroids and clusters using kmeans function.
centroids, cluster_assignment = kmeans(
X,
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True # whether to print logs in console or not.(default=False)
)
4. Plot the loss function, hetrogeneity values for every iteration saved in
hetrogeneity list.
plot_heterogeneity(
heterogeneity,
k
)
5. Transfers Dataframe into excel format it must have feature called
'Clust' with k means clustering numbers in it.
"""
import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import pairwise_distances
warnings.filterwarnings("ignore")
TAG = "K-MEANS-CLUST/ "
def get_initial_centroids(data, k, seed=None):
"""Randomly choose k data points as initial centroids"""
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices, :]
return centroids
def centroid_pairwise_dist(x, centroids):
return pairwise_distances(x, centroids, metric="euclidean")
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = centroid_pairwise_dist(data, centroids)
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis=1)
return cluster_assignment
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(
member_data_points, [centroids[i]], metric="euclidean"
)
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7, 4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel("# Iterations")
plt.ylabel("Heterogeneity")
plt.title(f"Heterogeneity of clustering over time, K={k:d}")
plt.rcParams.update({"font.size": 16})
plt.show()
def kmeans(
data, k, initial_centroids, maxiter=500, record_heterogeneity=None, verbose=False
):
"""This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.(default=500)
record_heterogeneity: (optional) a list, to store the history of heterogeneity
as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in
each iteration"""
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in range(maxiter):
if verbose:
print(itr, end="")
# 1. Make cluster assignments using nearest centroids
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data
# points assigned to that cluster.
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if (
prev_cluster_assignment is not None
and (prev_cluster_assignment == cluster_assignment).all()
):
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment != cluster_assignment)
if verbose:
print(
f" {num_changed:5d} elements changed their cluster assignment."
)
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
# Mock test below
if False: # change to true to run this test case.
from sklearn import datasets as ds
dataset = ds.load_iris()
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(dataset["data"], k, seed=0)
centroids, cluster_assignment = kmeans(
dataset["data"],
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True,
)
plot_heterogeneity(heterogeneity, k)
def report_generator(
df: pd.DataFrame, clustering_variables: np.ndarray, fill_missing_report=None
) -> pd.DataFrame:
"""
Function generates easy-erading clustering report. It takes 2 arguments as an input:
DataFrame - dataframe with predicted cluester column;
FillMissingReport - dictionary of rules how we are going to fill missing
values of for final report generate (not included in modeling);
in order to run the function following libraries must be imported:
import pandas as pd
import numpy as np
>>> data = pd.DataFrame()
>>> data['numbers'] = [1, 2, 3]
>>> data['col1'] = [0.5, 2.5, 4.5]
>>> data['col2'] = [100, 200, 300]
>>> data['col3'] = [10, 20, 30]
>>> data['Cluster'] = [1, 1, 2]
>>> report_generator(data, ['col1', 'col2'], 0)
Features Type Mark 1 2
0 # of Customers ClusterSize False 2.000000 1.000000
1 % of Customers ClusterProportion False 0.666667 0.333333
2 col1 mean_with_zeros True 1.500000 4.500000
3 col2 mean_with_zeros True 150.000000 300.000000
4 numbers mean_with_zeros False 1.500000 3.000000
.. ... ... ... ... ...
99 dummy 5% False 1.000000 1.000000
100 dummy 95% False 1.000000 1.000000
101 dummy stdev False 0.000000 NaN
102 dummy mode False 1.000000 1.000000
103 dummy median False 1.000000 1.000000
<BLANKLINE>
[104 rows x 5 columns]
"""
# Fill missing values with given rules
if fill_missing_report:
df = df.fillna(value=fill_missing_report)
df["dummy"] = 1
numeric_cols = df.select_dtypes(np.number).columns
report = (
df.groupby(["Cluster"])[ # construct report dataframe
numeric_cols
] # group by cluster number
.agg(
[
("sum", np.sum),
("mean_with_zeros", lambda x: np.mean(np.nan_to_num(x))),
("mean_without_zeros", lambda x: x.replace(0, np.NaN).mean()),
(
"mean_25-75",
lambda x: np.mean(
np.nan_to_num(
sorted(x)[
round(len(x) * 25 / 100) : round(len(x) * 75 / 100)
]
)
),
),
("mean_with_na", np.mean),
("min", lambda x: x.min()),
("5%", lambda x: x.quantile(0.05)),
("25%", lambda x: x.quantile(0.25)),
("50%", lambda x: x.quantile(0.50)),
("75%", lambda x: x.quantile(0.75)),
("95%", lambda x: x.quantile(0.95)),
("max", lambda x: x.max()),
("count", lambda x: x.count()),
("stdev", lambda x: x.std()),
("mode", lambda x: x.mode()[0]),
("median", lambda x: x.median()),
("# > 0", lambda x: (x > 0).sum()),
]
)
.T.reset_index()
.rename(index=str, columns={"level_0": "Features", "level_1": "Type"})
) # rename columns
# calculate the size of cluster(count of clientID's)
clustersize = report[
(report["Features"] == "dummy") & (report["Type"] == "count")
].copy() # avoid SettingWithCopyWarning
clustersize.Type = (
"ClusterSize" # rename created cluster df to match report column names
)
clustersize.Features = "# of Customers"
clusterproportion = pd.DataFrame(
clustersize.iloc[:, 2:].values
/ clustersize.iloc[:, 2:].values.sum() # calculating the proportion of cluster
)
clusterproportion[
"Type"
] = "% of Customers" # rename created cluster df to match report column names
clusterproportion["Features"] = "ClusterProportion"
cols = clusterproportion.columns.tolist()
cols = cols[-2:] + cols[:-2]
clusterproportion = clusterproportion[cols] # rearrange columns to match report
clusterproportion.columns = report.columns
a = pd.DataFrame(
abs(
report[report["Type"] == "count"].iloc[:, 2:].values
- clustersize.iloc[:, 2:].values
)
) # generating df with count of nan values
a["Features"] = 0
a["Type"] = "# of nan"
a.Features = report[
report["Type"] == "count"
].Features.tolist() # filling values in order to match report
cols = a.columns.tolist()
cols = cols[-2:] + cols[:-2]
a = a[cols] # rearrange columns to match report
a.columns = report.columns # rename columns to match report
report = report.drop(
report[report.Type == "count"].index
) # drop count values except cluster size
report = pd.concat(
[report, a, clustersize, clusterproportion], axis=0
) # concat report with clustert size and nan values
report["Mark"] = report["Features"].isin(clustering_variables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
report = report[cols]
sorter1 = {
"ClusterSize": 9,
"ClusterProportion": 8,
"mean_with_zeros": 7,
"mean_with_na": 6,
"max": 5,
"50%": 4,
"min": 3,
"25%": 2,
"75%": 1,
"# of nan": 0,
"# > 0": -1,
"sum_with_na": -2,
}
report = (
report.assign(
Sorter1=lambda x: x.Type.map(sorter1),
Sorter2=lambda x: list(reversed(range(len(x)))),
)
.sort_values(["Sorter1", "Mark", "Sorter2"], ascending=False)
.drop(["Sorter1", "Sorter2"], axis=1)
)
report.columns.name = ""
report = report.reset_index()
report = report.drop(columns=["index"])
return report
if __name__ == "__main__":
import doctest
doctest.testmod()
| """README, Author - Anurag Kumar(mailto:[email protected])
Requirements:
- sklearn
- numpy
- matplotlib
Python:
- 3.5
Inputs:
- X , a 2D numpy array of features.
- k , number of clusters to create.
- initial_centroids , initial centroid values generated by utility function(mentioned
in usage).
- maxiter , maximum number of iterations to process.
- heterogeneity , empty list that will be filled with hetrogeneity values if passed
to kmeans func.
Usage:
1. define 'k' value, 'X' features array and 'hetrogeneity' empty list
2. create initial_centroids,
initial_centroids = get_initial_centroids(
X,
k,
seed=0 # seed value for initial centroid generation,
# None for randomness(default=None)
)
3. find centroids and clusters using kmeans function.
centroids, cluster_assignment = kmeans(
X,
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True # whether to print logs in console or not.(default=False)
)
4. Plot the loss function, hetrogeneity values for every iteration saved in
hetrogeneity list.
plot_heterogeneity(
heterogeneity,
k
)
5. Transfers Dataframe into excel format it must have feature called
'Clust' with k means clustering numbers in it.
"""
import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import pairwise_distances
warnings.filterwarnings("ignore")
TAG = "K-MEANS-CLUST/ "
def get_initial_centroids(data, k, seed=None):
"""Randomly choose k data points as initial centroids"""
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices, :]
return centroids
def centroid_pairwise_dist(x, centroids):
return pairwise_distances(x, centroids, metric="euclidean")
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = centroid_pairwise_dist(data, centroids)
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis=1)
return cluster_assignment
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(
member_data_points, [centroids[i]], metric="euclidean"
)
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7, 4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel("# Iterations")
plt.ylabel("Heterogeneity")
plt.title(f"Heterogeneity of clustering over time, K={k:d}")
plt.rcParams.update({"font.size": 16})
plt.show()
def kmeans(
data, k, initial_centroids, maxiter=500, record_heterogeneity=None, verbose=False
):
"""This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.(default=500)
record_heterogeneity: (optional) a list, to store the history of heterogeneity
as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in
each iteration"""
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in range(maxiter):
if verbose:
print(itr, end="")
# 1. Make cluster assignments using nearest centroids
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data
# points assigned to that cluster.
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if (
prev_cluster_assignment is not None
and (prev_cluster_assignment == cluster_assignment).all()
):
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment != cluster_assignment)
if verbose:
print(
f" {num_changed:5d} elements changed their cluster assignment."
)
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
# Mock test below
if False: # change to true to run this test case.
from sklearn import datasets as ds
dataset = ds.load_iris()
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(dataset["data"], k, seed=0)
centroids, cluster_assignment = kmeans(
dataset["data"],
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True,
)
plot_heterogeneity(heterogeneity, k)
def report_generator(
df: pd.DataFrame, clustering_variables: np.ndarray, fill_missing_report=None
) -> pd.DataFrame:
"""
Function generates easy-erading clustering report. It takes 2 arguments as an input:
DataFrame - dataframe with predicted cluester column;
FillMissingReport - dictionary of rules how we are going to fill missing
values of for final report generate (not included in modeling);
in order to run the function following libraries must be imported:
import pandas as pd
import numpy as np
>>> data = pd.DataFrame()
>>> data['numbers'] = [1, 2, 3]
>>> data['col1'] = [0.5, 2.5, 4.5]
>>> data['col2'] = [100, 200, 300]
>>> data['col3'] = [10, 20, 30]
>>> data['Cluster'] = [1, 1, 2]
>>> report_generator(data, ['col1', 'col2'], 0)
Features Type Mark 1 2
0 # of Customers ClusterSize False 2.000000 1.000000
1 % of Customers ClusterProportion False 0.666667 0.333333
2 col1 mean_with_zeros True 1.500000 4.500000
3 col2 mean_with_zeros True 150.000000 300.000000
4 numbers mean_with_zeros False 1.500000 3.000000
.. ... ... ... ... ...
99 dummy 5% False 1.000000 1.000000
100 dummy 95% False 1.000000 1.000000
101 dummy stdev False 0.000000 NaN
102 dummy mode False 1.000000 1.000000
103 dummy median False 1.000000 1.000000
<BLANKLINE>
[104 rows x 5 columns]
"""
# Fill missing values with given rules
if fill_missing_report:
df = df.fillna(value=fill_missing_report)
df["dummy"] = 1
numeric_cols = df.select_dtypes(np.number).columns
report = (
df.groupby(["Cluster"])[ # construct report dataframe
numeric_cols
] # group by cluster number
.agg(
[
("sum", np.sum),
("mean_with_zeros", lambda x: np.mean(np.nan_to_num(x))),
("mean_without_zeros", lambda x: x.replace(0, np.NaN).mean()),
(
"mean_25-75",
lambda x: np.mean(
np.nan_to_num(
sorted(x)[
round(len(x) * 25 / 100) : round(len(x) * 75 / 100)
]
)
),
),
("mean_with_na", np.mean),
("min", lambda x: x.min()),
("5%", lambda x: x.quantile(0.05)),
("25%", lambda x: x.quantile(0.25)),
("50%", lambda x: x.quantile(0.50)),
("75%", lambda x: x.quantile(0.75)),
("95%", lambda x: x.quantile(0.95)),
("max", lambda x: x.max()),
("count", lambda x: x.count()),
("stdev", lambda x: x.std()),
("mode", lambda x: x.mode()[0]),
("median", lambda x: x.median()),
("# > 0", lambda x: (x > 0).sum()),
]
)
.T.reset_index()
.rename(index=str, columns={"level_0": "Features", "level_1": "Type"})
) # rename columns
# calculate the size of cluster(count of clientID's)
clustersize = report[
(report["Features"] == "dummy") & (report["Type"] == "count")
].copy() # avoid SettingWithCopyWarning
clustersize.Type = (
"ClusterSize" # rename created cluster df to match report column names
)
clustersize.Features = "# of Customers"
clusterproportion = pd.DataFrame(
clustersize.iloc[:, 2:].values
/ clustersize.iloc[:, 2:].values.sum() # calculating the proportion of cluster
)
clusterproportion[
"Type"
] = "% of Customers" # rename created cluster df to match report column names
clusterproportion["Features"] = "ClusterProportion"
cols = clusterproportion.columns.tolist()
cols = cols[-2:] + cols[:-2]
clusterproportion = clusterproportion[cols] # rearrange columns to match report
clusterproportion.columns = report.columns
a = pd.DataFrame(
abs(
report[report["Type"] == "count"].iloc[:, 2:].values
- clustersize.iloc[:, 2:].values
)
) # generating df with count of nan values
a["Features"] = 0
a["Type"] = "# of nan"
a.Features = report[
report["Type"] == "count"
].Features.tolist() # filling values in order to match report
cols = a.columns.tolist()
cols = cols[-2:] + cols[:-2]
a = a[cols] # rearrange columns to match report
a.columns = report.columns # rename columns to match report
report = report.drop(
report[report.Type == "count"].index
) # drop count values except cluster size
report = pd.concat(
[report, a, clustersize, clusterproportion], axis=0
) # concat report with clustert size and nan values
report["Mark"] = report["Features"].isin(clustering_variables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
report = report[cols]
sorter1 = {
"ClusterSize": 9,
"ClusterProportion": 8,
"mean_with_zeros": 7,
"mean_with_na": 6,
"max": 5,
"50%": 4,
"min": 3,
"25%": 2,
"75%": 1,
"# of nan": 0,
"# > 0": -1,
"sum_with_na": -2,
}
report = (
report.assign(
Sorter1=lambda x: x.Type.map(sorter1),
Sorter2=lambda x: list(reversed(range(len(x)))),
)
.sort_values(["Sorter1", "Mark", "Sorter2"], ascending=False)
.drop(["Sorter1", "Sorter2"], axis=1)
)
report.columns.name = ""
report = report.reset_index()
report = report.drop(columns=["index"])
return report
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def exchange_sort(numbers: list[int]) -> list[int]:
"""
Uses exchange sort to sort a list of numbers.
Source: https://en.wikipedia.org/wiki/Sorting_algorithm#Exchange_sort
>>> exchange_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> exchange_sort([-1, -2, -3])
[-3, -2, -1]
>>> exchange_sort([1, 2, 3, 4, 5])
[1, 2, 3, 4, 5]
>>> exchange_sort([0, 10, -2, 5, 3])
[-2, 0, 3, 5, 10]
>>> exchange_sort([])
[]
"""
numbers_length = len(numbers)
for i in range(numbers_length):
for j in range(i + 1, numbers_length):
if numbers[j] < numbers[i]:
numbers[i], numbers[j] = numbers[j], numbers[i]
return numbers
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(exchange_sort(unsorted))
| def exchange_sort(numbers: list[int]) -> list[int]:
"""
Uses exchange sort to sort a list of numbers.
Source: https://en.wikipedia.org/wiki/Sorting_algorithm#Exchange_sort
>>> exchange_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> exchange_sort([-1, -2, -3])
[-3, -2, -1]
>>> exchange_sort([1, 2, 3, 4, 5])
[1, 2, 3, 4, 5]
>>> exchange_sort([0, 10, -2, 5, 3])
[-2, 0, 3, 5, 10]
>>> exchange_sort([])
[]
"""
numbers_length = len(numbers)
for i in range(numbers_length):
for j in range(i + 1, numbers_length):
if numbers[j] < numbers[i]:
numbers[i], numbers[j] = numbers[j], numbers[i]
return numbers
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(exchange_sort(unsorted))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 7: https://projecteuler.net/problem=7
10001st prime
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we
can see that the 6th prime is 13.
What is the 10001st prime number?
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
import itertools
import math
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
Returns boolean representing primality of given number (i.e., if the
result is true, then the number is indeed prime else it is not).
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(2999)
True
>>> is_prime(0)
False
>>> is_prime(1)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def prime_generator():
"""
Generate a sequence of prime numbers
"""
num = 2
while True:
if is_prime(num):
yield num
num += 1
def solution(nth: int = 10001) -> int:
"""
Returns the n-th prime number.
>>> solution(6)
13
>>> solution(1)
2
>>> solution(3)
5
>>> solution(20)
71
>>> solution(50)
229
>>> solution(100)
541
"""
return next(itertools.islice(prime_generator(), nth - 1, nth))
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 7: https://projecteuler.net/problem=7
10001st prime
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we
can see that the 6th prime is 13.
What is the 10001st prime number?
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
import itertools
import math
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
Returns boolean representing primality of given number (i.e., if the
result is true, then the number is indeed prime else it is not).
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(2999)
True
>>> is_prime(0)
False
>>> is_prime(1)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def prime_generator():
"""
Generate a sequence of prime numbers
"""
num = 2
while True:
if is_prime(num):
yield num
num += 1
def solution(nth: int = 10001) -> int:
"""
Returns the n-th prime number.
>>> solution(6)
13
>>> solution(1)
2
>>> solution(3)
5
>>> solution(20)
71
>>> solution(50)
229
>>> solution(100)
541
"""
return next(itertools.islice(prime_generator(), nth - 1, nth))
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Backtracking
Backtracking is a way to speed up the search process by removing candidates when they can't be the solution of a problem.
* <https://en.wikipedia.org/wiki/Backtracking>
* <https://en.wikipedia.org/wiki/Decision_tree_pruning>
* <https://medium.com/@priyankmistry1999/backtracking-sudoku-6e4439e4825c>
* <https://www.geeksforgeeks.org/sudoku-backtracking-7/>
| # Backtracking
Backtracking is a way to speed up the search process by removing candidates when they can't be the solution of a problem.
* <https://en.wikipedia.org/wiki/Backtracking>
* <https://en.wikipedia.org/wiki/Decision_tree_pruning>
* <https://medium.com/@priyankmistry1999/backtracking-sudoku-6e4439e4825c>
* <https://www.geeksforgeeks.org/sudoku-backtracking-7/>
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
A hexagonal number sequence is a sequence of figurate numbers
where the nth hexagonal number hₙ is the number of distinct dots
in a pattern of dots consisting of the outlines of regular
hexagons with sides up to n dots, when the hexagons are overlaid
so that they share one vertex.
Calculates the hexagonal numbers sequence with a formula
hₙ = n(2n-1)
where:
hₙ --> is nth element of the sequence
n --> is the number of element in the sequence
reference-->"Hexagonal number" Wikipedia
<https://en.wikipedia.org/wiki/Hexagonal_number>
"""
def hexagonal_numbers(length: int) -> list[int]:
"""
:param len: max number of elements
:type len: int
:return: Hexagonal numbers as a list
Tests:
>>> hexagonal_numbers(10)
[0, 1, 6, 15, 28, 45, 66, 91, 120, 153]
>>> hexagonal_numbers(5)
[0, 1, 6, 15, 28]
>>> hexagonal_numbers(0)
Traceback (most recent call last):
...
ValueError: Length must be a positive integer.
"""
if length <= 0 or not isinstance(length, int):
raise ValueError("Length must be a positive integer.")
return [n * (2 * n - 1) for n in range(length)]
if __name__ == "__main__":
print(hexagonal_numbers(length=5))
print(hexagonal_numbers(length=10))
| """
A hexagonal number sequence is a sequence of figurate numbers
where the nth hexagonal number hₙ is the number of distinct dots
in a pattern of dots consisting of the outlines of regular
hexagons with sides up to n dots, when the hexagons are overlaid
so that they share one vertex.
Calculates the hexagonal numbers sequence with a formula
hₙ = n(2n-1)
where:
hₙ --> is nth element of the sequence
n --> is the number of element in the sequence
reference-->"Hexagonal number" Wikipedia
<https://en.wikipedia.org/wiki/Hexagonal_number>
"""
def hexagonal_numbers(length: int) -> list[int]:
"""
:param len: max number of elements
:type len: int
:return: Hexagonal numbers as a list
Tests:
>>> hexagonal_numbers(10)
[0, 1, 6, 15, 28, 45, 66, 91, 120, 153]
>>> hexagonal_numbers(5)
[0, 1, 6, 15, 28]
>>> hexagonal_numbers(0)
Traceback (most recent call last):
...
ValueError: Length must be a positive integer.
"""
if length <= 0 or not isinstance(length, int):
raise ValueError("Length must be a positive integer.")
return [n * (2 * n - 1) for n in range(length)]
if __name__ == "__main__":
print(hexagonal_numbers(length=5))
print(hexagonal_numbers(length=10))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Description : Centripetal force is the force acting on an object in
curvilinear motion directed towards the axis of rotation
or centre of curvature.
The unit of centripetal force is newton.
The centripetal force is always directed perpendicular to the
direction of the object’s displacement. Using Newton’s second
law of motion, it is found that the centripetal force of an object
moving in a circular path always acts towards the centre of the circle.
The Centripetal Force Formula is given as the product of mass (in kg)
and tangential velocity (in meters per second) squared, divided by the
radius (in meters) that implies that on doubling the tangential velocity,
the centripetal force will be quadrupled. Mathematically it is written as:
F = mv²/r
Where, F is the Centripetal force, m is the mass of the object, v is the
speed or velocity of the object and r is the radius.
Reference: https://byjus.com/physics/centripetal-and-centrifugal-force/
"""
def centripetal(mass: float, velocity: float, radius: float) -> float:
"""
The Centripetal Force formula is given as: (m*v*v)/r
>>> round(centripetal(15.5,-30,10),2)
1395.0
>>> round(centripetal(10,15,5),2)
450.0
>>> round(centripetal(20,-50,15),2)
3333.33
>>> round(centripetal(12.25,40,25),2)
784.0
>>> round(centripetal(50,100,50),2)
10000.0
"""
if mass < 0:
raise ValueError("The mass of the body cannot be negative")
if radius <= 0:
raise ValueError("The radius is always a positive non zero integer")
return (mass * (velocity) ** 2) / radius
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| """
Description : Centripetal force is the force acting on an object in
curvilinear motion directed towards the axis of rotation
or centre of curvature.
The unit of centripetal force is newton.
The centripetal force is always directed perpendicular to the
direction of the object’s displacement. Using Newton’s second
law of motion, it is found that the centripetal force of an object
moving in a circular path always acts towards the centre of the circle.
The Centripetal Force Formula is given as the product of mass (in kg)
and tangential velocity (in meters per second) squared, divided by the
radius (in meters) that implies that on doubling the tangential velocity,
the centripetal force will be quadrupled. Mathematically it is written as:
F = mv²/r
Where, F is the Centripetal force, m is the mass of the object, v is the
speed or velocity of the object and r is the radius.
Reference: https://byjus.com/physics/centripetal-and-centrifugal-force/
"""
def centripetal(mass: float, velocity: float, radius: float) -> float:
"""
The Centripetal Force formula is given as: (m*v*v)/r
>>> round(centripetal(15.5,-30,10),2)
1395.0
>>> round(centripetal(10,15,5),2)
450.0
>>> round(centripetal(20,-50,15),2)
3333.33
>>> round(centripetal(12.25,40,25),2)
784.0
>>> round(centripetal(50,100,50),2)
10000.0
"""
if mass < 0:
raise ValueError("The mass of the body cannot be negative")
if radius <= 0:
raise ValueError("The radius is always a positive non zero integer")
return (mass * (velocity) ** 2) / radius
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
In this problem, we want to determine all possible permutations
of the given sequence. We use backtracking to solve this problem.
Time complexity: O(n! * n),
where n denotes the length of the given sequence.
"""
from __future__ import annotations
def generate_all_permutations(sequence: list[int | str]) -> None:
create_state_space_tree(sequence, [], 0, [0 for i in range(len(sequence))])
def create_state_space_tree(
sequence: list[int | str],
current_sequence: list[int | str],
index: int,
index_used: list[int],
) -> None:
"""
Creates a state space tree to iterate through each branch using DFS.
We know that each state has exactly len(sequence) - index children.
It terminates when it reaches the end of the given sequence.
"""
if index == len(sequence):
print(current_sequence)
return
for i in range(len(sequence)):
if not index_used[i]:
current_sequence.append(sequence[i])
index_used[i] = True
create_state_space_tree(sequence, current_sequence, index + 1, index_used)
current_sequence.pop()
index_used[i] = False
"""
remove the comment to take an input from the user
print("Enter the elements")
sequence = list(map(int, input().split()))
"""
sequence: list[int | str] = [3, 1, 2, 4]
generate_all_permutations(sequence)
sequence_2: list[int | str] = ["A", "B", "C"]
generate_all_permutations(sequence_2)
| """
In this problem, we want to determine all possible permutations
of the given sequence. We use backtracking to solve this problem.
Time complexity: O(n! * n),
where n denotes the length of the given sequence.
"""
from __future__ import annotations
def generate_all_permutations(sequence: list[int | str]) -> None:
create_state_space_tree(sequence, [], 0, [0 for i in range(len(sequence))])
def create_state_space_tree(
sequence: list[int | str],
current_sequence: list[int | str],
index: int,
index_used: list[int],
) -> None:
"""
Creates a state space tree to iterate through each branch using DFS.
We know that each state has exactly len(sequence) - index children.
It terminates when it reaches the end of the given sequence.
"""
if index == len(sequence):
print(current_sequence)
return
for i in range(len(sequence)):
if not index_used[i]:
current_sequence.append(sequence[i])
index_used[i] = True
create_state_space_tree(sequence, current_sequence, index + 1, index_used)
current_sequence.pop()
index_used[i] = False
"""
remove the comment to take an input from the user
print("Enter the elements")
sequence = list(map(int, input().split()))
"""
sequence: list[int | str] = [3, 1, 2, 4]
generate_all_permutations(sequence)
sequence_2: list[int | str] = ["A", "B", "C"]
generate_all_permutations(sequence_2)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Coin sums
Problem 31: https://projecteuler.net/problem=31
In England the currency is made up of pound, £, and pence, p, and there are
eight coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:
1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?
"""
def one_pence() -> int:
return 1
def two_pence(x: int) -> int:
return 0 if x < 0 else two_pence(x - 2) + one_pence()
def five_pence(x: int) -> int:
return 0 if x < 0 else five_pence(x - 5) + two_pence(x)
def ten_pence(x: int) -> int:
return 0 if x < 0 else ten_pence(x - 10) + five_pence(x)
def twenty_pence(x: int) -> int:
return 0 if x < 0 else twenty_pence(x - 20) + ten_pence(x)
def fifty_pence(x: int) -> int:
return 0 if x < 0 else fifty_pence(x - 50) + twenty_pence(x)
def one_pound(x: int) -> int:
return 0 if x < 0 else one_pound(x - 100) + fifty_pence(x)
def two_pound(x: int) -> int:
return 0 if x < 0 else two_pound(x - 200) + one_pound(x)
def solution(n: int = 200) -> int:
"""Returns the number of different ways can n pence be made using any number of
coins?
>>> solution(500)
6295434
>>> solution(200)
73682
>>> solution(50)
451
>>> solution(10)
11
"""
return two_pound(n)
if __name__ == "__main__":
print(solution(int(input().strip())))
| """
Coin sums
Problem 31: https://projecteuler.net/problem=31
In England the currency is made up of pound, £, and pence, p, and there are
eight coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:
1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?
"""
def one_pence() -> int:
return 1
def two_pence(x: int) -> int:
return 0 if x < 0 else two_pence(x - 2) + one_pence()
def five_pence(x: int) -> int:
return 0 if x < 0 else five_pence(x - 5) + two_pence(x)
def ten_pence(x: int) -> int:
return 0 if x < 0 else ten_pence(x - 10) + five_pence(x)
def twenty_pence(x: int) -> int:
return 0 if x < 0 else twenty_pence(x - 20) + ten_pence(x)
def fifty_pence(x: int) -> int:
return 0 if x < 0 else fifty_pence(x - 50) + twenty_pence(x)
def one_pound(x: int) -> int:
return 0 if x < 0 else one_pound(x - 100) + fifty_pence(x)
def two_pound(x: int) -> int:
return 0 if x < 0 else two_pound(x - 200) + one_pound(x)
def solution(n: int = 200) -> int:
"""Returns the number of different ways can n pence be made using any number of
coins?
>>> solution(500)
6295434
>>> solution(200)
73682
>>> solution(50)
451
>>> solution(10)
11
"""
return two_pound(n)
if __name__ == "__main__":
print(solution(int(input().strip())))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Implementation of iterative merge sort in Python
Author: Aman Gupta
For doctests run following command:
python3 -m doctest -v iterative_merge_sort.py
For manual testing run:
python3 iterative_merge_sort.py
"""
from __future__ import annotations
def merge(input_list: list, low: int, mid: int, high: int) -> list:
"""
sorting left-half and right-half individually
then merging them into result
"""
result = []
left, right = input_list[low:mid], input_list[mid : high + 1]
while left and right:
result.append((left if left[0] <= right[0] else right).pop(0))
input_list[low : high + 1] = result + left + right
return input_list
# iteration over the unsorted list
def iter_merge_sort(input_list: list) -> list:
"""
Return a sorted copy of the input list
>>> iter_merge_sort([5, 9, 8, 7, 1, 2, 7])
[1, 2, 5, 7, 7, 8, 9]
>>> iter_merge_sort([1])
[1]
>>> iter_merge_sort([2, 1])
[1, 2]
>>> iter_merge_sort([2, 1, 3])
[1, 2, 3]
>>> iter_merge_sort([4, 3, 2, 1])
[1, 2, 3, 4]
>>> iter_merge_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort([0.3, 0.2, 0.1])
[0.1, 0.2, 0.3]
>>> iter_merge_sort(['dep', 'dang', 'trai'])
['dang', 'dep', 'trai']
>>> iter_merge_sort([6])
[6]
>>> iter_merge_sort([])
[]
>>> iter_merge_sort([-2, -9, -1, -4])
[-9, -4, -2, -1]
>>> iter_merge_sort([1.1, 1, 0.0, -1, -1.1])
[-1.1, -1, 0.0, 1, 1.1]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort('cba')
['a', 'b', 'c']
"""
if len(input_list) <= 1:
return input_list
input_list = list(input_list)
# iteration for two-way merging
p = 2
while p <= len(input_list):
# getting low, high and middle value for merge-sort of single list
for i in range(0, len(input_list), p):
low = i
high = i + p - 1
mid = (low + high + 1) // 2
input_list = merge(input_list, low, mid, high)
# final merge of last two parts
if p * 2 >= len(input_list):
mid = i
input_list = merge(input_list, 0, mid, len(input_list) - 1)
break
p *= 2
return input_list
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
if user_input == "":
unsorted = []
else:
unsorted = [int(item.strip()) for item in user_input.split(",")]
print(iter_merge_sort(unsorted))
| """
Implementation of iterative merge sort in Python
Author: Aman Gupta
For doctests run following command:
python3 -m doctest -v iterative_merge_sort.py
For manual testing run:
python3 iterative_merge_sort.py
"""
from __future__ import annotations
def merge(input_list: list, low: int, mid: int, high: int) -> list:
"""
sorting left-half and right-half individually
then merging them into result
"""
result = []
left, right = input_list[low:mid], input_list[mid : high + 1]
while left and right:
result.append((left if left[0] <= right[0] else right).pop(0))
input_list[low : high + 1] = result + left + right
return input_list
# iteration over the unsorted list
def iter_merge_sort(input_list: list) -> list:
"""
Return a sorted copy of the input list
>>> iter_merge_sort([5, 9, 8, 7, 1, 2, 7])
[1, 2, 5, 7, 7, 8, 9]
>>> iter_merge_sort([1])
[1]
>>> iter_merge_sort([2, 1])
[1, 2]
>>> iter_merge_sort([2, 1, 3])
[1, 2, 3]
>>> iter_merge_sort([4, 3, 2, 1])
[1, 2, 3, 4]
>>> iter_merge_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort([0.3, 0.2, 0.1])
[0.1, 0.2, 0.3]
>>> iter_merge_sort(['dep', 'dang', 'trai'])
['dang', 'dep', 'trai']
>>> iter_merge_sort([6])
[6]
>>> iter_merge_sort([])
[]
>>> iter_merge_sort([-2, -9, -1, -4])
[-9, -4, -2, -1]
>>> iter_merge_sort([1.1, 1, 0.0, -1, -1.1])
[-1.1, -1, 0.0, 1, 1.1]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort('cba')
['a', 'b', 'c']
"""
if len(input_list) <= 1:
return input_list
input_list = list(input_list)
# iteration for two-way merging
p = 2
while p <= len(input_list):
# getting low, high and middle value for merge-sort of single list
for i in range(0, len(input_list), p):
low = i
high = i + p - 1
mid = (low + high + 1) // 2
input_list = merge(input_list, low, mid, high)
# final merge of last two parts
if p * 2 >= len(input_list):
mid = i
input_list = merge(input_list, 0, mid, len(input_list) - 1)
break
p *= 2
return input_list
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
if user_input == "":
unsorted = []
else:
unsorted = [int(item.strip()) for item in user_input.split(",")]
print(iter_merge_sort(unsorted))
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import base64
def base32_encode(string: str) -> bytes:
"""
Encodes a given string to base32, returning a bytes-like object
>>> base32_encode("Hello World!")
b'JBSWY3DPEBLW64TMMQQQ===='
>>> base32_encode("123456")
b'GEZDGNBVGY======'
>>> base32_encode("some long complex string")
b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY='
"""
# encoded the input (we need a bytes like object)
# then, b32encoded the bytes-like object
return base64.b32encode(string.encode("utf-8"))
def base32_decode(encoded_bytes: bytes) -> str:
"""
Decodes a given bytes-like object to a string, returning a string
>>> base32_decode(b'JBSWY3DPEBLW64TMMQQQ====')
'Hello World!'
>>> base32_decode(b'GEZDGNBVGY======')
'123456'
>>> base32_decode(b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY=')
'some long complex string'
"""
# decode the bytes from base32
# then, decode the bytes-like object to return as a string
return base64.b32decode(encoded_bytes).decode("utf-8")
if __name__ == "__main__":
test = "Hello World!"
encoded = base32_encode(test)
print(encoded)
decoded = base32_decode(encoded)
print(decoded)
| import base64
def base32_encode(string: str) -> bytes:
"""
Encodes a given string to base32, returning a bytes-like object
>>> base32_encode("Hello World!")
b'JBSWY3DPEBLW64TMMQQQ===='
>>> base32_encode("123456")
b'GEZDGNBVGY======'
>>> base32_encode("some long complex string")
b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY='
"""
# encoded the input (we need a bytes like object)
# then, b32encoded the bytes-like object
return base64.b32encode(string.encode("utf-8"))
def base32_decode(encoded_bytes: bytes) -> str:
"""
Decodes a given bytes-like object to a string, returning a string
>>> base32_decode(b'JBSWY3DPEBLW64TMMQQQ====')
'Hello World!'
>>> base32_decode(b'GEZDGNBVGY======')
'123456'
>>> base32_decode(b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY=')
'some long complex string'
"""
# decode the bytes from base32
# then, decode the bytes-like object to return as a string
return base64.b32decode(encoded_bytes).decode("utf-8")
if __name__ == "__main__":
test = "Hello World!"
encoded = base32_encode(test)
print(encoded)
decoded = base32_decode(encoded)
print(decoded)
| -1 |
TheAlgorithms/Python | 8,879 | [Upgrade Ruff] Fix all errors raised from ruff | ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| CaedenPH | "2023-07-22T08:47:09Z" | "2023-07-22T10:05:11Z" | 5aefc00f0f1c692ce772ddbc616d7cd91233236b | 93fb169627ea9fe43436a312fdfa751818808180 | [Upgrade Ruff] Fix all errors raised from ruff. ### Describe your change:
Fixes all errors shown in the failing tests here
https://github.com/TheAlgorithms/Python/actions/runs/5629561962/job/15254651560?pr=8875
Fixes #8876
Fixes #8877
Fixes #8878
Fixes #8880
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def dencrypt(s: str, n: int = 13) -> str:
"""
https://en.wikipedia.org/wiki/ROT13
>>> msg = "My secret bank account number is 173-52946 so don't tell anyone!!"
>>> s = dencrypt(msg)
>>> s
"Zl frperg onax nppbhag ahzore vf 173-52946 fb qba'g gryy nalbar!!"
>>> dencrypt(s) == msg
True
"""
out = ""
for c in s:
if "A" <= c <= "Z":
out += chr(ord("A") + (ord(c) - ord("A") + n) % 26)
elif "a" <= c <= "z":
out += chr(ord("a") + (ord(c) - ord("a") + n) % 26)
else:
out += c
return out
def main() -> None:
s0 = input("Enter message: ")
s1 = dencrypt(s0, 13)
print("Encryption:", s1)
s2 = dencrypt(s1, 13)
print("Decryption: ", s2)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| def dencrypt(s: str, n: int = 13) -> str:
"""
https://en.wikipedia.org/wiki/ROT13
>>> msg = "My secret bank account number is 173-52946 so don't tell anyone!!"
>>> s = dencrypt(msg)
>>> s
"Zl frperg onax nppbhag ahzore vf 173-52946 fb qba'g gryy nalbar!!"
>>> dencrypt(s) == msg
True
"""
out = ""
for c in s:
if "A" <= c <= "Z":
out += chr(ord("A") + (ord(c) - ord("A") + n) % 26)
elif "a" <= c <= "z":
out += chr(ord("a") + (ord(c) - ord("a") + n) % 26)
else:
out += c
return out
def main() -> None:
s0 = input("Enter message: ")
s1 = dencrypt(s0, 13)
print("Encryption:", s1)
s2 = dencrypt(s1, 13)
print("Decryption: ", s2)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
Subsets and Splits