repo_name | pr_number | pr_title | pr_description | author | date_created | date_merged | previous_commit | pr_commit | query | before_content | after_content | label
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
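For illustration only, a minimal sketch of Dijkstra on such a binary 0/1 grid might look like the snippet below. All names (grid_dijkstra, its parameters) and the sqrt(2) cost for diagonal steps are assumptions made for this example; it is not the PR's actual implementation.

from heapq import heappop, heappush

def grid_dijkstra(grid, source, destination, allow_diagonal=False):
    """Return the shortest-path length from source to destination, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    if allow_diagonal:
        moves += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist = {source: 0.0}
    queue = [(0.0, source)]  # (distance, (row, col)) min-heap
    while queue:
        d, (r, c) = heappop(queue)
        if (r, c) == destination:
            return d
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                step = 2 ** 0.5 if dr and dc else 1.0  # assumed diagonal cost
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heappush(queue, (nd, (nr, nc)))
    return -1

# Example: four orthogonal steps around the obstacles from (0, 0) to (2, 2).
print(grid_dijkstra([[1, 1, 1], [0, 1, 0], [1, 1, 1]], (0, 0), (2, 2)))  # 4.0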
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| #!/usr/bin/env python3
"""
Simulation of the Quantum Key Distribution (QKD) protocol called BB84,
created by Charles Bennett and Gilles Brassard in 1984.
BB84 is a key-distribution protocol that ensures secure key distribution
using qubits instead of classical bits. The generated key is the result
of simulating a quantum circuit. Our algorithm to construct the circuit
is as follows:
Alice generates two binary strings. One encodes the basis for each qubit:
- 0 -> {0,1} basis.
- 1 -> {+,-} basis.
The other encodes the state:
- 0 -> |0> or |+>.
- 1 -> |1> or |->.
Bob also generates a binary string and uses the same convention to choose
a basis for measurement. Based on the following results, we follow the
algorithm below:
X|0> = |1>
H|0> = |+>
HX|0> = |->
1. Whenever Alice wants to encode 1 in a qubit, she applies an
X (NOT) gate to the qubit. To encode 0, no action is needed.
2. Whenever she wants to encode a qubit in the {+,-} basis, she applies
an H (Hadamard) gate. No action is necessary to encode a qubit in
the {0,1} basis.
3. She then sends the qubits to Bob (symbolically represented in
this circuit using wires).
4. Bob measures the qubits according to his binary string for
measurement. To measure a qubit in the {+,-} basis, he applies
an H gate to the corresponding qubit and then performs a measurement.
References:
https://en.wikipedia.org/wiki/BB84
https://qiskit.org/textbook/ch-algorithms/quantum-key-distribution.html
"""
import numpy as np
import qiskit
def bb84(key_len: int = 8, seed: int | None = None) -> str:
"""
Performs the BB84 protocol using a key made of `key_len` bits.
The two parties in the key distribution are called Alice and Bob.
Args:
key_len: The length of the generated key in bits. The default is 8.
seed: Seed for the random number generator.
Mostly used for testing. Default is None.
Returns:
key: The key generated using BB84 protocol.
>>> bb84(16, seed=0)
'1101101100010000'
>>> bb84(8, seed=0)
'01011011'
"""
# Set up the random number generator.
rng = np.random.default_rng(seed=seed)
# Roughly 25% of the qubits will contribute to the key.
# So we take more than we need.
num_qubits = 6 * key_len
# Measurement basis for Alice's qubits.
alice_basis = rng.integers(2, size=num_qubits)
# The set of states Alice will prepare.
alice_state = rng.integers(2, size=num_qubits)
# Measurement basis for Bob's qubits.
bob_basis = rng.integers(2, size=num_qubits)
# Quantum Circuit to simulate BB84
bb84_circ = qiskit.QuantumCircuit(num_qubits, name="BB84")
# Alice prepares her qubits according to rules above.
for index, _ in enumerate(alice_basis):
if alice_state[index] == 1:
bb84_circ.x(index)
if alice_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Bob measures the received qubits according to rules above.
for index, _ in enumerate(bob_basis):
if bob_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
# Simulate the quantum circuit.
sim = qiskit.Aer.get_backend("aer_simulator")
# We only need to run one shot because the key is unique.
# Multiple shots will produce the same key.
job = qiskit.execute(bb84_circ, sim, shots=1, seed_simulator=seed)
# Returns the result of measurement.
result = job.result().get_counts(bb84_circ).most_frequent()
# Extracting the generated key from the simulation results.
# Only keep measurement results where Alice and Bob chose the same basis.
gen_key = "".join(
[
result_bit
for alice_basis_bit, bob_basis_bit, result_bit in zip(
alice_basis, bob_basis, result
)
if alice_basis_bit == bob_basis_bit
]
)
# Get final key. Pad with 0 if too short, otherwise truncate.
key = gen_key[:key_len] if len(gen_key) >= key_len else gen_key.ljust(key_len, "0")
return key
if __name__ == "__main__":
print(f"The generated key is : {bb84(8, seed=0)}")
from doctest import testmod
testmod()
| #!/usr/bin/env python3
"""
Simulation of the Quantum Key Distribution (QKD) protocol called BB84,
created by Charles Bennett and Gilles Brassard in 1984.
BB84 is a key-distribution protocol that ensures secure key distribution
using qubits instead of classical bits. The generated key is the result
of simulating a quantum circuit. Our algorithm to construct the circuit
is as follows:
Alice generates two binary strings. One encodes the basis for each qubit:
- 0 -> {0,1} basis.
- 1 -> {+,-} basis.
The other encodes the state:
- 0 -> |0> or |+>.
- 1 -> |1> or |->.
Bob also generates a binary string and uses the same convention to choose
a basis for measurement. Based on the following results, we follow the
algorithm below:
X|0> = |1>
H|0> = |+>
HX|0> = |->
1. Whenever Alice wants to encode 1 in a qubit, she applies an
X (NOT) gate to the qubit. To encode 0, no action is needed.
2. Whenever she wants to encode a qubit in the {+,-} basis, she applies
an H (Hadamard) gate. No action is necessary to encode a qubit in
the {0,1} basis.
3. She then sends the qubits to Bob (symbolically represented in
this circuit using wires).
4. Bob measures the qubits according to his binary string for
measurement. To measure a qubit in the {+,-} basis, he applies
an H gate to the corresponding qubit and then performs a measurement.
References:
https://en.wikipedia.org/wiki/BB84
https://qiskit.org/textbook/ch-algorithms/quantum-key-distribution.html
"""
import numpy as np
import qiskit
def bb84(key_len: int = 8, seed: int | None = None) -> str:
"""
Performs the BB84 protocol using a key made of `key_len` bits.
The two parties in the key distribution are called Alice and Bob.
Args:
key_len: The length of the generated key in bits. The default is 8.
seed: Seed for the random number generator.
Mostly used for testing. Default is None.
Returns:
key: The key generated using BB84 protocol.
>>> bb84(16, seed=0)
'1101101100010000'
>>> bb84(8, seed=0)
'01011011'
"""
# Set up the random number generator.
rng = np.random.default_rng(seed=seed)
# Roughly 25% of the qubits will contribute to the key.
# So we take more than we need.
num_qubits = 6 * key_len
# Measurement basis for Alice's qubits.
alice_basis = rng.integers(2, size=num_qubits)
# The set of states Alice will prepare.
alice_state = rng.integers(2, size=num_qubits)
# Measurement basis for Bob's qubits.
bob_basis = rng.integers(2, size=num_qubits)
# Quantum Circuit to simulate BB84
bb84_circ = qiskit.QuantumCircuit(num_qubits, name="BB84")
# Alice prepares her qubits according to rules above.
for index, _ in enumerate(alice_basis):
if alice_state[index] == 1:
bb84_circ.x(index)
if alice_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Bob measures the received qubits according to rules above.
for index, _ in enumerate(bob_basis):
if bob_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
# Simulate the quantum circuit.
sim = qiskit.Aer.get_backend("aer_simulator")
# We only need to run one shot because the key is unique.
# Multiple shots will produce the same key.
job = qiskit.execute(bb84_circ, sim, shots=1, seed_simulator=seed)
# Returns the result of measurement.
result = job.result().get_counts(bb84_circ).most_frequent()
# Extracting the generated key from the simulation results.
# Only keep measurement results where Alice and Bob chose the same basis.
gen_key = "".join(
[
result_bit
for alice_basis_bit, bob_basis_bit, result_bit in zip(
alice_basis, bob_basis, result
)
if alice_basis_bit == bob_basis_bit
]
)
# Get final key. Pad with 0 if too short, otherwise truncate.
key = gen_key[:key_len] if len(gen_key) >= key_len else gen_key.ljust(key_len, "0")
return key
if __name__ == "__main__":
print(f"The generated key is : {bb84(8, seed=0)}")
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| """
Problem 46: https://projecteuler.net/problem=46
It was proposed by Christian Goldbach that every odd composite number can be
written as the sum of a prime and twice a square.
9 = 7 + 2 × 1²
15 = 7 + 2 × 2²
21 = 3 + 2 × 3²
25 = 7 + 2 × 3²
27 = 19 + 2 × 2²
33 = 31 + 2 × 1²
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a
prime and twice a square?
"""
from __future__ import annotations
import math
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes greater than 3 are of the form 6k ± 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
odd_composites = [num for num in range(3, 100001, 2) if not is_prime(num)]
def compute_nums(n: int) -> list[int]:
"""
Returns a list of first n odd composite numbers which do
not follow the conjecture.
>>> compute_nums(1)
[5777]
>>> compute_nums(2)
[5777, 5993]
>>> compute_nums(0)
Traceback (most recent call last):
...
ValueError: n must be >= 0
>>> compute_nums("a")
Traceback (most recent call last):
...
ValueError: n must be an integer
>>> compute_nums(1.1)
Traceback (most recent call last):
...
ValueError: n must be an integer
"""
if not isinstance(n, int):
raise ValueError("n must be an integer")
if n <= 0:
raise ValueError("n must be >= 0")
list_nums = []
for num in range(len(odd_composites)):
i = 0
while 2 * i * i <= odd_composites[num]:
rem = odd_composites[num] - 2 * i * i
if is_prime(rem):
break
i += 1
else:
list_nums.append(odd_composites[num])
if len(list_nums) == n:
return list_nums
return []
def solution() -> int:
"""Return the solution to the problem"""
return compute_nums(1)[0]
if __name__ == "__main__":
print(f"{solution() = }")
| """
Problem 46: https://projecteuler.net/problem=46
It was proposed by Christian Goldbach that every odd composite number can be
written as the sum of a prime and twice a square.
9 = 7 + 2 × 1²
15 = 7 + 2 × 2²
21 = 3 + 2 × 3²
25 = 7 + 2 × 3²
27 = 19 + 2 × 2²
33 = 31 + 2 × 1²
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a
prime and twice a square?
"""
from __future__ import annotations
import math
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes greater than 3 are of the form 6k ± 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
odd_composites = [num for num in range(3, 100001, 2) if not is_prime(num)]
def compute_nums(n: int) -> list[int]:
"""
Returns a list of first n odd composite numbers which do
not follow the conjecture.
>>> compute_nums(1)
[5777]
>>> compute_nums(2)
[5777, 5993]
>>> compute_nums(0)
Traceback (most recent call last):
...
ValueError: n must be >= 0
>>> compute_nums("a")
Traceback (most recent call last):
...
ValueError: n must be an integer
>>> compute_nums(1.1)
Traceback (most recent call last):
...
ValueError: n must be an integer
"""
if not isinstance(n, int):
raise ValueError("n must be an integer")
if n <= 0:
raise ValueError("n must be >= 0")
list_nums = []
for num in range(len(odd_composites)):
i = 0
while 2 * i * i <= odd_composites[num]:
rem = odd_composites[num] - 2 * i * i
if is_prime(rem):
break
i += 1
else:
list_nums.append(odd_composites[num])
if len(list_nums) == n:
return list_nums
return []
def solution() -> int:
"""Return the solution to the problem"""
return compute_nums(1)[0]
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| -1 |
||
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| import numpy as np
def power_iteration(
input_matrix: np.ndarray,
vector: np.ndarray,
error_tol: float = 1e-12,
max_iterations: int = 100,
) -> tuple[float, np.ndarray]:
"""
Power Iteration.
Find the largest eigenvalue and corresponding eigenvector
of matrix input_matrix given a random vector in the same space.
Will work so long as vector has component of largest eigenvector.
input_matrix must be either real or Hermitian.
Input
input_matrix: input matrix whose largest eigenvalue we will find.
Numpy array. np.shape(input_matrix) == (N,N).
vector: random initial vector in same space as matrix.
Numpy array. np.shape(vector) == (N,) or (N,1)
Output
largest_eigenvalue: largest eigenvalue of the matrix input_matrix.
Float. Scalar.
largest_eigenvector: eigenvector corresponding to largest_eigenvalue.
Numpy array. np.shape(largest_eigenvector) == (N,) or (N,1).
>>> import numpy as np
>>> input_matrix = np.array([
... [41, 4, 20],
... [ 4, 26, 30],
... [20, 30, 50]
... ])
>>> vector = np.array([41,4,20])
>>> power_iteration(input_matrix,vector)
(79.66086378788381, array([0.44472726, 0.46209842, 0.76725662]))
"""
# Ensure matrix is square.
assert np.shape(input_matrix)[0] == np.shape(input_matrix)[1]
# Ensure proper dimensionality.
assert np.shape(input_matrix)[0] == np.shape(vector)[0]
# Ensure inputs are either both complex or both real
assert np.iscomplexobj(input_matrix) == np.iscomplexobj(vector)
is_complex = np.iscomplexobj(input_matrix)
if is_complex:
# Ensure complex input_matrix is Hermitian
assert np.array_equal(input_matrix, input_matrix.conj().T)
# Set convergence to False. Will define convergence when we exceed max_iterations
# or when we have small changes from one iteration to next.
convergence = False
lambda_previous = 0
iterations = 0
error = 1e12
while not convergence:
# Multiply the matrix by the vector.
w = np.dot(input_matrix, vector)
# Normalize the resulting output vector.
vector = w / np.linalg.norm(w)
# Find the Rayleigh quotient
# (faster than usual b/c we know vector is normalized already)
vector_h = vector.conj().T if is_complex else vector.T
lambda_ = np.dot(vector_h, np.dot(input_matrix, vector))
# Check convergence.
error = np.abs(lambda_ - lambda_previous) / lambda_
iterations += 1
if error <= error_tol or iterations >= max_iterations:
convergence = True
lambda_previous = lambda_
if is_complex:
lambda_ = np.real(lambda_)
return lambda_, vector
def test_power_iteration() -> None:
"""
>>> test_power_iteration() # self running tests
"""
real_input_matrix = np.array([[41, 4, 20], [4, 26, 30], [20, 30, 50]])
real_vector = np.array([41, 4, 20])
complex_input_matrix = real_input_matrix.astype(np.complex128)
imag_matrix = np.triu(1j * complex_input_matrix, 1)
complex_input_matrix += imag_matrix
complex_input_matrix += -1 * imag_matrix.T
complex_vector = np.array([41, 4, 20]).astype(np.complex128)
for problem_type in ["real", "complex"]:
if problem_type == "real":
input_matrix = real_input_matrix
vector = real_vector
elif problem_type == "complex":
input_matrix = complex_input_matrix
vector = complex_vector
# Our implementation.
eigen_value, eigen_vector = power_iteration(input_matrix, vector)
# Numpy implementation.
# Get eigenvalues and eigenvectors using built-in numpy
# eigh (eigh is used for symmetric or Hermitian matrices).
eigen_values, eigen_vectors = np.linalg.eigh(input_matrix)
# Last eigenvalue is the maximum one.
eigen_value_max = eigen_values[-1]
# Last column in this matrix is eigenvector corresponding to largest eigenvalue.
eigen_vector_max = eigen_vectors[:, -1]
# Check our implementation and numpy gives close answers.
assert np.abs(eigen_value - eigen_value_max) <= 1e-6
# Take absolute values element wise of each eigenvector.
# as they are only unique to a minus sign.
assert np.linalg.norm(np.abs(eigen_vector) - np.abs(eigen_vector_max)) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_power_iteration()
| import numpy as np
def power_iteration(
input_matrix: np.ndarray,
vector: np.ndarray,
error_tol: float = 1e-12,
max_iterations: int = 100,
) -> tuple[float, np.ndarray]:
"""
Power Iteration.
Find the largest eigenvalue and corresponding eigenvector
of matrix input_matrix given a random vector in the same space.
Will work so long as vector has component of largest eigenvector.
input_matrix must be either real or Hermitian.
Input
input_matrix: input matrix whose largest eigenvalue we will find.
Numpy array. np.shape(input_matrix) == (N,N).
vector: random initial vector in same space as matrix.
Numpy array. np.shape(vector) == (N,) or (N,1)
Output
largest_eigenvalue: largest eigenvalue of the matrix input_matrix.
Float. Scalar.
largest_eigenvector: eigenvector corresponding to largest_eigenvalue.
Numpy array. np.shape(largest_eigenvector) == (N,) or (N,1).
>>> import numpy as np
>>> input_matrix = np.array([
... [41, 4, 20],
... [ 4, 26, 30],
... [20, 30, 50]
... ])
>>> vector = np.array([41,4,20])
>>> power_iteration(input_matrix,vector)
(79.66086378788381, array([0.44472726, 0.46209842, 0.76725662]))
"""
# Ensure matrix is square.
assert np.shape(input_matrix)[0] == np.shape(input_matrix)[1]
# Ensure proper dimensionality.
assert np.shape(input_matrix)[0] == np.shape(vector)[0]
# Ensure inputs are either both complex or both real
assert np.iscomplexobj(input_matrix) == np.iscomplexobj(vector)
is_complex = np.iscomplexobj(input_matrix)
if is_complex:
# Ensure complex input_matrix is Hermitian
assert np.array_equal(input_matrix, input_matrix.conj().T)
# Set convergence to False. Will define convergence when we exceed max_iterations
# or when we have small changes from one iteration to next.
convergence = False
lambda_previous = 0
iterations = 0
error = 1e12
while not convergence:
# Multiply the matrix by the vector.
w = np.dot(input_matrix, vector)
# Normalize the resulting output vector.
vector = w / np.linalg.norm(w)
# Find the Rayleigh quotient
# (faster than usual b/c we know vector is normalized already)
vector_h = vector.conj().T if is_complex else vector.T
lambda_ = np.dot(vector_h, np.dot(input_matrix, vector))
# Check convergence.
error = np.abs(lambda_ - lambda_previous) / lambda_
iterations += 1
if error <= error_tol or iterations >= max_iterations:
convergence = True
lambda_previous = lambda_
if is_complex:
lambda_ = np.real(lambda_)
return lambda_, vector
def test_power_iteration() -> None:
"""
>>> test_power_iteration() # self running tests
"""
real_input_matrix = np.array([[41, 4, 20], [4, 26, 30], [20, 30, 50]])
real_vector = np.array([41, 4, 20])
complex_input_matrix = real_input_matrix.astype(np.complex128)
imag_matrix = np.triu(1j * complex_input_matrix, 1)
complex_input_matrix += imag_matrix
complex_input_matrix += -1 * imag_matrix.T
complex_vector = np.array([41, 4, 20]).astype(np.complex128)
for problem_type in ["real", "complex"]:
if problem_type == "real":
input_matrix = real_input_matrix
vector = real_vector
elif problem_type == "complex":
input_matrix = complex_input_matrix
vector = complex_vector
# Our implementation.
eigen_value, eigen_vector = power_iteration(input_matrix, vector)
# Numpy implementation.
# Get eigenvalues and eigenvectors using built-in numpy
# eigh (eigh is used for symmetric or Hermitian matrices).
eigen_values, eigen_vectors = np.linalg.eigh(input_matrix)
# Last eigenvalue is the maximum one.
eigen_value_max = eigen_values[-1]
# Last column in this matrix is eigenvector corresponding to largest eigenvalue.
eigen_vector_max = eigen_vectors[:, -1]
# Check our implementation and numpy gives close answers.
assert np.abs(eigen_value - eigen_value_max) <= 1e-6
# Take absolute values element wise of each eigenvector.
# as they are only unique to a minus sign.
assert np.linalg.norm(np.abs(eigen_vector) - np.abs(eigen_vector_max)) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_power_iteration()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for an easy case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for an easy case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| """
Similarity Search : https://en.wikipedia.org/wiki/Similarity_search
Similarity search is a search algorithm for finding the nearest vector from
vectors, used in natural language processing.
The algorithm computes Euclidean distances and returns, for each query
vector, a list containing two items:
1. the nearest vector
2. distance between the vector and the nearest vector (float)
"""
from __future__ import annotations
import math
import numpy as np
from numpy.linalg import norm
def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates the Euclidean distance between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Euclidean distance of input_a and input_b. By using math.sqrt(),
result will be float.
>>> euclidean(np.array([0]), np.array([1]))
1.0
>>> euclidean(np.array([0, 1]), np.array([1, 1]))
1.0
>>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1]))
1.0
"""
return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b)))
def similarity_search(
dataset: np.ndarray, value_array: np.ndarray
) -> list[list[list[float] | float]]:
"""
:param dataset: Set containing the vectors. Should be ndarray.
:param value_array: vector/vectors we want to know the nearest vector from dataset.
:return: Result will be a list containing
1. the nearest vector
2. distance from the vector
>>> dataset = np.array([[0], [1], [2]])
>>> value_array = np.array([[0]])
>>> similarity_search(dataset, value_array)
[[[0], 0.0]]
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
These are the errors that might occur:
1. If dimensions are different.
For example, dataset has 2d array and value_array has 1d array:
>>> dataset = np.array([[1]])
>>> value_array = np.array([1])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1
2. If data's shapes are different.
For example, dataset has shape of (3, 2) and value_array has (2, 3).
We are expecting same shapes of two arrays, so it is wrong.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's shape... dataset : 2, value_array : 3
3. If data types are different.
When trying to compare, we are expecting same types so they should be same.
If not, it'll come up with errors.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32)
>>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32)
>>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: Input data have different datatype...
dataset : float32, value_array : int32
"""
if dataset.ndim != value_array.ndim:
msg = (
"Wrong input data's dimensions... "
f"dataset : {dataset.ndim}, value_array : {value_array.ndim}"
)
raise ValueError(msg)
try:
if dataset.shape[1] != value_array.shape[1]:
msg = (
"Wrong input data's shape... "
f"dataset : {dataset.shape[1]}, value_array : {value_array.shape[1]}"
)
raise ValueError(msg)
except IndexError:
if dataset.ndim != value_array.ndim:
raise TypeError("Wrong shape")
if dataset.dtype != value_array.dtype:
msg = (
"Input data have different datatype... "
f"dataset : {dataset.dtype}, value_array : {value_array.dtype}"
)
raise TypeError(msg)
answer = []
for value in value_array:
dist = euclidean(value, dataset[0])
vector = dataset[0].tolist()
for dataset_value in dataset[1:]:
temp_dist = euclidean(value, dataset_value)
if dist > temp_dist:
dist = temp_dist
vector = dataset_value.tolist()
answer.append([vector, dist])
return answer
def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates the cosine similarity between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Cosine similarity of input_a and input_b, returned as a float.
>>> cosine_similarity(np.array([1]), np.array([1]))
1.0
>>> cosine_similarity(np.array([1, 2]), np.array([6, 32]))
0.9615239476408232
"""
return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Similarity Search : https://en.wikipedia.org/wiki/Similarity_search
Similarity search is a search algorithm for finding the nearest vector from
vectors, used in natural language processing.
The algorithm computes Euclidean distances and returns, for each query
vector, a list containing two items:
1. the nearest vector
2. distance between the vector and the nearest vector (float)
"""
from __future__ import annotations
import math
import numpy as np
from numpy.linalg import norm
def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates the Euclidean distance between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Euclidean distance of input_a and input_b. By using math.sqrt(),
result will be float.
>>> euclidean(np.array([0]), np.array([1]))
1.0
>>> euclidean(np.array([0, 1]), np.array([1, 1]))
1.0
>>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1]))
1.0
"""
return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b)))
def similarity_search(
dataset: np.ndarray, value_array: np.ndarray
) -> list[list[list[float] | float]]:
"""
:param dataset: Set containing the vectors. Should be ndarray.
:param value_array: vector/vectors we want to know the nearest vector from dataset.
:return: Result will be a list containing
1. the nearest vector
2. distance from the vector
>>> dataset = np.array([[0], [1], [2]])
>>> value_array = np.array([[0]])
>>> similarity_search(dataset, value_array)
[[[0], 0.0]]
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
These are the errors that might occur:
1. If dimensions are different.
For example, dataset has 2d array and value_array has 1d array:
>>> dataset = np.array([[1]])
>>> value_array = np.array([1])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1
2. If data's shapes are different.
For example, dataset has shape of (3, 2) and value_array has (2, 3).
We are expecting same shapes of two arrays, so it is wrong.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's shape... dataset : 2, value_array : 3
3. If data types are different.
When trying to compare, we are expecting same types so they should be same.
If not, it'll come up with errors.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32)
>>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32)
>>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: Input data have different datatype...
dataset : float32, value_array : int32
"""
if dataset.ndim != value_array.ndim:
msg = (
"Wrong input data's dimensions... "
f"dataset : {dataset.ndim}, value_array : {value_array.ndim}"
)
raise ValueError(msg)
try:
if dataset.shape[1] != value_array.shape[1]:
msg = (
"Wrong input data's shape... "
f"dataset : {dataset.shape[1]}, value_array : {value_array.shape[1]}"
)
raise ValueError(msg)
except IndexError:
if dataset.ndim != value_array.ndim:
raise TypeError("Wrong shape")
if dataset.dtype != value_array.dtype:
msg = (
"Input data have different datatype... "
f"dataset : {dataset.dtype}, value_array : {value_array.dtype}"
)
raise TypeError(msg)
answer = []
for value in value_array:
dist = euclidean(value, dataset[0])
vector = dataset[0].tolist()
for dataset_value in dataset[1:]:
temp_dist = euclidean(value, dataset_value)
if dist > temp_dist:
dist = temp_dist
vector = dataset_value.tolist()
answer.append([vector, dist])
return answer
def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates the cosine similarity between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Cosine similarity of input_a and input_b, returned as a float.
>>> cosine_similarity(np.array([1]), np.array([1]))
1.0
>>> cosine_similarity(np.array([1, 2]), np.array([6, 32]))
0.9615239476408232
"""
return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| r"""
Problem: Given root of a binary tree, return the:
1. binary-tree-right-side-view
2. binary-tree-left-side-view
3. binary-tree-top-side-view
4. binary-tree-bottom-side-view
"""
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass
@dataclass
class TreeNode:
val: int
left: TreeNode | None = None
right: TreeNode | None = None
def make_tree() -> TreeNode:
"""
>>> make_tree().val
3
"""
return TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
def binary_tree_right_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the right side view of binary tree.
3 <- 3
/ \
9 20 <- 20
/ \
15 7 <- 7
>>> binary_tree_right_side_view(make_tree())
[3, 20, 7]
>>> binary_tree_right_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, right_view: list[int]
) -> None:
"""
A depth first search preorder traversal to append the values at
right side of tree.
"""
if not root:
return
if depth == len(right_view):
right_view.append(root.val)
depth_first_search(root.right, depth + 1, right_view)
depth_first_search(root.left, depth + 1, right_view)
right_view: list = []
if not root:
return right_view
depth_first_search(root, 0, right_view)
return right_view
def binary_tree_left_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the left side view of binary tree.
3 -> 3
/ \
9 -> 9 20
/ \
15 -> 15 7
>>> binary_tree_left_side_view(make_tree())
[3, 9, 15]
>>> binary_tree_left_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, left_view: list[int]
) -> None:
"""
A depth first search preorder traversal to append the values
at left side of tree.
"""
if not root:
return
if depth == len(left_view):
left_view.append(root.val)
depth_first_search(root.left, depth + 1, left_view)
depth_first_search(root.right, depth + 1, left_view)
left_view: list = []
if not root:
return left_view
depth_first_search(root, 0, left_view)
return left_view
def binary_tree_top_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the top side view of binary tree.
9 3 20 7
⬇ ⬇ ⬇ ⬇
3
/ \
9 20
/ \
15 7
>>> binary_tree_top_side_view(make_tree())
[9, 3, 20, 7]
>>> binary_tree_top_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, top_view: list[int]) -> None:
"""
A breadth first search traversal with defaultdict ds to append
the values of tree from top view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
top_view.append(pair[1][0])
top_view: list = []
if not root:
return top_view
breadth_first_search(root, top_view)
return top_view
def binary_tree_bottom_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the bottom side view of binary tree
3
/ \
9 20
/ \
15 7
↑ ↑ ↑ ↑
9 15 20 7
>>> binary_tree_bottom_side_view(make_tree())
[9, 15, 20, 7]
>>> binary_tree_bottom_side_view(None)
[]
"""
from collections import defaultdict
def breadth_first_search(root: TreeNode, bottom_view: list[int]) -> None:
"""
A breadth first search traversal with defaultdict ds to append
the values of tree from bottom view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
bottom_view.append(pair[1][-1])
bottom_view: list = []
if not root:
return bottom_view
breadth_first_search(root, bottom_view)
return bottom_view
if __name__ == "__main__":
import doctest
doctest.testmod()
| r"""
Problem: Given root of a binary tree, return the:
1. binary-tree-right-side-view
2. binary-tree-left-side-view
3. binary-tree-top-side-view
4. binary-tree-bottom-side-view
"""
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass
@dataclass
class TreeNode:
val: int
left: TreeNode | None = None
right: TreeNode | None = None
def make_tree() -> TreeNode:
"""
>>> make_tree().val
3
"""
return TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
def binary_tree_right_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the right side view of binary tree.
3 <- 3
/ \
9 20 <- 20
/ \
15 7 <- 7
>>> binary_tree_right_side_view(make_tree())
[3, 20, 7]
>>> binary_tree_right_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, right_view: list[int]
) -> None:
"""
A depth first search preorder traversal to append the values at
right side of tree.
"""
if not root:
return
if depth == len(right_view):
right_view.append(root.val)
depth_first_search(root.right, depth + 1, right_view)
depth_first_search(root.left, depth + 1, right_view)
right_view: list = []
if not root:
return right_view
depth_first_search(root, 0, right_view)
return right_view
def binary_tree_left_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the left side view of binary tree.
3 -> 3
/ \
9 -> 9 20
/ \
15 -> 15 7
>>> binary_tree_left_side_view(make_tree())
[3, 9, 15]
>>> binary_tree_left_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, left_view: list[int]
) -> None:
"""
A depth first search preorder traversal to append the values
at left side of tree.
"""
if not root:
return
if depth == len(left_view):
left_view.append(root.val)
depth_first_search(root.left, depth + 1, left_view)
depth_first_search(root.right, depth + 1, left_view)
left_view: list = []
if not root:
return left_view
depth_first_search(root, 0, left_view)
return left_view
def binary_tree_top_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the top side view of binary tree.
9 3 20 7
⬇ ⬇ ⬇ ⬇
3
/ \
9 20
/ \
15 7
>>> binary_tree_top_side_view(make_tree())
[9, 3, 20, 7]
>>> binary_tree_top_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, top_view: list[int]) -> None:
"""
A breadth first search traversal with defaultdict ds to append
the values of tree from top view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
top_view.append(pair[1][0])
top_view: list = []
if not root:
return top_view
breadth_first_search(root, top_view)
return top_view
def binary_tree_bottom_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the bottom side view of binary tree
3
/ \
9 20
/ \
15 7
↑ ↑ ↑ ↑
9 15 20 7
>>> binary_tree_bottom_side_view(make_tree())
[9, 15, 20, 7]
>>> binary_tree_bottom_side_view(None)
[]
"""
from collections import defaultdict
def breadth_first_search(root: TreeNode, bottom_view: list[int]) -> None:
"""
A breadth first search traversal with defaultdict ds to append
the values of tree from bottom view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
bottom_view.append(pair[1][-1])
bottom_view: list = []
if not root:
return bottom_view
breadth_first_search(root, bottom_view)
return bottom_view
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| import numpy as np
def runge_kutta(f, y0, x0, h, x_end):
"""
Calculate the numeric solution at each step to the ODE f(x, y) using RK4
https://en.wikipedia.org/wiki/Runge-Kutta_methods
Arguments:
f -- The ode as a function of x and y
y0 -- the initial value for y
x0 -- the initial value for x
h -- the stepsize
x_end -- the end value for x
>>> # the exact solution is math.exp(x)
>>> def f(x, y):
... return y
>>> y0 = 1
>>> y = runge_kutta(f, y0, 0.0, 0.01, 5)
>>> y[-1]
148.41315904125113
"""
n = int(np.ceil((x_end - x0) / h))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
k1 = f(x, y[k])
k2 = f(x + 0.5 * h, y[k] + 0.5 * h * k1)
k3 = f(x + 0.5 * h, y[k] + 0.5 * h * k2)
k4 = f(x + h, y[k] + h * k3)
y[k + 1] = y[k] + (1 / 6) * h * (k1 + 2 * k2 + 2 * k3 + k4)
x += h
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
| import numpy as np
def runge_kutta(f, y0, x0, h, x_end):
"""
Calculate the numeric solution at each step to the ODE f(x, y) using RK4
https://en.wikipedia.org/wiki/Runge-Kutta_methods
Arguments:
f -- The ode as a function of x and y
y0 -- the initial value for y
x0 -- the initial value for x
h -- the stepsize
x_end -- the end value for x
>>> # the exact solution is math.exp(x)
>>> def f(x, y):
... return y
>>> y0 = 1
>>> y = runge_kutta(f, y0, 0.0, 0.01, 5)
>>> y[-1]
148.41315904125113
"""
n = int(np.ceil((x_end - x0) / h))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
k1 = f(x, y[k])
k2 = f(x + 0.5 * h, y[k] + 0.5 * h * k1)
k3 = f(x + 0.5 * h, y[k] + 0.5 * h * k2)
k4 = f(x + h, y[k] + h * k3)
y[k + 1] = y[k] + (1 / 6) * h * (k1 + 2 * k2 + 2 * k3 + k4)
x += h
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(os.remove, self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size) # noqa: UP015
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(os.remove, self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size) # noqa: UP015
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
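A brief usage note for the module above: besides the command-line entry point in main(), the classes can be driven directly. The file name below is hypothetical; per the code, the sorted output is written next to the input with a .out suffix.

# programmatic use of the classes defined above, with roughly 10 MB blocks
sorter = ExternalSort(parse_memory("10M"))
sorter.sort("large_input.txt")  # writes large_input.txt.out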
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
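A minimal illustrative sketch of the approach this description outlines (assumed names and structure, not the PR's actual code): Dijkstra with a priority queue over a 0/1 grid, where 1 is walkable, 0 is an obstacle, and a diagonal step, when enabled, costs sqrt(2).

from heapq import heappop, heappush

def grid_shortest_path(grid, start, goal, allow_diagonal=False):
    # grid: list of rows of 0/1 ints; start and goal: (row, col) tuples
    rows, cols = len(grid), len(grid[0])
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if allow_diagonal:
        moves += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                step = 2**0.5 if dr and dc else 1.0
                if d + step < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + step
                    heappush(heap, (d + step, (nr, nc)))
    return float("inf")  # destination unreachable

print(grid_shortest_path([[1, 1, 1], [0, 0, 1], [1, 1, 1]], (0, 0), (2, 0)))  # 6.0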
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
    Returns the number of different combinations of length k which can
    be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
    >>> combinations(-4, -5)
    Traceback (most recent call last):
        ...
    ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return factorial(n) // (factorial(k) * factorial(n - k))
if __name__ == "__main__":
print(
"The number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}\n",
)
print(
"If a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.",
)
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
    Returns the number of different combinations of length k which can
    be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
    >>> combinations(-4, -5)
    Traceback (most recent call last):
        ...
    ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return factorial(n) // (factorial(k) * factorial(n - k))
if __name__ == "__main__":
print(
"The number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}\n",
)
print(
"If a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.",
)
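As a quick cross-check of the doctests above: the closed form is C(n, k) = n! / (k! * (n - k)!), so C(6, 3) = 720 / (6 * 6) = 20 and C(52, 5) = 2,598,960.

# the same values via the standard library (Python 3.8+), for comparison
from math import comb
assert comb(6, 3) == 20 and comb(52, 5) == 2598960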
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
== Twin Prime ==
A number n+2 is said to be a Twin prime of number n if
both n and n+2 are prime.
Examples of Twin pairs: (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), ...
https://en.wikipedia.org/wiki/Twin_prime
"""
# Author : Akshay Dubey (https://github.com/itsAkshayDubey)
from maths.prime_check import is_prime
def twin_prime(number: int) -> int:
"""
# doctest: +NORMALIZE_WHITESPACE
    This function takes an integer number as input.
    It returns n+2 if n and n+2 are prime numbers and -1 otherwise.
>>> twin_prime(3)
5
>>> twin_prime(4)
-1
>>> twin_prime(5)
7
>>> twin_prime(17)
19
>>> twin_prime(0)
-1
>>> twin_prime(6.0)
Traceback (most recent call last):
...
TypeError: Input value of [number=6.0] must be an integer
"""
if not isinstance(number, int):
msg = f"Input value of [number={number}] must be an integer"
raise TypeError(msg)
if is_prime(number) and is_prime(number + 2):
return number + 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
== Twin Prime ==
A number n+2 is said to be a Twin prime of number n if
both n and n+2 are prime.
Examples of Twin pairs: (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), ...
https://en.wikipedia.org/wiki/Twin_prime
"""
# Author : Akshay Dubey (https://github.com/itsAkshayDubey)
from maths.prime_check import is_prime
def twin_prime(number: int) -> int:
"""
# doctest: +NORMALIZE_WHITESPACE
    This function takes an integer number as input.
    It returns n+2 if n and n+2 are prime numbers and -1 otherwise.
>>> twin_prime(3)
5
>>> twin_prime(4)
-1
>>> twin_prime(5)
7
>>> twin_prime(17)
19
>>> twin_prime(0)
-1
>>> twin_prime(6.0)
Traceback (most recent call last):
...
TypeError: Input value of [number=6.0] must be an integer
"""
if not isinstance(number, int):
msg = f"Input value of [number={number}] must be an integer"
raise TypeError(msg)
if is_prime(number) and is_prime(number + 2):
return number + 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
This is a Python implementation of the levenshtein distance.
Levenshtein distance is a string metric for measuring the
difference between two sequences.
For doctests run the following command:
python -m doctest -v levenshtein-distance.py
or
python3 -m doctest -v levenshtein-distance.py
For manual testing run:
python levenshtein-distance.py
"""
def levenshtein_distance(first_word: str, second_word: str) -> int:
"""Implementation of the levenshtein distance in Python.
:param first_word: the first word to measure the difference.
:param second_word: the second word to measure the difference.
:return: the levenshtein distance between the two words.
Examples:
>>> levenshtein_distance("planet", "planetary")
3
>>> levenshtein_distance("", "test")
4
>>> levenshtein_distance("book", "back")
2
>>> levenshtein_distance("book", "book")
0
>>> levenshtein_distance("test", "")
4
>>> levenshtein_distance("", "")
0
>>> levenshtein_distance("orchestration", "container")
10
"""
# The longer word should come first
if len(first_word) < len(second_word):
return levenshtein_distance(second_word, first_word)
if len(second_word) == 0:
return len(first_word)
previous_row = list(range(len(second_word) + 1))
for i, c1 in enumerate(first_word):
current_row = [i + 1]
for j, c2 in enumerate(second_word):
# Calculate insertions, deletions and substitutions
insertions = previous_row[j + 1] + 1
deletions = current_row[j] + 1
substitutions = previous_row[j] + (c1 != c2)
# Get the minimum to append to the current row
current_row.append(min(insertions, deletions, substitutions))
# Store the previous row
previous_row = current_row
# Returns the last element (distance)
return previous_row[-1]
if __name__ == "__main__":
first_word = input("Enter the first word:\n").strip()
second_word = input("Enter the second word:\n").strip()
result = levenshtein_distance(first_word, second_word)
print(f"Levenshtein distance between {first_word} and {second_word} is {result}")
| """
This is a Python implementation of the levenshtein distance.
Levenshtein distance is a string metric for measuring the
difference between two sequences.
For doctests run the following command:
python -m doctest -v levenshtein-distance.py
or
python3 -m doctest -v levenshtein-distance.py
For manual testing run:
python levenshtein-distance.py
"""
def levenshtein_distance(first_word: str, second_word: str) -> int:
"""Implementation of the levenshtein distance in Python.
:param first_word: the first word to measure the difference.
:param second_word: the second word to measure the difference.
:return: the levenshtein distance between the two words.
Examples:
>>> levenshtein_distance("planet", "planetary")
3
>>> levenshtein_distance("", "test")
4
>>> levenshtein_distance("book", "back")
2
>>> levenshtein_distance("book", "book")
0
>>> levenshtein_distance("test", "")
4
>>> levenshtein_distance("", "")
0
>>> levenshtein_distance("orchestration", "container")
10
"""
# The longer word should come first
if len(first_word) < len(second_word):
return levenshtein_distance(second_word, first_word)
if len(second_word) == 0:
return len(first_word)
previous_row = list(range(len(second_word) + 1))
for i, c1 in enumerate(first_word):
current_row = [i + 1]
for j, c2 in enumerate(second_word):
# Calculate insertions, deletions and substitutions
insertions = previous_row[j + 1] + 1
deletions = current_row[j] + 1
substitutions = previous_row[j] + (c1 != c2)
# Get the minimum to append to the current row
current_row.append(min(insertions, deletions, substitutions))
# Store the previous row
previous_row = current_row
# Returns the last element (distance)
return previous_row[-1]
if __name__ == "__main__":
first_word = input("Enter the first word:\n").strip()
second_word = input("Enter the second word:\n").strip()
result = levenshtein_distance(first_word, second_word)
print(f"Levenshtein distance between {first_word} and {second_word} is {result}")
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def find_min(arr):
n = len(arr)
s = sum(arr)
dp = [[False for x in range(s + 1)] for y in range(n + 1)]
    for i in range(n + 1):  # a sum of 0 is achievable with the empty subset
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
            dp[i][j] = dp[i - 1][j]  # exclude element i
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
if dp[n][j] is True:
diff = s - 2 * j
break
return diff
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def find_min(arr):
n = len(arr)
s = sum(arr)
dp = [[False for x in range(s + 1)] for y in range(n + 1)]
    for i in range(n + 1):  # a sum of 0 is achievable with the empty subset
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
            dp[i][j] = dp[i - 1][j]  # exclude element i
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
if dp[n][j] is True:
diff = s - 2 * j
break
return diff
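A couple of worked checks for find_min above (they assume the corrected recurrence): for [1, 6, 11, 5] the best split is {1, 5, 6} versus {11}, so the minimum difference is 1.

# quick sanity checks (hypothetical driver, not part of the original module)
assert find_min([1, 6, 11, 5]) == 1  # {1, 5, 6} vs {11}
assert find_min([1, 2, 3, 9]) == 3   # {1, 2, 3} vs {9}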
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| #!/bin/sh
# An example hook script to verify what is about to be pushed. Called by "git
# push" after it has checked the remote status, but before anything has been
# pushed. If this script exits with a non-zero status nothing will be pushed.
#
# This hook is called with the following parameters:
#
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
#
# If pushing without using a named remote those arguments will be equal.
#
# Information about the commits which are being pushed is supplied as lines to
# the standard input in the form:
#
# <local ref> <local sha1> <remote ref> <remote sha1>
#
# This sample shows how to prevent push of commits where the log message starts
# with "WIP" (work in progress).
remote="$1"
url="$2"
z40=0000000000000000000000000000000000000000
while read local_ref local_sha remote_ref remote_sha
do
if [ "$local_sha" = $z40 ]
then
# Handle delete
:
else
if [ "$remote_sha" = $z40 ]
then
# New branch, examine all commits
range="$local_sha"
else
# Update to existing branch, examine new commits
range="$remote_sha..$local_sha"
fi
# Check for WIP commit
commit=`git rev-list -n 1 --grep '^WIP' "$range"`
if [ -n "$commit" ]
then
echo >&2 "Found WIP commit in $local_ref, not pushing"
exit 1
fi
fi
done
exit 0
| #!/bin/sh
# An example hook script to verify what is about to be pushed. Called by "git
# push" after it has checked the remote status, but before anything has been
# pushed. If this script exits with a non-zero status nothing will be pushed.
#
# This hook is called with the following parameters:
#
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
#
# If pushing without using a named remote those arguments will be equal.
#
# Information about the commits which are being pushed is supplied as lines to
# the standard input in the form:
#
# <local ref> <local sha1> <remote ref> <remote sha1>
#
# This sample shows how to prevent push of commits where the log message starts
# with "WIP" (work in progress).
remote="$1"
url="$2"
z40=0000000000000000000000000000000000000000
while read local_ref local_sha remote_ref remote_sha
do
if [ "$local_sha" = $z40 ]
then
# Handle delete
:
else
if [ "$remote_sha" = $z40 ]
then
# New branch, examine all commits
range="$local_sha"
else
# Update to existing branch, examine new commits
range="$remote_sha..$local_sha"
fi
# Check for WIP commit
commit=`git rev-list -n 1 --grep '^WIP' "$range"`
if [ -n "$commit" ]
then
echo >&2 "Found WIP commit in $local_ref, not pushing"
exit 1
fi
fi
done
exit 0
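A short installation note for the sample hook above (standard git behavior; the source file name is an assumption): it only takes effect once copied into the repository's hook directory and made executable.

# install into a repository so "git push" starts consulting it
cp pre-push.sample .git/hooks/pre-push
chmod +x .git/hooks/pre-push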
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| from math import log
from scipy.constants import Boltzmann, physical_constants
T = 300 # TEMPERATURE (unit = K)
def builtin_voltage(
donor_conc: float, # donor concentration
acceptor_conc: float, # acceptor concentration
intrinsic_conc: float, # intrinsic concentration
) -> float:
"""
This function can calculate the Builtin Voltage of a pn junction diode.
This is calculated from the given three values.
Examples -
>>> builtin_voltage(donor_conc=1e17, acceptor_conc=1e17, intrinsic_conc=1e10)
0.833370010652644
>>> builtin_voltage(donor_conc=0, acceptor_conc=1600, intrinsic_conc=200)
Traceback (most recent call last):
...
ValueError: Donor concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=0, intrinsic_conc=1200)
Traceback (most recent call last):
...
ValueError: Acceptor concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=1000, intrinsic_conc=0)
Traceback (most recent call last):
...
ValueError: Intrinsic concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=3000, intrinsic_conc=2000)
Traceback (most recent call last):
...
ValueError: Donor concentration should be greater than intrinsic concentration
>>> builtin_voltage(donor_conc=3000, acceptor_conc=1000, intrinsic_conc=2000)
Traceback (most recent call last):
...
ValueError: Acceptor concentration should be greater than intrinsic concentration
"""
if donor_conc <= 0:
raise ValueError("Donor concentration should be positive")
elif acceptor_conc <= 0:
raise ValueError("Acceptor concentration should be positive")
elif intrinsic_conc <= 0:
raise ValueError("Intrinsic concentration should be positive")
elif donor_conc <= intrinsic_conc:
raise ValueError(
"Donor concentration should be greater than intrinsic concentration"
)
elif acceptor_conc <= intrinsic_conc:
raise ValueError(
"Acceptor concentration should be greater than intrinsic concentration"
)
else:
return (
Boltzmann
* T
* log((donor_conc * acceptor_conc) / intrinsic_conc**2)
/ physical_constants["electron volt"][0]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from math import log
from scipy.constants import Boltzmann, physical_constants
T = 300 # TEMPERATURE (unit = K)
def builtin_voltage(
donor_conc: float, # donor concentration
acceptor_conc: float, # acceptor concentration
intrinsic_conc: float, # intrinsic concentration
) -> float:
"""
This function can calculate the Builtin Voltage of a pn junction diode.
This is calculated from the given three values.
Examples -
>>> builtin_voltage(donor_conc=1e17, acceptor_conc=1e17, intrinsic_conc=1e10)
0.833370010652644
>>> builtin_voltage(donor_conc=0, acceptor_conc=1600, intrinsic_conc=200)
Traceback (most recent call last):
...
ValueError: Donor concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=0, intrinsic_conc=1200)
Traceback (most recent call last):
...
ValueError: Acceptor concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=1000, intrinsic_conc=0)
Traceback (most recent call last):
...
ValueError: Intrinsic concentration should be positive
>>> builtin_voltage(donor_conc=1000, acceptor_conc=3000, intrinsic_conc=2000)
Traceback (most recent call last):
...
ValueError: Donor concentration should be greater than intrinsic concentration
>>> builtin_voltage(donor_conc=3000, acceptor_conc=1000, intrinsic_conc=2000)
Traceback (most recent call last):
...
ValueError: Acceptor concentration should be greater than intrinsic concentration
"""
if donor_conc <= 0:
raise ValueError("Donor concentration should be positive")
elif acceptor_conc <= 0:
raise ValueError("Acceptor concentration should be positive")
elif intrinsic_conc <= 0:
raise ValueError("Intrinsic concentration should be positive")
elif donor_conc <= intrinsic_conc:
raise ValueError(
"Donor concentration should be greater than intrinsic concentration"
)
elif acceptor_conc <= intrinsic_conc:
raise ValueError(
"Acceptor concentration should be greater than intrinsic concentration"
)
else:
return (
Boltzmann
* T
* log((donor_conc * acceptor_conc) / intrinsic_conc**2)
/ physical_constants["electron volt"][0]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
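For reference, the expression evaluated above is the standard built-in potential formula V_bi = (k_B * T / q) * ln(N_D * N_A / n_i^2) at T = 300 K; dividing the thermal energy k_B * T (in joules) by the electron-volt constant converts the result to volts. As a worked instance, N_D = N_A = 1e17 and n_i = 1e10 give (0.02585 V) * ln(1e14) ≈ 0.833 V, matching the first doctest.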
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| import math
class SegmentTree:
    def __init__(self, a):
        self.A = a  # keep a reference so build() does not rely on a global
        self.N = len(a)
        self.st = [0] * (
            4 * self.N
        )  # approximate the overall size of segment tree with array N
        self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
            self.st[idx] = self.A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
        for i in range(1, self.N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| import math
class SegmentTree:
    def __init__(self, a):
        self.A = a  # keep a reference so build() does not rely on a global
        self.N = len(a)
        self.st = [0] * (
            4 * self.N
        )  # approximate the overall size of segment tree with array N
        self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
            self.st[idx] = self.A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
        for i in range(1, self.N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
Name scores
Problem 22
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file
containing over five-thousand first names, begin by sorting it into
alphabetical order. Then working out the alphabetical value for each name,
multiply this value by its alphabetical position in the list to obtain a name
score.
For example, when the list is sorted into alphabetical order, COLIN, which is
worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would
obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
"""
import os
def solution():
"""Returns the total of all the name scores in the file.
>>> solution()
871198282
"""
with open(os.path.dirname(__file__) + "/p022_names.txt") as file:
names = str(file.readlines()[0])
names = names.replace('"', "").split(",")
names.sort()
name_score = 0
total_score = 0
for i, name in enumerate(names):
for letter in name:
name_score += ord(letter) - 64
total_score += (i + 1) * name_score
name_score = 0
return total_score
if __name__ == "__main__":
print(solution())
| """
Name scores
Problem 22
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file
containing over five-thousand first names, begin by sorting it into
alphabetical order. Then working out the alphabetical value for each name,
multiply this value by its alphabetical position in the list to obtain a name
score.
For example, when the list is sorted into alphabetical order, COLIN, which is
worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would
obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
"""
import os
def solution():
"""Returns the total of all the name scores in the file.
>>> solution()
871198282
"""
with open(os.path.dirname(__file__) + "/p022_names.txt") as file:
names = str(file.readlines()[0])
names = names.replace('"', "").split(",")
names.sort()
name_score = 0
total_score = 0
for i, name in enumerate(names):
for letter in name:
name_score += ord(letter) - 64
total_score += (i + 1) * name_score
name_score = 0
return total_score
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
Given a partially filled 9×9 2D array, the objective is to fill a 9×9
square grid with digits numbered 1 to 9, so that every row, column, and
each of the nine 3×3 sub-grids contains all of the digits.
This can be solved using Backtracking and is similar to n-queens.
We check to see if a cell is safe or not and recursively call the
function on the next column to see if it returns True. if yes, we
have solved the puzzle. else, we backtrack and place another number
in that cell and repeat this process.
"""
from __future__ import annotations
Matrix = list[list[int]]
# assigning initial values to the grid
initial_grid: Matrix = [
[3, 0, 6, 5, 0, 8, 4, 0, 0],
[5, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 8, 7, 0, 0, 0, 0, 3, 1],
[0, 0, 3, 0, 1, 0, 0, 8, 0],
[9, 0, 0, 8, 6, 3, 0, 0, 5],
[0, 5, 0, 0, 9, 0, 6, 0, 0],
[1, 3, 0, 0, 0, 0, 2, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 7, 4],
[0, 0, 5, 2, 0, 6, 3, 0, 0],
]
# a grid with no solution
no_solution: Matrix = [
[5, 0, 6, 5, 0, 8, 4, 0, 3],
[5, 2, 0, 0, 0, 0, 0, 0, 2],
[1, 8, 7, 0, 0, 0, 0, 3, 1],
[0, 0, 3, 0, 1, 0, 0, 8, 0],
[9, 0, 0, 8, 6, 3, 0, 0, 5],
[0, 5, 0, 0, 9, 0, 6, 0, 0],
[1, 3, 0, 0, 0, 0, 2, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 7, 4],
[0, 0, 5, 2, 0, 6, 3, 0, 0],
]
def is_safe(grid: Matrix, row: int, column: int, n: int) -> bool:
"""
This function checks the grid to see if each row,
column, and the 3x3 subgrids contain the digit 'n'.
It returns False if it is not 'safe' (a duplicate digit
is found) else returns True if it is 'safe'
"""
for i in range(9):
if grid[row][i] == n or grid[i][column] == n:
return False
for i in range(3):
for j in range(3):
if grid[(row - row % 3) + i][(column - column % 3) + j] == n:
return False
return True
def find_empty_location(grid: Matrix) -> tuple[int, int] | None:
"""
This function finds an empty location so that we can assign a number
for that particular row and column.
"""
for i in range(9):
for j in range(9):
if grid[i][j] == 0:
return i, j
return None
def sudoku(grid: Matrix) -> Matrix | None:
"""
Takes a partially filled-in grid and attempts to assign values to
all unassigned locations in such a way to meet the requirements
for Sudoku solution (non-duplication across rows, columns, and boxes)
>>> sudoku(initial_grid) # doctest: +NORMALIZE_WHITESPACE
[[3, 1, 6, 5, 7, 8, 4, 9, 2],
[5, 2, 9, 1, 3, 4, 7, 6, 8],
[4, 8, 7, 6, 2, 9, 5, 3, 1],
[2, 6, 3, 4, 1, 5, 9, 8, 7],
[9, 7, 4, 8, 6, 3, 1, 2, 5],
[8, 5, 1, 7, 9, 2, 6, 4, 3],
[1, 3, 8, 9, 4, 7, 2, 5, 6],
[6, 9, 2, 3, 5, 1, 8, 7, 4],
[7, 4, 5, 2, 8, 6, 3, 1, 9]]
>>> sudoku(no_solution) is None
True
"""
if location := find_empty_location(grid):
row, column = location
else:
# If the location is ``None``, then the grid is solved.
return grid
for digit in range(1, 10):
if is_safe(grid, row, column, digit):
grid[row][column] = digit
if sudoku(grid) is not None:
return grid
grid[row][column] = 0
return None
def print_solution(grid: Matrix) -> None:
"""
A function to print the solution in the form
of a 9x9 grid
"""
for row in grid:
for cell in row:
print(cell, end=" ")
print()
if __name__ == "__main__":
    # note: sudoku() fills the grid in place (no copy of the original is kept)
for example_grid in (initial_grid, no_solution):
print("\nExample grid:\n" + "=" * 20)
print_solution(example_grid)
print("\nExample grid solution:")
solution = sudoku(example_grid)
if solution is not None:
print_solution(solution)
else:
print("Cannot find a solution.")
| """
Given a partially filled 9×9 2D array, the objective is to fill a 9×9
square grid with digits numbered 1 to 9, so that every row, column, and
each of the nine 3×3 sub-grids contains all of the digits.
This can be solved using Backtracking and is similar to n-queens.
We check to see if a cell is safe or not and recursively call the
function on the next column to see if it returns True. if yes, we
have solved the puzzle. else, we backtrack and place another number
in that cell and repeat this process.
"""
from __future__ import annotations
Matrix = list[list[int]]
# assigning initial values to the grid
initial_grid: Matrix = [
[3, 0, 6, 5, 0, 8, 4, 0, 0],
[5, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 8, 7, 0, 0, 0, 0, 3, 1],
[0, 0, 3, 0, 1, 0, 0, 8, 0],
[9, 0, 0, 8, 6, 3, 0, 0, 5],
[0, 5, 0, 0, 9, 0, 6, 0, 0],
[1, 3, 0, 0, 0, 0, 2, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 7, 4],
[0, 0, 5, 2, 0, 6, 3, 0, 0],
]
# a grid with no solution
no_solution: Matrix = [
[5, 0, 6, 5, 0, 8, 4, 0, 3],
[5, 2, 0, 0, 0, 0, 0, 0, 2],
[1, 8, 7, 0, 0, 0, 0, 3, 1],
[0, 0, 3, 0, 1, 0, 0, 8, 0],
[9, 0, 0, 8, 6, 3, 0, 0, 5],
[0, 5, 0, 0, 9, 0, 6, 0, 0],
[1, 3, 0, 0, 0, 0, 2, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 7, 4],
[0, 0, 5, 2, 0, 6, 3, 0, 0],
]
def is_safe(grid: Matrix, row: int, column: int, n: int) -> bool:
"""
This function checks the grid to see if each row,
column, and the 3x3 subgrids contain the digit 'n'.
It returns False if it is not 'safe' (a duplicate digit
is found) else returns True if it is 'safe'
"""
for i in range(9):
if grid[row][i] == n or grid[i][column] == n:
return False
for i in range(3):
for j in range(3):
if grid[(row - row % 3) + i][(column - column % 3) + j] == n:
return False
return True
def find_empty_location(grid: Matrix) -> tuple[int, int] | None:
"""
This function finds an empty location so that we can assign a number
for that particular row and column.
"""
for i in range(9):
for j in range(9):
if grid[i][j] == 0:
return i, j
return None
def sudoku(grid: Matrix) -> Matrix | None:
"""
Takes a partially filled-in grid and attempts to assign values to
all unassigned locations in such a way to meet the requirements
for Sudoku solution (non-duplication across rows, columns, and boxes)
>>> sudoku(initial_grid) # doctest: +NORMALIZE_WHITESPACE
[[3, 1, 6, 5, 7, 8, 4, 9, 2],
[5, 2, 9, 1, 3, 4, 7, 6, 8],
[4, 8, 7, 6, 2, 9, 5, 3, 1],
[2, 6, 3, 4, 1, 5, 9, 8, 7],
[9, 7, 4, 8, 6, 3, 1, 2, 5],
[8, 5, 1, 7, 9, 2, 6, 4, 3],
[1, 3, 8, 9, 4, 7, 2, 5, 6],
[6, 9, 2, 3, 5, 1, 8, 7, 4],
[7, 4, 5, 2, 8, 6, 3, 1, 9]]
>>> sudoku(no_solution) is None
True
"""
if location := find_empty_location(grid):
row, column = location
else:
# If the location is ``None``, then the grid is solved.
return grid
for digit in range(1, 10):
if is_safe(grid, row, column, digit):
grid[row][column] = digit
if sudoku(grid) is not None:
return grid
grid[row][column] = 0
return None
def print_solution(grid: Matrix) -> None:
"""
A function to print the solution in the form
of a 9x9 grid
"""
for row in grid:
for cell in row:
print(cell, end=" ")
print()
if __name__ == "__main__":
    # note: sudoku() fills the grid in place (no copy of the original is kept)
for example_grid in (initial_grid, no_solution):
print("\nExample grid:\n" + "=" * 20)
print_solution(example_grid)
print("\nExample grid solution:")
solution = sudoku(example_grid)
if solution is not None:
print_solution(solution)
else:
print("Cannot find a solution.")
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| name: "build"
on:
pull_request:
schedule:
- cron: "0 0 * * *" # Run everyday
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.11
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
# See: #6591 for re-enabling tests on Python v3.11
run: pytest
--ignore=computer_vision/cnn_classification.py
--ignore=machine_learning/lstm/lstm_prediction.py
--ignore=quantum/
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered
--cov=. .
- if: ${{ success() }}
run: scripts/build_directory_md.py 2>&1 | tee DIRECTORY.md
| name: "build"
on:
pull_request:
schedule:
- cron: "0 0 * * *" # Run everyday
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.11
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
# See: #6591 for re-enabling tests on Python v3.11
run: pytest
--ignore=computer_vision/cnn_classification.py
--ignore=machine_learning/lstm/lstm_prediction.py
--ignore=quantum/
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered
--cov=. .
- if: ${{ success() }}
run: scripts/build_directory_md.py 2>&1 | tee DIRECTORY.md
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
Longest Common Substring Problem Statement: Given two sequences, find the
longest common substring present in both of them. A substring is
necessarily continuous.
Example: "abcdef" and "xabded" have two longest common substrings, "ab" or "de".
Therefore, algorithm should return any one of them.
"""
def longest_common_substring(text1: str, text2: str) -> str:
"""
Finds the longest common substring between two strings.
>>> longest_common_substring("", "")
''
>>> longest_common_substring("a","")
''
>>> longest_common_substring("", "a")
''
>>> longest_common_substring("a", "a")
'a'
>>> longest_common_substring("abcdef", "bcd")
'bcd'
>>> longest_common_substring("abcdef", "xabded")
'ab'
>>> longest_common_substring("GeeksforGeeks", "GeeksQuiz")
'Geeks'
>>> longest_common_substring("abcdxyz", "xyzabcd")
'abcd'
>>> longest_common_substring("zxabcdezy", "yzabcdezx")
'abcdez'
>>> longest_common_substring("OldSite:GeeksforGeeks.org", "NewSite:GeeksQuiz.com")
'Site:Geeks'
>>> longest_common_substring(1, 1)
Traceback (most recent call last):
...
ValueError: longest_common_substring() takes two strings for inputs
"""
if not (isinstance(text1, str) and isinstance(text2, str)):
raise ValueError("longest_common_substring() takes two strings for inputs")
text1_length = len(text1)
text2_length = len(text2)
    # dp[i][j] = length of the longest common suffix of text1[:i] and text2[:j]
    dp = [[0] * (text2_length + 1) for _ in range(text1_length + 1)]
ans_index = 0
ans_length = 0
for i in range(1, text1_length + 1):
for j in range(1, text2_length + 1):
if text1[i - 1] == text2[j - 1]:
dp[i][j] = 1 + dp[i - 1][j - 1]
if dp[i][j] > ans_length:
ans_index = i
ans_length = dp[i][j]
return text1[ans_index - ans_length : ans_index]
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Longest Common Substring Problem Statement: Given two sequences, find the
longest common substring present in both of them. A substring is
necessarily continuous.
Example: "abcdef" and "xabded" have two longest common substrings, "ab" or "de".
Therefore, algorithm should return any one of them.
"""
def longest_common_substring(text1: str, text2: str) -> str:
"""
Finds the longest common substring between two strings.
>>> longest_common_substring("", "")
''
>>> longest_common_substring("a","")
''
>>> longest_common_substring("", "a")
''
>>> longest_common_substring("a", "a")
'a'
>>> longest_common_substring("abcdef", "bcd")
'bcd'
>>> longest_common_substring("abcdef", "xabded")
'ab'
>>> longest_common_substring("GeeksforGeeks", "GeeksQuiz")
'Geeks'
>>> longest_common_substring("abcdxyz", "xyzabcd")
'abcd'
>>> longest_common_substring("zxabcdezy", "yzabcdezx")
'abcdez'
>>> longest_common_substring("OldSite:GeeksforGeeks.org", "NewSite:GeeksQuiz.com")
'Site:Geeks'
>>> longest_common_substring(1, 1)
Traceback (most recent call last):
...
ValueError: longest_common_substring() takes two strings for inputs
"""
if not (isinstance(text1, str) and isinstance(text2, str)):
raise ValueError("longest_common_substring() takes two strings for inputs")
text1_length = len(text1)
text2_length = len(text2)
    # dp[i][j] = length of the longest common suffix of text1[:i] and text2[:j]
    dp = [[0] * (text2_length + 1) for _ in range(text1_length + 1)]
ans_index = 0
ans_length = 0
for i in range(1, text1_length + 1):
for j in range(1, text2_length + 1):
if text1[i - 1] == text2[j - 1]:
dp[i][j] = 1 + dp[i - 1][j - 1]
if dp[i][j] > ans_length:
ans_index = i
ans_length = dp[i][j]
return text1[ans_index - ans_length : ans_index]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or for specialized machine movement.
| """
This module provides two implementations for the rod-cutting problem:
1. A naive recursive implementation which has an exponential runtime
2. Two dynamic programming implementations which have quadratic runtime
The rod-cutting problem is the problem of finding the maximum possible revenue
obtainable from a rod of length ``n`` given a list of prices for each integral piece
of the rod. The maximum revenue can thus be obtained by cutting the rod and selling the
pieces separately or not cutting it at all if the price of it is the maximum obtainable.
"""
def naive_cut_rod_recursive(n: int, prices: list):
"""
    Solves the rod-cutting problem naively, without the benefit of dynamic
    programming. As a result, the same sub-problems are solved several times,
    leading to an exponential runtime.
Runtime: O(2^n)
Arguments
-------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
Examples
--------
>>> naive_cut_rod_recursive(4, [1, 5, 8, 9])
10
>>> naive_cut_rod_recursive(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
if n == 0:
return 0
max_revue = float("-inf")
for i in range(1, n + 1):
max_revue = max(
max_revue, prices[i - 1] + naive_cut_rod_recursive(n - i, prices)
)
return max_revue
def top_down_cut_rod(n: int, prices: list):
"""
Constructs a top-down dynamic programming solution for the rod-cutting
problem via memoization. This function serves as a wrapper for
_top_down_cut_rod_recursive
Runtime: O(n^2)
Arguments
--------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Note
----
    For convenience, and because Python's lists use 0-indexing, length(max_rev) =
n + 1, to accommodate for the revenue obtainable from a rod of length 0.
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
Examples
-------
>>> top_down_cut_rod(4, [1, 5, 8, 9])
10
>>> top_down_cut_rod(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
max_rev = [float("-inf") for _ in range(n + 1)]
return _top_down_cut_rod_recursive(n, prices, max_rev)
def _top_down_cut_rod_recursive(n: int, prices: list, max_rev: list):
"""
Constructs a top-down dynamic programming solution for the rod-cutting problem
via memoization.
Runtime: O(n^2)
Arguments
--------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
max_rev: list, the computed maximum revenue for a piece of rod.
``max_rev[i]`` is the maximum revenue obtainable for a rod of length ``i``
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
"""
if max_rev[n] >= 0:
return max_rev[n]
elif n == 0:
return 0
else:
max_revenue = float("-inf")
for i in range(1, n + 1):
max_revenue = max(
max_revenue,
prices[i - 1] + _top_down_cut_rod_recursive(n - i, prices, max_rev),
)
max_rev[n] = max_revenue
return max_rev[n]
def bottom_up_cut_rod(n: int, prices: list):
"""
Constructs a bottom-up dynamic programming solution for the rod-cutting problem
Runtime: O(n^2)
Arguments
----------
n: int, the maximum length of the rod.
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Returns
-------
The maximum revenue obtainable from cutting a rod of length n given
the prices for each piece of rod p.
Examples
-------
>>> bottom_up_cut_rod(4, [1, 5, 8, 9])
10
>>> bottom_up_cut_rod(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
# length(max_rev) = n + 1, to accommodate for the revenue obtainable from a rod of
# length 0.
max_rev = [float("-inf") for _ in range(n + 1)]
max_rev[0] = 0
for i in range(1, n + 1):
max_revenue_i = max_rev[i]
for j in range(1, i + 1):
max_revenue_i = max(max_revenue_i, prices[j - 1] + max_rev[i - j])
max_rev[i] = max_revenue_i
return max_rev[n]
def _enforce_args(n: int, prices: list):
"""
Basic checks on the arguments to the rod-cutting algorithms
n: int, the length of the rod
prices: list, the price list for each piece of rod.
Throws ValueError:
if n is negative or there are fewer items in the price list than the length of
the rod
"""
if n < 0:
msg = f"n must be greater than or equal to 0. Got n = {n}"
raise ValueError(msg)
if n > len(prices):
msg = (
"Each integral piece of rod must have a corresponding price. "
f"Got n = {n} but length of prices = {len(prices)}"
)
raise ValueError(msg)
def main():
prices = [6, 10, 12, 15, 20, 23]
n = len(prices)
# the best revenue comes from cutting the rod into 6 pieces, each
# of length 1 resulting in a revenue of 6 * 6 = 36.
expected_max_revenue = 36
max_rev_top_down = top_down_cut_rod(n, prices)
max_rev_bottom_up = bottom_up_cut_rod(n, prices)
max_rev_naive = naive_cut_rod_recursive(n, prices)
assert expected_max_revenue == max_rev_top_down
assert max_rev_top_down == max_rev_bottom_up
assert max_rev_bottom_up == max_rev_naive
if __name__ == "__main__":
main()
| """
This module provides two implementations for the rod-cutting problem:
1. A naive recursive implementation which has an exponential runtime
2. Two dynamic programming implementations which have quadratic runtime
The rod-cutting problem is the problem of finding the maximum possible revenue
obtainable from a rod of length ``n`` given a list of prices for each integral piece
of the rod. The maximum revenue can thus be obtained by cutting the rod and selling the
pieces separately or not cutting it at all if the price of it is the maximum obtainable.
"""
def naive_cut_rod_recursive(n: int, prices: list):
"""
    Solves the rod-cutting problem naively, without the benefit of dynamic
    programming. As a result, the same sub-problems are solved several times,
    leading to an exponential runtime.
Runtime: O(2^n)
Arguments
-------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
Examples
--------
>>> naive_cut_rod_recursive(4, [1, 5, 8, 9])
10
>>> naive_cut_rod_recursive(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
if n == 0:
return 0
max_revue = float("-inf")
for i in range(1, n + 1):
max_revue = max(
max_revue, prices[i - 1] + naive_cut_rod_recursive(n - i, prices)
)
return max_revue
def top_down_cut_rod(n: int, prices: list):
"""
Constructs a top-down dynamic programming solution for the rod-cutting
problem via memoization. This function serves as a wrapper for
_top_down_cut_rod_recursive
Runtime: O(n^2)
Arguments
--------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Note
----
    For convenience, and because Python's lists use 0-indexing, length(max_rev) =
n + 1, to accommodate for the revenue obtainable from a rod of length 0.
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
Examples
-------
>>> top_down_cut_rod(4, [1, 5, 8, 9])
10
>>> top_down_cut_rod(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
max_rev = [float("-inf") for _ in range(n + 1)]
return _top_down_cut_rod_recursive(n, prices, max_rev)
def _top_down_cut_rod_recursive(n: int, prices: list, max_rev: list):
"""
Constructs a top-down dynamic programming solution for the rod-cutting problem
via memoization.
Runtime: O(n^2)
Arguments
--------
n: int, the length of the rod
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
max_rev: list, the computed maximum revenue for a piece of rod.
``max_rev[i]`` is the maximum revenue obtainable for a rod of length ``i``
Returns
-------
The maximum revenue obtainable for a rod of length n given the list of prices
for each piece.
"""
if max_rev[n] >= 0:
return max_rev[n]
elif n == 0:
return 0
else:
max_revenue = float("-inf")
for i in range(1, n + 1):
max_revenue = max(
max_revenue,
prices[i - 1] + _top_down_cut_rod_recursive(n - i, prices, max_rev),
)
max_rev[n] = max_revenue
return max_rev[n]
def bottom_up_cut_rod(n: int, prices: list):
"""
Constructs a bottom-up dynamic programming solution for the rod-cutting problem
Runtime: O(n^2)
Arguments
----------
n: int, the maximum length of the rod.
    prices: list, the prices for each piece of rod. ``p[i-1]`` is the
price for a rod of length ``i``
Returns
-------
The maximum revenue obtainable from cutting a rod of length n given
the prices for each piece of rod p.
Examples
-------
>>> bottom_up_cut_rod(4, [1, 5, 8, 9])
10
>>> bottom_up_cut_rod(10, [1, 5, 8, 9, 10, 17, 17, 20, 24, 30])
30
"""
_enforce_args(n, prices)
# length(max_rev) = n + 1, to accommodate for the revenue obtainable from a rod of
# length 0.
max_rev = [float("-inf") for _ in range(n + 1)]
max_rev[0] = 0
for i in range(1, n + 1):
max_revenue_i = max_rev[i]
for j in range(1, i + 1):
max_revenue_i = max(max_revenue_i, prices[j - 1] + max_rev[i - j])
max_rev[i] = max_revenue_i
return max_rev[n]
def _enforce_args(n: int, prices: list):
"""
Basic checks on the arguments to the rod-cutting algorithms
n: int, the length of the rod
prices: list, the price list for each piece of rod.
Throws ValueError:
if n is negative or there are fewer items in the price list than the length of
the rod
"""
if n < 0:
msg = f"n must be greater than or equal to 0. Got n = {n}"
raise ValueError(msg)
if n > len(prices):
msg = (
"Each integral piece of rod must have a corresponding price. "
f"Got n = {n} but length of prices = {len(prices)}"
)
raise ValueError(msg)
def main():
prices = [6, 10, 12, 15, 20, 23]
n = len(prices)
# the best revenue comes from cutting the rod into 6 pieces, each
# of length 1 resulting in a revenue of 6 * 6 = 36.
expected_max_revenue = 36
max_rev_top_down = top_down_cut_rod(n, prices)
max_rev_bottom_up = bottom_up_cut_rod(n, prices)
max_rev_naive = naive_cut_rod_recursive(n, prices)
assert expected_max_revenue == max_rev_top_down
assert max_rev_top_down == max_rev_bottom_up
assert max_rev_bottom_up == max_rev_naive
if __name__ == "__main__":
main()
| -1 |
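As a hedged extension of the bottom-up solution above (not part of the original rod-cutting module), the DP table can also record the size of the first piece in an optimal cut, which makes it possible to reconstruct which pieces achieve the maximum revenue. The function name and return format are illustrative only.

```python
def bottom_up_cut_rod_with_pieces(n: int, prices: list) -> tuple[int, list[int]]:
    """Bottom-up rod cutting that also reconstructs one optimal list of piece sizes.

    >>> bottom_up_cut_rod_with_pieces(4, [1, 5, 8, 9])
    (10, [2, 2])
    """
    max_rev = [float("-inf")] * (n + 1)
    # first_cut[i] holds the size of the first piece in an optimal cut of length i
    first_cut = [0] * (n + 1)
    max_rev[0] = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            if prices[j - 1] + max_rev[i - j] > max_rev[i]:
                max_rev[i] = prices[j - 1] + max_rev[i - j]
                first_cut[i] = j
    pieces, remaining = [], n
    while remaining > 0:  # walk back through first_cut to list the piece sizes
        pieces.append(first_cut[remaining])
        remaining -= first_cut[remaining]
    return max_rev[n], pieces
```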
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
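The file changed by this PR does not appear in this excerpt, so the following is only a hedged, self-contained sketch of what the description outlines; the function name, the `allow_diagonal` flag and the return format are illustrative assumptions, not taken from the PR. As in the description, 1 marks a walkable node and 0 an obstacle.

```python
from heapq import heappop, heappush
from math import sqrt

import numpy as np


def dijkstra_binary_grid(
    grid: np.ndarray,
    start: tuple[int, int],
    goal: tuple[int, int],
    allow_diagonal: bool,
) -> tuple[float, list[tuple[int, int]]]:
    """Shortest path on a 0/1 grid where 1 is walkable and 0 is an obstacle."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    if allow_diagonal:
        moves += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist = {start: 0.0}
    previous: dict[tuple[int, int], tuple[int, int]] = {}
    queue: list[tuple[float, tuple[int, int]]] = [(0.0, start)]
    while queue:
        d, node = heappop(queue)
        if node == goal:
            path = [node]
            while node in previous:  # rebuild the path by walking predecessors
                node = previous[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for dx, dy in moves:
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue  # outside the grid
            if grid[nxt] == 0:
                continue  # obstacle
            step = sqrt(2) if dx != 0 and dy != 0 else 1.0
            if d + step < dist.get(nxt, float("inf")):
                dist[nxt] = d + step
                previous[nxt] = node
                heappush(queue, (d + step, nxt))
    return float("inf"), []


# Example: a 3x3 grid that is fully walkable except the centre cell.
# grid = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
# dijkstra_binary_grid(grid, (0, 0), (2, 2), allow_diagonal=False) -> (4.0, [...])
```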
| """
In the Combination Sum problem, we are given a list consisting of distinct integers.
We need to find all the combinations whose sum equals the given target.
We can use an element more than once.
Time complexity(Average Case): O(n!)
Constraints:
1 <= candidates.length <= 30
2 <= candidates[i] <= 40
All elements of candidates are distinct.
1 <= target <= 40
"""
def backtrack(
candidates: list, path: list, answer: list, target: int, previous_index: int
) -> None:
"""
    A recursive function that searches for possible combinations. Backtracks whenever
    adding a candidate would make the running sum exceed the target value.
Parameters
----------
previous_index: Last index from the previous search
target: The value we need to obtain by summing our integers in the path list.
answer: A list of possible combinations
path: Current combination
candidates: A list of integers we can use.
"""
if target == 0:
answer.append(path.copy())
else:
for index in range(previous_index, len(candidates)):
if target >= candidates[index]:
path.append(candidates[index])
backtrack(candidates, path, answer, target - candidates[index], index)
                path.pop()
def combination_sum(candidates: list, target: int) -> list:
"""
>>> combination_sum([2, 3, 5], 8)
[[2, 2, 2, 2], [2, 3, 3], [3, 5]]
>>> combination_sum([2, 3, 6, 7], 7)
[[2, 2, 3], [7]]
>>> combination_sum([-8, 2.3, 0], 1)
Traceback (most recent call last):
...
RecursionError: maximum recursion depth exceeded in comparison
"""
path = [] # type: list[int]
answer = [] # type: list[int]
backtrack(candidates, path, answer, target, 0)
return answer
def main() -> None:
print(combination_sum([-8, 2.3, 0], 1))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| """
In the Combination Sum problem, we are given a list consisting of distinct integers.
We need to find all the combinations whose sum equals the given target.
We can use an element more than once.
Time complexity(Average Case): O(n!)
Constraints:
1 <= candidates.length <= 30
2 <= candidates[i] <= 40
All elements of candidates are distinct.
1 <= target <= 40
"""
def backtrack(
candidates: list, path: list, answer: list, target: int, previous_index: int
) -> None:
"""
    A recursive function that searches for possible combinations. Backtracks whenever
    adding a candidate would make the running sum exceed the target value.
Parameters
----------
previous_index: Last index from the previous search
target: The value we need to obtain by summing our integers in the path list.
answer: A list of possible combinations
path: Current combination
candidates: A list of integers we can use.
"""
if target == 0:
answer.append(path.copy())
else:
for index in range(previous_index, len(candidates)):
if target >= candidates[index]:
path.append(candidates[index])
backtrack(candidates, path, answer, target - candidates[index], index)
                path.pop()
def combination_sum(candidates: list, target: int) -> list:
"""
>>> combination_sum([2, 3, 5], 8)
[[2, 2, 2, 2], [2, 3, 3], [3, 5]]
>>> combination_sum([2, 3, 6, 7], 7)
[[2, 2, 3], [7]]
>>> combination_sum([-8, 2.3, 0], 1)
Traceback (most recent call last):
...
RecursionError: maximum recursion depth exceeded in comparison
"""
path = [] # type: list[int]
answer = [] # type: list[int]
backtrack(candidates, path, answer, target, 0)
return answer
def main() -> None:
print(combination_sum([-8, 2.3, 0], 1))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
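The constraints listed at the top of the combination-sum module above are documented but not enforced, which is why the doctest with `[-8, 2.3, 0]` ends in a RecursionError. A hedged sketch of an input check that would enforce those documented constraints (the helper name is illustrative, not part of the original file):

```python
def validate_candidates(candidates: list, target: int) -> None:
    """Raise ValueError if the documented combination-sum constraints are violated."""
    if not 1 <= len(candidates) <= 30:
        raise ValueError("candidates must contain between 1 and 30 elements")
    if len(set(candidates)) != len(candidates):
        raise ValueError("all candidates must be distinct")
    for candidate in candidates:
        if not isinstance(candidate, int) or not 2 <= candidate <= 40:
            raise ValueError("every candidate must be an integer between 2 and 40")
    if not 1 <= target <= 40:
        raise ValueError("target must be between 1 and 40")
```

Calling such a check at the top of `combination_sum` would turn the RecursionError shown in the doctest into a clear ValueError.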
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| from json import loads
from pathlib import Path
import numpy as np
from yulewalker import yulewalk
from audio_filters.butterworth_filter import make_highpass
from audio_filters.iir_filter import IIRFilter
data = loads((Path(__file__).resolve().parent / "loudness_curve.json").read_text())
class EqualLoudnessFilter:
r"""
An equal-loudness filter which compensates for the human ear's non-linear response
to sound.
    It does this by cascading a yulewalk filter and a Butterworth filter.
Designed for use with samplerate of 44.1kHz and above. If you're using a lower
samplerate, use with caution.
Code based on matlab implementation at https://bit.ly/3eqh2HU
(url shortened for ruff)
Target curve: https://i.imgur.com/3g2VfaM.png
Yulewalk response: https://i.imgur.com/J9LnJ4C.png
Butterworth and overall response: https://i.imgur.com/3g2VfaM.png
Images and original matlab implementation by David Robinson, 2001
"""
def __init__(self, samplerate: int = 44100) -> None:
self.yulewalk_filter = IIRFilter(10)
self.butterworth_filter = make_highpass(150, samplerate)
# pad the data to nyquist
curve_freqs = np.array(data["frequencies"] + [max(20000.0, samplerate / 2)])
curve_gains = np.array(data["gains"] + [140])
# Convert to angular frequency
freqs_normalized = curve_freqs / samplerate * 2
# Invert the curve and normalize to 0dB
gains_normalized = np.power(10, (np.min(curve_gains) - curve_gains) / 20)
# Scipy's `yulewalk` function is a stub, so we're using the
# `yulewalker` library instead.
# This function computes the coefficients using a least-squares
# fit to the specified curve.
ya, yb = yulewalk(10, freqs_normalized, gains_normalized)
self.yulewalk_filter.set_coefficients(ya, yb)
def process(self, sample: float) -> float:
"""
Process a single sample through both filters
>>> filt = EqualLoudnessFilter()
>>> filt.process(0.0)
0.0
"""
tmp = self.yulewalk_filter.process(sample)
return self.butterworth_filter.process(tmp)
| from json import loads
from pathlib import Path
import numpy as np
from yulewalker import yulewalk
from audio_filters.butterworth_filter import make_highpass
from audio_filters.iir_filter import IIRFilter
data = loads((Path(__file__).resolve().parent / "loudness_curve.json").read_text())
class EqualLoudnessFilter:
r"""
An equal-loudness filter which compensates for the human ear's non-linear response
to sound.
    It does this by cascading a yulewalk filter and a Butterworth filter.
Designed for use with samplerate of 44.1kHz and above. If you're using a lower
samplerate, use with caution.
Code based on matlab implementation at https://bit.ly/3eqh2HU
(url shortened for ruff)
Target curve: https://i.imgur.com/3g2VfaM.png
Yulewalk response: https://i.imgur.com/J9LnJ4C.png
Butterworth and overall response: https://i.imgur.com/3g2VfaM.png
Images and original matlab implementation by David Robinson, 2001
"""
def __init__(self, samplerate: int = 44100) -> None:
self.yulewalk_filter = IIRFilter(10)
self.butterworth_filter = make_highpass(150, samplerate)
# pad the data to nyquist
curve_freqs = np.array(data["frequencies"] + [max(20000.0, samplerate / 2)])
curve_gains = np.array(data["gains"] + [140])
# Convert to angular frequency
freqs_normalized = curve_freqs / samplerate * 2
# Invert the curve and normalize to 0dB
gains_normalized = np.power(10, (np.min(curve_gains) - curve_gains) / 20)
# Scipy's `yulewalk` function is a stub, so we're using the
# `yulewalker` library instead.
# This function computes the coefficients using a least-squares
# fit to the specified curve.
ya, yb = yulewalk(10, freqs_normalized, gains_normalized)
self.yulewalk_filter.set_coefficients(ya, yb)
def process(self, sample: float) -> float:
"""
Process a single sample through both filters
>>> filt = EqualLoudnessFilter()
>>> filt.process(0.0)
0.0
"""
tmp = self.yulewalk_filter.process(sample)
return self.butterworth_filter.process(tmp)
| -1 |
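A hedged usage sketch for the EqualLoudnessFilter class above; the import path and the synthetic test tone are assumptions, and the filter is stateful, so samples are processed in order:

```python
import numpy as np

# Assumed import path; adjust to wherever the class above is defined.
from audio_filters.equal_loudness_filter import EqualLoudnessFilter

samplerate = 44100
filt = EqualLoudnessFilter(samplerate)

# One second of a 1 kHz test tone as illustrative input.
t = np.arange(samplerate) / samplerate
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)

# Process sample by sample, preserving filter state between samples.
weighted = np.array([filt.process(float(sample)) for sample in signal])
```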
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| """
Odd even sort implementation.
https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
"""
def odd_even_sort(input_list: list) -> list:
"""
Sort input with odd even sort.
    This algorithm uses the same idea as bubble sort,
    but each pass is first divided into two phases (odd and even).
Originally developed for use on parallel processors
with local interconnections.
    :param input_list: mutable ordered sequence of elements
    :return: the same list in ascending order
Examples:
>>> odd_even_sort([5 , 4 ,3 ,2 ,1])
[1, 2, 3, 4, 5]
>>> odd_even_sort([])
[]
>>> odd_even_sort([-10 ,-1 ,10 ,2])
[-10, -1, 2, 10]
>>> odd_even_sort([1 ,2 ,3 ,4])
[1, 2, 3, 4]
"""
is_sorted = False
while is_sorted is False: # Until all the indices are traversed keep looping
is_sorted = True
for i in range(0, len(input_list) - 1, 2): # iterating over all even indices
if input_list[i] > input_list[i + 1]:
input_list[i], input_list[i + 1] = input_list[i + 1], input_list[i]
# swapping if elements not in order
is_sorted = False
for i in range(1, len(input_list) - 1, 2): # iterating over all odd indices
if input_list[i] > input_list[i + 1]:
input_list[i], input_list[i + 1] = input_list[i + 1], input_list[i]
# swapping if elements not in order
is_sorted = False
return input_list
if __name__ == "__main__":
print("Enter list to be sorted")
input_list = [int(x) for x in input().split()]
    # inputting elements of the list in one line
sorted_list = odd_even_sort(input_list)
print("The sorted list is")
print(sorted_list)
| """
Odd even sort implementation.
https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
"""
def odd_even_sort(input_list: list) -> list:
"""
Sort input with odd even sort.
    This algorithm uses the same idea as bubble sort,
    but each pass is first divided into two phases (odd and even).
Originally developed for use on parallel processors
with local interconnections.
    :param input_list: mutable ordered sequence of elements
    :return: the same list in ascending order
Examples:
>>> odd_even_sort([5 , 4 ,3 ,2 ,1])
[1, 2, 3, 4, 5]
>>> odd_even_sort([])
[]
>>> odd_even_sort([-10 ,-1 ,10 ,2])
[-10, -1, 2, 10]
>>> odd_even_sort([1 ,2 ,3 ,4])
[1, 2, 3, 4]
"""
is_sorted = False
while is_sorted is False: # Until all the indices are traversed keep looping
is_sorted = True
for i in range(0, len(input_list) - 1, 2): # iterating over all even indices
if input_list[i] > input_list[i + 1]:
input_list[i], input_list[i + 1] = input_list[i + 1], input_list[i]
# swapping if elements not in order
is_sorted = False
for i in range(1, len(input_list) - 1, 2): # iterating over all odd indices
if input_list[i] > input_list[i + 1]:
input_list[i], input_list[i + 1] = input_list[i + 1], input_list[i]
# swapping if elements not in order
is_sorted = False
return input_list
if __name__ == "__main__":
print("Enter list to be sorted")
input_list = [int(x) for x in input().split()]
    # inputting elements of the list in one line
sorted_list = odd_even_sort(input_list)
print("The sorted list is")
print(sorted_list)
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, n: int, smoothing: bool = False) -> float:
"""
    Return a float denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if n == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(n / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif n == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(n / df), 3)
def tf_idf(tf: int, idf: int) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, n: int, smoothing: bool = False) -> float:
"""
    Return a float denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if n == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(n / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif n == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(n / df), 3)
def tf_idf(tf: int, idf: int) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| -1 |
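A hedged end-to-end sketch tying the three tf-idf helpers above together; the corpus, term and expected numbers are illustrative only, and the snippet assumes the functions above are in scope:

```python
corpus = (
    "The quick brown fox jumped over the lazy dog\n"
    "The dog woke up and chased the fox\n"
    "A cat watched from the fence"
)
term = "dog"

first_document = corpus.split("\n")[0]
tf = term_frequency(term, first_document)   # 1 occurrence in the first document
df, n = document_frequency(term, corpus)    # (2, 3): "dog" appears in 2 of 3 documents
idf = inverse_document_frequency(df, n)     # round(log10(3 / 2), 3) == 0.176
print(tf_idf(tf, idf))                      # 0.176
```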
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| -1 |
||
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| # Created by susmith98
from collections import Counter
from timeit import timeit
# Problem Description:
# Check if characters of the given string can be rearranged to form a palindrome.
# Counter is faster for long strings and non-Counter is faster for short strings.
def can_string_be_rearranged_as_palindrome_counter(
input_str: str = "",
) -> bool:
"""
A Palindrome is a String that reads the same forward as it does backwards.
Examples of Palindromes mom, dad, malayalam
>>> can_string_be_rearranged_as_palindrome_counter("Momo")
True
>>> can_string_be_rearranged_as_palindrome_counter("Mother")
False
>>> can_string_be_rearranged_as_palindrome_counter("Father")
False
>>> can_string_be_rearranged_as_palindrome_counter("A man a plan a canal Panama")
True
"""
return sum(c % 2 for c in Counter(input_str.replace(" ", "").lower()).values()) < 2
def can_string_be_rearranged_as_palindrome(input_str: str = "") -> bool:
"""
A Palindrome is a String that reads the same forward as it does backwards.
Examples of Palindromes mom, dad, malayalam
>>> can_string_be_rearranged_as_palindrome("Momo")
True
>>> can_string_be_rearranged_as_palindrome("Mother")
False
>>> can_string_be_rearranged_as_palindrome("Father")
False
>>> can_string_be_rearranged_as_palindrome_counter("A man a plan a canal Panama")
True
"""
if len(input_str) == 0:
return True
lower_case_input_str = input_str.replace(" ", "").lower()
# character_freq_dict: Stores the frequency of every character in the input string
character_freq_dict: dict[str, int] = {}
for character in lower_case_input_str:
character_freq_dict[character] = character_freq_dict.get(character, 0) + 1
"""
Above line of code is equivalent to:
1) Getting the frequency of current character till previous index
>>> character_freq = character_freq_dict.get(character, 0)
2) Incrementing the frequency of current character by 1
>>> character_freq = character_freq + 1
3) Updating the frequency of current character
>>> character_freq_dict[character] = character_freq
"""
"""
OBSERVATIONS:
Even length palindrome
    -> Every character appears an even number of times.
    Odd length palindrome
    -> Every character appears an even number of times except for one character.
    LOGIC:
    Step 1: We'll count the number of characters that appear an odd number of times, i.e. odd_char
    Step 2: If we find more than 1 character that appears an odd number of times,
It is not possible to rearrange as a palindrome
"""
odd_char = 0
for character_count in character_freq_dict.values():
if character_count % 2:
odd_char += 1
if odd_char > 1:
return False
return True
def benchmark(input_str: str = "") -> None:
"""
Benchmark code for comparing above 2 functions
"""
print("\nFor string = ", input_str, ":")
print(
"> can_string_be_rearranged_as_palindrome_counter()",
"\tans =",
can_string_be_rearranged_as_palindrome_counter(input_str),
"\ttime =",
timeit(
"z.can_string_be_rearranged_as_palindrome_counter(z.check_str)",
setup="import __main__ as z",
),
"seconds",
)
print(
"> can_string_be_rearranged_as_palindrome()",
"\tans =",
can_string_be_rearranged_as_palindrome(input_str),
"\ttime =",
timeit(
"z.can_string_be_rearranged_as_palindrome(z.check_str)",
setup="import __main__ as z",
),
"seconds",
)
if __name__ == "__main__":
check_str = input(
"Enter string to determine if it can be rearranged as a palindrome or not: "
).strip()
benchmark(check_str)
status = can_string_be_rearranged_as_palindrome_counter(check_str)
print(f"{check_str} can {'' if status else 'not '}be rearranged as a palindrome")
| # Created by susmith98
from collections import Counter
from timeit import timeit
# Problem Description:
# Check if characters of the given string can be rearranged to form a palindrome.
# Counter is faster for long strings and non-Counter is faster for short strings.
def can_string_be_rearranged_as_palindrome_counter(
input_str: str = "",
) -> bool:
"""
A Palindrome is a String that reads the same forward as it does backwards.
Examples of Palindromes mom, dad, malayalam
>>> can_string_be_rearranged_as_palindrome_counter("Momo")
True
>>> can_string_be_rearranged_as_palindrome_counter("Mother")
False
>>> can_string_be_rearranged_as_palindrome_counter("Father")
False
>>> can_string_be_rearranged_as_palindrome_counter("A man a plan a canal Panama")
True
"""
return sum(c % 2 for c in Counter(input_str.replace(" ", "").lower()).values()) < 2
def can_string_be_rearranged_as_palindrome(input_str: str = "") -> bool:
"""
A Palindrome is a String that reads the same forward as it does backwards.
Examples of Palindromes mom, dad, malayalam
>>> can_string_be_rearranged_as_palindrome("Momo")
True
>>> can_string_be_rearranged_as_palindrome("Mother")
False
>>> can_string_be_rearranged_as_palindrome("Father")
False
>>> can_string_be_rearranged_as_palindrome_counter("A man a plan a canal Panama")
True
"""
if len(input_str) == 0:
return True
lower_case_input_str = input_str.replace(" ", "").lower()
# character_freq_dict: Stores the frequency of every character in the input string
character_freq_dict: dict[str, int] = {}
for character in lower_case_input_str:
character_freq_dict[character] = character_freq_dict.get(character, 0) + 1
"""
Above line of code is equivalent to:
1) Getting the frequency of current character till previous index
>>> character_freq = character_freq_dict.get(character, 0)
2) Incrementing the frequency of current character by 1
>>> character_freq = character_freq + 1
3) Updating the frequency of current character
>>> character_freq_dict[character] = character_freq
"""
"""
OBSERVATIONS:
Even length palindrome
    -> Every character appears an even number of times.
    Odd length palindrome
    -> Every character appears an even number of times except for one character.
    LOGIC:
    Step 1: We'll count the number of characters that appear an odd number of times, i.e. odd_char
    Step 2: If we find more than 1 character that appears an odd number of times,
It is not possible to rearrange as a palindrome
"""
odd_char = 0
for character_count in character_freq_dict.values():
if character_count % 2:
odd_char += 1
if odd_char > 1:
return False
return True
def benchmark(input_str: str = "") -> None:
"""
Benchmark code for comparing above 2 functions
"""
print("\nFor string = ", input_str, ":")
print(
"> can_string_be_rearranged_as_palindrome_counter()",
"\tans =",
can_string_be_rearranged_as_palindrome_counter(input_str),
"\ttime =",
timeit(
"z.can_string_be_rearranged_as_palindrome_counter(z.check_str)",
setup="import __main__ as z",
),
"seconds",
)
print(
"> can_string_be_rearranged_as_palindrome()",
"\tans =",
can_string_be_rearranged_as_palindrome(input_str),
"\ttime =",
timeit(
"z.can_string_be_rearranged_as_palindrome(z.check_str)",
setup="import __main__ as z",
),
"seconds",
)
if __name__ == "__main__":
check_str = input(
"Enter string to determine if it can be rearranged as a palindrome or not: "
).strip()
benchmark(check_str)
status = can_string_be_rearranged_as_palindrome_counter(check_str)
print(f"{check_str} can {'' if status else 'not '}be rearranged as a palindrome")
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| def remove_duplicates(sentence: str) -> str:
"""
Remove duplicates from sentence
>>> remove_duplicates("Python is great and Java is also great")
'Java Python also and great is'
>>> remove_duplicates("Python is great and Java is also great")
'Java Python also and great is'
"""
return " ".join(sorted(set(sentence.split())))
if __name__ == "__main__":
import doctest
doctest.testmod()
| def remove_duplicates(sentence: str) -> str:
"""
Remove duplicates from sentence
>>> remove_duplicates("Python is great and Java is also great")
'Java Python also and great is'
>>> remove_duplicates("Python is great and Java is also great")
'Java Python also and great is'
"""
return " ".join(sorted(set(sentence.split())))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| from __future__ import annotations
DIRECTIONS = [
[-1, 0], # left
[0, -1], # down
[1, 0], # right
[0, 1], # up
]
# function to search the path
def search(
grid: list[list[int]],
init: list[int],
goal: list[int],
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
closed[init[0]][init[1]] = 1
action = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the action grid
x = init[0]
y = init[1]
g = 0
f = g + heuristic[x][y] # cost from starting cell to destination cell
cell = [[f, g, x, y]]
found = False # flag that is set when search is complete
    resign = False  # flag set if we can't expand any further
while not found and not resign:
if len(cell) == 0:
raise ValueError("Algorithm is unable to find solution")
        else:  # to choose the least costly action so as to move closer to the goal
cell.sort()
cell.reverse()
next_cell = cell.pop()
x = next_cell[2]
y = next_cell[3]
g = next_cell[1]
if x == goal[0] and y == goal[1]:
found = True
else:
for i in range(len(DIRECTIONS)): # to try out different valid actions
x2 = x + DIRECTIONS[i][0]
y2 = y + DIRECTIONS[i][1]
                    if 0 <= x2 < len(grid) and 0 <= y2 < len(grid[0]):
if closed[x2][y2] == 0 and grid[x2][y2] == 0:
g2 = g + cost
f2 = g2 + heuristic[x2][y2]
cell.append([f2, g2, x2, y2])
closed[x2][y2] = 1
action[x2][y2] = i
invpath = []
x = goal[0]
y = goal[1]
invpath.append([x, y]) # we get the reverse path from here
while x != init[0] or y != init[1]:
x2 = x - DIRECTIONS[action[x][y]][0]
y2 = y - DIRECTIONS[action[x][y]][1]
x = x2
y = y2
invpath.append([x, y])
path = []
for i in range(len(invpath)):
path.append(invpath[len(invpath) - 1 - i])
return path, action
if __name__ == "__main__":
grid = [
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
]
init = [0, 0]
# all coordinates are given in format [y,x]
goal = [len(grid) - 1, len(grid[0]) - 1]
cost = 1
# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
for j in range(len(grid[0])):
heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
if grid[i][j] == 1:
# added extra penalty in the heuristic map
heuristic[i][j] = 99
path, action = search(grid, init, goal, cost, heuristic)
print("ACTION MAP")
for i in range(len(action)):
print(action[i])
for i in range(len(path)):
print(path[i])
| from __future__ import annotations
DIRECTIONS = [
[-1, 0], # left
[0, -1], # down
[1, 0], # right
[0, 1], # up
]
# function to search the path
def search(
grid: list[list[int]],
init: list[int],
goal: list[int],
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
closed[init[0]][init[1]] = 1
action = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the action grid
x = init[0]
y = init[1]
g = 0
f = g + heuristic[x][y] # cost from starting cell to destination cell
cell = [[f, g, x, y]]
found = False # flag that is set when search is complete
    resign = False  # flag set if we can't expand any further
while not found and not resign:
if len(cell) == 0:
raise ValueError("Algorithm is unable to find solution")
        else:  # to choose the least costly action so as to move closer to the goal
cell.sort()
cell.reverse()
next_cell = cell.pop()
x = next_cell[2]
y = next_cell[3]
g = next_cell[1]
if x == goal[0] and y == goal[1]:
found = True
else:
for i in range(len(DIRECTIONS)): # to try out different valid actions
x2 = x + DIRECTIONS[i][0]
y2 = y + DIRECTIONS[i][1]
                    if 0 <= x2 < len(grid) and 0 <= y2 < len(grid[0]):
if closed[x2][y2] == 0 and grid[x2][y2] == 0:
g2 = g + cost
f2 = g2 + heuristic[x2][y2]
cell.append([f2, g2, x2, y2])
closed[x2][y2] = 1
action[x2][y2] = i
invpath = []
x = goal[0]
y = goal[1]
invpath.append([x, y]) # we get the reverse path from here
while x != init[0] or y != init[1]:
x2 = x - DIRECTIONS[action[x][y]][0]
y2 = y - DIRECTIONS[action[x][y]][1]
x = x2
y = y2
invpath.append([x, y])
path = []
for i in range(len(invpath)):
path.append(invpath[len(invpath) - 1 - i])
return path, action
if __name__ == "__main__":
grid = [
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
]
init = [0, 0]
# all coordinates are given in format [y,x]
goal = [len(grid) - 1, len(grid[0]) - 1]
cost = 1
# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
for j in range(len(grid[0])):
heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
if grid[i][j] == 1:
# added extra penalty in the heuristic map
heuristic[i][j] = 99
path, action = search(grid, init, goal, cost, heuristic)
print("ACTION MAP")
for i in range(len(action)):
print(action[i])
for i in range(len(path)):
print(path[i])
| -1 |
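A hedged observation about the grid `search` function above: the ordering key is f = g + heuristic, so with an all-zero heuristic the very same function degenerates to a uniform-cost (Dijkstra-style) search. A minimal sketch, reusing the `grid`, `init`, `goal` and `cost` values from the `__main__` block above:

```python
# All-zero heuristic: ordering falls back to the accumulated path cost g alone.
zero_heuristic = [[0 for _ in range(len(grid[0]))] for _ in range(len(grid))]
dijkstra_like_path, _ = search(grid, init, goal, cost, zero_heuristic)
print(dijkstra_like_path)  # same start and goal, found without heuristic guidance
```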
TheAlgorithms/Python | 8,802 | Dijkstra algorithm with binary grid | This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| linushenkel | "2023-06-07T17:52:08Z" | "2023-06-22T11:49:09Z" | 07e68128883b84fb7e342c6bce88863a05fbbf62 | 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 | Dijkstra algorithm with binary grid. This script implements the Dijkstra algorithm on a binary grid. The input grid (matrix) consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. The diagonal movement can be allowed or disallowed.
This version of the Dijkstra algorithm can be useful in video game development or in special machine movement.
| # Information on binary shifts:
# https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types
# https://www.interviewcake.com/concept/java/bit-shift
def logical_left_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 positive integers.
'number' is the integer to be logically left shifted 'shift_amount' times.
i.e. (number << shift_amount)
Return the shifted binary representation.
>>> logical_left_shift(0, 1)
'0b00'
>>> logical_left_shift(1, 1)
'0b10'
>>> logical_left_shift(1, 5)
'0b100000'
>>> logical_left_shift(17, 2)
'0b1000100'
>>> logical_left_shift(1983, 4)
'0b111101111110000'
>>> logical_left_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))
binary_number += "0" * shift_amount
return binary_number
def logical_right_shift(number: int, shift_amount: int) -> str:
"""
    Take in 2 positive integers.
'number' is the integer to be logically right shifted 'shift_amount' times.
i.e. (number >>> shift_amount)
Return the shifted binary representation.
>>> logical_right_shift(0, 1)
'0b0'
>>> logical_right_shift(1, 1)
'0b0'
>>> logical_right_shift(1, 5)
'0b0'
>>> logical_right_shift(17, 2)
'0b100'
>>> logical_right_shift(1983, 4)
'0b1111011'
>>> logical_right_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))[2:]
if shift_amount >= len(binary_number):
return "0b0"
shifted_binary_number = binary_number[: len(binary_number) - shift_amount]
return "0b" + shifted_binary_number
def arithmetic_right_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 integers.
'number' is the integer to be arithmetically right shifted 'shift_amount' times.
i.e. (number >> shift_amount)
Return the shifted binary representation.
>>> arithmetic_right_shift(0, 1)
'0b00'
>>> arithmetic_right_shift(1, 1)
'0b00'
>>> arithmetic_right_shift(-1, 1)
'0b11'
>>> arithmetic_right_shift(17, 2)
'0b000100'
>>> arithmetic_right_shift(-17, 2)
'0b111011'
>>> arithmetic_right_shift(-1983, 4)
'0b111110000100'
"""
if number >= 0: # Get binary representation of positive number
binary_number = "0" + str(bin(number)).strip("-")[2:]
else: # Get binary (2's complement) representation of negative number
binary_number_length = len(bin(number)[3:]) # Find 2's complement of number
binary_number = bin(abs(number) - (1 << binary_number_length))[3:]
binary_number = (
"1" + "0" * (binary_number_length - len(binary_number)) + binary_number
)
if shift_amount >= len(binary_number):
return "0b" + binary_number[0] * len(binary_number)
return (
"0b"
+ binary_number[0] * shift_amount
+ binary_number[: len(binary_number) - shift_amount]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Information on binary shifts:
# https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types
# https://www.interviewcake.com/concept/java/bit-shift
def logical_left_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 positive integers.
'number' is the integer to be logically left shifted 'shift_amount' times.
i.e. (number << shift_amount)
Return the shifted binary representation.
>>> logical_left_shift(0, 1)
'0b00'
>>> logical_left_shift(1, 1)
'0b10'
>>> logical_left_shift(1, 5)
'0b100000'
>>> logical_left_shift(17, 2)
'0b1000100'
>>> logical_left_shift(1983, 4)
'0b111101111110000'
>>> logical_left_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))
binary_number += "0" * shift_amount
return binary_number
def logical_right_shift(number: int, shift_amount: int) -> str:
"""
    Take in 2 positive integers.
'number' is the integer to be logically right shifted 'shift_amount' times.
i.e. (number >>> shift_amount)
Return the shifted binary representation.
>>> logical_right_shift(0, 1)
'0b0'
>>> logical_right_shift(1, 1)
'0b0'
>>> logical_right_shift(1, 5)
'0b0'
>>> logical_right_shift(17, 2)
'0b100'
>>> logical_right_shift(1983, 4)
'0b1111011'
>>> logical_right_shift(1, -1)
Traceback (most recent call last):
...
ValueError: both inputs must be positive integers
"""
if number < 0 or shift_amount < 0:
raise ValueError("both inputs must be positive integers")
binary_number = str(bin(number))[2:]
if shift_amount >= len(binary_number):
return "0b0"
shifted_binary_number = binary_number[: len(binary_number) - shift_amount]
return "0b" + shifted_binary_number
def arithmetic_right_shift(number: int, shift_amount: int) -> str:
"""
Take in 2 integers.
'number' is the integer to be arithmetically right shifted 'shift_amount' times.
i.e. (number >> shift_amount)
Return the shifted binary representation.
>>> arithmetic_right_shift(0, 1)
'0b00'
>>> arithmetic_right_shift(1, 1)
'0b00'
>>> arithmetic_right_shift(-1, 1)
'0b11'
>>> arithmetic_right_shift(17, 2)
'0b000100'
>>> arithmetic_right_shift(-17, 2)
'0b111011'
>>> arithmetic_right_shift(-1983, 4)
'0b111110000100'
"""
if number >= 0: # Get binary representation of positive number
binary_number = "0" + str(bin(number)).strip("-")[2:]
else: # Get binary (2's complement) representation of negative number
binary_number_length = len(bin(number)[3:]) # Find 2's complement of number
binary_number = bin(abs(number) - (1 << binary_number_length))[3:]
binary_number = (
"1" + "0" * (binary_number_length - len(binary_number)) + binary_number
)
if shift_amount >= len(binary_number):
return "0b" + binary_number[0] * len(binary_number)
return (
"0b"
+ binary_number[0] * shift_amount
+ binary_number[: len(binary_number) - shift_amount]
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
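A hedged cross-check for `arithmetic_right_shift` above: Python's built-in `>>` already performs an arithmetic (sign-extending) right shift on ints, so the returned two's-complement bit string can be decoded and compared. The decoding logic here is illustrative, not part of the original file:

```python
value, shift = -17, 2
bits = arithmetic_right_shift(value, shift)[2:]  # '111011'
# Interpret the fixed-width bit string as a signed two's-complement value.
decoded = int(bits, 2) - (1 << len(bits)) if bits[0] == "1" else int(bits, 2)
assert decoded == value >> shift  # both are -5
```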
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|
## Arithmetic Analysis
* [Bisection](arithmetic_analysis/bisection.py)
* [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
* [Newton Raphson](arithmetic_analysis/newton_raphson.py)
* [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
* [Secant Method](arithmetic_analysis/secant_method.py)
## Audio Filters
* [Butterworth Filter](audio_filters/butterworth_filter.py)
* [Iir Filter](audio_filters/iir_filter.py)
* [Show Response](audio_filters/show_response.py)
## Backtracking
* [All Combinations](backtracking/all_combinations.py)
* [All Permutations](backtracking/all_permutations.py)
* [All Subsequences](backtracking/all_subsequences.py)
* [Coloring](backtracking/coloring.py)
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
* [Word Search](backtracking/word_search.py)
## Bit Manipulation
* [Binary And Operator](bit_manipulation/binary_and_operator.py)
* [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
* [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
* [Binary Or Operator](bit_manipulation/binary_or_operator.py)
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
* [Highest Set Bit](bit_manipulation/highest_set_bit.py)
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
## Blockchain
* [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
* [Diophantine Equation](blockchain/diophantine_equation.py)
* [Modular Division](blockchain/modular_division.py)
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
* [Norgate](boolean_algebra/norgate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
* [Xnor Gate](boolean_algebra/xnor_gate.py)
* [Xor Gate](boolean_algebra/xor_gate.py)
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
* [Affine Cipher](ciphers/affine_cipher.py)
* [Atbash](ciphers/atbash.py)
* [Autokey](ciphers/autokey.py)
* [Baconian Cipher](ciphers/baconian_cipher.py)
* [Base16](ciphers/base16.py)
* [Base32](ciphers/base32.py)
* [Base64](ciphers/base64.py)
* [Base85](ciphers/base85.py)
* [Beaufort Cipher](ciphers/beaufort_cipher.py)
* [Bifid](ciphers/bifid.py)
* [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
* [Caesar Cipher](ciphers/caesar_cipher.py)
* [Cryptomath Module](ciphers/cryptomath_module.py)
* [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
* [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
* [Diffie](ciphers/diffie.py)
* [Diffie Hellman](ciphers/diffie_hellman.py)
* [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
* [Enigma Machine2](ciphers/enigma_machine2.py)
* [Hill Cipher](ciphers/hill_cipher.py)
* [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
* [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
* [Morse Code](ciphers/morse_code.py)
* [Onepad Cipher](ciphers/onepad_cipher.py)
* [Playfair Cipher](ciphers/playfair_cipher.py)
* [Polybius](ciphers/polybius.py)
* [Porta Cipher](ciphers/porta_cipher.py)
* [Rabin Miller](ciphers/rabin_miller.py)
* [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
* [Rot13](ciphers/rot13.py)
* [Rsa Cipher](ciphers/rsa_cipher.py)
* [Rsa Factorization](ciphers/rsa_factorization.py)
* [Rsa Key Generator](ciphers/rsa_key_generator.py)
* [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
* [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
* [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
* [Trafid Cipher](ciphers/trafid_cipher.py)
* [Transposition Cipher](ciphers/transposition_cipher.py)
* [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
* [Vigenere Cipher](ciphers/vigenere_cipher.py)
* [Xor Cipher](ciphers/xor_cipher.py)
## Compression
* [Burrows Wheeler](compression/burrows_wheeler.py)
* [Huffman](compression/huffman.py)
* [Lempel Ziv](compression/lempel_ziv.py)
* [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
* [Lz77](compression/lz77.py)
* [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
* [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
* [Pooling Functions](computer_vision/pooling_functions.py)
## Conversions
* [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
* [Pressure Conversions](conversions/pressure_conversions.py)
* [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
* [Roman Numerals](conversions/roman_numerals.py)
* [Speed Conversions](conversions/speed_conversions.py)
* [Temperature Conversions](conversions/temperature_conversions.py)
* [Volume Conversions](conversions/volume_conversions.py)
* [Weight Conversion](conversions/weight_conversion.py)
## Data Structures
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
* [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
* [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
* [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
* [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
* [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
* [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
* [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
* [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
* [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
* [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
* [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Bloom Filter](data_structures/hashing/bloom_filter.py)
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
* [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Fuzzy Logic
* [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph List](graphs/graph_list.py)
* [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Knn Sklearn](machine_learning/knn_sklearn.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polymonial Regression](machine_learning/polymonial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
* [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Input Data](neural_network/input_data.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
* [Archimedes Principle](physics/archimedes_principle.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 079
* [Sol1](project_euler/problem_079/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 094
* [Sol1](project_euler/problem_094/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 187
* [Sol1](project_euler/problem_187/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
* Problem 800
* [Sol1](project_euler/problem_800/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
* [Deutsch Jozsa](quantum/deutsch_jozsa.py)
* [Half Adder](quantum/half_adder.py)
* [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
* [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py)
* [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Convert Number To Words](web_programming/convert_number_to_words.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Fuzzy Logic
* [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph List](graphs/graph_list.py)
* [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Knn Sklearn](machine_learning/knn_sklearn.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polymonial Regression](machine_learning/polymonial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
* [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Input Data](neural_network/input_data.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
* [Archimedes Principle](physics/archimedes_principle.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 079
* [Sol1](project_euler/problem_079/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 094
* [Sol1](project_euler/problem_094/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 187
* [Sol1](project_euler/problem_187/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
* Problem 800
* [Sol1](project_euler/problem_800/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
* [Deutsch Jozsa](quantum/deutsch_jozsa.py)
* [Half Adder](quantum/half_adder.py)
* [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
* [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Binary Insertion Sort](sorts/binary_insertion_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py)
* [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [String Switch Case](strings/string_switch_case.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Convert Number To Words](web_programming/convert_number_to_words.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
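In concrete terms, the ruff failures addressed here are redundant `str()` calls inside f-strings; the before/after snippets below replace them with the `!s` conversion flag (most likely ruff's explicit-conversion-flag check, RUF010 — the rule code is an assumption). A minimal sketch of the rewrite, using hypothetical values:

```python
# Hypothetical values; the real change touches the prefix-conversion f-strings below.
value = 10.0
before = f"{str(value)} kilo"  # redundant str() call inside an f-string (flagged by ruff)
after = f"{value!s} kilo"      # explicit conversion flag, same result
assert before == after == "10.0 kilo"
```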
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Convert a number to use the correct SI or Binary unit prefix.
Inspired by prefix_conversion.py file in this repository by lance-pyles
URL: https://en.wikipedia.org/wiki/Metric_prefix#List_of_SI_prefixes
URL: https://en.wikipedia.org/wiki/Binary_prefix
"""
from __future__ import annotations
from enum import Enum, unique
from typing import TypeVar
# Create a generic variable that can be 'Enum', or any subclass.
T = TypeVar("T", bound="Enum")
@unique
class BinaryUnit(Enum):
yotta = 80
zetta = 70
exa = 60
peta = 50
tera = 40
giga = 30
mega = 20
kilo = 10
@unique
class SIUnit(Enum):
yotta = 24
zetta = 21
exa = 18
peta = 15
tera = 12
giga = 9
mega = 6
kilo = 3
hecto = 2
deca = 1
deci = -1
centi = -2
milli = -3
micro = -6
nano = -9
pico = -12
femto = -15
atto = -18
zepto = -21
yocto = -24
@classmethod
def get_positive(cls: type[T]) -> dict:
"""
Returns a dictionary with only the elements of this enum
that have a positive value
>>> from itertools import islice
>>> positive = SIUnit.get_positive()
>>> inc = iter(positive.items())
>>> dict(islice(inc, len(positive) // 2))
{'yotta': 24, 'zetta': 21, 'exa': 18, 'peta': 15, 'tera': 12}
>>> dict(inc)
{'giga': 9, 'mega': 6, 'kilo': 3, 'hecto': 2, 'deca': 1}
"""
return {unit.name: unit.value for unit in cls if unit.value > 0}
@classmethod
def get_negative(cls: type[T]) -> dict:
"""
Returns a dictionary with only the elements of this enum
that have a negative value
@example
>>> from itertools import islice
>>> negative = SIUnit.get_negative()
>>> inc = iter(negative.items())
>>> dict(islice(inc, len(negative) // 2))
{'deci': -1, 'centi': -2, 'milli': -3, 'micro': -6, 'nano': -9}
>>> dict(inc)
{'pico': -12, 'femto': -15, 'atto': -18, 'zepto': -21, 'yocto': -24}
"""
return {unit.name: unit.value for unit in cls if unit.value < 0}
def add_si_prefix(value: float) -> str:
"""
Function that converts a number to its version with an SI prefix
@input value (an integer or float)
@example:
>>> add_si_prefix(10000)
'10.0 kilo'
"""
prefixes = SIUnit.get_positive() if value > 0 else SIUnit.get_negative()
for name_prefix, value_prefix in prefixes.items():
numerical_part = value / (10**value_prefix)
if numerical_part > 1:
return f"{str(numerical_part)} {name_prefix}"
return str(value)
def add_binary_prefix(value: float) -> str:
"""
Function that converts a number to its version with a Binary prefix
@input value (an integer or float)
@example:
>>> add_binary_prefix(65536)
'64.0 kilo'
"""
for prefix in BinaryUnit:
numerical_part = value / (2**prefix.value)
if numerical_part > 1:
return f"{str(numerical_part)} {prefix.name}"
return str(value)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Convert a number to use the correct SI or Binary unit prefix.
Inspired by prefix_conversion.py file in this repository by lance-pyles
URL: https://en.wikipedia.org/wiki/Metric_prefix#List_of_SI_prefixes
URL: https://en.wikipedia.org/wiki/Binary_prefix
"""
from __future__ import annotations
from enum import Enum, unique
from typing import TypeVar
# Create a generic variable that can be 'Enum', or any subclass.
T = TypeVar("T", bound="Enum")
@unique
class BinaryUnit(Enum):
yotta = 80
zetta = 70
exa = 60
peta = 50
tera = 40
giga = 30
mega = 20
kilo = 10
@unique
class SIUnit(Enum):
yotta = 24
zetta = 21
exa = 18
peta = 15
tera = 12
giga = 9
mega = 6
kilo = 3
hecto = 2
deca = 1
deci = -1
centi = -2
milli = -3
micro = -6
nano = -9
pico = -12
femto = -15
atto = -18
zepto = -21
yocto = -24
@classmethod
def get_positive(cls: type[T]) -> dict:
"""
Returns a dictionary with only the elements of this enum
that have a positive value
>>> from itertools import islice
>>> positive = SIUnit.get_positive()
>>> inc = iter(positive.items())
>>> dict(islice(inc, len(positive) // 2))
{'yotta': 24, 'zetta': 21, 'exa': 18, 'peta': 15, 'tera': 12}
>>> dict(inc)
{'giga': 9, 'mega': 6, 'kilo': 3, 'hecto': 2, 'deca': 1}
"""
return {unit.name: unit.value for unit in cls if unit.value > 0}
@classmethod
def get_negative(cls: type[T]) -> dict:
"""
Returns a dictionary with only the elements of this enum
that have a negative value
@example
>>> from itertools import islice
>>> negative = SIUnit.get_negative()
>>> inc = iter(negative.items())
>>> dict(islice(inc, len(negative) // 2))
{'deci': -1, 'centi': -2, 'milli': -3, 'micro': -6, 'nano': -9}
>>> dict(inc)
{'pico': -12, 'femto': -15, 'atto': -18, 'zepto': -21, 'yocto': -24}
"""
return {unit.name: unit.value for unit in cls if unit.value < 0}
def add_si_prefix(value: float) -> str:
"""
Function that converts a number to its version with an SI prefix
@input value (an integer or float)
@example:
>>> add_si_prefix(10000)
'10.0 kilo'
"""
prefixes = SIUnit.get_positive() if value > 0 else SIUnit.get_negative()
for name_prefix, value_prefix in prefixes.items():
numerical_part = value / (10**value_prefix)
if numerical_part > 1:
return f"{numerical_part!s} {name_prefix}"
return str(value)
def add_binary_prefix(value: float) -> str:
"""
Function that converts a number to its version with a Binary prefix
@input value (an integer or float)
@example:
>>> add_binary_prefix(65536)
'64.0 kilo'
"""
for prefix in BinaryUnit:
numerical_part = value / (2**prefix.value)
if numerical_part > 1:
return f"{numerical_part!s} {prefix.name}"
return str(value)
if __name__ == "__main__":
import doctest
doctest.testmod()
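A quick sanity check, assuming only standard Python semantics: the `!s` conversion flag used in the rewritten f-strings above is exactly equivalent to calling `str()` on the value, so the change is purely stylistic and behaviour-preserving:

```python
# Values mirror the doctests above; any numbers would do.
for sample in (10.0, 64.0, 0.5):
    assert f"{sample!s} kilo" == f"{str(sample)} kilo"
```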
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The RGB color model is an additive color model in which red, green, and blue light
are added together in various ways to reproduce a broad array of colors. The name
of the model comes from the initials of the three additive primary colors, red,
green, and blue. Meanwhile, the HSV representation models how colors appear under
light. In it, colors are represented using three components: hue, saturation and
(brightness-)value. This file provides functions for converting colors from one
representation to the other.
(description adapted from https://en.wikipedia.org/wiki/RGB_color_model and
https://en.wikipedia.org/wiki/HSL_and_HSV).
"""
def hsv_to_rgb(hue: float, saturation: float, value: float) -> list[int]:
"""
Conversion from the HSV-representation to the RGB-representation.
Expected RGB-values taken from
https://www.rapidtables.com/convert/color/hsv-to-rgb.html
>>> hsv_to_rgb(0, 0, 0)
[0, 0, 0]
>>> hsv_to_rgb(0, 0, 1)
[255, 255, 255]
>>> hsv_to_rgb(0, 1, 1)
[255, 0, 0]
>>> hsv_to_rgb(60, 1, 1)
[255, 255, 0]
>>> hsv_to_rgb(120, 1, 1)
[0, 255, 0]
>>> hsv_to_rgb(240, 1, 1)
[0, 0, 255]
>>> hsv_to_rgb(300, 1, 1)
[255, 0, 255]
>>> hsv_to_rgb(180, 0.5, 0.5)
[64, 128, 128]
>>> hsv_to_rgb(234, 0.14, 0.88)
[193, 196, 224]
>>> hsv_to_rgb(330, 0.75, 0.5)
[128, 32, 80]
"""
if hue < 0 or hue > 360:
raise Exception("hue should be between 0 and 360")
if saturation < 0 or saturation > 1:
raise Exception("saturation should be between 0 and 1")
if value < 0 or value > 1:
raise Exception("value should be between 0 and 1")
chroma = value * saturation
hue_section = hue / 60
second_largest_component = chroma * (1 - abs(hue_section % 2 - 1))
match_value = value - chroma
if hue_section >= 0 and hue_section <= 1:
red = round(255 * (chroma + match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (match_value))
elif hue_section > 1 and hue_section <= 2:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (match_value))
elif hue_section > 2 and hue_section <= 3:
red = round(255 * (match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (second_largest_component + match_value))
elif hue_section > 3 and hue_section <= 4:
red = round(255 * (match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (chroma + match_value))
elif hue_section > 4 and hue_section <= 5:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (match_value))
blue = round(255 * (chroma + match_value))
else:
red = round(255 * (chroma + match_value))
green = round(255 * (match_value))
blue = round(255 * (second_largest_component + match_value))
return [red, green, blue]
def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
"""
Conversion from the RGB-representation to the HSV-representation.
The tested values are the reverse values from the hsv_to_rgb-doctests.
Function "approximately_equal_hsv" is needed because of small deviations due to
rounding for the RGB-values.
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 0), [0, 0, 0])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 255), [0, 0, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 0), [0, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 0), [60, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 255, 0), [120, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 255), [240, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 255), [300, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(64, 128, 128), [180, 0.5, 0.5])
True
>>> approximately_equal_hsv(rgb_to_hsv(193, 196, 224), [234, 0.14, 0.88])
True
>>> approximately_equal_hsv(rgb_to_hsv(128, 32, 80), [330, 0.75, 0.5])
True
"""
if red < 0 or red > 255:
raise Exception("red should be between 0 and 255")
if green < 0 or green > 255:
raise Exception("green should be between 0 and 255")
if blue < 0 or blue > 255:
raise Exception("blue should be between 0 and 255")
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
value = max(max(float_red, float_green), float_blue)
chroma = value - min(min(float_red, float_green), float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:
hue = 0.0
elif value == float_red:
hue = 60 * (0 + (float_green - float_blue) / chroma)
elif value == float_green:
hue = 60 * (2 + (float_blue - float_red) / chroma)
else:
hue = 60 * (4 + (float_red - float_green) / chroma)
hue = (hue + 360) % 360
return [hue, saturation, value]
def approximately_equal_hsv(hsv_1: list[float], hsv_2: list[float]) -> bool:
"""
Utility-function to check that two hsv-colors are approximately equal
>>> approximately_equal_hsv([0, 0, 0], [0, 0, 0])
True
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.500001, 0.30001])
True
>>> approximately_equal_hsv([0, 0, 0], [1, 0, 0])
False
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.6, 0.30001])
False
"""
check_hue = abs(hsv_1[0] - hsv_2[0]) < 0.2
check_saturation = abs(hsv_1[1] - hsv_2[1]) < 0.002
check_value = abs(hsv_1[2] - hsv_2[2]) < 0.002
return check_hue and check_saturation and check_value
| """
The RGB color model is an additive color model in which red, green, and blue light
are added together in various ways to reproduce a broad array of colors. The name
of the model comes from the initials of the three additive primary colors, red,
green, and blue. Meanwhile, the HSV representation models how colors appear under
light. In it, colors are represented using three components: hue, saturation and
(brightness-)value. This file provides functions for converting colors from one
representation to the other.
(description adapted from https://en.wikipedia.org/wiki/RGB_color_model and
https://en.wikipedia.org/wiki/HSL_and_HSV).
"""
def hsv_to_rgb(hue: float, saturation: float, value: float) -> list[int]:
"""
Conversion from the HSV-representation to the RGB-representation.
Expected RGB-values taken from
https://www.rapidtables.com/convert/color/hsv-to-rgb.html
>>> hsv_to_rgb(0, 0, 0)
[0, 0, 0]
>>> hsv_to_rgb(0, 0, 1)
[255, 255, 255]
>>> hsv_to_rgb(0, 1, 1)
[255, 0, 0]
>>> hsv_to_rgb(60, 1, 1)
[255, 255, 0]
>>> hsv_to_rgb(120, 1, 1)
[0, 255, 0]
>>> hsv_to_rgb(240, 1, 1)
[0, 0, 255]
>>> hsv_to_rgb(300, 1, 1)
[255, 0, 255]
>>> hsv_to_rgb(180, 0.5, 0.5)
[64, 128, 128]
>>> hsv_to_rgb(234, 0.14, 0.88)
[193, 196, 224]
>>> hsv_to_rgb(330, 0.75, 0.5)
[128, 32, 80]
"""
if hue < 0 or hue > 360:
raise Exception("hue should be between 0 and 360")
if saturation < 0 or saturation > 1:
raise Exception("saturation should be between 0 and 1")
if value < 0 or value > 1:
raise Exception("value should be between 0 and 1")
chroma = value * saturation
hue_section = hue / 60
second_largest_component = chroma * (1 - abs(hue_section % 2 - 1))
match_value = value - chroma
if hue_section >= 0 and hue_section <= 1:
red = round(255 * (chroma + match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (match_value))
elif hue_section > 1 and hue_section <= 2:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (match_value))
elif hue_section > 2 and hue_section <= 3:
red = round(255 * (match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (second_largest_component + match_value))
elif hue_section > 3 and hue_section <= 4:
red = round(255 * (match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (chroma + match_value))
elif hue_section > 4 and hue_section <= 5:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (match_value))
blue = round(255 * (chroma + match_value))
else:
red = round(255 * (chroma + match_value))
green = round(255 * (match_value))
blue = round(255 * (second_largest_component + match_value))
return [red, green, blue]
def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
"""
Conversion from the RGB-representation to the HSV-representation.
The tested values are the reverse values from the hsv_to_rgb-doctests.
Function "approximately_equal_hsv" is needed because of small deviations due to
rounding for the RGB-values.
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 0), [0, 0, 0])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 255), [0, 0, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 0), [0, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 0), [60, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 255, 0), [120, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 255), [240, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 255), [300, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(64, 128, 128), [180, 0.5, 0.5])
True
>>> approximately_equal_hsv(rgb_to_hsv(193, 196, 224), [234, 0.14, 0.88])
True
>>> approximately_equal_hsv(rgb_to_hsv(128, 32, 80), [330, 0.75, 0.5])
True
"""
if red < 0 or red > 255:
raise Exception("red should be between 0 and 255")
if green < 0 or green > 255:
raise Exception("green should be between 0 and 255")
if blue < 0 or blue > 255:
raise Exception("blue should be between 0 and 255")
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
value = max(float_red, float_green, float_blue)
chroma = value - min(float_red, float_green, float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:
hue = 0.0
elif value == float_red:
hue = 60 * (0 + (float_green - float_blue) / chroma)
elif value == float_green:
hue = 60 * (2 + (float_blue - float_red) / chroma)
else:
hue = 60 * (4 + (float_red - float_green) / chroma)
hue = (hue + 360) % 360
return [hue, saturation, value]
def approximately_equal_hsv(hsv_1: list[float], hsv_2: list[float]) -> bool:
"""
Utility-function to check that two hsv-colors are approximately equal
>>> approximately_equal_hsv([0, 0, 0], [0, 0, 0])
True
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.500001, 0.30001])
True
>>> approximately_equal_hsv([0, 0, 0], [1, 0, 0])
False
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.6, 0.30001])
False
"""
check_hue = abs(hsv_1[0] - hsv_2[0]) < 0.2
check_saturation = abs(hsv_1[1] - hsv_2[1]) < 0.002
check_value = abs(hsv_1[2] - hsv_2[2]) < 0.002
return check_hue and check_saturation and check_value
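def _colorsys_cross_check(red: int, green: int, blue: int) -> bool:
    """
    Rough cross-check sketch (an illustrative assumption, not part of the
    conversions above): the standard-library colorsys module uses the same HSV
    model but keeps hue in [0, 1) instead of degrees.
    >>> _colorsys_cross_check(64, 128, 128)
    True
    """
    import colorsys

    hue, saturation, value = rgb_to_hsv(red, green, blue)
    ref_hue, ref_sat, ref_val = colorsys.rgb_to_hsv(red / 255, green / 255, blue / 255)
    return approximately_equal_hsv(
        [hue, saturation, value], [ref_hue * 360, ref_sat, ref_val]
    )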
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Pytest tests for Digital Image Processing
"""
import numpy as np
from cv2 import COLOR_BGR2GRAY, cvtColor, imread
from numpy import array, uint8
from PIL import Image
from digital_image_processing import change_contrast as cc
from digital_image_processing import convert_to_negative as cn
from digital_image_processing import sepia as sp
from digital_image_processing.dithering import burkes as bs
from digital_image_processing.edge_detection import canny
from digital_image_processing.filters import convolve as conv
from digital_image_processing.filters import gaussian_filter as gg
from digital_image_processing.filters import local_binary_pattern as lbp
from digital_image_processing.filters import median_filter as med
from digital_image_processing.filters import sobel_filter as sob
from digital_image_processing.resize import resize as rs
img = imread(r"digital_image_processing/image_data/lena_small.jpg")
gray = cvtColor(img, COLOR_BGR2GRAY)
# Test: convert_to_negative()
def test_convert_to_negative():
negative_img = cn.convert_to_negative(img)
# assert negative_img array for at least one True
assert negative_img.any()
# Test: change_contrast()
def test_change_contrast():
with Image.open("digital_image_processing/image_data/lena_small.jpg") as img:
# Work around assertion for response
assert str(cc.change_contrast(img, 110)).startswith(
"<PIL.Image.Image image mode=RGB size=100x100 at"
)
# canny.gen_gaussian_kernel()
def test_gen_gaussian_kernel():
resp = canny.gen_gaussian_kernel(9, sigma=1.4)
# Assert ambiguous array
assert resp.all()
# canny.py
def test_canny():
canny_img = imread("digital_image_processing/image_data/lena_small.jpg", 0)
# assert ambiguous array for all == True
assert canny_img.all()
canny_array = canny.canny(canny_img)
# assert canny array for at least one True
assert canny_array.any()
# filters/gaussian_filter.py
def test_gen_gaussian_kernel_filter():
assert gg.gaussian_filter(gray, 5, sigma=0.9).all()
def test_convolve_filter():
# laplace diagonals
laplace = array([[0.25, 0.5, 0.25], [0.5, -3, 0.5], [0.25, 0.5, 0.25]])
res = conv.img_convolve(gray, laplace).astype(uint8)
assert res.any()
def test_median_filter():
assert med.median_filter(gray, 3).any()
def test_sobel_filter():
grad, theta = sob.sobel_filter(gray)
assert grad.any() and theta.any()
def test_sepia():
sepia = sp.make_sepia(img, 20)
assert sepia.all()
def test_burkes(file_path: str = "digital_image_processing/image_data/lena_small.jpg"):
burkes = bs.Burkes(imread(file_path, 1), 120)
burkes.process()
assert burkes.output_img.any()
def test_nearest_neighbour(
file_path: str = "digital_image_processing/image_data/lena_small.jpg",
):
nn = rs.NearestNeighbour(imread(file_path, 1), 400, 200)
nn.process()
assert nn.output.any()
def test_local_binary_pattern():
file_path: str = "digital_image_processing/image_data/lena.jpg"
# Reading the image and converting it to grayscale.
image = imread(file_path, 0)
# Test for get_neighbors_pixel function() return not None
x_coordinate = 0
y_coordinate = 0
center = image[x_coordinate][y_coordinate]
neighbors_pixels = lbp.get_neighbors_pixel(
image, x_coordinate, y_coordinate, center
)
assert neighbors_pixels is not None
# Test for local_binary_pattern function()
# Create a numpy array as the same height and width of read image
lbp_image = np.zeros((image.shape[0], image.shape[1]))
# Iterating through the image and calculating the local binary pattern value
# for each pixel.
for i in range(0, image.shape[0]):
for j in range(0, image.shape[1]):
lbp_image[i][j] = lbp.local_binary_value(image, i, j)
assert lbp_image.any()
| """
Pytest tests for Digital Image Processing
"""
import numpy as np
from cv2 import COLOR_BGR2GRAY, cvtColor, imread
from numpy import array, uint8
from PIL import Image
from digital_image_processing import change_contrast as cc
from digital_image_processing import convert_to_negative as cn
from digital_image_processing import sepia as sp
from digital_image_processing.dithering import burkes as bs
from digital_image_processing.edge_detection import canny
from digital_image_processing.filters import convolve as conv
from digital_image_processing.filters import gaussian_filter as gg
from digital_image_processing.filters import local_binary_pattern as lbp
from digital_image_processing.filters import median_filter as med
from digital_image_processing.filters import sobel_filter as sob
from digital_image_processing.resize import resize as rs
img = imread(r"digital_image_processing/image_data/lena_small.jpg")
gray = cvtColor(img, COLOR_BGR2GRAY)
# Test: convert_to_negative()
def test_convert_to_negative():
negative_img = cn.convert_to_negative(img)
# assert negative_img array for at least one True
assert negative_img.any()
# Test: change_contrast()
def test_change_contrast():
with Image.open("digital_image_processing/image_data/lena_small.jpg") as img:
# Work around assertion for response
assert str(cc.change_contrast(img, 110)).startswith(
"<PIL.Image.Image image mode=RGB size=100x100 at"
)
# canny.gen_gaussian_kernel()
def test_gen_gaussian_kernel():
resp = canny.gen_gaussian_kernel(9, sigma=1.4)
# Assert ambiguous array
assert resp.all()
# canny.py
def test_canny():
canny_img = imread("digital_image_processing/image_data/lena_small.jpg", 0)
# assert ambiguous array for all == True
assert canny_img.all()
canny_array = canny.canny(canny_img)
# assert canny array for at least one True
assert canny_array.any()
# filters/gaussian_filter.py
def test_gen_gaussian_kernel_filter():
assert gg.gaussian_filter(gray, 5, sigma=0.9).all()
def test_convolve_filter():
# laplace diagonals
laplace = array([[0.25, 0.5, 0.25], [0.5, -3, 0.5], [0.25, 0.5, 0.25]])
res = conv.img_convolve(gray, laplace).astype(uint8)
assert res.any()
def test_median_filter():
assert med.median_filter(gray, 3).any()
def test_sobel_filter():
grad, theta = sob.sobel_filter(gray)
assert grad.any() and theta.any()
def test_sepia():
sepia = sp.make_sepia(img, 20)
assert sepia.all()
def test_burkes(file_path: str = "digital_image_processing/image_data/lena_small.jpg"):
burkes = bs.Burkes(imread(file_path, 1), 120)
burkes.process()
assert burkes.output_img.any()
def test_nearest_neighbour(
file_path: str = "digital_image_processing/image_data/lena_small.jpg",
):
nn = rs.NearestNeighbour(imread(file_path, 1), 400, 200)
nn.process()
assert nn.output.any()
def test_local_binary_pattern():
file_path = "digital_image_processing/image_data/lena.jpg"
# Reading the image and converting it to grayscale.
image = imread(file_path, 0)
# Test for get_neighbors_pixel function() return not None
x_coordinate = 0
y_coordinate = 0
center = image[x_coordinate][y_coordinate]
neighbors_pixels = lbp.get_neighbors_pixel(
image, x_coordinate, y_coordinate, center
)
assert neighbors_pixels is not None
# Test for local_binary_pattern function()
# Create a numpy array as the same height and width of read image
lbp_image = np.zeros((image.shape[0], image.shape[1]))
# Iterating through the image and calculating the local binary pattern value
# for each pixel.
for i in range(0, image.shape[0]):
for j in range(0, image.shape[1]):
lbp_image[i][j] = lbp.local_binary_value(image, i, j)
assert lbp_image.any()
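def _lbp_value_sketch() -> None:
    # Illustrative sketch only (an assumption, not the repository's lbp module, whose
    # neighbour order and threshold may differ): the 8 neighbours of the centre pixel
    # are thresholded against it and read off as an 8-bit number.
    patch = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
    center = patch[1][1]
    neighbours = [
        patch[0][0], patch[0][1], patch[0][2], patch[1][2],
        patch[2][2], patch[2][1], patch[2][0], patch[1][0],
    ]
    value = sum(int(pixel >= center) << bit for bit, pixel in enumerate(neighbours))
    assert 0 <= value <= 255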
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of a Dynamic Programming solution to the Fibonacci
sequence problem.
"""
class Fibonacci:
def __init__(self) -> None:
self.sequence = [0, 1]
def get(self, index: int) -> list:
"""
Get the first `index` Fibonacci numbers. If they have not been computed yet,
calculate all missing numbers up to position `index`.
>>> Fibonacci().get(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
>>> Fibonacci().get(5)
[0, 1, 1, 2, 3]
"""
if (difference := index - (len(self.sequence) - 2)) >= 1:
for _ in range(difference):
self.sequence.append(self.sequence[-1] + self.sequence[-2])
return self.sequence[:index]
def main():
print(
"Fibonacci Series Using Dynamic Programming\n",
"Enter the index of the Fibonacci number you want to calculate ",
"in the prompt below. (To exit enter exit or Ctrl-C)\n",
sep="",
)
fibonacci = Fibonacci()
while True:
prompt: str = input(">> ")
if prompt in {"exit", "quit"}:
break
try:
index: int = int(prompt)
except ValueError:
print("Enter a number or 'exit'")
continue
print(fibonacci.get(index))
if __name__ == "__main__":
main()
| """
This is a pure Python implementation of a Dynamic Programming solution to the Fibonacci
sequence problem.
"""
class Fibonacci:
def __init__(self) -> None:
self.sequence = [0, 1]
def get(self, index: int) -> list:
"""
Get the first `index` Fibonacci numbers. If they have not been computed yet,
calculate all missing numbers up to position `index`.
>>> Fibonacci().get(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
>>> Fibonacci().get(5)
[0, 1, 1, 2, 3]
"""
if (difference := index - (len(self.sequence) - 2)) >= 1:
for _ in range(difference):
self.sequence.append(self.sequence[-1] + self.sequence[-2])
return self.sequence[:index]
def main() -> None:
print(
"Fibonacci Series Using Dynamic Programming\n",
"Enter the index of the Fibonacci number you want to calculate ",
"in the prompt below. (To exit enter exit or Ctrl-C)\n",
sep="",
)
fibonacci = Fibonacci()
while True:
prompt: str = input(">> ")
if prompt in {"exit", "quit"}:
break
try:
index: int = int(prompt)
except ValueError:
print("Enter a number or 'exit'")
continue
print(fibonacci.get(index))
if __name__ == "__main__":
main()
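# Quick sanity sketch (an illustrative assumption, relying only on the Fibonacci
# class above): because get() appends any missing terms to self.sequence, repeated
# calls on the same instance reuse the already-computed prefix of the sequence.
def _memoisation_demo() -> None:
    fibonacci = Fibonacci()
    assert fibonacci.get(5) == [0, 1, 1, 2, 3]
    assert fibonacci.get(10) == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]  # extends the cache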
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from collections.abc import Iterable
from typing import Union
import numpy as np
Vector = Union[Iterable[float], Iterable[int], np.ndarray]
VectorOut = Union[np.float64, int, float]
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2))
def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors without numpy.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2)
if __name__ == "__main__":
def benchmark() -> None:
"""
Benchmarks
"""
from timeit import timeit
print("Without Numpy")
print(
timeit(
"euclidean_distance_no_np([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
print("With Numpy")
print(
timeit(
"euclidean_distance([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
benchmark()
| from __future__ import annotations
import typing
from collections.abc import Iterable
import numpy as np
Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] # noqa: UP007
VectorOut = typing.Union[np.float64, int, float] # noqa: UP007
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2))
def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors without numpy.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2)
if __name__ == "__main__":
def benchmark() -> None:
"""
Benchmarks
"""
from timeit import timeit
print("Without Numpy")
print(
timeit(
"euclidean_distance_no_np([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
print("With Numpy")
print(
timeit(
"euclidean_distance([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
benchmark()
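# Cross-check sketch (an illustrative assumption, not part of the benchmark above):
# the same result should come out of numpy's norm of the difference vector.
def _norm_cross_check() -> bool:
    point_a, point_b = np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8])
    expected = np.linalg.norm(point_a - point_b)  # 8.0
    return bool(np.isclose(euclidean_distance(point_a, point_b), expected))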
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Horizontal Projectile Motion problem in physics.
This algorithm solves a specific problem in which
the motion starts from the ground as can be seen below:
(v = 0)
* *
* *
* *
* *
* *
* *
GROUND GROUND
For more info: https://en.wikipedia.org/wiki/Projectile_motion
"""
# Importing packages
from math import radians as angle_to_radians
from math import sin
# Acceleration Constant on Earth (unit m/s^2)
g = 9.80665
def check_args(init_velocity: float, angle: float) -> None:
"""
Check that the arguments are valid
"""
# Ensure valid instance
if not isinstance(init_velocity, (int, float)):
raise TypeError("Invalid velocity. Should be a positive number.")
if not isinstance(angle, (int, float)):
raise TypeError("Invalid angle. Range is 1-90 degrees.")
# Ensure valid angle
if angle > 90 or angle < 1:
raise ValueError("Invalid angle. Range is 1-90 degrees.")
# Ensure valid velocity
if init_velocity < 0:
raise ValueError("Invalid velocity. Should be a positive number.")
def horizontal_distance(init_velocity: float, angle: float) -> float:
"""
Returns the horizontal distance that the object covers
Formula:
v_0^2 * sin(2 * alpha)
---------------------
g
v_0 - initial velocity
alpha - angle
>>> horizontal_distance(30, 45)
91.77
>>> horizontal_distance(100, 78)
414.76
>>> horizontal_distance(-1, 20)
Traceback (most recent call last):
...
ValueError: Invalid velocity. Should be a positive number.
>>> horizontal_distance(30, -20)
Traceback (most recent call last):
...
ValueError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(2 * angle)
return round(init_velocity**2 * sin(radians) / g, 2)
def max_height(init_velocity: float, angle: float) -> float:
"""
Returns the maximum height that the object reaches
Formula:
v_0^2 * sin^2(alpha)
--------------------
2g
v_0 - initial velocity
alpha - angle
>>> max_height(30, 45)
22.94
>>> max_height(100, 78)
487.82
>>> max_height("a", 20)
Traceback (most recent call last):
...
TypeError: Invalid velocity. Should be a positive number.
>>> horizontal_distance(30, "b")
Traceback (most recent call last):
...
TypeError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(angle)
return round(init_velocity**2 * sin(radians) ** 2 / (2 * g), 2)
def total_time(init_velocity: float, angle: float) -> float:
"""
Returns the total time of the motion
Formula:
2 * v_0 * sin(alpha)
--------------------
g
v_0 - initial velocity
alpha - angle
>>> total_time(30, 45)
4.33
>>> total_time(100, 78)
19.95
>>> total_time(-10, 40)
Traceback (most recent call last):
...
ValueError: Invalid velocity. Should be a positive number.
>>> total_time(30, "b")
Traceback (most recent call last):
...
TypeError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(angle)
return round(2 * init_velocity * sin(radians) / g, 2)
def test_motion() -> None:
"""
>>> test_motion()
"""
v0, angle = 25, 20
assert horizontal_distance(v0, angle) == 40.97
assert max_height(v0, angle) == 3.73
assert total_time(v0, angle) == 1.74
if __name__ == "__main__":
from doctest import testmod
testmod()
# Get input from user
init_vel = float(input("Initial Velocity: ").strip())
# Get input from user
angle = float(input("angle: ").strip())
# Print results
print()
print("Results: ")
print(f"Horizontal Distance: {str(horizontal_distance(init_vel, angle))} [m]")
print(f"Maximum Height: {str(max_height(init_vel, angle))} [m]")
print(f"Total Time: {str(total_time(init_vel, angle))} [s]")
| """
Horizontal Projectile Motion problem in physics.
This algorithm solves a specific problem in which
the motion starts from the ground as can be seen below:
(v = 0)
* *
* *
* *
* *
* *
* *
GROUND GROUND
For more info: https://en.wikipedia.org/wiki/Projectile_motion
"""
# Importing packages
from math import radians as angle_to_radians
from math import sin
# Acceleration Constant on Earth (unit m/s^2)
g = 9.80665
def check_args(init_velocity: float, angle: float) -> None:
"""
Check that the arguments are valid
"""
# Ensure valid instance
if not isinstance(init_velocity, (int, float)):
raise TypeError("Invalid velocity. Should be a positive number.")
if not isinstance(angle, (int, float)):
raise TypeError("Invalid angle. Range is 1-90 degrees.")
# Ensure valid angle
if angle > 90 or angle < 1:
raise ValueError("Invalid angle. Range is 1-90 degrees.")
# Ensure valid velocity
if init_velocity < 0:
raise ValueError("Invalid velocity. Should be a positive number.")
def horizontal_distance(init_velocity: float, angle: float) -> float:
"""
Returns the horizontal distance that the object covers
Formula:
v_0^2 * sin(2 * alpha)
---------------------
g
v_0 - initial velocity
alpha - angle
>>> horizontal_distance(30, 45)
91.77
>>> horizontal_distance(100, 78)
414.76
>>> horizontal_distance(-1, 20)
Traceback (most recent call last):
...
ValueError: Invalid velocity. Should be a positive number.
>>> horizontal_distance(30, -20)
Traceback (most recent call last):
...
ValueError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(2 * angle)
return round(init_velocity**2 * sin(radians) / g, 2)
def max_height(init_velocity: float, angle: float) -> float:
"""
Returns the maximum height that the object reaches
Formula:
v_0^2 * sin^2(alpha)
--------------------
2g
v_0 - initial velocity
alpha - angle
>>> max_height(30, 45)
22.94
>>> max_height(100, 78)
487.82
>>> max_height("a", 20)
Traceback (most recent call last):
...
TypeError: Invalid velocity. Should be a positive number.
>>> horizontal_distance(30, "b")
Traceback (most recent call last):
...
TypeError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(angle)
return round(init_velocity**2 * sin(radians) ** 2 / (2 * g), 2)
def total_time(init_velocity: float, angle: float) -> float:
"""
Returns the total time of the motion
Formula:
2 * v_0 * sin(alpha)
--------------------
g
v_0 - initial velocity
alpha - angle
>>> total_time(30, 45)
4.33
>>> total_time(100, 78)
19.95
>>> total_time(-10, 40)
Traceback (most recent call last):
...
ValueError: Invalid velocity. Should be a positive number.
>>> total_time(30, "b")
Traceback (most recent call last):
...
TypeError: Invalid angle. Range is 1-90 degrees.
"""
check_args(init_velocity, angle)
radians = angle_to_radians(angle)
return round(2 * init_velocity * sin(radians) / g, 2)
def test_motion() -> None:
"""
>>> test_motion()
"""
v0, angle = 25, 20
assert horizontal_distance(v0, angle) == 40.97
assert max_height(v0, angle) == 3.73
assert total_time(v0, angle) == 1.74
if __name__ == "__main__":
from doctest import testmod
testmod()
# Get input from user
init_vel = float(input("Initial Velocity: ").strip())
# Get input from user
angle = float(input("angle: ").strip())
# Print results
print()
print("Results: ")
print(f"Horizontal Distance: {horizontal_distance(init_vel, angle)!s} [m]")
print(f"Maximum Height: {max_height(init_vel, angle)!s} [m]")
print(f"Total Time: {total_time(init_vel, angle)!s} [s]")
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of tree traversal algorithms
"""
from __future__ import annotations
import queue
class TreeNode:
def __init__(self, data):
self.data = data
self.right = None
self.left = None
def build_tree():
print("\n********Press N to stop entering at any point of time********\n")
check = input("Enter the value of the root node: ").strip().lower() or "n"
if check == "n":
return None
q: queue.Queue = queue.Queue()
tree_node = TreeNode(int(check))
q.put(tree_node)
while not q.empty():
node_found = q.get()
msg = f"Enter the left node of {node_found.data}: "
check = input(msg).strip().lower() or "n"
if check == "n":
return tree_node
left_node = TreeNode(int(check))
node_found.left = left_node
q.put(left_node)
msg = f"Enter the right node of {node_found.data}: "
check = input(msg).strip().lower() or "n"
if check == "n":
return tree_node
right_node = TreeNode(int(check))
node_found.right = right_node
q.put(right_node)
return None
def pre_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> pre_order(root)
1,2,4,5,3,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
print(node.data, end=",")
pre_order(node.left)
pre_order(node.right)
def in_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> in_order(root)
4,2,5,1,6,3,7,
"""
if not isinstance(node, TreeNode) or not node:
return
in_order(node.left)
print(node.data, end=",")
in_order(node.right)
def post_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> post_order(root)
4,5,2,6,7,3,1,
"""
if not isinstance(node, TreeNode) or not node:
return
post_order(node.left)
post_order(node.right)
print(node.data, end=",")
def level_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> level_order(root)
1,2,3,4,5,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
q: queue.Queue = queue.Queue()
q.put(node)
while not q.empty():
node_dequeued = q.get()
print(node_dequeued.data, end=",")
if node_dequeued.left:
q.put(node_dequeued.left)
if node_dequeued.right:
q.put(node_dequeued.right)
def level_order_actual(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> level_order_actual(root)
1,
2,3,
4,5,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
q: queue.Queue = queue.Queue()
q.put(node)
while not q.empty():
list_ = []
while not q.empty():
node_dequeued = q.get()
print(node_dequeued.data, end=",")
if node_dequeued.left:
list_.append(node_dequeued.left)
if node_dequeued.right:
list_.append(node_dequeued.right)
print()
for node in list_:
q.put(node)
# iteration version
def pre_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> pre_order_iter(root)
1,2,4,5,3,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
stack: list[TreeNode] = []
n = node
while n or stack:
while n: # start from root node, find its left child
print(n.data, end=",")
stack.append(n)
n = n.left
# end of while means current node doesn't have left child
n = stack.pop()
# start to traverse its right child
n = n.right
def in_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> in_order_iter(root)
4,2,5,1,6,3,7,
"""
if not isinstance(node, TreeNode) or not node:
return
stack: list[TreeNode] = []
n = node
while n or stack:
while n:
stack.append(n)
n = n.left
n = stack.pop()
print(n.data, end=",")
n = n.right
def post_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> post_order_iter(root)
4,5,2,6,7,3,1,
"""
if not isinstance(node, TreeNode) or not node:
return
stack1, stack2 = [], []
n = node
stack1.append(n)
while stack1: # to find the reversed order of post order, store it in stack2
n = stack1.pop()
if n.left:
stack1.append(n.left)
if n.right:
stack1.append(n.right)
stack2.append(n)
while stack2: # pop up from stack2 will be the post order
print(stack2.pop().data, end=",")
def prompt(s: str = "", width=50, char="*") -> str:
if not s:
return "\n" + width * char
left, extra = divmod(width - len(s) - 2, 2)
return f"{left * char} {s} {(left + extra) * char}"
if __name__ == "__main__":
import doctest
doctest.testmod()
print(prompt("Binary Tree Traversals"))
node = build_tree()
print(prompt("Pre Order Traversal"))
pre_order(node)
print(prompt() + "\n")
print(prompt("In Order Traversal"))
in_order(node)
print(prompt() + "\n")
print(prompt("Post Order Traversal"))
post_order(node)
print(prompt() + "\n")
print(prompt("Level Order Traversal"))
level_order(node)
print(prompt() + "\n")
print(prompt("Actual Level Order Traversal"))
level_order_actual(node)
print("*" * 50 + "\n")
print(prompt("Pre Order Traversal - Iteration Version"))
pre_order_iter(node)
print(prompt() + "\n")
print(prompt("In Order Traversal - Iteration Version"))
in_order_iter(node)
print(prompt() + "\n")
print(prompt("Post Order Traversal - Iteration Version"))
post_order_iter(node)
print(prompt())
| """
This is a pure Python implementation of tree traversal algorithms
"""
from __future__ import annotations
import queue
class TreeNode:
def __init__(self, data):
self.data = data
self.right = None
self.left = None
def build_tree() -> TreeNode:
print("\n********Press N to stop entering at any point of time********\n")
check = input("Enter the value of the root node: ").strip().lower()
q: queue.Queue = queue.Queue()
tree_node = TreeNode(int(check))
q.put(tree_node)
while not q.empty():
node_found = q.get()
msg = f"Enter the left node of {node_found.data}: "
check = input(msg).strip().lower() or "n"
if check == "n":
return tree_node
left_node = TreeNode(int(check))
node_found.left = left_node
q.put(left_node)
msg = f"Enter the right node of {node_found.data}: "
check = input(msg).strip().lower() or "n"
if check == "n":
return tree_node
right_node = TreeNode(int(check))
node_found.right = right_node
q.put(right_node)
raise
def pre_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> pre_order(root)
1,2,4,5,3,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
print(node.data, end=",")
pre_order(node.left)
pre_order(node.right)
def in_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> in_order(root)
4,2,5,1,6,3,7,
"""
if not isinstance(node, TreeNode) or not node:
return
in_order(node.left)
print(node.data, end=",")
in_order(node.right)
def post_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> post_order(root)
4,5,2,6,7,3,1,
"""
if not isinstance(node, TreeNode) or not node:
return
post_order(node.left)
post_order(node.right)
print(node.data, end=",")
def level_order(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> level_order(root)
1,2,3,4,5,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
q: queue.Queue = queue.Queue()
q.put(node)
while not q.empty():
node_dequeued = q.get()
print(node_dequeued.data, end=",")
if node_dequeued.left:
q.put(node_dequeued.left)
if node_dequeued.right:
q.put(node_dequeued.right)
def level_order_actual(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> level_order_actual(root)
1,
2,3,
4,5,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
q: queue.Queue = queue.Queue()
q.put(node)
while not q.empty():
list_ = []
while not q.empty():
node_dequeued = q.get()
print(node_dequeued.data, end=",")
if node_dequeued.left:
list_.append(node_dequeued.left)
if node_dequeued.right:
list_.append(node_dequeued.right)
print()
for node in list_:
q.put(node)
# iteration version
def pre_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> pre_order_iter(root)
1,2,4,5,3,6,7,
"""
if not isinstance(node, TreeNode) or not node:
return
stack: list[TreeNode] = []
n = node
while n or stack:
while n: # start from root node, find its left child
print(n.data, end=",")
stack.append(n)
n = n.left
# end of while means current node doesn't have left child
n = stack.pop()
# start to traverse its right child
n = n.right
def in_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> in_order_iter(root)
4,2,5,1,6,3,7,
"""
if not isinstance(node, TreeNode) or not node:
return
stack: list[TreeNode] = []
n = node
while n or stack:
while n:
stack.append(n)
n = n.left
n = stack.pop()
print(n.data, end=",")
n = n.right
def post_order_iter(node: TreeNode) -> None:
"""
>>> root = TreeNode(1)
>>> tree_node2 = TreeNode(2)
>>> tree_node3 = TreeNode(3)
>>> tree_node4 = TreeNode(4)
>>> tree_node5 = TreeNode(5)
>>> tree_node6 = TreeNode(6)
>>> tree_node7 = TreeNode(7)
>>> root.left, root.right = tree_node2, tree_node3
>>> tree_node2.left, tree_node2.right = tree_node4 , tree_node5
>>> tree_node3.left, tree_node3.right = tree_node6 , tree_node7
>>> post_order_iter(root)
4,5,2,6,7,3,1,
"""
if not isinstance(node, TreeNode) or not node:
return
stack1, stack2 = [], []
n = node
stack1.append(n)
while stack1: # to find the reversed order of post order, store it in stack2
n = stack1.pop()
if n.left:
stack1.append(n.left)
if n.right:
stack1.append(n.right)
stack2.append(n)
while stack2: # pop up from stack2 will be the post order
print(stack2.pop().data, end=",")
def prompt(s: str = "", width=50, char="*") -> str:
if not s:
return "\n" + width * char
left, extra = divmod(width - len(s) - 2, 2)
return f"{left * char} {s} {(left + extra) * char}"
if __name__ == "__main__":
import doctest
doctest.testmod()
print(prompt("Binary Tree Traversals"))
node: TreeNode = build_tree()
print(prompt("Pre Order Traversal"))
pre_order(node)
print(prompt() + "\n")
print(prompt("In Order Traversal"))
in_order(node)
print(prompt() + "\n")
print(prompt("Post Order Traversal"))
post_order(node)
print(prompt() + "\n")
print(prompt("Level Order Traversal"))
level_order(node)
print(prompt() + "\n")
print(prompt("Actual Level Order Traversal"))
level_order_actual(node)
print("*" * 50 + "\n")
print(prompt("Pre Order Traversal - Iteration Version"))
pre_order_iter(node)
print(prompt() + "\n")
print(prompt("In Order Traversal - Iteration Version"))
in_order_iter(node)
print(prompt() + "\n")
print(prompt("Post Order Traversal - Iteration Version"))
post_order_iter(node)
print(prompt())
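# --- Illustrative sketch (not part of the original module) -------------------
# For a single-threaded breadth-first traversal, collections.deque is a lighter
# alternative to queue.Queue; the logic of level_order() stays the same:
#
#     from collections import deque
#
#     def level_order_with_deque(root: TreeNode) -> None:
#         dq = deque([root])
#         while dq:
#             current = dq.popleft()
#             print(current.data, end=",")
#             if current.left:
#                 dq.append(current.left)
#             if current.right:
#                 dq.append(current.right)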
| 1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
import os
from collections.abc import Iterator
def good_file_paths(top_dir: str = ".") -> Iterator[str]:
for dir_path, dir_names, filenames in os.walk(top_dir):
dir_names[:] = [d for d in dir_names if d != "scripts" and d[0] not in "._"]
for filename in filenames:
if filename == "__init__.py":
continue
if os.path.splitext(filename)[1] in (".py", ".ipynb"):
yield os.path.join(dir_path, filename).lstrip("./")
def md_prefix(i: int) -> str:
return f"{i * ' '}*" if i else "\n##"
def print_path(old_path: str, new_path: str) -> str:
old_parts = old_path.split(os.sep)
for i, new_part in enumerate(new_path.split(os.sep)):
if (i + 1 > len(old_parts) or old_parts[i] != new_part) and new_part:
print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
return new_path
def print_directory_md(top_dir: str = ".") -> None:
old_path = ""
for filepath in sorted(good_file_paths(top_dir)):
filepath, filename = os.path.split(filepath)
if filepath != old_path:
old_path = print_path(old_path, filepath)
indent = (filepath.count(os.sep) + 1) if filepath else 0
url = "/".join((filepath, filename)).replace(" ", "%20")
filename = os.path.splitext(filename.replace("_", " ").title())[0]
print(f"{md_prefix(indent)} [{filename}]({url})")
if __name__ == "__main__":
print_directory_md(".")
| #!/usr/bin/env python3
import os
from collections.abc import Iterator
def good_file_paths(top_dir: str = ".") -> Iterator[str]:
for dir_path, dir_names, filenames in os.walk(top_dir):
dir_names[:] = [d for d in dir_names if d != "scripts" and d[0] not in "._"]
for filename in filenames:
if filename == "__init__.py":
continue
if os.path.splitext(filename)[1] in (".py", ".ipynb"):
yield os.path.join(dir_path, filename).lstrip("./")
def md_prefix(i: int) -> str:
return f"{i * ' '}*" if i else "\n##"
def print_path(old_path: str, new_path: str) -> str:
old_parts = old_path.split(os.sep)
for i, new_part in enumerate(new_path.split(os.sep)):
if (i + 1 > len(old_parts) or old_parts[i] != new_part) and new_part:
print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
return new_path
def print_directory_md(top_dir: str = ".") -> None:
old_path = ""
for filepath in sorted(good_file_paths(top_dir)):
filepath, filename = os.path.split(filepath)
if filepath != old_path:
old_path = print_path(old_path, filepath)
indent = (filepath.count(os.sep) + 1) if filepath else 0
url = "/".join((filepath, filename)).replace(" ", "%20")
filename = os.path.splitext(filename.replace("_", " ").title())[0]
print(f"{md_prefix(indent)} [{filename}]({url})")
if __name__ == "__main__":
print_directory_md(".")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def find_min(arr):
n = len(arr)
s = sum(arr)
    dp = [[False for _ in range(s + 1)] for _ in range(n + 1)]
for i in range(1, n + 1):
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
dp[i][j] = dp[i][j - 1]
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
        if dp[n][j]:
diff = s - 2 * j
break
return diff
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def find_min(arr):
n = len(arr)
s = sum(arr)
    dp = [[False for _ in range(s + 1)] for _ in range(n + 1)]
for i in range(1, n + 1):
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
dp[i][j] = dp[i][j - 1]
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
        if dp[n][j]:
diff = s - 2 * j
break
return diff
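# Illustrative usage (not part of the original file): for [1, 6, 11, 5] the total
# is 23 and the best reachable half-sum is 11 ({11} vs {1, 5, 6} = 12), so the
# minimum difference is 23 - 2 * 11 = 1.
if __name__ == "__main__":
    print(find_min([1, 6, 11, 5]))  # -> 1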
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
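# Illustrative trace (not part of the original file): the remainders are appended
# as the recursion unwinds, e.g. binary_recursive(6)
#   -> binary_recursive(3) + "0" -> (binary_recursive(1) + "1") + "0" -> "110".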
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
The Burrows–Wheeler transform (BWT, also called block-sorting compression)
rearranges a character string into runs of similar characters. This is useful
for compression, since it tends to be easy to compress a string that has runs
of repeated characters by techniques such as move-to-front transform and
run-length encoding. More importantly, the transformation is reversible,
without needing to store any additional data except the position of the first
original character. The BWT is thus a "free" method of improving the efficiency
of text compression algorithms, costing only some extra computation.
"""
from __future__ import annotations
from typing import TypedDict
class BWTTransformDict(TypedDict):
bwt_string: str
idx_original_string: int
def all_rotations(s: str) -> list[str]:
"""
:param s: The string that will be rotated len(s) times.
:return: A list with the rotations.
:raises TypeError: If s is not an instance of str.
Examples:
>>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE
['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA',
'A|^BANAN', '|^BANANA']
>>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE
['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a',
'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d',
'_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca',
'aa_asa_da_cas']
>>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE
['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan',
'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab',
'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan']
>>> all_rotations(5)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
return [s[i:] + s[:i] for i in range(len(s))]
def bwt_transform(s: str) -> BWTTransformDict:
"""
:param s: The string that will be used at bwt algorithm
:return: the string composed of the last char of each row of the ordered
rotations and the index of the original string at ordered rotations list
:raises TypeError: If the s parameter type is not str
:raises ValueError: If the s parameter is empty
Examples:
>>> bwt_transform("^BANANA")
{'bwt_string': 'BNN^AAA', 'idx_original_string': 6}
>>> bwt_transform("a_asa_da_casa")
{'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3}
>>> bwt_transform("panamabanana")
{'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11}
>>> bwt_transform(4)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
>>> bwt_transform('')
Traceback (most recent call last):
...
ValueError: The parameter s must not be empty.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
if not s:
raise ValueError("The parameter s must not be empty.")
rotations = all_rotations(s)
    rotations.sort()  # sort the list of rotations in alphabetical order
# make a string composed of the last char of each rotation
response: BWTTransformDict = {
"bwt_string": "".join([word[-1] for word in rotations]),
"idx_original_string": rotations.index(s),
}
return response
def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
"""
:param bwt_string: The string returned from bwt algorithm execution
:param idx_original_string: A 0-based index of the string that was used to
generate bwt_string at ordered rotations list
:return: The string used to generate bwt_string when bwt was executed
:raises TypeError: If the bwt_string parameter type is not str
:raises ValueError: If the bwt_string parameter is empty
:raises TypeError: If the idx_original_string type is not int or if not
possible to cast it to int
:raises ValueError: If the idx_original_string value is lower than 0 or
greater than len(bwt_string) - 1
>>> reverse_bwt("BNN^AAA", 6)
'^BANANA'
>>> reverse_bwt("aaaadss_c__aa", 3)
'a_asa_da_casa'
>>> reverse_bwt("mnpbnnaaaaaa", 11)
'panamabanana'
>>> reverse_bwt(4, 11)
Traceback (most recent call last):
...
TypeError: The parameter bwt_string type must be str.
>>> reverse_bwt("", 11)
Traceback (most recent call last):
...
ValueError: The parameter bwt_string must not be empty.
>>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: The parameter idx_original_string type must be int or passive
of cast to int.
>>> reverse_bwt("mnpbnnaaaaaa", -1)
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must not be lower than 0.
>>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must be lower than
len(bwt_string).
>>> reverse_bwt("mnpbnnaaaaaa", 11.0)
'panamabanana'
>>> reverse_bwt("mnpbnnaaaaaa", 11.4)
'panamabanana'
"""
if not isinstance(bwt_string, str):
raise TypeError("The parameter bwt_string type must be str.")
if not bwt_string:
raise ValueError("The parameter bwt_string must not be empty.")
try:
idx_original_string = int(idx_original_string)
except ValueError:
raise TypeError(
"The parameter idx_original_string type must be int or passive"
" of cast to int."
)
if idx_original_string < 0:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
"The parameter idx_original_string must be lower than" " len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)
for _ in range(len(bwt_string)):
for i in range(len(bwt_string)):
ordered_rotations[i] = bwt_string[i] + ordered_rotations[i]
ordered_rotations.sort()
return ordered_rotations[idx_original_string]
if __name__ == "__main__":
entry_msg = "Provide a string that I will generate its BWT transform: "
s = input(entry_msg).strip()
result = bwt_transform(s)
print(
f"Burrows Wheeler transform for string '{s}' results "
f"in '{result['bwt_string']}'"
)
original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"])
print(
f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' "
f"we get original string '{original_string}'"
)
| """
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
The Burrows–Wheeler transform (BWT, also called block-sorting compression)
rearranges a character string into runs of similar characters. This is useful
for compression, since it tends to be easy to compress a string that has runs
of repeated characters by techniques such as move-to-front transform and
run-length encoding. More importantly, the transformation is reversible,
without needing to store any additional data except the position of the first
original character. The BWT is thus a "free" method of improving the efficiency
of text compression algorithms, costing only some extra computation.
"""
from __future__ import annotations
from typing import TypedDict
class BWTTransformDict(TypedDict):
bwt_string: str
idx_original_string: int
def all_rotations(s: str) -> list[str]:
"""
:param s: The string that will be rotated len(s) times.
:return: A list with the rotations.
:raises TypeError: If s is not an instance of str.
Examples:
>>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE
['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA',
'A|^BANAN', '|^BANANA']
>>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE
['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a',
'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d',
'_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca',
'aa_asa_da_cas']
>>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE
['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan',
'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab',
'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan']
>>> all_rotations(5)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
return [s[i:] + s[:i] for i in range(len(s))]
def bwt_transform(s: str) -> BWTTransformDict:
"""
:param s: The string that will be used at bwt algorithm
:return: the string composed of the last char of each row of the ordered
rotations and the index of the original string at ordered rotations list
:raises TypeError: If the s parameter type is not str
:raises ValueError: If the s parameter is empty
Examples:
>>> bwt_transform("^BANANA")
{'bwt_string': 'BNN^AAA', 'idx_original_string': 6}
>>> bwt_transform("a_asa_da_casa")
{'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3}
>>> bwt_transform("panamabanana")
{'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11}
>>> bwt_transform(4)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
>>> bwt_transform('')
Traceback (most recent call last):
...
ValueError: The parameter s must not be empty.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
if not s:
raise ValueError("The parameter s must not be empty.")
rotations = all_rotations(s)
    rotations.sort()  # sort the list of rotations in alphabetical order
# make a string composed of the last char of each rotation
response: BWTTransformDict = {
"bwt_string": "".join([word[-1] for word in rotations]),
"idx_original_string": rotations.index(s),
}
return response
def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
"""
:param bwt_string: The string returned from bwt algorithm execution
:param idx_original_string: A 0-based index of the string that was used to
generate bwt_string at ordered rotations list
:return: The string used to generate bwt_string when bwt was executed
:raises TypeError: If the bwt_string parameter type is not str
:raises ValueError: If the bwt_string parameter is empty
:raises TypeError: If the idx_original_string type is not int or if not
possible to cast it to int
:raises ValueError: If the idx_original_string value is lower than 0 or
greater than len(bwt_string) - 1
>>> reverse_bwt("BNN^AAA", 6)
'^BANANA'
>>> reverse_bwt("aaaadss_c__aa", 3)
'a_asa_da_casa'
>>> reverse_bwt("mnpbnnaaaaaa", 11)
'panamabanana'
>>> reverse_bwt(4, 11)
Traceback (most recent call last):
...
TypeError: The parameter bwt_string type must be str.
>>> reverse_bwt("", 11)
Traceback (most recent call last):
...
ValueError: The parameter bwt_string must not be empty.
>>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: The parameter idx_original_string type must be int or passive
of cast to int.
>>> reverse_bwt("mnpbnnaaaaaa", -1)
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must not be lower than 0.
>>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must be lower than
len(bwt_string).
>>> reverse_bwt("mnpbnnaaaaaa", 11.0)
'panamabanana'
>>> reverse_bwt("mnpbnnaaaaaa", 11.4)
'panamabanana'
"""
if not isinstance(bwt_string, str):
raise TypeError("The parameter bwt_string type must be str.")
if not bwt_string:
raise ValueError("The parameter bwt_string must not be empty.")
try:
idx_original_string = int(idx_original_string)
except ValueError:
raise TypeError(
"The parameter idx_original_string type must be int or passive"
" of cast to int."
)
if idx_original_string < 0:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
"The parameter idx_original_string must be lower than" " len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)
for _ in range(len(bwt_string)):
for i in range(len(bwt_string)):
ordered_rotations[i] = bwt_string[i] + ordered_rotations[i]
ordered_rotations.sort()
return ordered_rotations[idx_original_string]
if __name__ == "__main__":
entry_msg = "Provide a string that I will generate its BWT transform: "
s = input(entry_msg).strip()
result = bwt_transform(s)
print(
f"Burrows Wheeler transform for string '{s}' results "
f"in '{result['bwt_string']}'"
)
original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"])
print(
f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' "
f"we get original string '{original_string}'"
)
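# Illustrative round trip (not part of the original file), using the doctest
# values above:
#     result = bwt_transform("^BANANA")   # {'bwt_string': 'BNN^AAA', 'idx_original_string': 6}
#     reverse_bwt(result["bwt_string"], result["idx_original_string"])   # '^BANANA'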
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Normal Distribution QuickSort
QuickSort Algorithm where the pivot element is chosen randomly between first and last elements of the array, and the array elements are taken from Standard Normal Distribution.
## Array elements
The array elements are taken from a Standard Normal Distribution, having mean = 0 and standard deviation = 1.
### The code
```python
>>> import numpy as np
>>> from tempfile import TemporaryFile
>>> outfile = TemporaryFile()
>>> p = 100 # 100 elements are to be sorted
>>> mu, sigma = 0, 1 # mean and standard deviation
>>> X = np.random.normal(mu, sigma, p)
>>> np.save(outfile, X)
>>> 'The array is'
>>> X
```
------
#### The distribution of the array elements
```python
>>> import matplotlib.pyplot as plt
>>> mu, sigma = 0, 1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, p)
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins , 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
>>> plt.show()
```
------

------
## Comparing the numbers of comparisons
We can plot the number of comparisons made by Normal Distribution QuickSort and by ordinary QuickSort:
```python
>>> import matplotlib.pyplot as plt
# Normal Distribution QuickSort is red
>>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,6,15,43,136,340,800,2156,6821,16325],linewidth=2, color='r')
# Ordinary QuickSort is green
>>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,4,16,67,122,362,949,2131,5086,12866],linewidth=2, color='g')
>>> plt.show()
```
| # Normal Distribution QuickSort
QuickSort Algorithm where the pivot element is chosen randomly between first and last elements of the array, and the array elements are taken from Standard Normal Distribution.
## Array elements
The array elements are taken from a Standard Normal Distribution, having mean = 0 and standard deviation = 1.
### The code
```python
>>> import numpy as np
>>> from tempfile import TemporaryFile
>>> outfile = TemporaryFile()
>>> p = 100 # 100 elements are to be sorted
>>> mu, sigma = 0, 1 # mean and standard deviation
>>> X = np.random.normal(mu, sigma, p)
>>> np.save(outfile, X)
>>> 'The array is'
>>> X
```
------
#### The distribution of the array elements
```python
>>> import matplotlib.pyplot as plt
>>> mu, sigma = 0, 1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, p)
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins , 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
>>> plt.show()
```
------

------
## Comparing the numbers of comparisons
We can plot the number of comparisons made by Normal Distribution QuickSort and by ordinary QuickSort:
```python
>>> import matplotlib.pyplot as plt
# Normal Distribution QuickSort is red
>>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,6,15,43,136,340,800,2156,6821,16325],linewidth=2, color='r')
# Ordinary QuickSort is green
>>> plt.plot([1,2,4,16,32,64,128,256,512,1024,2048],[1,1,4,16,67,122,362,949,2131,5086,12866],linewidth=2, color='g')
>>> plt.show()
```
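------

#### A sketch of the random-pivot QuickSort

The sort itself is not shown above. A minimal illustrative version (not necessarily the repository's exact implementation) that picks the pivot at a random position between the first and last elements is:

```python
>>> import random
>>> def quick_sort_random_pivot(arr):
...     if len(arr) <= 1:
...         return arr
...     pivot = arr[random.randint(0, len(arr) - 1)]
...     smaller = [x for x in arr if x < pivot]
...     equal = [x for x in arr if x == pivot]
...     larger = [x for x in arr if x > pivot]
...     return quick_sort_random_pivot(smaller) + equal + quick_sort_random_pivot(larger)
>>> quick_sort_random_pivot([3, -1, 2, 0, 2])
[-1, 0, 2, 2, 3]
```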
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import math
from collections.abc import Callable
def intersection(function: Callable[[float], float], x0: float, x1: float) -> float:
"""
function is the f we want to find its root
x0 and x1 are two random starting points
>>> intersection(lambda x: x ** 3 - 1, -5, 5)
0.9999999999954654
>>> intersection(lambda x: x ** 3 - 1, 5, 5)
Traceback (most recent call last):
...
ZeroDivisionError: float division by zero, could not find root
>>> intersection(lambda x: x ** 3 - 1, 100, 200)
1.0000000000003888
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 0, 2)
0.9999999998088019
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 2, 4)
2.9999999998088023
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 4, 1000)
3.0000000001786042
>>> intersection(math.sin, -math.pi, math.pi)
0.0
>>> intersection(math.cos, -math.pi, math.pi)
Traceback (most recent call last):
...
ZeroDivisionError: float division by zero, could not find root
"""
x_n: float = x0
x_n1: float = x1
while True:
if x_n == x_n1 or function(x_n1) == function(x_n):
raise ZeroDivisionError("float division by zero, could not find root")
x_n2: float = x_n1 - (
function(x_n1) / ((function(x_n1) - function(x_n)) / (x_n1 - x_n))
)
if abs(x_n2 - x_n1) < 10**-5:
return x_n2
x_n = x_n1
x_n1 = x_n2
def f(x: float) -> float:
return math.pow(x, 3) - (2 * x) - 5
if __name__ == "__main__":
print(intersection(f, 3, 3.5))
| import math
from collections.abc import Callable
def intersection(function: Callable[[float], float], x0: float, x1: float) -> float:
"""
function is the f we want to find its root
x0 and x1 are two random starting points
>>> intersection(lambda x: x ** 3 - 1, -5, 5)
0.9999999999954654
>>> intersection(lambda x: x ** 3 - 1, 5, 5)
Traceback (most recent call last):
...
ZeroDivisionError: float division by zero, could not find root
>>> intersection(lambda x: x ** 3 - 1, 100, 200)
1.0000000000003888
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 0, 2)
0.9999999998088019
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 2, 4)
2.9999999998088023
>>> intersection(lambda x: x ** 2 - 4 * x + 3, 4, 1000)
3.0000000001786042
>>> intersection(math.sin, -math.pi, math.pi)
0.0
>>> intersection(math.cos, -math.pi, math.pi)
Traceback (most recent call last):
...
ZeroDivisionError: float division by zero, could not find root
"""
x_n: float = x0
x_n1: float = x1
while True:
if x_n == x_n1 or function(x_n1) == function(x_n):
raise ZeroDivisionError("float division by zero, could not find root")
x_n2: float = x_n1 - (
function(x_n1) / ((function(x_n1) - function(x_n)) / (x_n1 - x_n))
)
if abs(x_n2 - x_n1) < 10**-5:
return x_n2
x_n = x_n1
x_n1 = x_n2
def f(x: float) -> float:
return math.pow(x, 3) - (2 * x) - 5
if __name__ == "__main__":
print(intersection(f, 3, 3.5))
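# Note (illustrative, not part of the original module): the update above is the
# secant method,
#     x_{n+2} = x_{n+1} - f(x_{n+1}) * (x_{n+1} - x_n) / (f(x_{n+1}) - f(x_n)),
# i.e. Newton's method with the derivative replaced by a finite-difference slope.
# For example, intersection(lambda x: x * x - 2, 1.0, 2.0) converges to roughly
# 1.4142135, the positive root of x**2 - 2.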
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Description : Centripetal force is the force acting on an object in
curvilinear motion directed towards the axis of rotation
or centre of curvature.
The unit of centripetal force is newton.
The centripetal force is always directed perpendicular to the
direction of the object’s displacement. Using Newton’s second
law of motion, it is found that the centripetal force of an object
moving in a circular path always acts towards the centre of the circle.
The Centripetal Force Formula is given as the product of mass (in kg)
and tangential velocity (in meters per second) squared, divided by the
radius (in meters) that implies that on doubling the tangential velocity,
the centripetal force will be quadrupled. Mathematically it is written as:
F = mv²/r
Where, F is the Centripetal force, m is the mass of the object, v is the
speed or velocity of the object and r is the radius.
Reference: https://byjus.com/physics/centripetal-and-centrifugal-force/
"""
def centripetal(mass: float, velocity: float, radius: float) -> float:
"""
The Centripetal Force formula is given as: (m*v*v)/r
>>> round(centripetal(15.5,-30,10),2)
1395.0
>>> round(centripetal(10,15,5),2)
450.0
>>> round(centripetal(20,-50,15),2)
3333.33
>>> round(centripetal(12.25,40,25),2)
784.0
>>> round(centripetal(50,100,50),2)
10000.0
"""
if mass < 0:
raise ValueError("The mass of the body cannot be negative")
if radius <= 0:
raise ValueError("The radius is always a positive non zero integer")
return (mass * (velocity) ** 2) / radius
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| """
Description : Centripetal force is the force acting on an object in
curvilinear motion directed towards the axis of rotation
or centre of curvature.
The unit of centripetal force is newton.
The centripetal force is always directed perpendicular to the
direction of the object’s displacement. Using Newton’s second
law of motion, it is found that the centripetal force of an object
moving in a circular path always acts towards the centre of the circle.
The Centripetal Force Formula is given as the product of mass (in kg)
and tangential velocity (in meters per second) squared, divided by the
radius (in meters) that implies that on doubling the tangential velocity,
the centripetal force will be quadrupled. Mathematically it is written as:
F = mv²/r
Where, F is the Centripetal force, m is the mass of the object, v is the
speed or velocity of the object and r is the radius.
Reference: https://byjus.com/physics/centripetal-and-centrifugal-force/
"""
def centripetal(mass: float, velocity: float, radius: float) -> float:
"""
The Centripetal Force formula is given as: (m*v*v)/r
>>> round(centripetal(15.5,-30,10),2)
1395.0
>>> round(centripetal(10,15,5),2)
450.0
>>> round(centripetal(20,-50,15),2)
3333.33
>>> round(centripetal(12.25,40,25),2)
784.0
>>> round(centripetal(50,100,50),2)
10000.0
"""
if mass < 0:
raise ValueError("The mass of the body cannot be negative")
if radius <= 0:
raise ValueError("The radius is always a positive non zero integer")
return (mass * (velocity) ** 2) / radius
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
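# Worked example (illustrative, not part of the original file): a 1 kg mass moving
# at 2 m/s on a circle of radius 0.5 m needs F = m * v**2 / r = 1 * 4 / 0.5 = 8 N,
# and centripetal(1, 2, 0.5) indeed returns 8.0.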
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The root-mean-square speed is essential in measuring the average speed of particles
contained in a gas, defined as,
-----------------
| Vrms = √3RT/M |
-----------------
In Kinetic Molecular Theory, gas particles are in a state of constant random
motion; each particle moves at a different pace, perpetually colliding and
changing direction. Velocity is used to describe the movement of gas
particles, thereby taking into account both speed and direction. Although the velocity
of gaseous particles is constantly changing, the distribution of velocities does not
change.
We cannot gauge the velocity of every individual particle, so we frequently reason
in terms of the particles' average behavior. Particles moving in opposite directions
have velocities of opposite signs. Since gas particles are in random motion, there
will be about as many moving in one direction as in the other, which means that the
average velocity for a collection of gas particles equals zero; as this value is
unhelpful, the average of velocities can be determined using an alternative method.
"""
UNIVERSAL_GAS_CONSTANT = 8.3144598
def rms_speed_of_molecule(temperature: float, molar_mass: float) -> float:
"""
>>> rms_speed_of_molecule(100, 2)
35.315279554323226
>>> rms_speed_of_molecule(273, 12)
23.821458421977443
"""
if temperature < 0:
raise Exception("Temperature cannot be less than 0 K")
if molar_mass <= 0:
raise Exception("Molar mass cannot be less than or equal to 0 kg/mol")
else:
return (3 * UNIVERSAL_GAS_CONSTANT * temperature / molar_mass) ** 0.5
if __name__ == "__main__":
import doctest
# run doctest
doctest.testmod()
# example
temperature = 300
molar_mass = 28
vrms = rms_speed_of_molecule(temperature, molar_mass)
print(f"Vrms of Nitrogen gas at 300 K is {vrms} m/s")
| """
The root-mean-square speed is essential in measuring the average speed of particles
contained in a gas, defined as,
-----------------
| Vrms = √3RT/M |
-----------------
In Kinetic Molecular Theory, gas particles are in a state of constant random
motion; each particle moves at a different pace, perpetually colliding and
changing direction. Velocity is used to describe the movement of gas
particles, thereby taking into account both speed and direction. Although the velocity
of gaseous particles is constantly changing, the distribution of velocities does not
change.
We cannot gauge the velocity of every individual particle, so we frequently reason
in terms of the particles' average behavior. Particles moving in opposite directions
have velocities of opposite signs. Since gas particles are in random motion, there
will be about as many moving in one direction as in the other, which means that the
average velocity for a collection of gas particles equals zero; as this value is
unhelpful, the average of velocities can be determined using an alternative method.
"""
UNIVERSAL_GAS_CONSTANT = 8.3144598
def rms_speed_of_molecule(temperature: float, molar_mass: float) -> float:
"""
>>> rms_speed_of_molecule(100, 2)
35.315279554323226
>>> rms_speed_of_molecule(273, 12)
23.821458421977443
"""
if temperature < 0:
raise Exception("Temperature cannot be less than 0 K")
if molar_mass <= 0:
raise Exception("Molar mass cannot be less than or equal to 0 kg/mol")
else:
return (3 * UNIVERSAL_GAS_CONSTANT * temperature / molar_mass) ** 0.5
if __name__ == "__main__":
import doctest
# run doctest
doctest.testmod()
# example
temperature = 300
molar_mass = 28
vrms = rms_speed_of_molecule(temperature, molar_mass)
print(f"Vrms of Nitrogen gas at 300 K is {vrms} m/s")
| -1 |
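As a quick numerical cross-check of the Vrms = √(3RT/M) relation documented above, the short sketch below evaluates the formula for nitrogen gas at 300 K with the molar mass expressed in SI units. The 0.028 kg/mol value and the ≈517 m/s result are illustrative assumptions added here; they are not taken from the file above, whose demo passes molar_mass=28.

# Minimal sketch (assumed values): Vrms for N2 at 300 K in SI units.
R = 8.3144598  # universal gas constant, J/(mol*K)
T = 300        # temperature, K
M = 0.028      # molar mass of N2, kg/mol (illustrative assumption)
v_rms = (3 * R * T / M) ** 0.5
print(f"{v_rms:.1f} m/s")  # roughly 517 m/s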
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Blockchain
A Blockchain is a type of **distributed ledger** technology (DLT) that consists of a growing list of records, called **blocks**, that are securely linked together using **cryptography**.
Let's break down the terminology in the above definition. We find the following terms,
- Digital Ledger Technology (DLT)
- Blocks
- Cryptography
## Digital Ledger Technology
It is otherwise called distributed ledger technology. It is simply the opposite of a centralized database. Firstly, what is a **ledger**? A ledger is a book or collection of accounts that records account transactions.
*Why is Blockchain addressed as digital ledger if it can record more than account transactions? What other transaction details and information can it hold?*
Digital Ledger Technology is just a ledger which is shared among multiple nodes. This way there is no need for a central authority to hold the information. Okay, how is it different from a central database and what are its benefits?
Consider an organization which has 4 branches whose data are stored in a centralized database. Even if one branch needs any data from the ledger, it needs approval from the database in charge. And if someone hacks the central database, they get to tamper with and control all the data.
Now let's assume every branch has a copy of the ledger; once anything is added to the ledger by any one branch, it is automatically reflected in the ledgers of all the other branches. This is done using a peer-to-peer network.
So this means that even if information is tampered with in one branch, we can find out. If one branch is hacked, we can be alerted, so we can safeguard the other branches. Now, think of these branches as computers or nodes and the ledger as a transaction record or digital receipt. If one ledger is hacked in a node, we can detect it since there will be a mismatch in comparison with the information in the other nodes. So this is the concept of Digital Ledger Technology.
*Is it required for all nodes to have access to all information in other nodes? Wouldn't this require enormous storage space in each node?*
## Blocks
In short, a block is nothing but a collection of records with a labelled header. These are connected cryptographically. Once a new block is added to the chain, the previous block is connected, or more precisely locked, and hence will remain unaltered. We can understand this concept better once we get a clear understanding of the working mechanism of blockchain.
## Cryptography
It is the practice and study of secure communication techniques in the midst of adversarial behavior. More broadly, cryptography is the creation and analysis of protocols that prevent third parties or the general public from accessing private messages.
*Which cryptography technology is most widely used in blockchain and why?*
So, in general, blockchain technology is a distributed record holder which records the information about ownership of an asset. To define precisely,
> Blockchain is a distributed, immutable ledger that makes it easier to record transactions and track assets in a corporate network.
An asset could be tangible (such as a house, car, cash, or land) or intangible (such as intellectual property, patents, copyrights, or branding). A blockchain network can track and sell almost anything of value, lowering risk and costs for everyone involved.
So this is an introduction to blockchain technology. To learn more about the topic, refer to the links below.
* <https://en.wikipedia.org/wiki/Blockchain>
* <https://en.wikipedia.org/wiki/Chinese_remainder_theorem>
* <https://en.wikipedia.org/wiki/Diophantine_equation>
* <https://www.geeksforgeeks.org/modular-division/>
| # Blockchain
A Blockchain is a type of **distributed ledger** technology (DLT) that consists of a growing list of records, called **blocks**, that are securely linked together using **cryptography**.
Let's break down the terminology in the above definition. We find the following terms,
- Digital Ledger Technology (DLT)
- Blocks
- Cryptography
## Digital Ledger Technology
It is otherwise called distributed ledger technology. It is simply the opposite of a centralized database. Firstly, what is a **ledger**? A ledger is a book or collection of accounts that records account transactions.
*Why is Blockchain addressed as digital ledger if it can record more than account transactions? What other transaction details and information can it hold?*
Digital Ledger Technology is just a ledger which is shared among multiple nodes. This way there is no need for a central authority to hold the information. Okay, how is it different from a central database and what are its benefits?
Consider an organization which has 4 branches whose data are stored in a centralized database. Even if one branch needs any data from the ledger, it needs approval from the database in charge. And if someone hacks the central database, they get to tamper with and control all the data.
Now let's assume every branch has a copy of the ledger; once anything is added to the ledger by any one branch, it is automatically reflected in the ledgers of all the other branches. This is done using a peer-to-peer network.
So this means that even if information is tampered with in one branch, we can find out. If one branch is hacked, we can be alerted, so we can safeguard the other branches. Now, think of these branches as computers or nodes and the ledger as a transaction record or digital receipt. If one ledger is hacked in a node, we can detect it since there will be a mismatch in comparison with the information in the other nodes. So this is the concept of Digital Ledger Technology.
*Is it required for all nodes to have access to all information in other nodes? Wouldn't this require enormous storage space in each node?*
## Blocks
In short, a block is nothing but a collection of records with a labelled header. These are connected cryptographically. Once a new block is added to the chain, the previous block is connected, or more precisely locked, and hence will remain unaltered. We can understand this concept better once we get a clear understanding of the working mechanism of blockchain.
## Cryptography
It is the practice and study of secure communication techniques in the midst of adversarial behavior. More broadly, cryptography is the creation and analysis of protocols that prevent third parties or the general public from accessing private messages.
*Which cryptography technology is most widely used in blockchain and why?*
So, in general, blockchain technology is a distributed record holder which records the information about ownership of an asset. To define precisely,
> Blockchain is a distributed, immutable ledger that makes it easier to record transactions and track assets in a corporate network.
An asset could be tangible (such as a house, car, cash, or land) or intangible (such as intellectual property, patents, copyrights, or branding). A blockchain network can track and sell almost anything of value, lowering risk and costs for everyone involved.
So this is an introduction to blockchain technology. To learn more about the topic, refer to the links below.
* <https://en.wikipedia.org/wiki/Blockchain>
* <https://en.wikipedia.org/wiki/Chinese_remainder_theorem>
* <https://en.wikipedia.org/wiki/Diophantine_equation>
* <https://www.geeksforgeeks.org/modular-division/>
| -1 |
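The "Blocks" and "Cryptography" sections above describe records that are linked cryptographically so that an earlier block cannot be altered unnoticed. Below is a minimal, hypothetical sketch of that idea; the block structure, field names, and use of SHA-256 here are illustrative assumptions and not code from this repository.

import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash the block's contents, which include the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


# Each block stores the hash of the block before it.
genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block_1 = {"index": 1, "data": "alice pays bob", "prev_hash": block_hash(genesis)}
block_2 = {"index": 2, "data": "bob pays carol", "prev_hash": block_hash(block_1)}

# Tampering with an earlier block changes its hash, so the link stored in the
# next block no longer matches and the chain is detectably broken.
genesis["data"] = "tampered"
print(block_1["prev_hash"] == block_hash(genesis))  # False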
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs a retroactive linear system resolution
for a triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
total = 0
for col in range(row + 1, columns):
total += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - total) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs the Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must be a square matrix, so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs a retroactive linear system resolution
for a triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
total = 0
for col in range(row + 1, columns):
total += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - total) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs the Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must be a square matrix, so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
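As a usage sketch for the gaussian_elimination routine above (assuming the function from the listing is in scope), the snippet below solves a small, well-conditioned 3x3 system and compares the result against numpy.linalg.solve; the coefficient values are illustrative assumptions.

import numpy as np

coefficients = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
vector = np.array([[8.0], [-11.0], [-3.0]])

solution = gaussian_elimination(coefficients, vector)  # expected: x=2, y=3, z=-1
reference = np.linalg.solve(coefficients, vector)
print(np.allclose(solution, reference))  # True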
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import shutil
import requests
def get_apod_data(api_key: str, download: bool = False, path: str = ".") -> dict:
"""
Get the APOD (Astronomy Picture of the Day) data
Get your API Key from: https://api.nasa.gov/
"""
url = "https://api.nasa.gov/planetary/apod"
return requests.get(url, params={"api_key": api_key}).json()
def save_apod(api_key: str, path: str = ".") -> dict:
apod_data = get_apod_data(api_key)
img_url = apod_data["url"]
img_name = img_url.split("/")[-1]
response = requests.get(img_url, stream=True)
with open(f"{path}/{img_name}", "wb+") as img_file:
shutil.copyfileobj(response.raw, img_file)
del response
return apod_data
def get_archive_data(query: str) -> dict:
"""
Get the data of a particular query from NASA archives
"""
url = "https://images-api.nasa.gov/search"
return requests.get(url, params={"q": query}).json()
if __name__ == "__main__":
print(save_apod("YOUR API KEY"))
apollo_2011_items = get_archive_data("apollo 2011")["collection"]["items"]
print(apollo_2011_items[0]["data"][0]["description"])
| import shutil
import requests
def get_apod_data(api_key: str, download: bool = False, path: str = ".") -> dict:
"""
Get the APOD (Astronomy Picture of the Day) data
Get your API Key from: https://api.nasa.gov/
"""
url = "https://api.nasa.gov/planetary/apod"
return requests.get(url, params={"api_key": api_key}).json()
def save_apod(api_key: str, path: str = ".") -> dict:
apod_data = get_apod_data(api_key)
img_url = apod_data["url"]
img_name = img_url.split("/")[-1]
response = requests.get(img_url, stream=True)
with open(f"{path}/{img_name}", "wb+") as img_file:
shutil.copyfileobj(response.raw, img_file)
del response
return apod_data
def get_archive_data(query: str) -> dict:
"""
Get the data of a particular query from NASA archives
"""
url = "https://images-api.nasa.gov/search"
return requests.get(url, params={"q": query}).json()
if __name__ == "__main__":
print(save_apod("YOUR API KEY"))
apollo_2011_items = get_archive_data("apollo 2011")["collection"]["items"]
print(apollo_2011_items[0]["data"][0]["description"])
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implements an algorithm using OpenCV to convert a colored image into its negative
"""
from cv2 import destroyAllWindows, imread, imshow, waitKey
def convert_to_negative(img):
# getting number of pixels in the image
pixel_h, pixel_v = img.shape[0], img.shape[1]
# converting each pixel's color to its negative
for i in range(pixel_h):
for j in range(pixel_v):
img[i][j] = [255, 255, 255] - img[i][j]
return img
if __name__ == "__main__":
# read original image
img = imread("image_data/lena.jpg", 1)
# convert to its negative
neg = convert_to_negative(img)
# show result image
imshow("negative of original image", img)
waitKey(0)
destroyAllWindows()
| """
Implements an algorithm using OpenCV to convert a colored image into its negative
"""
from cv2 import destroyAllWindows, imread, imshow, waitKey
def convert_to_negative(img):
# getting number of pixels in the image
pixel_h, pixel_v = img.shape[0], img.shape[1]
# converting each pixel's color to its negative
for i in range(pixel_h):
for j in range(pixel_v):
img[i][j] = [255, 255, 255] - img[i][j]
return img
if __name__ == "__main__":
# read original image
img = imread("image_data/lena.jpg", 1)
# convert to its negative
neg = convert_to_negative(img)
# show result image
imshow("negative of original image", img)
waitKey(0)
destroyAllWindows()
| -1 |
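A note on the nested pixel loops above: because OpenCV images are NumPy arrays, the same negative can typically be computed in one vectorized expression. The snippet below is a sketch assuming an 8-bit image; it uses a placeholder array instead of reading lena.jpg.

import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)  # placeholder image for illustration
negative = 255 - img  # element-wise complement, equivalent to the nested loops
print(negative.shape, negative.dtype)  # (4, 4, 3) uint8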
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Title : Calculating the Hubble Parameter
Description : The Hubble parameter H is the expansion rate of the Universe
at any time. In cosmology it is customary to use the redshift
in place of time, because the redshift is directly measured
in the light of galaxies moving away from us.
So, the general relation that we obtain is
H = hubble_constant*(radiation_density*(redshift+1)**4
+ matter_density*(redshift+1)**3
+ curvature*(redshift+1)**2 + dark_energy)**(1/2)
where radiation_density, matter_density, dark_energy are the relative
(percentage) energy densities that exist
in the Universe today. Here, matter_density is the
sum of the baryon density and the
dark matter density. Curvature is the curvature parameter and can be written in terms
of the densities via the completeness relation
curvature = 1 - (matter_density + radiation_density + dark_energy)
Source :
https://www.sciencedirect.com/topics/mathematics/hubble-parameter
"""
def hubble_parameter(
hubble_constant: float,
radiation_density: float,
matter_density: float,
dark_energy: float,
redshift: float,
) -> float:
"""
Input Parameters
----------------
hubble_constant: Hubble constant is the expansion rate today, usually
given in km/(s*Mpc)
radiation_density: relative radiation density today
matter_density: relative mass density today
dark_energy: relative dark energy density today
redshift: the light redshift
Returns
-------
result : Hubble parameter in the unit km/s/Mpc (the unit can be
changed if you want, just need to change the unit of the Hubble constant)
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density=-0.3, dark_energy=0.7, redshift=1)
Traceback (most recent call last):
...
ValueError: All input parameters must be positive
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density= 1.2, dark_energy=0.7, redshift=1)
Traceback (most recent call last):
...
ValueError: Relative densities cannot be greater than one
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density= 0.3, dark_energy=0.7, redshift=0)
68.3
"""
parameters = [redshift, radiation_density, matter_density, dark_energy]
if any(p < 0 for p in parameters):
raise ValueError("All input parameters must be positive")
if any(p > 1 for p in parameters[1:4]):
raise ValueError("Relative densities cannot be greater than one")
else:
curvature = 1 - (matter_density + radiation_density + dark_energy)
e_2 = (
radiation_density * (redshift + 1) ** 4
+ matter_density * (redshift + 1) ** 3
+ curvature * (redshift + 1) ** 2
+ dark_energy
)
hubble = hubble_constant * e_2 ** (1 / 2)
return hubble
if __name__ == "__main__":
import doctest
# run doctest
doctest.testmod()
# demo LCDM approximation
matter_density = 0.3
print(
hubble_parameter(
hubble_constant=68.3,
radiation_density=1e-4,
matter_density=matter_density,
dark_energy=1 - matter_density,
redshift=0,
)
)
| """
Title : Calculating the Hubble Parameter
Description : The Hubble parameter H is the expansion rate of the Universe
at any time. In cosmology it is customary to use the redshift
in place of time, because the redshift is directly measured
in the light of galaxies moving away from us.
So, the general relation that we obtain is
H = hubble_constant*(radiation_density*(redshift+1)**4
+ matter_density*(redshift+1)**3
+ curvature*(redshift+1)**2 + dark_energy)**(1/2)
where radiation_density, matter_density, dark_energy are the relative
(percentage) energy densities that exist
in the Universe today. Here, matter_density is the
sum of the baryon density and the
dark matter density. Curvature is the curvature parameter and can be written in terms
of the densities via the completeness relation
curvature = 1 - (matter_density + radiation_density + dark_energy)
Source :
https://www.sciencedirect.com/topics/mathematics/hubble-parameter
"""
def hubble_parameter(
hubble_constant: float,
radiation_density: float,
matter_density: float,
dark_energy: float,
redshift: float,
) -> float:
"""
Input Parameters
----------------
hubble_constant: Hubble constant is the expansion rate today, usually
given in km/(s*Mpc)
radiation_density: relative radiation density today
matter_density: relative mass density today
dark_energy: relative dark energy density today
redshift: the light redshift
Returns
-------
result : Hubble parameter in the unit km/s/Mpc (the unit can be
changed if you want, just need to change the unit of the Hubble constant)
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density=-0.3, dark_energy=0.7, redshift=1)
Traceback (most recent call last):
...
ValueError: All input parameters must be positive
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density= 1.2, dark_energy=0.7, redshift=1)
Traceback (most recent call last):
...
ValueError: Relative densities cannot be greater than one
>>> hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
... matter_density= 0.3, dark_energy=0.7, redshift=0)
68.3
"""
parameters = [redshift, radiation_density, matter_density, dark_energy]
if any(p < 0 for p in parameters):
raise ValueError("All input parameters must be positive")
if any(p > 1 for p in parameters[1:4]):
raise ValueError("Relative densities cannot be greater than one")
else:
curvature = 1 - (matter_density + radiation_density + dark_energy)
e_2 = (
radiation_density * (redshift + 1) ** 4
+ matter_density * (redshift + 1) ** 3
+ curvature * (redshift + 1) ** 2
+ dark_energy
)
hubble = hubble_constant * e_2 ** (1 / 2)
return hubble
if __name__ == "__main__":
import doctest
# run doctest
doctest.testmod()
# demo LCDM approximation
matter_density = 0.3
print(
hubble_parameter(
hubble_constant=68.3,
radiation_density=1e-4,
matter_density=matter_density,
dark_energy=1 - matter_density,
redshift=0,
)
)
| -1 |
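As a quick worked example of the relation implemented above, take hubble_constant=68.3, radiation_density=1e-4, matter_density=0.3, dark_energy=0.7 and redshift=1 (illustrative values): the curvature term is 1 - (0.3 + 1e-4 + 0.7) = -1e-4, so E^2 = 1e-4*(2**4) + 0.3*(2**3) - 1e-4*(2**2) + 0.7 ≈ 3.1012 and H ≈ 68.3*sqrt(3.1012) ≈ 120.3 km/s/Mpc. Assuming the hubble_parameter function from the listing above is in scope, this corresponds to:

print(hubble_parameter(hubble_constant=68.3, radiation_density=1e-4,
                       matter_density=0.3, dark_energy=0.7, redshift=1))
# approximately 120.3 km/s/Mpc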
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections.abc import Callable
import numpy as np
def explicit_euler(
ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float
) -> np.ndarray:
"""Calculate numeric solution at each step to an ODE using Euler's Method
For reference to Euler's method refer to https://en.wikipedia.org/wiki/Euler_method.
Args:
ode_func (Callable): The ordinary differential equation
as a function of x and y.
y0 (float): The initial value for y.
x0 (float): The initial value for x.
step_size (float): The increment value for x.
x_end (float): The final value of x to be calculated.
Returns:
np.ndarray: Solution of y for every step in x.
>>> # the exact solution is math.exp(x)
>>> def f(x, y):
... return y
>>> y0 = 1
>>> y = explicit_euler(f, y0, 0.0, 0.01, 5)
>>> y[-1]
144.77277243257308
"""
n = int(np.ceil((x_end - x0) / step_size))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
y[k + 1] = y[k] + step_size * ode_func(x, y[k])
x += step_size
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
| from collections.abc import Callable
import numpy as np
def explicit_euler(
ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float
) -> np.ndarray:
"""Calculate numeric solution at each step to an ODE using Euler's Method
For reference to Euler's method refer to https://en.wikipedia.org/wiki/Euler_method.
Args:
ode_func (Callable): The ordinary differential equation
as a function of x and y.
y0 (float): The initial value for y.
x0 (float): The initial value for x.
step_size (float): The increment value for x.
x_end (float): The final value of x to be calculated.
Returns:
np.ndarray: Solution of y for every step in x.
>>> # the exact solution is math.exp(x)
>>> def f(x, y):
... return y
>>> y0 = 1
>>> y = explicit_euler(f, y0, 0.0, 0.01, 5)
>>> y[-1]
144.77277243257308
"""
n = int(np.ceil((x_end - x0) / step_size))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
y[k + 1] = y[k] + step_size * ode_func(x, y[k])
x += step_size
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
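To make the Euler update rule above concrete, here is the doctest's ODE (y' = y, y(0) = 1) with a deliberately coarse step of h = 0.5 chosen for illustration: y1 = 1 + 0.5*1 = 1.5 and y2 = 1.5 + 0.5*1.5 = 2.25 after two steps, versus the exact value e^1 ≈ 2.718, which shows the first-order error of explicit Euler at large step sizes. Assuming the explicit_euler function from the listing above is in scope:

y = explicit_euler(lambda x, y: y, y0=1.0, x0=0.0, step_size=0.5, x_end=1.0)
print(y[-1])  # 2.25, versus math.exp(1) ~= 2.718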
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research
for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century
unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century
(1 Jan 1901 to 31 Dec 2000)?
"""
def solution():
"""Returns the number of mondays that fall on the first of the month during
the twentieth century (1 Jan 1901 to 31 Dec 2000)?
>>> solution()
171
"""
days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
day = 6
month = 1
year = 1901
sundays = 0
while year < 2001:
day += 7
if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0):
if day > days_per_month[month - 1] and month != 2:
month += 1
day = day - days_per_month[month - 2]
elif day > 29 and month == 2:
month += 1
day = day - 29
else:
if day > days_per_month[month - 1]:
month += 1
day = day - days_per_month[month - 2]
if month > 12:
year += 1
month = 1
if year < 2001 and day == 1:
sundays += 1
return sundays
if __name__ == "__main__":
print(solution())
| """
Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research
for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century
unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century
(1 Jan 1901 to 31 Dec 2000)?
"""
def solution():
"""Returns the number of mondays that fall on the first of the month during
the twentieth century (1 Jan 1901 to 31 Dec 2000)?
>>> solution()
171
"""
days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
day = 6
month = 1
year = 1901
sundays = 0
while year < 2001:
day += 7
if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0):
if day > days_per_month[month - 1] and month != 2:
month += 1
day = day - days_per_month[month - 2]
elif day > 29 and month == 2:
month += 1
day = day - 29
else:
if day > days_per_month[month - 1]:
month += 1
day = day - days_per_month[month - 2]
if month > 12:
year += 1
month = 1
if year < 2001 and day == 1:
sundays += 1
return sundays
if __name__ == "__main__":
print(solution())
| -1 |
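A quick independent cross-check of the 171 result above, using the standard library's date arithmetic rather than the manual day counting; this is a verification sketch, not part of the solution file.

import datetime

# Count month-firsts that are Sundays from Jan 1901 to Dec 2000 inclusive.
count = sum(
    datetime.date(year, month, 1).weekday() == 6  # Monday is 0, so Sunday is 6
    for year in range(1901, 2001)
    for month in range(1, 13)
)
print(count)  # 171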
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def double_factorial(n: int) -> int:
"""
Compute double factorial using recursive method.
Recursion can be costly for large numbers.
To learn about the theory behind this algorithm:
https://en.wikipedia.org/wiki/Double_factorial
>>> import math
>>> all(double_factorial(i) == math.prod(range(i, 0, -2)) for i in range(20))
True
>>> double_factorial(0.1)
Traceback (most recent call last):
...
ValueError: double_factorial() only accepts integral values
>>> double_factorial(-1)
Traceback (most recent call last):
...
ValueError: double_factorial() not defined for negative values
"""
if not isinstance(n, int):
raise ValueError("double_factorial() only accepts integral values")
if n < 0:
raise ValueError("double_factorial() not defined for negative values")
return 1 if n <= 1 else n * double_factorial(n - 2)
if __name__ == "__main__":
import doctest
doctest.testmod()
| def double_factorial(n: int) -> int:
"""
Compute double factorial using recursive method.
Recursion can be costly for large numbers.
To learn about the theory behind this algorithm:
https://en.wikipedia.org/wiki/Double_factorial
>>> import math
>>> all(double_factorial(i) == math.prod(range(i, 0, -2)) for i in range(20))
True
>>> double_factorial(0.1)
Traceback (most recent call last):
...
ValueError: double_factorial() only accepts integral values
>>> double_factorial(-1)
Traceback (most recent call last):
...
ValueError: double_factorial() not defined for negative values
"""
if not isinstance(n, int):
raise ValueError("double_factorial() only accepts integral values")
if n < 0:
raise ValueError("double_factorial() not defined for negative values")
return 1 if n <= 1 else n * double_factorial(n - 2)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
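Because the recursive version above can hit Python's recursion limit for very large n, here is a minimal iterative sketch of the same double factorial; the function name and the closing comparison are illustrative additions, not part of the repository file:

def double_factorial_iterative(num: int) -> int:
    # Same validation as the recursive version above.
    if not isinstance(num, int):
        raise ValueError("double_factorial_iterative() only accepts integral values")
    if num < 0:
        raise ValueError("double_factorial_iterative() not defined for negative values")
    result = 1
    while num > 1:
        result *= num
        num -= 2
    return result

# Sanity check: double_factorial_iterative(9) == 9 * 7 * 5 * 3 * 1 == 945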
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
wiki: https://en.wikipedia.org/wiki/Anagram
"""
from collections import defaultdict
def check_anagrams(first_str: str, second_str: str) -> bool:
"""
Two strings are anagrams if they are made up of the same letters but are
arranged differently (ignoring the case).
>>> check_anagrams('Silent', 'Listen')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('There', 'Their')
False
"""
first_str = first_str.lower().strip()
second_str = second_str.lower().strip()
# Remove whitespace
first_str = first_str.replace(" ", "")
second_str = second_str.replace(" ", "")
# Strings of different lengths are not anagrams
if len(first_str) != len(second_str):
return False
# Default values for count should be 0
count: defaultdict[str, int] = defaultdict(int)
# For each character in the input strings, increment the count
# for the first string and decrement it for the second string
for i in range(len(first_str)):
count[first_str[i]] += 1
count[second_str[i]] -= 1
return all(_count == 0 for _count in count.values())
if __name__ == "__main__":
from doctest import testmod
testmod()
input_a = input("Enter the first string ").strip()
input_b = input("Enter the second string ").strip()
status = check_anagrams(input_a, input_b)
print(f"{input_a} and {input_b} are {'' if status else 'not '}anagrams.")
| """
wiki: https://en.wikipedia.org/wiki/Anagram
"""
from collections import defaultdict
def check_anagrams(first_str: str, second_str: str) -> bool:
"""
Two strings are anagrams if they are made up of the same letters but are
arranged differently (ignoring the case).
>>> check_anagrams('Silent', 'Listen')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('There', 'Their')
False
"""
first_str = first_str.lower().strip()
second_str = second_str.lower().strip()
# Remove whitespace
first_str = first_str.replace(" ", "")
second_str = second_str.replace(" ", "")
# Strings of different lengths are not anagrams
if len(first_str) != len(second_str):
return False
# Default values for count should be 0
count: defaultdict[str, int] = defaultdict(int)
# For each character in the input strings, increment the count
# for the first string and decrement it for the second string
for i in range(len(first_str)):
count[first_str[i]] += 1
count[second_str[i]] -= 1
return all(_count == 0 for _count in count.values())
if __name__ == "__main__":
from doctest import testmod
testmod()
input_a = input("Enter the first string ").strip()
input_b = input("Enter the second string ").strip()
status = check_anagrams(input_a, input_b)
print(f"{input_a} and {input_b} are {'' if status else 'not '}anagrams.")
| -1 |
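For comparison, an equivalent anagram check can be sketched with collections.Counter; this is an illustrative alternative, not the repository implementation:

from collections import Counter

def are_anagrams(first: str, second: str) -> bool:
    # Normalise case, drop spaces, then compare the letter multisets.
    first = first.lower().replace(" ", "")
    second = second.lower().replace(" ", "")
    return Counter(first) == Counter(second)

# are_anagrams("Silent", "Listen") -> True; are_anagrams("There", "Their") -> False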
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.wikipedia.org/wiki/Circular_convolution
"""
Circular convolution, also known as cyclic convolution,
is a special case of periodic convolution, which is the convolution of two
periodic functions that have the same period. Periodic convolution arises,
for example, in the context of the discrete-time Fourier transform (DTFT).
In particular, the DTFT of the product of two discrete sequences is the periodic
convolution of the DTFTs of the individual sequences. And each DTFT is a periodic
summation of a continuous Fourier transform function.
Source: https://en.wikipedia.org/wiki/Circular_convolution
"""
import doctest
from collections import deque
import numpy as np
class CircularConvolution:
"""
This class stores the first and second signal and performs the circular convolution
"""
def __init__(self) -> None:
"""
First signal and second signal are stored as 1-D array
"""
self.first_signal = [2, 1, 2, -1]
self.second_signal = [1, 2, 3, 4]
def circular_convolution(self) -> list[float]:
"""
This function performs the circular convolution of the first and second signal
using matrix method
Usage:
>>> import circular_convolution as cc
>>> convolution = cc.CircularConvolution()
>>> convolution.circular_convolution()
[10, 10, 6, 14]
>>> convolution.first_signal = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
>>> convolution.second_signal = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
>>> convolution.circular_convolution()
[5.2, 6.0, 6.48, 6.64, 6.48, 6.0, 5.2, 4.08]
>>> convolution.first_signal = [-1, 1, 2, -2]
>>> convolution.second_signal = [0.5, 1, -1, 2, 0.75]
>>> convolution.circular_convolution()
[6.25, -3.0, 1.5, -2.0, -2.75]
>>> convolution.first_signal = [1, -1, 2, 3, -1]
>>> convolution.second_signal = [1, 2, 3]
>>> convolution.circular_convolution()
[8, -2, 3, 4, 11]
"""
length_first_signal = len(self.first_signal)
length_second_signal = len(self.second_signal)
max_length = max(length_first_signal, length_second_signal)
# create a zero matrix of max_length x max_length
matrix = [[0] * max_length for i in range(max_length)]
# fills the smaller signal with zeros to make both signals the same length
if length_first_signal < length_second_signal:
self.first_signal += [0] * (max_length - length_first_signal)
elif length_first_signal > length_second_signal:
self.second_signal += [0] * (max_length - length_second_signal)
"""
Fills the matrix in the following way assuming 'x' is the signal of length 4
[
[x[0], x[3], x[2], x[1]],
[x[1], x[0], x[3], x[2]],
[x[2], x[1], x[0], x[3]],
[x[3], x[2], x[1], x[0]]
]
"""
for i in range(max_length):
rotated_signal = deque(self.second_signal)
rotated_signal.rotate(i)
for j, item in enumerate(rotated_signal):
matrix[i][j] += item
# multiply the matrix with the first signal
final_signal = np.matmul(np.transpose(matrix), np.transpose(self.first_signal))
# rounding-off to two decimal places
return [round(i, 2) for i in final_signal]
if __name__ == "__main__":
doctest.testmod()
| # https://en.wikipedia.org/wiki/Circular_convolution
"""
Circular convolution, also known as cyclic convolution,
is a special case of periodic convolution, which is the convolution of two
periodic functions that have the same period. Periodic convolution arises,
for example, in the context of the discrete-time Fourier transform (DTFT).
In particular, the DTFT of the product of two discrete sequences is the periodic
convolution of the DTFTs of the individual sequences. And each DTFT is a periodic
summation of a continuous Fourier transform function.
Source: https://en.wikipedia.org/wiki/Circular_convolution
"""
import doctest
from collections import deque
import numpy as np
class CircularConvolution:
"""
This class stores the first and second signal and performs the circular convolution
"""
def __init__(self) -> None:
"""
First signal and second signal are stored as 1-D array
"""
self.first_signal = [2, 1, 2, -1]
self.second_signal = [1, 2, 3, 4]
def circular_convolution(self) -> list[float]:
"""
This function performs the circular convolution of the first and second signal
using matrix method
Usage:
>>> import circular_convolution as cc
>>> convolution = cc.CircularConvolution()
>>> convolution.circular_convolution()
[10, 10, 6, 14]
>>> convolution.first_signal = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
>>> convolution.second_signal = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
>>> convolution.circular_convolution()
[5.2, 6.0, 6.48, 6.64, 6.48, 6.0, 5.2, 4.08]
>>> convolution.first_signal = [-1, 1, 2, -2]
>>> convolution.second_signal = [0.5, 1, -1, 2, 0.75]
>>> convolution.circular_convolution()
[6.25, -3.0, 1.5, -2.0, -2.75]
>>> convolution.first_signal = [1, -1, 2, 3, -1]
>>> convolution.second_signal = [1, 2, 3]
>>> convolution.circular_convolution()
[8, -2, 3, 4, 11]
"""
length_first_signal = len(self.first_signal)
length_second_signal = len(self.second_signal)
max_length = max(length_first_signal, length_second_signal)
# create a zero matrix of max_length x max_length
matrix = [[0] * max_length for i in range(max_length)]
# fills the smaller signal with zeros to make both signals the same length
if length_first_signal < length_second_signal:
self.first_signal += [0] * (max_length - length_first_signal)
elif length_first_signal > length_second_signal:
self.second_signal += [0] * (max_length - length_second_signal)
"""
Fills the matrix in the following way assuming 'x' is the signal of length 4
[
[x[0], x[3], x[2], x[1]],
[x[1], x[0], x[3], x[2]],
[x[2], x[1], x[0], x[3]],
[x[3], x[2], x[1], x[0]]
]
"""
for i in range(max_length):
rotated_signal = deque(self.second_signal)
rotated_signal.rotate(i)
for j, item in enumerate(rotated_signal):
matrix[i][j] += item
# multiply the matrix with the first signal
final_signal = np.matmul(np.transpose(matrix), np.transpose(self.first_signal))
# rounding-off to two decimal places
return [round(i, 2) for i in final_signal]
if __name__ == "__main__":
doctest.testmod()
| -1 |
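The rotated-matrix construction above is one way to compute circular convolution; by the convolution theorem the same result also falls out of the FFT. A minimal NumPy sketch follows (the function name is an illustrative addition, not part of the repository file):

import numpy as np

def circular_convolution_fft(first: list, second: list) -> list:
    # np.fft.fft(x, n) zero-pads the shorter signal to length n, and multiplying
    # the spectra pointwise corresponds to circular convolution in the time domain.
    n = max(len(first), len(second))
    spectrum = np.fft.fft(first, n) * np.fft.fft(second, n)
    return [round(float(value), 2) for value in np.fft.ifft(spectrum).real]

# circular_convolution_fft([2, 1, 2, -1], [1, 2, 3, 4]) -> [10.0, 10.0, 6.0, 14.0]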
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Queue represented by a pseudo stack (represented by a list with pop and append)"""
from typing import Any
class Queue:
def __init__(self):
self.stack = []
self.length = 0
def __str__(self):
printed = "<" + str(self.stack)[1:-1] + ">"
return printed
"""Enqueues {@code item}
@param item
item to enqueue"""
def put(self, item: Any) -> None:
self.stack.append(item)
self.length = self.length + 1
"""Dequeues {@code item}
@requirement: |self.length| > 0
@return dequeued
item that was dequeued"""
def get(self) -> Any:
self.rotate(1)
dequeued = self.stack[self.length - 1]
self.stack = self.stack[:-1]
self.rotate(self.length - 1)
self.length = self.length - 1
return dequeued
"""Rotates the queue {@code rotation} times
@param rotation
number of times to rotate queue"""
def rotate(self, rotation: int) -> None:
for _ in range(rotation):
temp = self.stack[0]
self.stack = self.stack[1:]
self.put(temp)
self.length = self.length - 1
"""Reports item at the front of self
@return item at front of self.stack"""
def front(self) -> Any:
front = self.get()
self.put(front)
self.rotate(self.length - 1)
return front
"""Returns the length of this.stack"""
def size(self) -> int:
return self.length
| """Queue represented by a pseudo stack (represented by a list with pop and append)"""
from typing import Any
class Queue:
def __init__(self):
self.stack = []
self.length = 0
def __str__(self):
printed = "<" + str(self.stack)[1:-1] + ">"
return printed
"""Enqueues {@code item}
@param item
item to enqueue"""
def put(self, item: Any) -> None:
self.stack.append(item)
self.length = self.length + 1
"""Dequeues {@code item}
@requirement: |self.length| > 0
@return dequeued
item that was dequeued"""
def get(self) -> Any:
self.rotate(1)
dequeued = self.stack[self.length - 1]
self.stack = self.stack[:-1]
self.rotate(self.length - 1)
self.length = self.length - 1
return dequeued
"""Rotates the queue {@code rotation} times
@param rotation
number of times to rotate queue"""
def rotate(self, rotation: int) -> None:
for _ in range(rotation):
temp = self.stack[0]
self.stack = self.stack[1:]
self.put(temp)
self.length = self.length - 1
"""Reports item at the front of self
@return item at front of self.stack"""
def front(self) -> Any:
front = self.get()
self.put(front)
self.rotate(self.length - 1)
return front
"""Returns the length of this.stack"""
def size(self) -> int:
return self.length
| -1 |
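A brief usage sketch of the queue above (the values are arbitrary illustrations):

queue = Queue()
for value in (1, 2, 3):
    queue.put(value)
print(queue)          # <1, 2, 3>
print(queue.front())  # 1 -- peek, the item stays queued
print(queue.get())    # 1 -- dequeued
print(queue.size())   # 2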
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of Fibonacci search.
Resources used:
https://en.wikipedia.org/wiki/Fibonacci_search_technique
For doctests run following command:
python3 -m doctest -v fibonacci_search.py
For manual testing run:
python3 fibonacci_search.py
"""
from functools import lru_cache
@lru_cache
def fibonacci(k: int) -> int:
"""Finds fibonacci number in index k.
Parameters
----------
k :
Index of fibonacci.
Returns
-------
int
Fibonacci number in position k.
>>> fibonacci(0)
0
>>> fibonacci(2)
1
>>> fibonacci(5)
5
>>> fibonacci(15)
610
>>> fibonacci('a')
Traceback (most recent call last):
TypeError: k must be an integer.
>>> fibonacci(-5)
Traceback (most recent call last):
ValueError: k integer must be greater or equal to zero.
"""
if not isinstance(k, int):
raise TypeError("k must be an integer.")
if k < 0:
raise ValueError("k integer must be greater or equal to zero.")
if k == 0:
return 0
elif k == 1:
return 1
else:
return fibonacci(k - 1) + fibonacci(k - 2)
def fibonacci_search(arr: list, val: int) -> int:
"""A pure Python implementation of a fibonacci search algorithm.
Parameters
----------
arr
List of sorted elements.
val
Element to search in list.
Returns
-------
int
The index of the element in the array.
-1 if the element is not found.
>>> fibonacci_search([4, 5, 6, 7], 4)
0
>>> fibonacci_search([4, 5, 6, 7], -10)
-1
>>> fibonacci_search([-18, 2], -18)
0
>>> fibonacci_search([5], 5)
0
>>> fibonacci_search(['a', 'c', 'd'], 'c')
1
>>> fibonacci_search(['a', 'c', 'd'], 'f')
-1
>>> fibonacci_search([], 1)
-1
>>> fibonacci_search([.1, .4 , 7], .4)
1
>>> fibonacci_search([], 9)
-1
>>> fibonacci_search(list(range(100)), 63)
63
>>> fibonacci_search(list(range(100)), 99)
99
>>> fibonacci_search(list(range(-100, 100, 3)), -97)
1
>>> fibonacci_search(list(range(-100, 100, 3)), 0)
-1
>>> fibonacci_search(list(range(-100, 100, 5)), 0)
20
>>> fibonacci_search(list(range(-100, 100, 5)), 95)
39
"""
len_list = len(arr)
# Find m such that F_m >= n where F_i is the i_th fibonacci number.
i = 0
while True:
if fibonacci(i) >= len_list:
fibb_k = i
break
i += 1
offset = 0
while fibb_k > 0:
index_k = min(
offset + fibonacci(fibb_k - 1), len_list - 1
) # Prevent out of range
item_k_1 = arr[index_k]
if item_k_1 == val:
return index_k
elif val < item_k_1:
fibb_k -= 1
elif val > item_k_1:
offset += fibonacci(fibb_k - 1)
fibb_k -= 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
This is a pure Python implementation of Fibonacci search.
Resources used:
https://en.wikipedia.org/wiki/Fibonacci_search_technique
For doctests run following command:
python3 -m doctest -v fibonacci_search.py
For manual testing run:
python3 fibonacci_search.py
"""
from functools import lru_cache
@lru_cache
def fibonacci(k: int) -> int:
"""Finds fibonacci number in index k.
Parameters
----------
k :
Index of fibonacci.
Returns
-------
int
Fibonacci number in position k.
>>> fibonacci(0)
0
>>> fibonacci(2)
1
>>> fibonacci(5)
5
>>> fibonacci(15)
610
>>> fibonacci('a')
Traceback (most recent call last):
TypeError: k must be an integer.
>>> fibonacci(-5)
Traceback (most recent call last):
ValueError: k integer must be greater or equal to zero.
"""
if not isinstance(k, int):
raise TypeError("k must be an integer.")
if k < 0:
raise ValueError("k integer must be greater or equal to zero.")
if k == 0:
return 0
elif k == 1:
return 1
else:
return fibonacci(k - 1) + fibonacci(k - 2)
def fibonacci_search(arr: list, val: int) -> int:
"""A pure Python implementation of a fibonacci search algorithm.
Parameters
----------
arr
List of sorted elements.
val
Element to search in list.
Returns
-------
int
The index of the element in the array.
-1 if the element is not found.
>>> fibonacci_search([4, 5, 6, 7], 4)
0
>>> fibonacci_search([4, 5, 6, 7], -10)
-1
>>> fibonacci_search([-18, 2], -18)
0
>>> fibonacci_search([5], 5)
0
>>> fibonacci_search(['a', 'c', 'd'], 'c')
1
>>> fibonacci_search(['a', 'c', 'd'], 'f')
-1
>>> fibonacci_search([], 1)
-1
>>> fibonacci_search([.1, .4 , 7], .4)
1
>>> fibonacci_search([], 9)
-1
>>> fibonacci_search(list(range(100)), 63)
63
>>> fibonacci_search(list(range(100)), 99)
99
>>> fibonacci_search(list(range(-100, 100, 3)), -97)
1
>>> fibonacci_search(list(range(-100, 100, 3)), 0)
-1
>>> fibonacci_search(list(range(-100, 100, 5)), 0)
20
>>> fibonacci_search(list(range(-100, 100, 5)), 95)
39
"""
len_list = len(arr)
# Find m such that F_m >= n where F_i is the i_th fibonacci number.
i = 0
while True:
if fibonacci(i) >= len_list:
fibb_k = i
break
i += 1
offset = 0
while fibb_k > 0:
index_k = min(
offset + fibonacci(fibb_k - 1), len_list - 1
) # Prevent out of range
item_k_1 = arr[index_k]
if item_k_1 == val:
return index_k
elif val < item_k_1:
fibb_k -= 1
elif val > item_k_1:
offset += fibonacci(fibb_k - 1)
fibb_k -= 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
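As a cross-check, binary search via the standard library's bisect module gives the same index for sorted inputs without duplicates; this helper is an illustrative addition, not part of the repository file:

import bisect

def bisect_index(arr: list, val) -> int:
    # Locate the leftmost position where val could be inserted, then confirm a match.
    i = bisect.bisect_left(arr, val)
    return i if i < len(arr) and arr[i] == val else -1

# bisect_index(list(range(-100, 100, 5)), 95) -> 39, matching the doctest above.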
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Any
def viterbi(
observations_space: list,
states_space: list,
initial_probabilities: dict,
transition_probabilities: dict,
emission_probabilities: dict,
) -> list:
"""
Viterbi Algorithm: find the most likely sequence of hidden
states given a sequence of observations and the model probabilities.
https://en.wikipedia.org/wiki/Viterbi_algorithm
Wikipedia example
>>> observations = ["normal", "cold", "dizzy"]
>>> states = ["Healthy", "Fever"]
>>> start_p = {"Healthy": 0.6, "Fever": 0.4}
>>> trans_p = {
... "Healthy": {"Healthy": 0.7, "Fever": 0.3},
... "Fever": {"Healthy": 0.4, "Fever": 0.6},
... }
>>> emit_p = {
... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
... }
>>> viterbi(observations, states, start_p, trans_p, emit_p)
['Healthy', 'Healthy', 'Fever']
>>> viterbi((), states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, (), start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, {}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, start_p, {}, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, start_p, trans_p, {})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi("invalid", states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: observations_space must be a list
>>> viterbi(["valid", 123], states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: observations_space must be a list of strings
>>> viterbi(observations, "invalid", start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: states_space must be a list
>>> viterbi(observations, ["valid", 123], start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: states_space must be a list of strings
>>> viterbi(observations, states, "invalid", trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities must be a dict
>>> viterbi(observations, states, {2:2}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities all keys must be strings
>>> viterbi(observations, states, {"a":2}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities all values must be float
>>> viterbi(observations, states, start_p, "invalid", emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities must be a dict
>>> viterbi(observations, states, start_p, {"a":2}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all values must be dict
>>> viterbi(observations, states, start_p, {2:{2:2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> viterbi(observations, states, start_p, {"a":{2:2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> viterbi(observations, states, start_p, {"a":{"b":2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities nested dictionary all values must be float
>>> viterbi(observations, states, start_p, trans_p, "invalid")
Traceback (most recent call last):
...
ValueError: emission_probabilities must be a dict
>>> viterbi(observations, states, start_p, trans_p, None)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
_validation(
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
)
# Creates data structures and fill initial step
probabilities: dict = {}
pointers: dict = {}
for state in states_space:
observation = observations_space[0]
probabilities[(state, observation)] = (
initial_probabilities[state] * emission_probabilities[state][observation]
)
pointers[(state, observation)] = None
# Fills the data structure with the probabilities of
# different transitions and pointers to previous states
for o in range(1, len(observations_space)):
observation = observations_space[o]
prior_observation = observations_space[o - 1]
for state in states_space:
# Calculates the argmax for probability function
arg_max = ""
max_probability = -1
for k_state in states_space:
probability = (
probabilities[(k_state, prior_observation)]
* transition_probabilities[k_state][state]
* emission_probabilities[state][observation]
)
if probability > max_probability:
max_probability = probability
arg_max = k_state
# Update probabilities and pointers dicts
probabilities[(state, observation)] = (
probabilities[(arg_max, prior_observation)]
* transition_probabilities[arg_max][state]
* emission_probabilities[state][observation]
)
pointers[(state, observation)] = arg_max
# The final observation
final_observation = observations_space[len(observations_space) - 1]
# argmax for given final observation
arg_max = ""
max_probability = -1
for k_state in states_space:
probability = probabilities[(k_state, final_observation)]
if probability > max_probability:
max_probability = probability
arg_max = k_state
last_state = arg_max
# Process pointers backwards
previous = last_state
result = []
for o in range(len(observations_space) - 1, -1, -1):
result.append(previous)
previous = pointers[previous, observations_space[o]]
result.reverse()
return result
def _validation(
observations_space: Any,
states_space: Any,
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> observations = ["normal", "cold", "dizzy"]
>>> states = ["Healthy", "Fever"]
>>> start_p = {"Healthy": 0.6, "Fever": 0.4}
>>> trans_p = {
... "Healthy": {"Healthy": 0.7, "Fever": 0.3},
... "Fever": {"Healthy": 0.4, "Fever": 0.6},
... }
>>> emit_p = {
... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
... }
>>> _validation(observations, states, start_p, trans_p, emit_p)
>>> _validation([], states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
_validate_not_empty(
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
)
_validate_lists(observations_space, states_space)
_validate_dicts(
initial_probabilities, transition_probabilities, emission_probabilities
)
def _validate_not_empty(
observations_space: Any,
states_space: Any,
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> _validate_not_empty(["a"], ["b"], {"c":0.5},
... {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
>>> _validate_not_empty(["a"], ["b"], {"c":0.5}, {}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> _validate_not_empty(["a"], ["b"], None, {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
if not all(
[
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
]
):
raise ValueError("There's an empty parameter")
def _validate_lists(observations_space: Any, states_space: Any) -> None:
"""
>>> _validate_lists(["a"], ["b"])
>>> _validate_lists(1234, ["b"])
Traceback (most recent call last):
...
ValueError: observations_space must be a list
>>> _validate_lists(["a"], [3])
Traceback (most recent call last):
...
ValueError: states_space must be a list of strings
"""
_validate_list(observations_space, "observations_space")
_validate_list(states_space, "states_space")
def _validate_list(_object: Any, var_name: str) -> None:
"""
>>> _validate_list(["a"], "mock_name")
>>> _validate_list("a", "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a list
>>> _validate_list([0.5], "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a list of strings
"""
if not isinstance(_object, list):
raise ValueError(f"{var_name} must be a list")
else:
for x in _object:
if not isinstance(x, str):
raise ValueError(f"{var_name} must be a list of strings")
def _validate_dicts(
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
>>> _validate_dicts("invalid", {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: initial_probabilities must be a dict
>>> _validate_dicts({"c":0.5}, {2: {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {2: 0.7}})
Traceback (most recent call last):
...
ValueError: emission_probabilities all keys must be strings
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": "h"}})
Traceback (most recent call last):
...
ValueError: emission_probabilities nested dictionary all values must be float
"""
_validate_dict(initial_probabilities, "initial_probabilities", float)
_validate_nested_dict(transition_probabilities, "transition_probabilities")
_validate_nested_dict(emission_probabilities, "emission_probabilities")
def _validate_nested_dict(_object: Any, var_name: str) -> None:
"""
>>> _validate_nested_dict({"a":{"b": 0.5}}, "mock_name")
>>> _validate_nested_dict("invalid", "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a dict
>>> _validate_nested_dict({"a": 8}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name all values must be dict
>>> _validate_nested_dict({"a":{2: 0.5}}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name all keys must be strings
>>> _validate_nested_dict({"a":{"b": 4}}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name nested dictionary all values must be float
"""
_validate_dict(_object, var_name, dict)
for x in _object.values():
_validate_dict(x, var_name, float, True)
def _validate_dict(
_object: Any, var_name: str, value_type: type, nested: bool = False
) -> None:
"""
>>> _validate_dict({"b": 0.5}, "mock_name", float)
>>> _validate_dict("invalid", "mock_name", float)
Traceback (most recent call last):
...
ValueError: mock_name must be a dict
>>> _validate_dict({"a": 8}, "mock_name", dict)
Traceback (most recent call last):
...
ValueError: mock_name all values must be dict
>>> _validate_dict({2: 0.5}, "mock_name",float, True)
Traceback (most recent call last):
...
ValueError: mock_name all keys must be strings
>>> _validate_dict({"b": 4}, "mock_name", float,True)
Traceback (most recent call last):
...
ValueError: mock_name nested dictionary all values must be float
"""
if not isinstance(_object, dict):
raise ValueError(f"{var_name} must be a dict")
if not all(isinstance(x, str) for x in _object):
raise ValueError(f"{var_name} all keys must be strings")
if not all(isinstance(x, value_type) for x in _object.values()):
nested_text = "nested dictionary " if nested else ""
raise ValueError(
f"{var_name} {nested_text}all values must be {value_type.__name__}"
)
if __name__ == "__main__":
from doctest import testmod
testmod()
| from typing import Any
def viterbi(
observations_space: list,
states_space: list,
initial_probabilities: dict,
transition_probabilities: dict,
emission_probabilities: dict,
) -> list:
"""
Viterbi Algorithm: find the most likely sequence of hidden
states given a sequence of observations and the model probabilities.
https://en.wikipedia.org/wiki/Viterbi_algorithm
Wikipedia example
>>> observations = ["normal", "cold", "dizzy"]
>>> states = ["Healthy", "Fever"]
>>> start_p = {"Healthy": 0.6, "Fever": 0.4}
>>> trans_p = {
... "Healthy": {"Healthy": 0.7, "Fever": 0.3},
... "Fever": {"Healthy": 0.4, "Fever": 0.6},
... }
>>> emit_p = {
... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
... }
>>> viterbi(observations, states, start_p, trans_p, emit_p)
['Healthy', 'Healthy', 'Fever']
>>> viterbi((), states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, (), start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, {}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, start_p, {}, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi(observations, states, start_p, trans_p, {})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> viterbi("invalid", states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: observations_space must be a list
>>> viterbi(["valid", 123], states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: observations_space must be a list of strings
>>> viterbi(observations, "invalid", start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: states_space must be a list
>>> viterbi(observations, ["valid", 123], start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: states_space must be a list of strings
>>> viterbi(observations, states, "invalid", trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities must be a dict
>>> viterbi(observations, states, {2:2}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities all keys must be strings
>>> viterbi(observations, states, {"a":2}, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: initial_probabilities all values must be float
>>> viterbi(observations, states, start_p, "invalid", emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities must be a dict
>>> viterbi(observations, states, start_p, {"a":2}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all values must be dict
>>> viterbi(observations, states, start_p, {2:{2:2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> viterbi(observations, states, start_p, {"a":{2:2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> viterbi(observations, states, start_p, {"a":{"b":2}}, emit_p)
Traceback (most recent call last):
...
ValueError: transition_probabilities nested dictionary all values must be float
>>> viterbi(observations, states, start_p, trans_p, "invalid")
Traceback (most recent call last):
...
ValueError: emission_probabilities must be a dict
>>> viterbi(observations, states, start_p, trans_p, None)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
_validation(
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
)
# Creates data structures and fill initial step
probabilities: dict = {}
pointers: dict = {}
for state in states_space:
observation = observations_space[0]
probabilities[(state, observation)] = (
initial_probabilities[state] * emission_probabilities[state][observation]
)
pointers[(state, observation)] = None
# Fills the data structure with the probabilities of
# different transitions and pointers to previous states
for o in range(1, len(observations_space)):
observation = observations_space[o]
prior_observation = observations_space[o - 1]
for state in states_space:
# Calculates the argmax for probability function
arg_max = ""
max_probability = -1
for k_state in states_space:
probability = (
probabilities[(k_state, prior_observation)]
* transition_probabilities[k_state][state]
* emission_probabilities[state][observation]
)
if probability > max_probability:
max_probability = probability
arg_max = k_state
# Update probabilities and pointers dicts
probabilities[(state, observation)] = (
probabilities[(arg_max, prior_observation)]
* transition_probabilities[arg_max][state]
* emission_probabilities[state][observation]
)
pointers[(state, observation)] = arg_max
# The final observation
final_observation = observations_space[len(observations_space) - 1]
# argmax for given final observation
arg_max = ""
max_probability = -1
for k_state in states_space:
probability = probabilities[(k_state, final_observation)]
if probability > max_probability:
max_probability = probability
arg_max = k_state
last_state = arg_max
# Process pointers backwards
previous = last_state
result = []
for o in range(len(observations_space) - 1, -1, -1):
result.append(previous)
previous = pointers[previous, observations_space[o]]
result.reverse()
return result
def _validation(
observations_space: Any,
states_space: Any,
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> observations = ["normal", "cold", "dizzy"]
>>> states = ["Healthy", "Fever"]
>>> start_p = {"Healthy": 0.6, "Fever": 0.4}
>>> trans_p = {
... "Healthy": {"Healthy": 0.7, "Fever": 0.3},
... "Fever": {"Healthy": 0.4, "Fever": 0.6},
... }
>>> emit_p = {
... "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
... "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
... }
>>> _validation(observations, states, start_p, trans_p, emit_p)
>>> _validation([], states, start_p, trans_p, emit_p)
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
_validate_not_empty(
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
)
_validate_lists(observations_space, states_space)
_validate_dicts(
initial_probabilities, transition_probabilities, emission_probabilities
)
def _validate_not_empty(
observations_space: Any,
states_space: Any,
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> _validate_not_empty(["a"], ["b"], {"c":0.5},
... {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
>>> _validate_not_empty(["a"], ["b"], {"c":0.5}, {}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
>>> _validate_not_empty(["a"], ["b"], None, {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: There's an empty parameter
"""
if not all(
[
observations_space,
states_space,
initial_probabilities,
transition_probabilities,
emission_probabilities,
]
):
raise ValueError("There's an empty parameter")
def _validate_lists(observations_space: Any, states_space: Any) -> None:
"""
>>> _validate_lists(["a"], ["b"])
>>> _validate_lists(1234, ["b"])
Traceback (most recent call last):
...
ValueError: observations_space must be a list
>>> _validate_lists(["a"], [3])
Traceback (most recent call last):
...
ValueError: states_space must be a list of strings
"""
_validate_list(observations_space, "observations_space")
_validate_list(states_space, "states_space")
def _validate_list(_object: Any, var_name: str) -> None:
"""
>>> _validate_list(["a"], "mock_name")
>>> _validate_list("a", "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a list
>>> _validate_list([0.5], "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a list of strings
"""
if not isinstance(_object, list):
raise ValueError(f"{var_name} must be a list")
else:
for x in _object:
if not isinstance(x, str):
raise ValueError(f"{var_name} must be a list of strings")
def _validate_dicts(
initial_probabilities: Any,
transition_probabilities: Any,
emission_probabilities: Any,
) -> None:
"""
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
>>> _validate_dicts("invalid", {"d": {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: initial_probabilities must be a dict
>>> _validate_dicts({"c":0.5}, {2: {"e": 0.6}}, {"f": {"g": 0.7}})
Traceback (most recent call last):
...
ValueError: transition_probabilities all keys must be strings
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {2: 0.7}})
Traceback (most recent call last):
...
ValueError: emission_probabilities all keys must be strings
>>> _validate_dicts({"c":0.5}, {"d": {"e": 0.6}}, {"f": {"g": "h"}})
Traceback (most recent call last):
...
ValueError: emission_probabilities nested dictionary all values must be float
"""
_validate_dict(initial_probabilities, "initial_probabilities", float)
_validate_nested_dict(transition_probabilities, "transition_probabilities")
_validate_nested_dict(emission_probabilities, "emission_probabilities")
def _validate_nested_dict(_object: Any, var_name: str) -> None:
"""
>>> _validate_nested_dict({"a":{"b": 0.5}}, "mock_name")
>>> _validate_nested_dict("invalid", "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name must be a dict
>>> _validate_nested_dict({"a": 8}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name all values must be dict
>>> _validate_nested_dict({"a":{2: 0.5}}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name all keys must be strings
>>> _validate_nested_dict({"a":{"b": 4}}, "mock_name")
Traceback (most recent call last):
...
ValueError: mock_name nested dictionary all values must be float
"""
_validate_dict(_object, var_name, dict)
for x in _object.values():
_validate_dict(x, var_name, float, True)
def _validate_dict(
_object: Any, var_name: str, value_type: type, nested: bool = False
) -> None:
"""
>>> _validate_dict({"b": 0.5}, "mock_name", float)
>>> _validate_dict("invalid", "mock_name", float)
Traceback (most recent call last):
...
ValueError: mock_name must be a dict
>>> _validate_dict({"a": 8}, "mock_name", dict)
Traceback (most recent call last):
...
ValueError: mock_name all values must be dict
>>> _validate_dict({2: 0.5}, "mock_name",float, True)
Traceback (most recent call last):
...
ValueError: mock_name all keys must be strings
>>> _validate_dict({"b": 4}, "mock_name", float,True)
Traceback (most recent call last):
...
ValueError: mock_name nested dictionary all values must be float
"""
if not isinstance(_object, dict):
raise ValueError(f"{var_name} must be a dict")
if not all(isinstance(x, str) for x in _object):
raise ValueError(f"{var_name} all keys must be strings")
if not all(isinstance(x, value_type) for x in _object.values()):
nested_text = "nested dictionary " if nested else ""
raise ValueError(
f"{var_name} {nested_text}all values must be {value_type.__name__}"
)
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are less than
    or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
even_fibs = []
a, b = 0, 1
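    # Walk the Fibonacci sequence, collecting every even-valued term up to n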
while b <= n:
if b % 2 == 0:
even_fibs.append(b)
a, b = b, a + b
return sum(even_fibs)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are less than
    or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
even_fibs = []
a, b = 0, 1
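    # Walk the Fibonacci sequence, collecting every even-valued term up to n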
while b <= n:
if b % 2 == 0:
even_fibs.append(b)
a, b = b, a + b
return sum(even_fibs)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Check if three points are collinear in 3D.
In short, the idea is that we are able to create a triangle using three points,
and the area of that triangle can determine if the three points are collinear or not.
First, we create two vectors with the same initial point from the three points,
then we will calculate the cross-product of them.
The length of the cross vector is numerically equal to the area of a parallelogram.
Finally, the area of the triangle is equal to half of the area of the parallelogram.
Since we are only differentiating between zero and anything else,
we can get rid of the square root when calculating the length of the vector,
and also the division by two at the end.
From a second perspective, if the two vectors are parallel and overlapping,
we can't get a nonzero perpendicular vector,
since there will be an infinite number of orthogonal vectors.
To simplify the solution we will not calculate the length,
but we will decide directly from the vector whether it is equal to (0, 0, 0) or not.
Read More:
https://math.stackexchange.com/a/1951650
"""
Vector3d = tuple[float, float, float]
Point3d = tuple[float, float, float]
def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d:
"""
Pass two points to get the vector from them in the form (x, y, z).
>>> create_vector((0, 0, 0), (1, 1, 1))
(1, 1, 1)
>>> create_vector((45, 70, 24), (47, 32, 1))
(2, -38, -23)
>>> create_vector((-14, -1, -8), (-7, 6, 4))
(7, 7, 12)
"""
x = end_point2[0] - end_point1[0]
y = end_point2[1] - end_point1[1]
z = end_point2[2] - end_point1[2]
return (x, y, z)
def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d:
"""
Get the cross of the two vectors AB and AC.
I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process.
Read More:
https://en.wikipedia.org/wiki/Cross_product
https://en.wikipedia.org/wiki/Determinant
>>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2))
(-55, 22, 11)
>>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1))
(0, 0, 0)
>>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12))
(-36, -48, 27)
>>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33))
(-123.2594, 277.15110000000004, 129.11260000000001)
"""
x = ab[1] * ac[2] - ab[2] * ac[1] # *i
y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j
z = ab[0] * ac[1] - ab[1] * ac[0] # *k
return (x, y, z)
def is_zero_vector(vector: Vector3d, accuracy: int) -> bool:
"""
    Check if the vector is equal to (0, 0, 0) or not.
    Since the algorithm is very accurate, we will almost never get an exact zero vector,
    so we need to round the vector components,
because we want a result that is either True or False.
In other applications, we can return a float that represents the collinearity ratio.
>>> is_zero_vector((0, 0, 0), accuracy=10)
True
>>> is_zero_vector((15, 74, 32), accuracy=10)
False
>>> is_zero_vector((-15, -74, -32), accuracy=10)
False
"""
return tuple(round(x, accuracy) for x in vector) == (0, 0, 0)
def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool:
"""
Check if three points are collinear or not.
    1- Create two vectors AB and AC.
    2- Get the cross vector of the two vectors.
    3- Calculate the length of the cross vector.
4- If the length is zero then the points are collinear, else they are not.
The use of the accuracy parameter is explained in is_zero_vector docstring.
>>> are_collinear((4.802293498137402, 3.536233125455244, 0),
... (-2.186788107953106, -9.24561398001649, 7.141509524846482),
... (1.530169574640268, -2.447927606600034, 3.343487096469054))
True
>>> are_collinear((-6, -2, 6),
... (6.200213806439997, -4.930157614926678, -4.482371908289856),
... (-4.085171149525941, -2.459889509029438, 4.354787180795383))
True
>>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666),
... (-3.682816335934376, 5.753788986533145, 9.490993909044244),
... (1.962903518985307, 3.741415730125627, 7))
False
>>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993),
... (-3.546599383667157, -4.630005261513976, 3.208784032924246),
... (-2.564606140206386, 3.937845170672183, 7))
False
"""
ab = create_vector(a, b)
ac = create_vector(a, c)
return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
| """
Check if three points are collinear in 3D.
In short, the idea is that we are able to create a triangle using three points,
and the area of that triangle can determine if the three points are collinear or not.
First, we create two vectors with the same initial point from the three points,
then we will calculate the cross-product of them.
The length of the cross vector is numerically equal to the area of a parallelogram.
Finally, the area of the triangle is equal to half of the area of the parallelogram.
Since we are only differentiating between zero and anything else,
we can get rid of the square root when calculating the length of the vector,
and also the division by two at the end.
From a second perspective, if the two vectors are parallel and overlapping,
we can't get a nonzero perpendicular vector,
since there will be an infinite number of orthogonal vectors.
To simplify the solution we will not calculate the length,
but we will decide directly from the vector whether it is equal to (0, 0, 0) or not.
Read More:
https://math.stackexchange.com/a/1951650
"""
Vector3d = tuple[float, float, float]
Point3d = tuple[float, float, float]
def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d:
"""
Pass two points to get the vector from them in the form (x, y, z).
>>> create_vector((0, 0, 0), (1, 1, 1))
(1, 1, 1)
>>> create_vector((45, 70, 24), (47, 32, 1))
(2, -38, -23)
>>> create_vector((-14, -1, -8), (-7, 6, 4))
(7, 7, 12)
"""
x = end_point2[0] - end_point1[0]
y = end_point2[1] - end_point1[1]
z = end_point2[2] - end_point1[2]
return (x, y, z)
def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d:
"""
Get the cross of the two vectors AB and AC.
I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process.
Read More:
https://en.wikipedia.org/wiki/Cross_product
https://en.wikipedia.org/wiki/Determinant
>>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2))
(-55, 22, 11)
>>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1))
(0, 0, 0)
>>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12))
(-36, -48, 27)
>>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33))
(-123.2594, 277.15110000000004, 129.11260000000001)
"""
x = ab[1] * ac[2] - ab[2] * ac[1] # *i
y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j
z = ab[0] * ac[1] - ab[1] * ac[0] # *k
return (x, y, z)
def is_zero_vector(vector: Vector3d, accuracy: int) -> bool:
"""
    Check if the vector is equal to (0, 0, 0) or not.
    Since the algorithm is very accurate, we will almost never get an exact zero vector,
    so we need to round the vector components,
because we want a result that is either True or False.
In other applications, we can return a float that represents the collinearity ratio.
>>> is_zero_vector((0, 0, 0), accuracy=10)
True
>>> is_zero_vector((15, 74, 32), accuracy=10)
False
>>> is_zero_vector((-15, -74, -32), accuracy=10)
False
"""
return tuple(round(x, accuracy) for x in vector) == (0, 0, 0)
def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool:
"""
Check if three points are collinear or not.
    1- Create two vectors AB and AC.
    2- Get the cross vector of the two vectors.
    3- Calculate the length of the cross vector.
4- If the length is zero then the points are collinear, else they are not.
The use of the accuracy parameter is explained in is_zero_vector docstring.
>>> are_collinear((4.802293498137402, 3.536233125455244, 0),
... (-2.186788107953106, -9.24561398001649, 7.141509524846482),
... (1.530169574640268, -2.447927606600034, 3.343487096469054))
True
>>> are_collinear((-6, -2, 6),
... (6.200213806439997, -4.930157614926678, -4.482371908289856),
... (-4.085171149525941, -2.459889509029438, 4.354787180795383))
True
>>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666),
... (-3.682816335934376, 5.753788986533145, 9.490993909044244),
... (1.962903518985307, 3.741415730125627, 7))
False
>>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993),
... (-3.546599383667157, -4.630005261513976, 3.208784032924246),
... (-2.564606140206386, 3.937845170672183, 7))
False
"""
ab = create_vector(a, b)
ac = create_vector(a, c)
return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def search_in_a_sorted_matrix(
mat: list[list[int]], m: int, n: int, key: int | float
) -> None:
"""
>>> search_in_a_sorted_matrix(
... [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 5)
Key 5 found at row- 1 column- 2
>>> search_in_a_sorted_matrix(
... [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 21)
Key 21 not found
>>> search_in_a_sorted_matrix(
... [[2.1, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 2.1)
Key 2.1 found at row- 1 column- 1
>>> search_in_a_sorted_matrix(
... [[2.1, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 2.2)
Key 2.2 not found
"""
i, j = m - 1, 0
while i >= 0 and j < n:
if key == mat[i][j]:
print(f"Key {key} found at row- {i + 1} column- {j + 1}")
return
if key < mat[i][j]:
i -= 1
else:
j += 1
print(f"Key {key} not found")
def main() -> None:
mat = [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]]
x = int(input("Enter the element to be searched:"))
print(mat)
search_in_a_sorted_matrix(mat, len(mat), len(mat[0]), x)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| from __future__ import annotations
def search_in_a_sorted_matrix(
mat: list[list[int]], m: int, n: int, key: int | float
) -> None:
"""
>>> search_in_a_sorted_matrix(
... [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 5)
Key 5 found at row- 1 column- 2
>>> search_in_a_sorted_matrix(
... [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 21)
Key 21 not found
>>> search_in_a_sorted_matrix(
... [[2.1, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 2.1)
Key 2.1 found at row- 1 column- 1
>>> search_in_a_sorted_matrix(
... [[2.1, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 2.2)
Key 2.2 not found
"""
i, j = m - 1, 0
while i >= 0 and j < n:
if key == mat[i][j]:
print(f"Key {key} found at row- {i + 1} column- {j + 1}")
return
if key < mat[i][j]:
i -= 1
else:
j += 1
print(f"Key {key} not found")
def main() -> None:
mat = [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]]
x = int(input("Enter the element to be searched:"))
print(mat)
search_in_a_sorted_matrix(mat, len(mat), len(mat[0]), x)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
        for vertex in vertices:
            if vertex not in visited:
                sort = topological_sort(vertex, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
        for vertex in vertices:
            if vertex not in visited:
                sort = topological_sort(vertex, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| INF = float("inf")
class Dinic:
def __init__(self, n):
self.lvl = [0] * n
self.ptr = [0] * n
self.q = [0] * n
self.adj = [[] for _ in range(n)]
"""
    Here we will add our edges, each with the following parameters:
vertex closest to source, vertex closest to sink and flow capacity
through that edge ...
"""
def add_edge(self, a, b, c, rcap=0):
self.adj[a].append([b, len(self.adj[b]), c, 0])
self.adj[b].append([a, len(self.adj[a]) - 1, rcap, 0])
# This is a sample depth first search to be used at max_flow
def depth_first_search(self, vertex, sink, flow):
if vertex == sink or not flow:
return flow
for i in range(self.ptr[vertex], len(self.adj[vertex])):
e = self.adj[vertex][i]
if self.lvl[e[0]] == self.lvl[vertex] + 1:
p = self.depth_first_search(e[0], sink, min(flow, e[2] - e[3]))
if p:
self.adj[vertex][i][3] += p
self.adj[e[0]][e[1]][3] -= p
return p
self.ptr[vertex] = self.ptr[vertex] + 1
return 0
# Here we calculate the flow that reaches the sink
def max_flow(self, source, sink):
flow, self.q[0] = 0, source
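        # Capacity scaling: iteration l only considers edges whose residual
        # capacity is at least 2 ** (30 - l), so larger augmenting paths are
        # found first and refined as l grows.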
for l in range(31): # noqa: E741 l = 30 maybe faster for random data
while True:
self.lvl, self.ptr = [0] * len(self.q), [0] * len(self.q)
qi, qe, self.lvl[source] = 0, 1, 1
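                # Breadth-first search from the source assigns levels and
                # builds the layered (level) graph for this blocking-flow phase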
while qi < qe and not self.lvl[sink]:
v = self.q[qi]
qi += 1
for e in self.adj[v]:
if not self.lvl[e[0]] and (e[2] - e[3]) >> (30 - l):
self.q[qe] = e[0]
qe += 1
self.lvl[e[0]] = self.lvl[v] + 1
p = self.depth_first_search(source, sink, INF)
while p:
flow += p
p = self.depth_first_search(source, sink, INF)
if not self.lvl[sink]:
break
return flow
# Example to use
"""
This will be a bipartite graph, with 4 vertices near the source
and 4 vertices near the sink
"""
# Here we make a graph with 10 vertices (source and sink included)
graph = Dinic(10)
source = 0
sink = 9
"""
Now we add edges from the source to the vertices next to it, each with capacity 1
(source -> source vertices)
"""
for vertex in range(1, 5):
graph.add_edge(source, vertex, 1)
"""
We will do the same thing for the vertices near the sink, but from vertex to sink
(sink vertices -> sink)
"""
for vertex in range(5, 9):
graph.add_edge(vertex, sink, 1)
"""
Finally we add edges from the vertices near the source to the vertices near the sink.
(source vertices -> sink vertices)
"""
for vertex in range(1, 5):
graph.add_edge(vertex, vertex + 4, 1)
# Now we can compute the maximum flow (source -> sink)
print(graph.max_flow(source, sink))
| INF = float("inf")
class Dinic:
def __init__(self, n):
self.lvl = [0] * n
self.ptr = [0] * n
self.q = [0] * n
self.adj = [[] for _ in range(n)]
"""
    Here we will add our edges, each with the following parameters:
vertex closest to source, vertex closest to sink and flow capacity
through that edge ...
"""
def add_edge(self, a, b, c, rcap=0):
self.adj[a].append([b, len(self.adj[b]), c, 0])
self.adj[b].append([a, len(self.adj[a]) - 1, rcap, 0])
# This is a sample depth first search to be used at max_flow
def depth_first_search(self, vertex, sink, flow):
if vertex == sink or not flow:
return flow
for i in range(self.ptr[vertex], len(self.adj[vertex])):
e = self.adj[vertex][i]
if self.lvl[e[0]] == self.lvl[vertex] + 1:
p = self.depth_first_search(e[0], sink, min(flow, e[2] - e[3]))
if p:
self.adj[vertex][i][3] += p
self.adj[e[0]][e[1]][3] -= p
return p
self.ptr[vertex] = self.ptr[vertex] + 1
return 0
# Here we calculate the flow that reaches the sink
def max_flow(self, source, sink):
flow, self.q[0] = 0, source
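        # Capacity scaling: iteration l only considers edges whose residual
        # capacity is at least 2 ** (30 - l), so larger augmenting paths are
        # found first and refined as l grows.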
for l in range(31): # noqa: E741 l = 30 maybe faster for random data
while True:
self.lvl, self.ptr = [0] * len(self.q), [0] * len(self.q)
qi, qe, self.lvl[source] = 0, 1, 1
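                # Breadth-first search from the source assigns levels and
                # builds the layered (level) graph for this blocking-flow phase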
while qi < qe and not self.lvl[sink]:
v = self.q[qi]
qi += 1
for e in self.adj[v]:
if not self.lvl[e[0]] and (e[2] - e[3]) >> (30 - l):
self.q[qe] = e[0]
qe += 1
self.lvl[e[0]] = self.lvl[v] + 1
p = self.depth_first_search(source, sink, INF)
while p:
flow += p
p = self.depth_first_search(source, sink, INF)
if not self.lvl[sink]:
break
return flow
# Example to use
"""
This will be a bipartite graph, with 4 vertices near the source
and 4 vertices near the sink
"""
# Here we make a graph with 10 vertices (source and sink included)
graph = Dinic(10)
source = 0
sink = 9
"""
Now we add edges from the source to the vertices next to it, each with capacity 1
(source -> source vertices)
"""
for vertex in range(1, 5):
graph.add_edge(source, vertex, 1)
"""
We will do the same thing for the vertices near the sink, but from vertex to sink
(sink vertices -> sink)
"""
for vertex in range(5, 9):
graph.add_edge(vertex, sink, 1)
"""
Finally we add edges from the vertices near the source to the vertices near the sink.
(source vertices -> sink vertices)
"""
for vertex in range(1, 5):
graph.add_edge(vertex, vertex + 4, 1)
# Now we can compute the maximum flow (source -> sink)
print(graph.max_flow(source, sink))
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def permute(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> from itertools import permutations
>>> numbers= [1,2,3]
>>> all(list(nums) in permute(numbers) for nums in permutations(numbers))
True
"""
result = []
if len(nums) == 1:
return [nums.copy()]
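    # Fix each element in turn: pop it from the front, permute the remaining
    # elements recursively, append it to each sub-permutation, then push it
    # back so the next iteration brings a different element to the front.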
for _ in range(len(nums)):
n = nums.pop(0)
permutations = permute(nums)
for perm in permutations:
perm.append(n)
result.extend(permutations)
nums.append(n)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| def permute(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> from itertools import permutations
>>> numbers= [1,2,3]
>>> all(list(nums) in permute(numbers) for nums in permutations(numbers))
True
"""
result = []
if len(nums) == 1:
return [nums.copy()]
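    # Fix each element in turn: pop it from the front, permute the remaining
    # elements recursively, append it to each sub-permutation, then push it
    # back so the next iteration brings a different element to the front.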
for _ in range(len(nums)):
n = nums.pop(0)
permutations = permute(nums)
for perm in permutations:
perm.append(n)
result.extend(permutations)
nums.append(n)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Python program to implement Pigeonhole Sorting in python
# Algorithm for the pigeonhole sorting
def pigeonhole_sort(a):
"""
>>> a = [8, 3, 2, 7, 4, 6, 8]
>>> b = sorted(a) # a nondestructive sort
>>> pigeonhole_sort(a) # a destructive sort
>>> a == b
True
"""
# size of range of values in the list (ie, number of pigeonholes we need)
min_val = min(a) # min() finds the minimum value
max_val = max(a) # max() finds the maximum value
size = max_val - min_val + 1 # size is difference of max and min values plus one
# list of pigeonholes of size equal to the variable size
holes = [0] * size
# Populate the pigeonholes.
for x in a:
assert isinstance(x, int), "integers only please"
holes[x - min_val] += 1
# Putting the elements back into the array in an order.
i = 0
for count in range(size):
while holes[count] > 0:
holes[count] -= 1
a[i] = count + min_val
i += 1
def main():
a = [8, 3, 2, 7, 4, 6, 8]
pigeonhole_sort(a)
print("Sorted order is:", " ".join(a))
if __name__ == "__main__":
main()
| # Python program to implement Pigeonhole Sorting in python
# Algorithm for the pigeonhole sorting
def pigeonhole_sort(a):
"""
>>> a = [8, 3, 2, 7, 4, 6, 8]
>>> b = sorted(a) # a nondestructive sort
>>> pigeonhole_sort(a) # a destructive sort
>>> a == b
True
"""
# size of range of values in the list (ie, number of pigeonholes we need)
min_val = min(a) # min() finds the minimum value
max_val = max(a) # max() finds the maximum value
size = max_val - min_val + 1 # size is difference of max and min values plus one
# list of pigeonholes of size equal to the variable size
holes = [0] * size
# Populate the pigeonholes.
for x in a:
assert isinstance(x, int), "integers only please"
holes[x - min_val] += 1
# Putting the elements back into the array in an order.
i = 0
for count in range(size):
while holes[count] > 0:
holes[count] -= 1
a[i] = count + min_val
i += 1
def main():
a = [8, 3, 2, 7, 4, 6, 8]
pigeonhole_sort(a)
print("Sorted order is:", " ".join(a))
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Algorithms to determine if a string is palindrome
test_data = {
"MALAYALAM": True,
"String": False,
"rotor": True,
"level": True,
"A": True,
"BB": True,
"ABC": False,
"amanaplanacanalpanama": True, # "a man a plan a canal panama"
}
# Ensure our test data is valid
assert all((key == key[::-1]) is value for key, value in test_data.items())
def is_palindrome(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome(key) is value for key, value in test_data.items())
True
"""
start_i = 0
end_i = len(s) - 1
while start_i < end_i:
if s[start_i] == s[end_i]:
start_i += 1
end_i -= 1
else:
return False
return True
def is_palindrome_recursive(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome_recursive(key) is value for key, value in test_data.items())
True
"""
if len(s) <= 1:
return True
if s[0] == s[len(s) - 1]:
return is_palindrome_recursive(s[1:-1])
else:
return False
def is_palindrome_slice(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome_slice(key) is value for key, value in test_data.items())
True
"""
return s == s[::-1]
if __name__ == "__main__":
for key, value in test_data.items():
assert is_palindrome(key) is is_palindrome_recursive(key)
assert is_palindrome(key) is is_palindrome_slice(key)
print(f"{key:21} {value}")
print("a man a plan a canal panama")
| # Algorithms to determine if a string is palindrome
test_data = {
"MALAYALAM": True,
"String": False,
"rotor": True,
"level": True,
"A": True,
"BB": True,
"ABC": False,
"amanaplanacanalpanama": True, # "a man a plan a canal panama"
}
# Ensure our test data is valid
assert all((key == key[::-1]) is value for key, value in test_data.items())
def is_palindrome(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome(key) is value for key, value in test_data.items())
True
"""
start_i = 0
end_i = len(s) - 1
while start_i < end_i:
if s[start_i] == s[end_i]:
start_i += 1
end_i -= 1
else:
return False
return True
def is_palindrome_recursive(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome_recursive(key) is value for key, value in test_data.items())
True
"""
if len(s) <= 1:
return True
if s[0] == s[len(s) - 1]:
return is_palindrome_recursive(s[1:-1])
else:
return False
def is_palindrome_slice(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
>>> all(is_palindrome_slice(key) is value for key, value in test_data.items())
True
"""
return s == s[::-1]
if __name__ == "__main__":
for key, value in test_data.items():
assert is_palindrome(key) is is_palindrome_recursive(key)
assert is_palindrome(key) is is_palindrome_slice(key)
print(f"{key:21} {value}")
print("a man a plan a canal panama")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Reference: https://en.wikipedia.org/wiki/Gaussian_function
"""
from numpy import exp, pi, sqrt
def gaussian(x, mu: float = 0.0, sigma: float = 1.0) -> float:
"""
>>> gaussian(1)
0.24197072451914337
>>> gaussian(24)
3.342714441794458e-126
>>> gaussian(1, 4, 2)
0.06475879783294587
>>> gaussian(1, 5, 3)
0.05467002489199788
Supports NumPy Arrays
Use numpy.meshgrid with this to generate gaussian blur on images.
>>> import numpy as np
>>> x = np.arange(15)
>>> gaussian(x)
array([3.98942280e-01, 2.41970725e-01, 5.39909665e-02, 4.43184841e-03,
1.33830226e-04, 1.48671951e-06, 6.07588285e-09, 9.13472041e-12,
5.05227108e-15, 1.02797736e-18, 7.69459863e-23, 2.11881925e-27,
2.14638374e-32, 7.99882776e-38, 1.09660656e-43])
>>> gaussian(15)
5.530709549844416e-50
>>> gaussian([1,2, 'string'])
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'list' and 'float'
>>> gaussian('hello world')
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'str' and 'float'
>>> gaussian(10**234) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
OverflowError: (34, 'Result too large')
>>> gaussian(10**-326)
0.3989422804014327
>>> gaussian(2523, mu=234234, sigma=3425)
0.0
"""
return 1 / sqrt(2 * pi * sigma**2) * exp(-((x - mu) ** 2) / (2 * sigma**2))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Reference: https://en.wikipedia.org/wiki/Gaussian_function
"""
from numpy import exp, pi, sqrt
def gaussian(x, mu: float = 0.0, sigma: float = 1.0) -> float:
"""
>>> gaussian(1)
0.24197072451914337
>>> gaussian(24)
3.342714441794458e-126
>>> gaussian(1, 4, 2)
0.06475879783294587
>>> gaussian(1, 5, 3)
0.05467002489199788
Supports NumPy Arrays
Use numpy.meshgrid with this to generate gaussian blur on images.
>>> import numpy as np
>>> x = np.arange(15)
>>> gaussian(x)
array([3.98942280e-01, 2.41970725e-01, 5.39909665e-02, 4.43184841e-03,
1.33830226e-04, 1.48671951e-06, 6.07588285e-09, 9.13472041e-12,
5.05227108e-15, 1.02797736e-18, 7.69459863e-23, 2.11881925e-27,
2.14638374e-32, 7.99882776e-38, 1.09660656e-43])
>>> gaussian(15)
5.530709549844416e-50
>>> gaussian([1,2, 'string'])
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'list' and 'float'
>>> gaussian('hello world')
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'str' and 'float'
>>> gaussian(10**234) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
OverflowError: (34, 'Result too large')
>>> gaussian(10**-326)
0.3989422804014327
>>> gaussian(2523, mu=234234, sigma=3425)
0.0
"""
return 1 / sqrt(2 * pi * sigma**2) * exp(-((x - mu) ** 2) / (2 * sigma**2))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
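A minimal usage sketch for the gaussian() helper in the record above. It re-declares the same formula locally so the snippet runs on its own, and builds the small 2-D blur kernel that the docstring's numpy.meshgrid hint alludes to; the kernel size and sigma are illustrative choices, not part of the original file.

```python
# Sketch only: rebuilds gaussian() locally and uses it the way the
# docstring suggests (numpy.meshgrid -> separable 2-D blur kernel).
import numpy as np
from numpy import exp, pi, sqrt


def gaussian(x, mu: float = 0.0, sigma: float = 1.0):
    return 1 / sqrt(2 * pi * sigma**2) * exp(-((x - mu) ** 2) / (2 * sigma**2))


ax = np.arange(-2, 3)                  # offsets [-2, -1, 0, 1, 2]
xx, yy = np.meshgrid(ax, ax)           # 5x5 grids of x and y offsets
kernel = gaussian(xx) * gaussian(yy)   # separable 2-D Gaussian
kernel /= kernel.sum()                 # normalise the weights to sum to 1
print(kernel.round(4))
```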
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from collections.abc import Callable
from typing import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LRU Cache
>>> DoubleLinkedListNode(1,1)
Node: key: 1, val: 1, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return (
f"Node: key: {self.key}, val: {self.val}, "
f"has next: {bool(self.next)}, has prev: {bool(self.prev)}"
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LRU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: None, val: None, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LRUCache(Generic[T, U]):
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.put(1, 1)
>>> cache.put(2, 2)
>>> cache.get(1)
1
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 2, has next: True, has prev: True,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
2: Node: key: 2, val: 2, has next: True, has prev: True}
>>> cache.put(3, 3)
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: 3, val: 3, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
3: Node: key: 3, val: 3, has next: True, has prev: True}
>>> cache.get(2) is None
True
>>> cache.put(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LRUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.put(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
# Note: pythonic interface would throw KeyError rather than return None
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def put(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node (oldest) when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert (
self.list.remove(first_node) is not None
                )  # node guaranteed to be in list
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
# bump node to the end of the list, update value
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls, size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LRU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LRUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].put(args[0], result)
return result
def cache_info() -> LRUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info) # noqa: B010
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from collections.abc import Callable
from typing import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LRU Cache
>>> DoubleLinkedListNode(1,1)
Node: key: 1, val: 1, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return (
f"Node: key: {self.key}, val: {self.val}, "
f"has next: {bool(self.next)}, has prev: {bool(self.prev)}"
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LRU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: None, val: None, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LRUCache(Generic[T, U]):
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.put(1, 1)
>>> cache.put(2, 2)
>>> cache.get(1)
1
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 2, has next: True, has prev: True,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
2: Node: key: 2, val: 2, has next: True, has prev: True}
>>> cache.put(3, 3)
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: 3, val: 3, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
3: Node: key: 3, val: 3, has next: True, has prev: True}
>>> cache.get(2) is None
True
>>> cache.put(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LRUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.put(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
# Note: pythonic interface would throw KeyError rather than return None
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def put(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node (oldest) when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert (
self.list.remove(first_node) is not None
                )  # node guaranteed to be in list
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
# bump node to the end of the list, update value
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls, size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LRU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LRUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].put(args[0], result)
return result
def cache_info() -> LRUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info) # noqa: B010
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
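A short usage sketch for the LRUCache class in the record above, assuming the class definition is in scope (for instance, the snippet is appended to the same file). It shows the eviction order the doctests exercise: reading a key refreshes it, so the untouched key is the one that gets dropped.

```python
# Assumes LRUCache from the file above is in scope.
cache: LRUCache[str, int] = LRUCache(2)

cache.put("a", 1)
cache.put("b", 2)
print(cache.get("a"))   # 1 -> "a" becomes the most recently used entry
cache.put("c", 3)       # over capacity: evicts "b", the least recently used
print(cache.get("b"))   # None -> "b" was evicted
print("a" in cache, "c" in cache)  # True True
print(cache)            # CacheInfo(hits=1, misses=1, capacity=2, current size=2)
```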
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Finding the shortest path in a 0-1 graph in O(E + V), which is faster than Dijkstra.
A 0-1 graph is a weighted graph whose edge weights are all either 0 or 1.
Link: https://codeforces.com/blog/entry/22276
"""
from __future__ import annotations
from collections import deque
from collections.abc import Iterator
from dataclasses import dataclass
@dataclass
class Edge:
"""Weighted directed graph edge."""
destination_vertex: int
weight: int
class AdjacencyList:
"""Graph adjacency list."""
def __init__(self, size: int):
self._graph: list[list[Edge]] = [[] for _ in range(size)]
self._size = size
def __getitem__(self, vertex: int) -> Iterator[Edge]:
"""Get all the vertices adjacent to the given one."""
return iter(self._graph[vertex])
@property
def size(self):
return self._size
def add_edge(self, from_vertex: int, to_vertex: int, weight: int):
"""
>>> g = AdjacencyList(2)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(1, 0, 1)
>>> list(g[0])
[Edge(destination_vertex=1, weight=0)]
>>> list(g[1])
[Edge(destination_vertex=0, weight=1)]
>>> g.add_edge(0, 1, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.add_edge(0, 2, 1)
Traceback (most recent call last):
...
ValueError: Vertex indexes must be in [0; size).
"""
if weight not in (0, 1):
raise ValueError("Edge weight must be either 0 or 1.")
if to_vertex < 0 or to_vertex >= self.size:
raise ValueError("Vertex indexes must be in [0; size).")
self._graph[from_vertex].append(Edge(to_vertex, weight))
def get_shortest_path(self, start_vertex: int, finish_vertex: int) -> int | None:
"""
Return the shortest distance from start_vertex to finish_vertex in 0-1-graph.
1 1 1
0--------->3 6--------7>------->8
| ^ ^ ^ |1
| | | |0 v
0| |0 1| 9-------->10
| | | ^ 1
v | | |0
1--------->2<-------4------->5
0 1 1
>>> g = AdjacencyList(11)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(0, 3, 1)
>>> g.add_edge(1, 2, 0)
>>> g.add_edge(2, 3, 0)
>>> g.add_edge(4, 2, 1)
>>> g.add_edge(4, 5, 1)
>>> g.add_edge(4, 6, 1)
>>> g.add_edge(5, 9, 0)
>>> g.add_edge(6, 7, 1)
>>> g.add_edge(7, 8, 1)
>>> g.add_edge(8, 10, 1)
>>> g.add_edge(9, 7, 0)
>>> g.add_edge(9, 10, 1)
>>> g.add_edge(1, 2, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.get_shortest_path(0, 3)
0
>>> g.get_shortest_path(0, 4)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
>>> g.get_shortest_path(4, 10)
2
>>> g.get_shortest_path(4, 8)
2
>>> g.get_shortest_path(0, 1)
0
>>> g.get_shortest_path(1, 0)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
"""
queue = deque([start_vertex])
distances: list[int | None] = [None] * self.size
distances[start_vertex] = 0
while queue:
current_vertex = queue.popleft()
current_distance = distances[current_vertex]
if current_distance is None:
continue
for edge in self[current_vertex]:
new_distance = current_distance + edge.weight
dest_vertex_distance = distances[edge.destination_vertex]
if (
isinstance(dest_vertex_distance, int)
and new_distance >= dest_vertex_distance
):
continue
distances[edge.destination_vertex] = new_distance
if edge.weight == 0:
queue.appendleft(edge.destination_vertex)
else:
queue.append(edge.destination_vertex)
if distances[finish_vertex] is None:
raise ValueError("No path from start_vertex to finish_vertex.")
return distances[finish_vertex]
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Finding the shortest path in a 0-1 graph in O(E + V), which is faster than Dijkstra.
A 0-1 graph is a weighted graph whose edge weights are all either 0 or 1.
Link: https://codeforces.com/blog/entry/22276
"""
from __future__ import annotations
from collections import deque
from collections.abc import Iterator
from dataclasses import dataclass
@dataclass
class Edge:
"""Weighted directed graph edge."""
destination_vertex: int
weight: int
class AdjacencyList:
"""Graph adjacency list."""
def __init__(self, size: int):
self._graph: list[list[Edge]] = [[] for _ in range(size)]
self._size = size
def __getitem__(self, vertex: int) -> Iterator[Edge]:
"""Get all the vertices adjacent to the given one."""
return iter(self._graph[vertex])
@property
def size(self):
return self._size
def add_edge(self, from_vertex: int, to_vertex: int, weight: int):
"""
>>> g = AdjacencyList(2)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(1, 0, 1)
>>> list(g[0])
[Edge(destination_vertex=1, weight=0)]
>>> list(g[1])
[Edge(destination_vertex=0, weight=1)]
>>> g.add_edge(0, 1, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.add_edge(0, 2, 1)
Traceback (most recent call last):
...
ValueError: Vertex indexes must be in [0; size).
"""
if weight not in (0, 1):
raise ValueError("Edge weight must be either 0 or 1.")
if to_vertex < 0 or to_vertex >= self.size:
raise ValueError("Vertex indexes must be in [0; size).")
self._graph[from_vertex].append(Edge(to_vertex, weight))
def get_shortest_path(self, start_vertex: int, finish_vertex: int) -> int | None:
"""
Return the shortest distance from start_vertex to finish_vertex in 0-1-graph.
1 1 1
0--------->3 6--------7>------->8
| ^ ^ ^ |1
| | | |0 v
0| |0 1| 9-------->10
| | | ^ 1
v | | |0
1--------->2<-------4------->5
0 1 1
>>> g = AdjacencyList(11)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(0, 3, 1)
>>> g.add_edge(1, 2, 0)
>>> g.add_edge(2, 3, 0)
>>> g.add_edge(4, 2, 1)
>>> g.add_edge(4, 5, 1)
>>> g.add_edge(4, 6, 1)
>>> g.add_edge(5, 9, 0)
>>> g.add_edge(6, 7, 1)
>>> g.add_edge(7, 8, 1)
>>> g.add_edge(8, 10, 1)
>>> g.add_edge(9, 7, 0)
>>> g.add_edge(9, 10, 1)
>>> g.add_edge(1, 2, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.get_shortest_path(0, 3)
0
>>> g.get_shortest_path(0, 4)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
>>> g.get_shortest_path(4, 10)
2
>>> g.get_shortest_path(4, 8)
2
>>> g.get_shortest_path(0, 1)
0
>>> g.get_shortest_path(1, 0)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
"""
queue = deque([start_vertex])
distances: list[int | None] = [None] * self.size
distances[start_vertex] = 0
while queue:
current_vertex = queue.popleft()
current_distance = distances[current_vertex]
if current_distance is None:
continue
for edge in self[current_vertex]:
new_distance = current_distance + edge.weight
dest_vertex_distance = distances[edge.destination_vertex]
if (
isinstance(dest_vertex_distance, int)
and new_distance >= dest_vertex_distance
):
continue
distances[edge.destination_vertex] = new_distance
if edge.weight == 0:
queue.appendleft(edge.destination_vertex)
else:
queue.append(edge.destination_vertex)
if distances[finish_vertex] is None:
raise ValueError("No path from start_vertex to finish_vertex.")
return distances[finish_vertex]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
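A small usage sketch for the AdjacencyList 0-1 BFS in the record above, assuming the class is in scope (e.g. the snippet is appended to the same file). The example graph is illustrative: free (weight-0) edges let the deque-based search reach the target at total cost 1.

```python
# Assumes AdjacencyList from the file above is in scope.
g = AdjacencyList(4)
g.add_edge(0, 1, 1)   # 0 -> 1 costs 1
g.add_edge(0, 2, 0)   # 0 -> 2 is free
g.add_edge(2, 1, 0)   # 2 -> 1 is free
g.add_edge(1, 3, 1)   # 1 -> 3 costs 1

# Cheapest route is 0 -> 2 -> 1 -> 3, which uses a single weight-1 edge.
print(g.get_shortest_path(0, 3))  # 1
```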
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Bi-directional Dijkstra's algorithm.
A bi-directional approach is an efficient and
less time-consuming optimization of Dijkstra's
search algorithm.
Reference: shorturl.at/exHM7
"""
# Author: Swayam Singh (https://github.com/practice404)
from queue import PriorityQueue
from typing import Any
import numpy as np
def pass_and_relaxation(
graph: dict,
v: str,
visited_forward: set,
visited_backward: set,
cst_fwd: dict,
cst_bwd: dict,
queue: PriorityQueue,
parent: dict,
shortest_distance: float | int,
) -> float | int:
for nxt, d in graph[v]:
if nxt in visited_forward:
continue
old_cost_f = cst_fwd.get(nxt, np.inf)
new_cost_f = cst_fwd[v] + d
if new_cost_f < old_cost_f:
queue.put((new_cost_f, nxt))
cst_fwd[nxt] = new_cost_f
parent[nxt] = v
if nxt in visited_backward:
if cst_fwd[v] + d + cst_bwd[nxt] < shortest_distance:
shortest_distance = cst_fwd[v] + d + cst_bwd[nxt]
return shortest_distance
def bidirectional_dij(
source: str, destination: str, graph_forward: dict, graph_backward: dict
) -> int:
"""
Bi-directional Dijkstra's algorithm.
Returns:
shortest_path_distance (int): length of the shortest path.
Warnings:
If the destination is not reachable, function returns -1
>>> bidirectional_dij("E", "F", graph_fwd, graph_bwd)
3
"""
shortest_path_distance = -1
visited_forward = set()
visited_backward = set()
cst_fwd = {source: 0}
cst_bwd = {destination: 0}
parent_forward = {source: None}
parent_backward = {destination: None}
queue_forward: PriorityQueue[Any] = PriorityQueue()
queue_backward: PriorityQueue[Any] = PriorityQueue()
shortest_distance = np.inf
queue_forward.put((0, source))
queue_backward.put((0, destination))
if source == destination:
return 0
while not queue_forward.empty() and not queue_backward.empty():
_, v_fwd = queue_forward.get()
visited_forward.add(v_fwd)
_, v_bwd = queue_backward.get()
visited_backward.add(v_bwd)
shortest_distance = pass_and_relaxation(
graph_forward,
v_fwd,
visited_forward,
visited_backward,
cst_fwd,
cst_bwd,
queue_forward,
parent_forward,
shortest_distance,
)
shortest_distance = pass_and_relaxation(
graph_backward,
v_bwd,
visited_backward,
visited_forward,
cst_bwd,
cst_fwd,
queue_backward,
parent_backward,
shortest_distance,
)
if cst_fwd[v_fwd] + cst_bwd[v_bwd] >= shortest_distance:
break
if shortest_distance != np.inf:
shortest_path_distance = shortest_distance
return shortest_path_distance
graph_fwd = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
graph_bwd = {
"B": [["E", 1]],
"C": [["B", 1]],
"D": [["C", 1]],
"F": [["D", 1], ["G", 1]],
"E": [[None, np.inf]],
"G": [["E", 2]],
}
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Bi-directional Dijkstra's algorithm.
A bi-directional approach is an efficient and
less time-consuming optimization of Dijkstra's
search algorithm.
Reference: shorturl.at/exHM7
"""
# Author: Swayam Singh (https://github.com/practice404)
from queue import PriorityQueue
from typing import Any
import numpy as np
def pass_and_relaxation(
graph: dict,
v: str,
visited_forward: set,
visited_backward: set,
cst_fwd: dict,
cst_bwd: dict,
queue: PriorityQueue,
parent: dict,
shortest_distance: float | int,
) -> float | int:
for nxt, d in graph[v]:
if nxt in visited_forward:
continue
old_cost_f = cst_fwd.get(nxt, np.inf)
new_cost_f = cst_fwd[v] + d
if new_cost_f < old_cost_f:
queue.put((new_cost_f, nxt))
cst_fwd[nxt] = new_cost_f
parent[nxt] = v
if nxt in visited_backward:
if cst_fwd[v] + d + cst_bwd[nxt] < shortest_distance:
shortest_distance = cst_fwd[v] + d + cst_bwd[nxt]
return shortest_distance
def bidirectional_dij(
source: str, destination: str, graph_forward: dict, graph_backward: dict
) -> int:
"""
Bi-directional Dijkstra's algorithm.
Returns:
shortest_path_distance (int): length of the shortest path.
Warnings:
If the destination is not reachable, function returns -1
>>> bidirectional_dij("E", "F", graph_fwd, graph_bwd)
3
"""
shortest_path_distance = -1
visited_forward = set()
visited_backward = set()
cst_fwd = {source: 0}
cst_bwd = {destination: 0}
parent_forward = {source: None}
parent_backward = {destination: None}
queue_forward: PriorityQueue[Any] = PriorityQueue()
queue_backward: PriorityQueue[Any] = PriorityQueue()
shortest_distance = np.inf
queue_forward.put((0, source))
queue_backward.put((0, destination))
if source == destination:
return 0
while not queue_forward.empty() and not queue_backward.empty():
_, v_fwd = queue_forward.get()
visited_forward.add(v_fwd)
_, v_bwd = queue_backward.get()
visited_backward.add(v_bwd)
shortest_distance = pass_and_relaxation(
graph_forward,
v_fwd,
visited_forward,
visited_backward,
cst_fwd,
cst_bwd,
queue_forward,
parent_forward,
shortest_distance,
)
shortest_distance = pass_and_relaxation(
graph_backward,
v_bwd,
visited_backward,
visited_forward,
cst_bwd,
cst_fwd,
queue_backward,
parent_backward,
shortest_distance,
)
if cst_fwd[v_fwd] + cst_bwd[v_bwd] >= shortest_distance:
break
if shortest_distance != np.inf:
shortest_path_distance = shortest_distance
return shortest_path_distance
graph_fwd = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
graph_bwd = {
"B": [["E", 1]],
"C": [["B", 1]],
"D": [["C", 1]],
"F": [["D", 1], ["G", 1]],
"E": [[None, np.inf]],
"G": [["E", 2]],
}
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
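A usage sketch for bidirectional_dij in the record above, assuming the function is in scope. The detail worth noting is that the backward graph must contain every forward edge reversed; the loop below derives it mechanically. The example graph is illustrative, not taken from the original file.

```python
# Assumes bidirectional_dij from the file above is in scope.
graph_forward = {
    "A": [["B", 1], ["C", 4]],
    "B": [["C", 1]],
    "C": [],
}

# The backward graph is simply every forward edge reversed.
graph_backward: dict = {vertex: [] for vertex in graph_forward}
for src, edges in graph_forward.items():
    for dst, weight in edges:
        graph_backward[dst].append([src, weight])

print(bidirectional_dij("A", "C", graph_forward, graph_backward))  # 2 (A -> B -> C)
```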
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
# Divide and Conquer algorithm
def find_min(nums: list[int | float], left: int, right: int) -> int | float:
"""
find min value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_min(nums, 0, len(nums) - 1) == min(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min(nums, 0, len(nums) - 1) == min(nums)
True
>>> find_min([], 0, 0)
Traceback (most recent call last):
...
ValueError: find_min() arg is an empty sequence
>>> find_min(nums, 0, len(nums)) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> find_min(nums, -len(nums), -1) == min(nums)
True
>>> find_min(nums, -len(nums) - 1, -1) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
"""
if len(nums) == 0:
raise ValueError("find_min() arg is an empty sequence")
if (
left >= len(nums)
or left < -len(nums)
or right >= len(nums)
or right < -len(nums)
):
raise IndexError("list index out of range")
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_min = find_min(nums, left, mid) # find min in range[left, mid]
right_min = find_min(nums, mid + 1, right) # find min in range[mid + 1, right]
return left_min if left_min <= right_min else right_min
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| from __future__ import annotations
# Divide and Conquer algorithm
def find_min(nums: list[int | float], left: int, right: int) -> int | float:
"""
find min value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_min(nums, 0, len(nums) - 1) == min(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min(nums, 0, len(nums) - 1) == min(nums)
True
>>> find_min([], 0, 0)
Traceback (most recent call last):
...
ValueError: find_min() arg is an empty sequence
>>> find_min(nums, 0, len(nums)) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> find_min(nums, -len(nums), -1) == min(nums)
True
>>> find_min(nums, -len(nums) - 1, -1) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
"""
if len(nums) == 0:
raise ValueError("find_min() arg is an empty sequence")
if (
left >= len(nums)
or left < -len(nums)
or right >= len(nums)
or right < -len(nums)
):
raise IndexError("list index out of range")
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_min = find_min(nums, left, mid) # find min in range[left, mid]
right_min = find_min(nums, mid + 1, right) # find min in range[mid + 1, right]
return left_min if left_min <= right_min else right_min
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| -1 |
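A minimal usage sketch for the divide-and-conquer find_min in the record above, assuming the function is in scope. It mirrors the doctests on a concrete list and on a sub-range.

```python
# Assumes find_min from the file above is in scope.
nums = [7, 3, 9, 1, 6]
print(find_min(nums, 0, len(nums) - 1))  # 1  (minimum of the whole list)
print(find_min(nums, 1, 3))              # 1  (minimum of the slice [3, 9, 1])
```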
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
Returns the number of different combinations of k length which can
be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
    >>> combinations(-4, -5)
    Traceback (most recent call last):
        ...
    ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return factorial(n) // (factorial(k) * factorial(n - k))
if __name__ == "__main__":
print(
"The number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}\n",
)
print(
"If a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.",
)
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
Returns the number of different combinations of k length which can
be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
    >>> combinations(-4, -5)
    Traceback (most recent call last):
        ...
    ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return factorial(n) // (factorial(k) * factorial(n - k))
if __name__ == "__main__":
print(
"The number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}\n",
)
print(
"If a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.",
)
| -1 |
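A quick cross-check of the factorial formula C(n, k) = n! / (k! (n - k)!) used in the record above against the standard library, assuming combinations() is in scope (math.comb requires Python 3.8+).

```python
# Assumes combinations() from the file above is in scope.
import math

for n, k in [(10, 5), (6, 3), (20, 5), (52, 5)]:
    assert combinations(n, k) == math.comb(n, k)
    print(f"C({n}, {k}) = {combinations(n, k)}")
```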
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://www.geeksforgeeks.org/newton-forward-backward-interpolation/
from __future__ import annotations
import math
# for calculating u value
def ucal(u: float, p: int) -> float:
"""
>>> ucal(1, 2)
0
>>> ucal(1.1, 2)
0.11000000000000011
>>> ucal(1.2, 2)
0.23999999999999994
"""
temp = u
for i in range(1, p):
temp = temp * (u - i)
return temp
def main() -> None:
    n = int(input("enter the number of values: "))
y: list[list[float]] = []
for _ in range(n):
y.append([])
for i in range(n):
for j in range(n):
y[i].append(j)
y[i][j] = 0
print("enter the values of parameters in a list: ")
x = list(map(int, input().split()))
print("enter the values of corresponding parameters: ")
for i in range(n):
y[i][0] = float(input())
value = int(input("enter the value to interpolate: "))
u = (value - x[0]) / (x[1] - x[0])
# for calculating forward difference table
for i in range(1, n):
for j in range(n - i):
y[j][i] = y[j + 1][i - 1] - y[j][i - 1]
summ = y[0][0]
for i in range(1, n):
summ += (ucal(u, i) * y[0][i]) / math.factorial(i)
print(f"the value at {value} is {summ}")
if __name__ == "__main__":
main()
| # https://www.geeksforgeeks.org/newton-forward-backward-interpolation/
from __future__ import annotations
import math
# for calculating u value
def ucal(u: float, p: int) -> float:
"""
>>> ucal(1, 2)
0
>>> ucal(1.1, 2)
0.11000000000000011
>>> ucal(1.2, 2)
0.23999999999999994
"""
temp = u
for i in range(1, p):
temp = temp * (u - i)
return temp
def main() -> None:
    n = int(input("enter the number of values: "))
y: list[list[float]] = []
for _ in range(n):
y.append([])
for i in range(n):
for j in range(n):
y[i].append(j)
y[i][j] = 0
print("enter the values of parameters in a list: ")
x = list(map(int, input().split()))
print("enter the values of corresponding parameters: ")
for i in range(n):
y[i][0] = float(input())
value = int(input("enter the value to interpolate: "))
u = (value - x[0]) / (x[1] - x[0])
# for calculating forward difference table
for i in range(1, n):
for j in range(n - i):
y[j][i] = y[j + 1][i - 1] - y[j][i - 1]
summ = y[0][0]
for i in range(1, n):
summ += (ucal(u, i) * y[0][i]) / math.factorial(i)
print(f"the value at {value} is {summ}")
if __name__ == "__main__":
main()
| -1 |
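The main() in the record above is interactive (it reads everything from input()), so here is a non-interactive sketch of the same forward-difference scheme, re-implemented as a hypothetical helper rather than taken from the original file.

```python
# Hypothetical, non-interactive re-statement of Newton's forward-difference
# interpolation as computed by the module above.
import math


def newton_forward(x: list, y: list, value: float) -> float:
    n = len(x)
    diff = [[0.0] * n for _ in range(n)]
    for i in range(n):
        diff[i][0] = y[i]              # zeroth differences are the y values
    for j in range(1, n):
        for i in range(n - j):
            diff[i][j] = diff[i + 1][j - 1] - diff[i][j - 1]
    u = (value - x[0]) / (x[1] - x[0])  # assumes equally spaced x values
    total, u_term = diff[0][0], 1.0
    for i in range(1, n):
        u_term *= u - (i - 1)           # u, u(u-1), u(u-1)(u-2), ...
        total += u_term * diff[0][i] / math.factorial(i)
    return total


print(newton_forward([0, 1, 2, 3], [1, 2, 4, 8], 1.5))  # 2.8125
```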
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Sum of all nodes in a binary tree.
Python implementation:
O(n) time complexity - Recurses through :meth:`depth_first_search`
with each element.
O(n) space complexity - At any point in time maximum number of stack
frames that could be in memory is `n`
"""
from __future__ import annotations
from collections.abc import Iterator
class Node:
"""
A Node has a value variable and pointers to Nodes to its left and right.
"""
def __init__(self, value: int) -> None:
self.value = value
self.left: Node | None = None
self.right: Node | None = None
class BinaryTreeNodeSum:
r"""
The below tree looks like this
10
/ \
5 -3
/ / \
12 8 0
>>> tree = Node(10)
>>> sum(BinaryTreeNodeSum(tree))
10
>>> tree.left = Node(5)
>>> sum(BinaryTreeNodeSum(tree))
15
>>> tree.right = Node(-3)
>>> sum(BinaryTreeNodeSum(tree))
12
>>> tree.left.left = Node(12)
>>> sum(BinaryTreeNodeSum(tree))
24
>>> tree.right.left = Node(8)
>>> tree.right.right = Node(0)
>>> sum(BinaryTreeNodeSum(tree))
32
"""
def __init__(self, tree: Node) -> None:
self.tree = tree
def depth_first_search(self, node: Node | None) -> int:
if node is None:
return 0
return node.value + (
self.depth_first_search(node.left) + self.depth_first_search(node.right)
)
def __iter__(self) -> Iterator[int]:
yield self.depth_first_search(self.tree)
if __name__ == "__main__":
import doctest
doctest.testmod()
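# A hedged companion sketch (not part of the original module): the same sum
# computed with an explicit stack instead of recursion, which keeps O(n) time
# while avoiding Python's recursion limit on very deep trees.
def iterative_node_sum(tree: Node | None) -> int:
    total = 0
    stack = [tree] if tree else []
    while stack:
        node = stack.pop()
        total += node.value
        if node.left:
            stack.append(node.left)
        if node.right:
            stack.append(node.right)
    return total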
| """
Sum of all nodes in a binary tree.
Python implementation:
O(n) time complexity - Recurses through :meth:`depth_first_search`
with each element.
O(n) space complexity - At any point in time maximum number of stack
frames that could be in memory is `n`
"""
from __future__ import annotations
from collections.abc import Iterator
class Node:
"""
A Node has a value variable and pointers to Nodes to its left and right.
"""
def __init__(self, value: int) -> None:
self.value = value
self.left: Node | None = None
self.right: Node | None = None
class BinaryTreeNodeSum:
r"""
The below tree looks like this
10
/ \
5 -3
/ / \
12 8 0
>>> tree = Node(10)
>>> sum(BinaryTreeNodeSum(tree))
10
>>> tree.left = Node(5)
>>> sum(BinaryTreeNodeSum(tree))
15
>>> tree.right = Node(-3)
>>> sum(BinaryTreeNodeSum(tree))
12
>>> tree.left.left = Node(12)
>>> sum(BinaryTreeNodeSum(tree))
24
>>> tree.right.left = Node(8)
>>> tree.right.right = Node(0)
>>> sum(BinaryTreeNodeSum(tree))
32
"""
def __init__(self, tree: Node) -> None:
self.tree = tree
def depth_first_search(self, node: Node | None) -> int:
if node is None:
return 0
return node.value + (
self.depth_first_search(node.left) + self.depth_first_search(node.right)
)
def __iter__(self) -> Iterator[int]:
yield self.depth_first_search(self.tree)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Compression
Data compression is everywhere; you need it to store data without taking up too much space.
Either the compression loses some data (then we talk about lossy compression, such as .jpg) or it does not (and then it is lossless compression, such as .png).
Lossless compression is mainly used for archival purposes, as it allows storing data without losing any information about the archived file. On the other hand, lossy compression is used for file transfers where full quality isn't necessarily required (e.g. images on Twitter).
* <https://www.sciencedirect.com/topics/computer-science/compression-algorithm>
* <https://en.wikipedia.org/wiki/Data_compression>
* <https://en.wikipedia.org/wiki/Pigeonhole_principle>
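As a quick illustration of the lossless case, the sketch below uses Python's standard `zlib` module (not one of the algorithms in this directory) to compress a byte string and recover it exactly:

```python
import zlib

data = b"to be or not to be, " * 50
packed = zlib.compress(data)  # lossless: every byte is recoverable
assert zlib.decompress(packed) == data
print(len(data), "->", len(packed))  # repetitive input compresses well
```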
| # Compression
Data compression is everywhere; you need it to store data without taking up too much space.
Either the compression loses some data (then we talk about lossy compression, such as .jpg) or it does not (and then it is lossless compression, such as .png).
Lossless compression is mainly used for archival purposes, as it allows storing data without losing any information about the archived file. On the other hand, lossy compression is used for file transfers where full quality isn't necessarily required (e.g. images on Twitter).
* <https://www.sciencedirect.com/topics/computer-science/compression-algorithm>
* <https://en.wikipedia.org/wiki/Data_compression>
* <https://en.wikipedia.org/wiki/Pigeonhole_principle>
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def binary_search(lst, item, start, end):
if start == end:
return start if lst[start] > item else start + 1
if start > end:
return start
mid = (start + end) // 2
if lst[mid] < item:
return binary_search(lst, item, mid + 1, end)
elif lst[mid] > item:
return binary_search(lst, item, start, mid - 1)
else:
return mid
def insertion_sort(lst):
length = len(lst)
for index in range(1, length):
value = lst[index]
pos = binary_search(lst, value, 0, index - 1)
lst = lst[:pos] + [value] + lst[pos:index] + lst[index + 1 :]
return lst
def merge(left, right):
if not left:
return right
if not right:
return left
if left[0] < right[0]:
return [left[0], *merge(left[1:], right)]
return [right[0], *merge(left, right[1:])]
def tim_sort(lst):
"""
>>> tim_sort("Python")
['P', 'h', 'n', 'o', 't', 'y']
>>> tim_sort((1.1, 1, 0, -1, -1.1))
[-1.1, -1, 0, 1, 1.1]
>>> tim_sort(list(reversed(list(range(7)))))
[0, 1, 2, 3, 4, 5, 6]
>>> tim_sort([3, 2, 1]) == insertion_sort([3, 2, 1])
True
>>> tim_sort([3, 2, 1]) == sorted([3, 2, 1])
True
"""
length = len(lst)
runs, sorted_runs = [], []
new_run = [lst[0]]
sorted_array = []
i = 1
while i < length:
if lst[i] < lst[i - 1]:
runs.append(new_run)
new_run = [lst[i]]
else:
new_run.append(lst[i])
i += 1
runs.append(new_run)
for run in runs:
sorted_runs.append(insertion_sort(run))
for run in sorted_runs:
sorted_array = merge(sorted_array, run)
return sorted_array
def main():
lst = [5, 9, 10, 3, -4, 5, 178, 92, 46, -18, 0, 7]
sorted_lst = tim_sort(lst)
print(sorted_lst)
if __name__ == "__main__":
main()
| def binary_search(lst, item, start, end):
if start == end:
return start if lst[start] > item else start + 1
if start > end:
return start
mid = (start + end) // 2
if lst[mid] < item:
return binary_search(lst, item, mid + 1, end)
elif lst[mid] > item:
return binary_search(lst, item, start, mid - 1)
else:
return mid
def insertion_sort(lst):
length = len(lst)
for index in range(1, length):
value = lst[index]
pos = binary_search(lst, value, 0, index - 1)
lst = lst[:pos] + [value] + lst[pos:index] + lst[index + 1 :]
return lst
def merge(left, right):
if not left:
return right
if not right:
return left
if left[0] < right[0]:
return [left[0], *merge(left[1:], right)]
return [right[0], *merge(left, right[1:])]
def tim_sort(lst):
"""
>>> tim_sort("Python")
['P', 'h', 'n', 'o', 't', 'y']
>>> tim_sort((1.1, 1, 0, -1, -1.1))
[-1.1, -1, 0, 1, 1.1]
>>> tim_sort(list(reversed(list(range(7)))))
[0, 1, 2, 3, 4, 5, 6]
>>> tim_sort([3, 2, 1]) == insertion_sort([3, 2, 1])
True
>>> tim_sort([3, 2, 1]) == sorted([3, 2, 1])
True
"""
length = len(lst)
runs, sorted_runs = [], []
new_run = [lst[0]]
sorted_array = []
i = 1
while i < length:
if lst[i] < lst[i - 1]:
runs.append(new_run)
new_run = [lst[i]]
else:
new_run.append(lst[i])
i += 1
runs.append(new_run)
for run in runs:
sorted_runs.append(insertion_sort(run))
for run in sorted_runs:
sorted_array = merge(sorted_array, run)
return sorted_array
def main():
lst = [5, 9, 10, 3, -4, 5, 178, 92, 46, -18, 0, 7]
sorted_lst = tim_sort(lst)
print(sorted_lst)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The Reverse Polish Notation, also known as Polish postfix notation
or simply postfix notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations
Valid operators are +, -, *, /.
Each operand may be an integer or another expression.
"""
from __future__ import annotations
from typing import Any
def evaluate_postfix(postfix_notation: list) -> int:
"""
>>> evaluate_postfix(["2", "1", "+", "3", "*"])
9
>>> evaluate_postfix(["4", "13", "5", "/", "+"])
6
>>> evaluate_postfix([])
0
"""
if not postfix_notation:
return 0
operations = {"+", "-", "*", "/"}
stack: list[Any] = []
for token in postfix_notation:
if token in operations:
b, a = stack.pop(), stack.pop()
if token == "+":
stack.append(a + b)
elif token == "-":
stack.append(a - b)
elif token == "*":
stack.append(a * b)
else:
if a * b < 0 and a % b != 0:
stack.append(a // b + 1)
else:
stack.append(a // b)
else:
stack.append(int(token))
return stack.pop()
if __name__ == "__main__":
import doctest
doctest.testmod()
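# Illustrative trace (added for clarity, not part of the original module) of how
# the stack evolves for the doctest input ["2", "1", "+", "3", "*"]:
#   push 2                      -> [2]
#   push 1                      -> [2, 1]
#   "+" pops 1 and 2, pushes 3  -> [3]
#   push 3                      -> [3, 3]
#   "*" pops 3 and 3, pushes 9  -> [9]
# The final pop returns 9, i.e. (2 + 1) * 3.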
| """
The Reverse Polish Notation, also known as Polish postfix notation
or simply postfix notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations
Valid operators are +, -, *, /.
Each operand may be an integer or another expression.
"""
from __future__ import annotations
from typing import Any
def evaluate_postfix(postfix_notation: list) -> int:
"""
>>> evaluate_postfix(["2", "1", "+", "3", "*"])
9
>>> evaluate_postfix(["4", "13", "5", "/", "+"])
6
>>> evaluate_postfix([])
0
"""
if not postfix_notation:
return 0
operations = {"+", "-", "*", "/"}
stack: list[Any] = []
for token in postfix_notation:
if token in operations:
b, a = stack.pop(), stack.pop()
if token == "+":
stack.append(a + b)
elif token == "-":
stack.append(a - b)
elif token == "*":
stack.append(a * b)
else:
if a * b < 0 and a % b != 0:
stack.append(a // b + 1)
else:
stack.append(a // b)
else:
stack.append(int(token))
return stack.pop()
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are less than
    or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
fib = [0, 1]
i = 0
while fib[i] <= n:
fib.append(fib[i] + fib[i + 1])
if fib[i + 2] > n:
break
i += 1
total = 0
for j in range(len(fib) - 1):
if fib[j] % 2 == 0:
total += fib[j]
return total
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are less than
    or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
fib = [0, 1]
i = 0
while fib[i] <= n:
fib.append(fib[i] + fib[i + 1])
if fib[i + 2] > n:
break
i += 1
total = 0
for j in range(len(fib) - 1):
if fib[j] % 2 == 0:
total += fib[j]
return total
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import doctest
import projectq
from projectq.ops import H, Measure
def get_random_number(quantum_engine: projectq.cengines._main.MainEngine) -> int:
"""
>>> isinstance(get_random_number(projectq.MainEngine()), int)
True
"""
qubit = quantum_engine.allocate_qubit()
H | qubit
Measure | qubit
return int(qubit)
if __name__ == "__main__":
doctest.testmod()
# initialises a new quantum backend
quantum_engine = projectq.MainEngine()
# Generate a list of 10 random numbers
random_numbers_list = [get_random_number(quantum_engine) for _ in range(10)]
    # Flush the remaining commands in the engine down to the backend
quantum_engine.flush()
print("Random numbers", random_numbers_list)
| import doctest
import projectq
from projectq.ops import H, Measure
def get_random_number(quantum_engine: projectq.cengines._main.MainEngine) -> int:
"""
>>> isinstance(get_random_number(projectq.MainEngine()), int)
True
"""
qubit = quantum_engine.allocate_qubit()
H | qubit
Measure | qubit
return int(qubit)
if __name__ == "__main__":
doctest.testmod()
# initialises a new quantum backend
quantum_engine = projectq.MainEngine()
# Generate a list of 10 random numbers
random_numbers_list = [get_random_number(quantum_engine) for _ in range(10)]
    # Flush the remaining commands in the engine down to the backend
quantum_engine.flush()
print("Random numbers", random_numbers_list)
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import random
class Point:
def __init__(self, x: float, y: float) -> None:
self.x = x
self.y = y
def is_in_unit_circle(self) -> bool:
"""
True, if the point lies in the unit circle
False, otherwise
"""
return (self.x**2 + self.y**2) <= 1
@classmethod
def random_unit_square(cls):
"""
Generates a point randomly drawn from the unit square [0, 1) x [0, 1).
"""
return cls(x=random.random(), y=random.random())
def estimate_pi(number_of_simulations: int) -> float:
"""
Generates an estimate of the mathematical constant PI.
See https://en.wikipedia.org/wiki/Monte_Carlo_method#Overview
The estimate is generated by Monte Carlo simulations. Let U be uniformly drawn from
the unit square [0, 1) x [0, 1). The probability that U lies in the unit circle is:
P[U in unit circle] = 1/4 PI
and therefore
PI = 4 * P[U in unit circle]
We can get an estimate of the probability P[U in unit circle].
See https://en.wikipedia.org/wiki/Empirical_probability by:
1. Draw a point uniformly from the unit square.
2. Repeat the first step n times and count the number of points in the unit
circle, which is called m.
3. An estimate of P[U in unit circle] is m/n
"""
if number_of_simulations < 1:
raise ValueError("At least one simulation is necessary to estimate PI.")
number_in_unit_circle = 0
for _ in range(number_of_simulations):
random_point = Point.random_unit_square()
if random_point.is_in_unit_circle():
number_in_unit_circle += 1
return 4 * number_in_unit_circle / number_of_simulations
if __name__ == "__main__":
# import doctest
# doctest.testmod()
from math import pi
prompt = "Please enter the desired number of Monte Carlo simulations: "
my_pi = estimate_pi(int(input(prompt).strip()))
print(f"An estimate of PI is {my_pi} with an error of {abs(my_pi - pi)}")
| import random
class Point:
def __init__(self, x: float, y: float) -> None:
self.x = x
self.y = y
def is_in_unit_circle(self) -> bool:
"""
True, if the point lies in the unit circle
False, otherwise
"""
return (self.x**2 + self.y**2) <= 1
@classmethod
def random_unit_square(cls):
"""
Generates a point randomly drawn from the unit square [0, 1) x [0, 1).
"""
return cls(x=random.random(), y=random.random())
def estimate_pi(number_of_simulations: int) -> float:
"""
Generates an estimate of the mathematical constant PI.
See https://en.wikipedia.org/wiki/Monte_Carlo_method#Overview
The estimate is generated by Monte Carlo simulations. Let U be uniformly drawn from
the unit square [0, 1) x [0, 1). The probability that U lies in the unit circle is:
P[U in unit circle] = 1/4 PI
and therefore
PI = 4 * P[U in unit circle]
We can get an estimate of the probability P[U in unit circle].
See https://en.wikipedia.org/wiki/Empirical_probability by:
1. Draw a point uniformly from the unit square.
2. Repeat the first step n times and count the number of points in the unit
circle, which is called m.
3. An estimate of P[U in unit circle] is m/n
"""
if number_of_simulations < 1:
raise ValueError("At least one simulation is necessary to estimate PI.")
number_in_unit_circle = 0
for _ in range(number_of_simulations):
random_point = Point.random_unit_square()
if random_point.is_in_unit_circle():
number_in_unit_circle += 1
return 4 * number_in_unit_circle / number_of_simulations
if __name__ == "__main__":
# import doctest
# doctest.testmod()
from math import pi
prompt = "Please enter the desired number of Monte Carlo simulations: "
my_pi = estimate_pi(int(input(prompt).strip()))
print(f"An estimate of PI is {my_pi} with an error of {abs(my_pi - pi)}")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Disjoint set.
Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
"""
class Node:
def __init__(self, data: int) -> None:
self.data = data
self.rank: int
self.parent: Node
def make_set(x: Node) -> None:
"""
Make x as a set.
"""
    # rank is the distance from x to its parent
# root's rank is 0
x.rank = 0
x.parent = x
def union_set(x: Node, y: Node) -> None:
"""
Union of two sets.
    The set with the bigger rank should be the parent, so that the
    disjoint-set tree stays flatter.
"""
x, y = find_set(x), find_set(y)
if x == y:
return
elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
def find_set(x: Node) -> Node:
"""
    Return the root (representative) of the set that contains x.
"""
if x != x.parent:
x.parent = find_set(x.parent)
return x.parent
def find_python_set(node: Node) -> set:
"""
    Return the Python Standard Library set that contains node.data.
"""
sets = ({0, 1, 2}, {3, 4, 5})
for s in sets:
if node.data in s:
return s
raise ValueError(f"{node.data} is not in {sets}")
def test_disjoint_set() -> None:
"""
>>> test_disjoint_set()
"""
vertex = [Node(i) for i in range(6)]
for v in vertex:
make_set(v)
union_set(vertex[0], vertex[1])
union_set(vertex[1], vertex[2])
union_set(vertex[3], vertex[4])
union_set(vertex[3], vertex[5])
for node0 in vertex:
for node1 in vertex:
if find_python_set(node0).isdisjoint(find_python_set(node1)):
assert find_set(node0) != find_set(node1)
else:
assert find_set(node0) == find_set(node1)
if __name__ == "__main__":
test_disjoint_set()
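# A hedged companion sketch (not part of the original module): the same
# union-find idea on plain list indices instead of Node objects, again with
# path compression and union by rank.
class ArrayDisjointSet:
    def __init__(self, size: int) -> None:
        self.parent = list(range(size))
        self.rank = [0] * size

    def find(self, item: int) -> int:
        if self.parent[item] != item:
            self.parent[item] = self.find(self.parent[item])  # path compression
        return self.parent[item]

    def union(self, a: int, b: int) -> None:
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        if self.rank[a] < self.rank[b]:
            a, b = b, a
        self.parent[b] = a  # attach the shallower tree under the deeper one
        if self.rank[a] == self.rank[b]:
            self.rank[a] += 1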
| """
Disjoint set.
Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
"""
class Node:
def __init__(self, data: int) -> None:
self.data = data
self.rank: int
self.parent: Node
def make_set(x: Node) -> None:
"""
Make x as a set.
"""
    # rank is the distance from x to its parent
# root's rank is 0
x.rank = 0
x.parent = x
def union_set(x: Node, y: Node) -> None:
"""
Union of two sets.
    The set with the bigger rank should be the parent, so that the
    disjoint-set tree stays flatter.
"""
x, y = find_set(x), find_set(y)
if x == y:
return
elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
def find_set(x: Node) -> Node:
"""
    Return the root (representative) of the set that contains x.
"""
if x != x.parent:
x.parent = find_set(x.parent)
return x.parent
def find_python_set(node: Node) -> set:
"""
    Return the Python Standard Library set that contains node.data.
"""
sets = ({0, 1, 2}, {3, 4, 5})
for s in sets:
if node.data in s:
return s
raise ValueError(f"{node.data} is not in {sets}")
def test_disjoint_set() -> None:
"""
>>> test_disjoint_set()
"""
vertex = [Node(i) for i in range(6)]
for v in vertex:
make_set(v)
union_set(vertex[0], vertex[1])
union_set(vertex[1], vertex[2])
union_set(vertex[3], vertex[4])
union_set(vertex[3], vertex[5])
for node0 in vertex:
for node1 in vertex:
if find_python_set(node0).isdisjoint(find_python_set(node1)):
assert find_set(node0) != find_set(node1)
else:
assert find_set(node0) == find_set(node1)
if __name__ == "__main__":
test_disjoint_set()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from pathlib import Path
import cv2
import numpy as np
from matplotlib import pyplot as plt
def get_rotation(
img: np.ndarray, pt1: np.ndarray, pt2: np.ndarray, rows: int, cols: int
) -> np.ndarray:
"""
Get image rotation
:param img: np.array
:param pt1: 3x2 list
:param pt2: 3x2 list
:param rows: columns image shape
:param cols: rows image shape
:return: np.array
"""
matrix = cv2.getAffineTransform(pt1, pt2)
return cv2.warpAffine(img, matrix, (rows, cols))
if __name__ == "__main__":
# read original image
image = cv2.imread(
str(Path(__file__).resolve().parent.parent / "image_data" / "lena.jpg")
)
# turn image in gray scale value
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# get image shape
img_rows, img_cols = gray_img.shape
# set different points to rotate image
pts1 = np.array([[50, 50], [200, 50], [50, 200]], np.float32)
pts2 = np.array([[10, 100], [200, 50], [100, 250]], np.float32)
pts3 = np.array([[50, 50], [150, 50], [120, 200]], np.float32)
pts4 = np.array([[10, 100], [80, 50], [180, 250]], np.float32)
# add all rotated images in a list
images = [
gray_img,
get_rotation(gray_img, pts1, pts2, img_rows, img_cols),
get_rotation(gray_img, pts2, pts3, img_rows, img_cols),
get_rotation(gray_img, pts2, pts4, img_rows, img_cols),
]
# plot different image rotations
fig = plt.figure(1)
titles = ["Original", "Rotation 1", "Rotation 2", "Rotation 3"]
for i, image in enumerate(images):
plt.subplot(2, 2, i + 1), plt.imshow(image, "gray")
plt.title(titles[i])
plt.axis("off")
plt.subplots_adjust(left=0.0, bottom=0.05, right=1.0, top=0.95)
plt.show()
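# A hedged side note (not part of the original script): cv2.getAffineTransform
# solves for the 2x3 matrix that maps three source points onto three
# destination points; the same matrix can be recovered with plain linear
# algebra, which makes a handy sanity check.
def affine_matrix_by_hand() -> np.ndarray:
    src = np.array([[50, 50], [200, 50], [50, 200]], np.float32)
    dst = np.array([[10, 100], [200, 50], [100, 250]], np.float32)
    lhs = np.column_stack([src, np.ones(3)])  # each row is (x, y, 1)
    matrix = np.linalg.solve(lhs, dst).T  # the 2x3 affine matrix
    assert np.allclose(matrix, cv2.getAffineTransform(src, dst), atol=1e-4)
    return matrix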
| from pathlib import Path
import cv2
import numpy as np
from matplotlib import pyplot as plt
def get_rotation(
img: np.ndarray, pt1: np.ndarray, pt2: np.ndarray, rows: int, cols: int
) -> np.ndarray:
"""
Get image rotation
:param img: np.array
:param pt1: 3x2 list
:param pt2: 3x2 list
:param rows: columns image shape
:param cols: rows image shape
:return: np.array
"""
matrix = cv2.getAffineTransform(pt1, pt2)
return cv2.warpAffine(img, matrix, (rows, cols))
if __name__ == "__main__":
# read original image
image = cv2.imread(
str(Path(__file__).resolve().parent.parent / "image_data" / "lena.jpg")
)
# turn image in gray scale value
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# get image shape
img_rows, img_cols = gray_img.shape
# set different points to rotate image
pts1 = np.array([[50, 50], [200, 50], [50, 200]], np.float32)
pts2 = np.array([[10, 100], [200, 50], [100, 250]], np.float32)
pts3 = np.array([[50, 50], [150, 50], [120, 200]], np.float32)
pts4 = np.array([[10, 100], [80, 50], [180, 250]], np.float32)
# add all rotated images in a list
images = [
gray_img,
get_rotation(gray_img, pts1, pts2, img_rows, img_cols),
get_rotation(gray_img, pts2, pts3, img_rows, img_cols),
get_rotation(gray_img, pts2, pts4, img_rows, img_cols),
]
# plot different image rotations
fig = plt.figure(1)
titles = ["Original", "Rotation 1", "Rotation 2", "Rotation 3"]
for i, image in enumerate(images):
plt.subplot(2, 2, i + 1), plt.imshow(image, "gray")
plt.title(titles[i])
plt.axis("off")
plt.subplots_adjust(left=0.0, bottom=0.05, right=1.0, top=0.95)
plt.show()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is an implementation of odd-even transposition sort.
It works by performing a series of parallel swaps between odd and even pairs of
variables in the list.
This implementation represents each variable in the list with a process and
each process communicates with its neighboring processes in the list to perform
comparisons.
They are synchronized with locks and message passing but other forms of
synchronization could be used.
"""
from multiprocessing import Lock, Pipe, Process
# lock used to ensure that two processes do not access a pipe at the same time
process_lock = Lock()
"""
The function run by the processes that sorts the list
position = the position in the list the process represents, used to know which
neighbor we pass our value to
value = the initial value at list[position]
l_send, r_send = the pipes we use to send to our left and right neighbors
lr_cv, rr_cv = the pipes we use to receive from our left and right neighbors
result_pipe = the pipe used to send results back to main
"""
def oe_process(position, value, l_send, r_send, lr_cv, rr_cv, result_pipe):
global process_lock
    # we perform 10 swap phases here (one per element of the 10-element demo list),
    # since after n phases an n-element list is guaranteed to be sorted
# we *could* stop early if we are sorted already, but it takes as long to
# find out we are sorted as it does to sort the list with this algorithm
for i in range(0, 10):
if (i + position) % 2 == 0 and r_send is not None:
# send your value to your right neighbor
process_lock.acquire()
r_send[1].send(value)
process_lock.release()
# receive your right neighbor's value
process_lock.acquire()
temp = rr_cv[0].recv()
process_lock.release()
# take the lower value since you are on the left
value = min(value, temp)
elif (i + position) % 2 != 0 and l_send is not None:
# send your value to your left neighbor
process_lock.acquire()
l_send[1].send(value)
process_lock.release()
# receive your left neighbor's value
process_lock.acquire()
temp = lr_cv[0].recv()
process_lock.release()
# take the higher value since you are on the right
value = max(value, temp)
# after all swaps are performed, send the values back to main
result_pipe[1].send(value)
"""
the function which creates the processes that perform the parallel swaps
arr = the list to be sorted
"""
def odd_even_transposition(arr):
process_array_ = []
result_pipe = []
# initialize the list of pipes where the values will be retrieved
for _ in arr:
result_pipe.append(Pipe())
# creates the processes
# the first and last process only have one neighbor so they are made outside
# of the loop
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(0, arr[0], None, temp_rs, None, temp_rr, result_pipe[0]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
for i in range(1, len(arr) - 1):
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(i, arr[i], temp_ls, temp_rs, temp_lr, temp_rr, result_pipe[i]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
process_array_.append(
Process(
target=oe_process,
args=(
len(arr) - 1,
arr[len(arr) - 1],
temp_ls,
None,
temp_lr,
None,
result_pipe[len(arr) - 1],
),
)
)
# start the processes
for p in process_array_:
p.start()
# wait for the processes to end and write their values to the list
for p in range(0, len(result_pipe)):
arr[p] = result_pipe[p][0].recv()
process_array_[p].join()
return arr
# creates a reverse sorted list and sorts it
def main():
arr = list(range(10, 0, -1))
print("Initial List")
print(*arr)
arr = odd_even_transposition(arr)
print("Sorted List\n")
print(*arr)
if __name__ == "__main__":
main()
| """
This is an implementation of odd-even transposition sort.
It works by performing a series of parallel swaps between odd and even pairs of
variables in the list.
This implementation represents each variable in the list with a process and
each process communicates with its neighboring processes in the list to perform
comparisons.
They are synchronized with locks and message passing but other forms of
synchronization could be used.
"""
from multiprocessing import Lock, Pipe, Process
# lock used to ensure that two processes do not access a pipe at the same time
process_lock = Lock()
"""
The function run by the processes that sorts the list
position = the position in the list the process represents, used to know which
neighbor we pass our value to
value = the initial value at list[position]
l_send, r_send = the pipes we use to send to our left and right neighbors
lr_cv, rr_cv = the pipes we use to receive from our left and right neighbors
result_pipe = the pipe used to send results back to main
"""
def oe_process(position, value, l_send, r_send, lr_cv, rr_cv, result_pipe):
global process_lock
# we perform n swaps since after n swaps we know we are sorted
# we *could* stop early if we are sorted already, but it takes as long to
# find out we are sorted as it does to sort the list with this algorithm
for i in range(0, 10):
if (i + position) % 2 == 0 and r_send is not None:
# send your value to your right neighbor
process_lock.acquire()
r_send[1].send(value)
process_lock.release()
# receive your right neighbor's value
process_lock.acquire()
temp = rr_cv[0].recv()
process_lock.release()
# take the lower value since you are on the left
value = min(value, temp)
elif (i + position) % 2 != 0 and l_send is not None:
# send your value to your left neighbor
process_lock.acquire()
l_send[1].send(value)
process_lock.release()
# receive your left neighbor's value
process_lock.acquire()
temp = lr_cv[0].recv()
process_lock.release()
# take the higher value since you are on the right
value = max(value, temp)
# after all swaps are performed, send the values back to main
result_pipe[1].send(value)
"""
the function which creates the processes that perform the parallel swaps
arr = the list to be sorted
"""
def odd_even_transposition(arr):
process_array_ = []
result_pipe = []
# initialize the list of pipes where the values will be retrieved
for _ in arr:
result_pipe.append(Pipe())
# creates the processes
# the first and last process only have one neighbor so they are made outside
# of the loop
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(0, arr[0], None, temp_rs, None, temp_rr, result_pipe[0]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
for i in range(1, len(arr) - 1):
temp_rs = Pipe()
temp_rr = Pipe()
process_array_.append(
Process(
target=oe_process,
args=(i, arr[i], temp_ls, temp_rs, temp_lr, temp_rr, result_pipe[i]),
)
)
temp_lr = temp_rs
temp_ls = temp_rr
process_array_.append(
Process(
target=oe_process,
args=(
len(arr) - 1,
arr[len(arr) - 1],
temp_ls,
None,
temp_lr,
None,
result_pipe[len(arr) - 1],
),
)
)
# start the processes
for p in process_array_:
p.start()
# wait for the processes to end and write their values to the list
for p in range(0, len(result_pipe)):
arr[p] = result_pipe[p][0].recv()
process_array_[p].join()
return arr
# creates a reverse sorted list and sorts it
def main():
arr = list(range(10, 0, -1))
print("Initial List")
print(*arr)
arr = odd_even_transposition(arr)
print("Sorted List\n")
print(*arr)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform bitwise or for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform bitwise and for given number and X.
Where X is a number with all the bits – ones and bit on given
position – zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform bitwise xor for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
def get_bit(number: int, position: int) -> int:
"""
Get the bit at the given position
Details: perform bitwise and for the given number and X,
Where X is a number with all the bits – zeroes and bit on given position – one.
If the result is not equal to 0, then the bit on the given position is 1, else 0.
>>> get_bit(0b1010, 0)
0
>>> get_bit(0b1010, 1)
1
>>> get_bit(0b1010, 2)
0
>>> get_bit(0b1010, 3)
1
"""
return int((number & (1 << position)) != 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform bitwise or for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform bitwise and for given number and X.
Where X is a number with all the bits – ones and bit on given
position – zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform bitwise xor for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
def get_bit(number: int, position: int) -> int:
"""
Get the bit at the given position
Details: perform bitwise and for the given number and X,
Where X is a number with all the bits – zeroes and bit on given position – one.
If the result is not equal to 0, then the bit on the given position is 1, else 0.
>>> get_bit(0b1010, 0)
0
>>> get_bit(0b1010, 1)
1
>>> get_bit(0b1010, 2)
0
>>> get_bit(0b1010, 3)
1
"""
return int((number & (1 << position)) != 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The stock span problem is a financial problem where we have a series of n daily
price quotes for a stock and we need to calculate span of stock's price for all n days.
The span Si of the stock's price on a given day i is defined as the maximum
number of consecutive days just before the given day, for which the price of the stock
on the current day is less than or equal to its price on the given day.
"""
def calculation_span(price, s):
n = len(price)
# Create a stack and push index of first element to it
st = []
st.append(0)
# Span value of first element is always 1
s[0] = 1
# Calculate span values for rest of the elements
for i in range(1, n):
# Pop elements from stack while stack is not
# empty and top of stack is smaller than price[i]
while len(st) > 0 and price[st[-1]] <= price[i]:
st.pop()
# If stack becomes empty, then price[i] is greater
# than all elements on left of it, i.e. price[0],
# price[1], ..price[i-1]. Else the price[i] is
# greater than elements after top of stack
s[i] = i + 1 if len(st) <= 0 else (i - st[-1])
# Push this element to stack
st.append(i)
# A utility function to print elements of array
def print_array(arr, n):
for i in range(0, n):
print(arr[i], end=" ")
# Driver program to test above function
price = [10, 4, 5, 90, 120, 80]
S = [0 for i in range(len(price) + 1)]
# Fill the span values in array S[]
calculation_span(price, S)
# Print the calculated span values
print_array(S, len(price))
| """
The stock span problem is a financial problem where we have a series of n daily
price quotes for a stock and we need to calculate span of stock's price for all n days.
The span Si of the stock's price on a given day i is defined as the maximum
number of consecutive days just before the given day, for which the price of the stock
on the current day is less than or equal to its price on the given day.
"""
def calculation_span(price, s):
n = len(price)
# Create a stack and push index of first element to it
st = []
st.append(0)
# Span value of first element is always 1
s[0] = 1
# Calculate span values for rest of the elements
for i in range(1, n):
# Pop elements from stack while stack is not
# empty and top of stack is smaller than price[i]
while len(st) > 0 and price[st[-1]] <= price[i]:
st.pop()
# If stack becomes empty, then price[i] is greater
# than all elements on left of it, i.e. price[0],
# price[1], ..price[i-1]. Else the price[i] is
# greater than elements after top of stack
s[i] = i + 1 if len(st) <= 0 else (i - st[-1])
# Push this element to stack
st.append(i)
# A utility function to print elements of array
def print_array(arr, n):
for i in range(0, n):
print(arr[i], end=" ")
# Driver program to test above function
price = [10, 4, 5, 90, 120, 80]
S = [0 for i in range(len(price) + 1)]
# Fill the span values in array S[]
calculation_span(price, S)
# Print the calculated span values
print_array(S, len(price))
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Created by sarathkaul on 12/11/19
import requests
def send_slack_message(message_body: str, slack_url: str) -> None:
headers = {"Content-Type": "application/json"}
response = requests.post(slack_url, json={"text": message_body}, headers=headers)
if response.status_code != 200:
raise ValueError(
f"Request to slack returned an error {response.status_code}, "
f"the response is:\n{response.text}"
)
if __name__ == "__main__":
# Set the slack url to the one provided by Slack when you create the webhook at
# https://my.slack.com/services/new/incoming-webhook/
send_slack_message("<YOUR MESSAGE BODY>", "<SLACK CHANNEL URL>")
| # Created by sarathkaul on 12/11/19
import requests
def send_slack_message(message_body: str, slack_url: str) -> None:
headers = {"Content-Type": "application/json"}
response = requests.post(slack_url, json={"text": message_body}, headers=headers)
if response.status_code != 200:
raise ValueError(
f"Request to slack returned an error {response.status_code}, "
f"the response is:\n{response.text}"
)
if __name__ == "__main__":
# Set the slack url to the one provided by Slack when you create the webhook at
# https://my.slack.com/services/new/incoming-webhook/
send_slack_message("<YOUR MESSAGE BODY>", "<SLACK CHANNEL URL>")
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 56: https://projecteuler.net/problem=56
A googol (10^100) is a massive number: one followed by one-hundred zeros;
100^100 is almost unimaginably large: one followed by two-hundred zeros.
Despite their size, the sum of the digits in each number is only 1.
Considering natural numbers of the form a**b, where a, b < 100,
what is the maximum digital sum?
"""
def solution(a: int = 100, b: int = 100) -> int:
"""
Considering natural numbers of the form, a**b, where a, b < 100,
what is the maximum digital sum?
:param a:
:param b:
:return:
>>> solution(10,10)
45
>>> solution(100,100)
972
>>> solution(100,200)
1872
"""
# RETURN the MAXIMUM from the list of SUMs of the list of INT converted from STR of
# BASE raised to the POWER
return max(
sum(int(x) for x in str(base**power))
for base in range(a)
for power in range(b)
)
# Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Project Euler Problem 56: https://projecteuler.net/problem=56
A googol (10^100) is a massive number: one followed by one-hundred zeros;
100^100 is almost unimaginably large: one followed by two-hundred zeros.
Despite their size, the sum of the digits in each number is only 1.
Considering natural numbers of the form a**b, where a, b < 100,
what is the maximum digital sum?
"""
def solution(a: int = 100, b: int = 100) -> int:
"""
Considering natural numbers of the form, a**b, where a, b < 100,
what is the maximum digital sum?
:param a:
:param b:
:return:
>>> solution(10,10)
45
>>> solution(100,100)
972
>>> solution(100,200)
1872
"""
# RETURN the MAXIMUM from the list of SUMs of the list of INT converted from STR of
# BASE raised to the POWER
return max(
sum(int(x) for x in str(base**power))
for base in range(a)
for power in range(b)
)
# Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,732 | Correct ruff failures | ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| CaedenPH | "2023-05-14T18:14:30Z" | "2023-05-14T21:03:13Z" | 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 | 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 | Correct ruff failures. ### Describe your change:
Fixes #8723
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Sieve of Eratosthenes
The sieve of Eratosthenes is an algorithm used to find prime numbers, less than or
equal to a given value.
Illustration:
https://upload.wikimedia.org/wikipedia/commons/b/b9/Sieve_of_Eratosthenes_animation.gif
Reference: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
doctest provider: Bruno Simas Hadlich (https://github.com/brunohadlich)
Also thanks to Dmitry (https://github.com/LizardWizzard) for finding the problem
"""
from __future__ import annotations
import math
def prime_sieve(num: int) -> list[int]:
"""
Returns a list with all prime numbers up to n.
>>> prime_sieve(50)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
>>> prime_sieve(25)
[2, 3, 5, 7, 11, 13, 17, 19, 23]
>>> prime_sieve(10)
[2, 3, 5, 7]
>>> prime_sieve(9)
[2, 3, 5, 7]
>>> prime_sieve(2)
[2]
>>> prime_sieve(1)
[]
"""
if num <= 0:
raise ValueError(f"{num}: Invalid input, please enter a positive integer.")
sieve = [True] * (num + 1)
prime = []
start = 2
end = int(math.sqrt(num))
while start <= end:
# If start is a prime
if sieve[start] is True:
prime.append(start)
# Set multiples of start to be False
for i in range(start * start, num + 1, start):
if sieve[i] is True:
sieve[i] = False
start += 1
for j in range(end + 1, num + 1):
if sieve[j] is True:
prime.append(j)
return prime
if __name__ == "__main__":
print(prime_sieve(int(input("Enter a positive integer: ").strip())))
| """
Sieve of Eratosthenes
The sieve of Eratosthenes is an algorithm used to find prime numbers, less than or
equal to a given value.
Illustration:
https://upload.wikimedia.org/wikipedia/commons/b/b9/Sieve_of_Eratosthenes_animation.gif
Reference: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
doctest provider: Bruno Simas Hadlich (https://github.com/brunohadlich)
Also thanks to Dmitry (https://github.com/LizardWizzard) for finding the problem
"""
from __future__ import annotations
import math
def prime_sieve(num: int) -> list[int]:
"""
Returns a list with all prime numbers up to n.
>>> prime_sieve(50)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
>>> prime_sieve(25)
[2, 3, 5, 7, 11, 13, 17, 19, 23]
>>> prime_sieve(10)
[2, 3, 5, 7]
>>> prime_sieve(9)
[2, 3, 5, 7]
>>> prime_sieve(2)
[2]
>>> prime_sieve(1)
[]
"""
if num <= 0:
raise ValueError(f"{num}: Invalid input, please enter a positive integer.")
sieve = [True] * (num + 1)
prime = []
start = 2
end = int(math.sqrt(num))
while start <= end:
# If start is a prime
if sieve[start] is True:
prime.append(start)
# Set multiples of start to be False
for i in range(start * start, num + 1, start):
if sieve[i] is True:
sieve[i] = False
start += 1
for j in range(end + 1, num + 1):
if sieve[j] is True:
prime.append(j)
return prime
if __name__ == "__main__":
print(prime_sieve(int(input("Enter a positive integer: ").strip())))
| -1 |